The Third Offset Must Update Asimov's Laws of Robotics

JG Randall for The National Interest:  Things tend to happen in threes. Though an unlikely triumvirate on the surface, it appears that Asimov's laws of robotics and the UN Convention on Certain Conventional Weapons (CCW) will outflank the Third Offset, the nation's search for its next silver bullet in warfighting: robotics. Many nations will agree to restrictions on moral grounds, yet reject Asimov's laws on semantic grounds, and though the debate might be perceived as strictly academic, or even rhetorical, it is worth having for the sake of a good cautionary tale. Because, whether we like it or not, killer bots are coming to a theater of operations near you.

Before we get deep in the weeds, let’s get some clarity. First, let’s outline Asimov’s robotic laws. The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. They were introduced in his 1942 short story “Runaround,” although they had been foreshadowed in earlier stories.  Cont'd...

Robotics Gone Wild: 8 Animal-Inspired Machines

Thomas Claburn for InformationWeek:  Among programmers, there's a principle called DRY, which stands for "Don't repeat yourself." It's an attempt to avoid writing code that duplicates the function of other code.

DRY embodies the same resistance to needless repetition as the more common idiom, "Don't reinvent the wheel."
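A minimal sketch of DRY in practice (the function names here are illustrative, not from the article): the same validation logic written out twice invites the two copies to drift apart, so it is factored into a single helper.

```python
# Before: the same bounds check is duplicated, so a fix to one
# copy can silently miss the other.
def set_width_wet(widths, value):
    if not isinstance(value, (int, float)) or value <= 0:
        raise ValueError("width must be a positive number")
    widths.append(value)

def set_height_wet(heights, value):
    if not isinstance(value, (int, float)) or value <= 0:
        raise ValueError("height must be a positive number")
    heights.append(value)

# After (DRY): the check lives in exactly one place.
def validated_positive(name, value):
    if not isinstance(value, (int, float)) or value <= 0:
        raise ValueError(f"{name} must be a positive number")
    return value

def set_width(widths, value):
    widths.append(validated_positive("width", value))

def set_height(heights, value):
    heights.append(validated_positive("height", value))
```

The payoff is maintenance: tightening the validation rule now means editing one function rather than hunting down every duplicate.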

Robot makers, a group that includes both software and hardware engineers, attempt to adhere to these principles, as can be seen in designs that borrow from nature and from the evolved forms of life on Earth.

Biomimicry and bioinspired design provide a way to avoid reinventing the wheel. The biological systems of living things have been honed through eons of Darwinian user testing.

Borrowing aspects of animal physiology isn't the only option or necessarily the best option for robot designers. For some purposes, something new may be necessary. For others, biomechanical systems can't be easily duplicated.  Cont'd...

Cozmo Is an Artificially Intelligent Toy Truck That's Also the Future of Robotics

Cade Metz for WIRED:  Hanns Tappeiner types a few lines of code into his laptop and hits “return.” A tiny robot sits beside the laptop, looking like one of those anthropomorphic automobiles that show up in Pixar’s Cars movies. Almost instantly, it wakes up, rolls down the table, and counts to four. This is Cozmo—an artificially intelligent toy robot unveiled late last month by San Francisco startup Anki—and Tappeiner, one of the company’s founders, is programming the little automaton to do new things.
The programs are simple—he also teaches Cozmo to stack blocks—but they’re supposed to be simple. Tappeiner is using Anki’s newly unveiled software development kit—an SDK, in coder parlance—that he says even the greenest of coders can use to tweak the behavior of the toy robot. And that’s a big deal, at least according to Anki. The company claims the SDK is the first of its kind: a kit that lets anyone program such an intelligent robot, a robot that recognizes faces and navigates new environments and even mimics emotions. With the kit, Tappeiner says, “we’re trying to advance the field of robotics.” He compares the move to Apple letting people build apps for the iPhone.  Cont'd...

The Tiny Radar Chip Revolutionizing Gesture Recognition: Google ATAP's Project Soli

From All About Circuits:  Google ATAP is bringing touchless interfaces to the market using a miniaturized radar chip no bigger than a dime. This is Project Soli.

Soli’s radar sensor is a marvel in many respects. For one thing, it solves a long-lived issue when it comes to gesture-recognition technology. Previous forays into the topic yielded almost-answers such as stereo cameras (which have difficulty understanding the overlap of fingers, for example) and capacitive touch sensing (which struggles to interpret motion in a 3D context).

Google ATAP’s answer is radar.

Radar is capable of interpreting objects’ position and motion even through other objects, making it perfect for developing a sensor that can be embedded in different kinds of devices like smartphones... (full article)

Toyota's U.S. Robotics Boss Promises Results Within 5 Years

Yuri Kageyama for News Factor:  The U.S. robotics expert tapped to head Toyota's Silicon Valley research company says the $1 billion investment by the giant Japanese automaker will start showing results within five years.

Gill Pratt told reporters that the Toyota Research Institute is also looking ahead to the distant future, when there will be cars that anyone, including children and the elderly, can ride in on their own, as well as robots that help out in homes.

Pratt, a former program manager at the U.S. military's Defense Advanced Research Projects Agency, joined Toyota Motor Corp. first as a technical adviser when it set up its artificial intelligence research effort at Stanford University and MIT.

He said safety features will be the first types of AI applications to appear in Toyota vehicles. Such features are already offered on some models now being sold, such as sensors that help cars brake or warn drivers before a possible crash, and cars that drive themselves automatically into parking spaces or on certain roads.

"I expect something to come out during those five years," Pratt told reporters recently at Toyota's Tokyo office, referring to the timeframe for the investment.  Cont'd...

Artificial Intelligence Produces Realistic Sounds That Fool Humans

From MIT News:  Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world.  Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: When shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers.

This “Turing Test for sound” represents much more than just a clever computer trick: Researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects’ properties... (full article) (full paper)

Wal-Mart Experimenting With Robotic Shopping Cart for Stores

Spencer Soper & Shannon Pettypiece for Bloomberg:  Wal-Mart Stores Inc. is working with a robotics company to develop a shopping cart that helps customers find items on their lists and saves them from pushing a heavy cart through a sprawling store and parking lot, according to a person familiar with the matter.

Such carts are an emerging opportunity for robotics companies as brick-and-mortar stores look for innovative ways to match the convenience of Amazon.com Inc. and other online retailers, said Wendy Roberts, founder and chief executive officer of Five Elements Robotics.

Roberts, who spoke Tuesday on a robotics panel at the Bloomberg Technology Conference 2016, said her company was working with the “world’s largest retailer” on such a shopping cart.

That retailer is Wal-Mart, which is evaluating a prototype in its lab and giving feedback to the New Jersey robotics company, the person said. Wal-Mart spokesman Ravi Jariwala said he couldn’t immediately comment on the robotic shopping cart.  Cont'd...

Google's developing its own version of the Laws of Robotics

Graham Templeton for ExtremeTech:  Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?

That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons.  Cont'd...
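The "blind spot" idea can be sketched loosely (this is an illustration of the concept, not DeepMind's actual construction, and the corridor task is made up): a Q-learning agent whose update rule simply skips any step on which a human override fired, so the interruptions never shape what it learns.

```python
import random

ACTIONS = ["left", "right"]

def q_update(q, s, a, r, s2, alpha=0.5, gamma=0.9):
    # Standard Q-learning update toward the bootstrapped target.
    q[s][a] += alpha * (r + gamma * max(q[s2].values()) - q[s][a])

def train(episodes=300, interrupt_prob=0.0, seed=0):
    rng = random.Random(seed)
    # Three-state corridor; reward is given for reaching state 2.
    q = {s: {a: 0.0 for a in ACTIONS} for s in range(3)}
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            interrupted = rng.random() < interrupt_prob
            if interrupted:
                a = "left"  # human override forces a retreat
            elif rng.random() < 0.2:
                a = rng.choice(ACTIONS)  # epsilon-greedy exploration
            else:
                a = max(q[s], key=q[s].get)
            s2 = min(s + 1, 2) if a == "right" else max(s - 1, 0)
            r = 1.0 if s2 == 2 else 0.0
            if not interrupted:
                # The "blind spot": overridden steps never enter the
                # update, so the agent cannot learn to dodge the button.
                q_update(q, s, a, r, s2)
            s = s2
            if s == 2:
                break
    return q

# Even with frequent interruptions, the learned policy still
# prefers moving right toward the goal.
q = train(interrupt_prob=0.3)
```

Because overridden transitions are excluded from learning, the interruptions neither teach the agent to fear the operator nor to route around the override.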

OpenAI Gym Beta

From the OpenAI team:

We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results.

OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community.  Getting started:  If you'd like to dive in right away, you can work through our tutorial... (full intro post)
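To make the environment interface concrete, here is a hedged sketch of the reset/step loop that Gym standardizes. GuessEnv is a made-up stand-in for illustration, not one of Gym's bundled environments: the agent must find a hidden integer, and the observation hints whether its last guess was low or high.

```python
import random

class GuessEnv:
    """Toy environment following Gym's reset()/step() convention:
    guess a hidden integer between 0 and 9."""

    def reset(self):
        self.target = random.randrange(10)
        return 0  # initial observation carries no information

    def step(self, action):
        done = action == self.target
        reward = 1.0 if done else 0.0
        obs = -1 if action < self.target else 1  # hint: too low / too high
        return obs, reward, done, {}  # Gym-style (obs, reward, done, info)

def run_episode(env, max_steps=100):
    """A binary-search 'policy' driving the standard agent loop."""
    env.reset()
    low, high = 0, 9
    for t in range(max_steps):
        action = (low + high) // 2
        obs, reward, done, info = env.step(action)
        if done:
            return t + 1, reward
        if obs < 0:
            low = action + 1
        else:
            high = action - 1
    return max_steps, 0.0
```

Because every environment exposes the same reset/step surface, the same agent loop runs unchanged whether the environment is this toy, a simulated robot, or an Atari game.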

Forget self-driving cars: What about self-flying drones?

Tina Amirtha for Benelux:  In 2014, three software engineers decided to create a drone company in Wavre, Belgium, just outside Brussels. All were licensed pilots and trained in NATO security techniques.

But rather than build drones themselves, they decided they would upgrade existing radio-controlled civilian drones with an ultra-secure software layer to allow the devices to fly autonomously.

Their company, EagleEye Systems, would manufacture the onboard computer and design the software, while existing manufacturers would provide the drone body and sensors.

Fast-forward to the end of March this year, when the company received a Section 333 exemption from the US Federal Aviation Administration to operate and sell its brand of autonomous drones in the US. The decision came amid expectations that the FAA will loosen its restrictions on legal drone operations and issue new rules to allow drones to fly above crowds.  Cont'd...

SUNSPRING by 32 Tesla K80 GPUs

From Ross Goodwin on Medium:

To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing—actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. 

After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor’s delivery. 

As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened in Sunspring, because the script itself has no objective meaning.

On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details)

Here is the stage direction that led to Middleditch’s character vomiting an eyeball early in the film:

C
(smiles)
I don’t know anything about any of this.

H
(to Hauk, taking his eyes from his mouth)
Then what?

H2
There’s no answer.

Scientists develop bee model that will impact the development of aerial robotics

Phys.org:  Scientists have built a computer model that shows how bees use vision to detect the movement of the world around them and avoid crashing. This research, published in PLOS Computational Biology, is an important step in understanding how the bee brain processes the visual world and will aid the development of robotics.

The study, led by Alexander Cope and his coauthors at the University of Sheffield, shows how bees estimate the speed of motion, or optic flow, of the visual world around them and use this to control their flight. The model is based on honeybees, as they are excellent navigators and explorers and use vision extensively in these tasks, despite having a brain of only one million neurons (compared to the human brain's 100 billion).

Within a virtual world, the model shows how bees are capable of navigating complex environments using a simple extension to the known neural circuits. The model then reproduces the detailed behaviour of real bees flying down a corridor using optic flow, and also matches how their neurons respond.  Cont'd...
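As a caricature of the optic-flow strategy (this sketch is illustrative and far simpler than the published neural model): the translational optic flow seen by each eye scales with forward speed divided by lateral distance to the wall, and steering away from the side with the larger flow drives the bee back toward the corridor midline.

```python
def optic_flow(speed, distance):
    # Angular image speed at one eye: faster flight or a nearer
    # wall both make the world appear to slip by more quickly.
    return speed / distance

def centering_step(y, speed, width, gain=0.02):
    # y is the lateral offset from the corridor midline (positive = right).
    left = optic_flow(speed, width / 2 + y)
    right = optic_flow(speed, width / 2 - y)
    return y - gain * (right - left)  # steer away from the larger flow

def fly_corridor(y0, steps=100, speed=1.0, width=1.0):
    y = y0
    for _ in range(steps):
        y = centering_step(y, speed, width)
    return y
```

Starting off-center, repeated flow-balancing steps converge to the midline without the bee ever measuring distance directly; real bees also use the summed flow to regulate their speed, slowing in narrow passages.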

Billions Are Being Invested in a Robot That Americans Don't Want

Keith Naughton for Bloomberg Technology:  Brian Lesko and Dan Sherman hate the idea of driverless cars, but for very different reasons.  Lesko, 46, a business-development executive in Atlanta, doesn’t trust a robot to keep him out of harm’s way. “It scares the bejeebers out of me,” he says.

Sherman, 21, a mechanical-engineering student at the University of Minnesota, Twin Cities, trusts the technology and sees these vehicles eventually taking over the road. But he dreads the change because his passion is working on cars to make them faster.

“It’s something I’ve loved to do my entire life and it’s kind of on its way out,” he says. “That’s the sad truth.”

The driverless revolution is racing forward, as inventors overcome technical challenges such as navigating at night and regulators craft new rules. Yet the rush to robot cars faces a big roadblock: People aren’t ready to give up the wheel. Recent surveys by J.D. Power, consulting company EY, the Texas A&M Transportation Institute, Canadian Automobile Association, researcher Kelley Blue Book and auto supplier Robert Bosch LLC all show that half to three-quarters of respondents don’t want anything to do with these models.  Cont'd...

Zero Zero Hover Camera drone uses face tracking tech to follow you

Lee Mathews for Geek:  Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed.

Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along.

Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow.

For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces.  Cont'd...

Bring 3D printed robots to life with 'Ziro' hand-controlled robotics kit

Benedict for 3Ders.org:  Tech startup ZeroUI, based in San Jose, California, has launched an Indiegogo campaign for Ziro, the “world’s first hand-controlled robotics kit”. The modular kit has been designed to bring 3D printed creations to life, and has already surpassed its $30,000 campaign goal.
It would be fair to say that the phenomenon of gesture recognition, throughout the wide variety of consumer electronics to which it has been introduced, has been a mixed success. The huge popularity of the Nintendo Wii showed that—for the right product—users were happy to use their hands and bodies as controllers, but for every Wii, there are a million useless webcam or smartphone functions, lying dormant, unused, and destined for the technology recycle bin.  Full Article...