How Will Robots Learn?

Len Calderone for RoboticsTomorrow

A truly functional robot must have the ability to learn on its own from interactions with the physical and social environment. It should not rely on human programming but be trainable.

It's well known that a robot can work on an assembly line, but can a robot ever be intelligent? Artificial intelligence (AI) is hard to define. Ultimate AI would be a recreation of the human thought process. This means that a robot would be able to learn, reason, use language, and formulate original ideas.

The basic idea of AI problem-solving is simple; implementing it is not. The AI robot collects information about a situation through its sensors. The computer compares this information to stored data and decides what the information suggests. It then runs through the possible actions and predicts which action will be most successful based on the collected information.
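
To make this concrete, here is a toy sketch in Python of that sense, compare, and predict loop. The situations, actions, and the predict_success() scoring function are invented for illustration; a real robot would draw on live sensor data and a learned model.

    STORED_SITUATIONS = {
        "obstacle_ahead": ["turn_left", "turn_right", "stop"],
        "clear_path": ["move_forward", "speed_up"],
    }

    def predict_success(situation, action):
        """Toy scoring table; a real system would use a learned model."""
        scores = {
            ("obstacle_ahead", "turn_left"): 0.7,
            ("obstacle_ahead", "turn_right"): 0.6,
            ("obstacle_ahead", "stop"): 0.9,
            ("clear_path", "move_forward"): 0.95,
            ("clear_path", "speed_up"): 0.8,
        }
        return scores.get((situation, action), 0.0)

    def decide(sensor_reading):
        """Compare the reading to stored situations and pick the most promising action."""
        candidate_actions = STORED_SITUATIONS.get(sensor_reading, ["stop"])
        return max(candidate_actions, key=lambda a: predict_success(sensor_reading, a))

    print(decide("obstacle_ahead"))  # -> "stop"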

Modern robots can learn in a limited capacity. A learning robot recognizes that a certain action achieved an anticipated result, stores that information, and attempts the successful action the next time it is confronted with the same situation.
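
A minimal sketch of that remembered learning might look like the following Python fragment; the situation and action names are hypothetical.

    learned_responses = {}  # situation -> action that worked before

    def record_outcome(situation, action, succeeded):
        """Store the action if it accomplished the anticipated result."""
        if succeeded:
            learned_responses[situation] = action

    def act(situation, default_action):
        """Reuse the remembered action when the same situation comes up again."""
        return learned_responses.get(situation, default_action)

    record_outcome("door_closed", "push_handle", succeeded=True)
    print(act("door_closed", default_action="wait"))  # -> "push_handle"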

We know that the brain contains billions of neurons, and that we think and learn by establishing electrical connections between different neurons. What we don’t know is exactly how these connections combine to produce higher reasoning. The complex circuitry seems to be beyond understanding.

AI falls into two broad categories. Weak, or narrow, AI is a system designed and trained for one specific task. Virtual personal assistants are a form of weak AI.

Artificial general intelligence is an AI system with human-like cognitive abilities, so that the robot has sufficient intelligence to find a solution when given an unfamiliar task.

One way to train AI is reinforcement learning, an approach modeled on how animals learn. Using it, a robot can figure out how to navigate a maze by trial and error and then associate the positive outcome with the actions that led up to it. Combining trial-and-error methods with large neural networks delivers the power required to make this work on complex problems.
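
The maze example can be sketched with tabular Q-learning, one simple form of trial-and-error reinforcement learning. The tiny one-dimensional maze, reward values, and learning constants below are invented for illustration.

    import random

    N_STATES = 5          # positions 0..4; position 4 is the goal
    ACTIONS = [-1, +1]    # step left or step right
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(200):
        state = 0
        while state != N_STATES - 1:
            # explore sometimes, otherwise exploit the best known action
            if random.random() < EPS:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # associate the outcome with the action that led up to it
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # after training, the learned policy is typically to step right from every position
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])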

Neural networks have made great progress: robots can now recognize images and speech at a level comparable to humans, and they can understand natural language with good accuracy. Everyday tasks are still difficult to automate, but Generative Adversarial Networks (GANs) are helping to make such tasks possible.

Consider an animal trying to catch its prey. It tries a maneuver; if the attack fails, it works out what it did wrong and what the quarry did right, and then considers what it could have done to win. The animal repeats this over and over until it catches its prey. The same concept can be incorporated into AI.

A GAN has two main components: a generator neural network and a discriminator neural network. The generator takes random input and tries to produce a sample of data. The discriminator takes input either from the real data or from the generator and tries to predict whether that input is real or generated. This sets up a contest between the generator and the discriminator.
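
Sketched in code, the contest looks roughly like the following. PyTorch is an assumption (the article names no framework), and the GAN here learns a toy one-dimensional distribution instead of images to keep the example short.

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 8, 1
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, data_dim) * 0.5 + 3.0      # "real" data drawn from N(3, 0.5)
        fake = G(torch.randn(64, latent_dim))             # generator turns random input into samples

        # discriminator: label real samples 1 and generated samples 0
        opt_d.zero_grad()
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        opt_d.step()

        # generator: try to make the discriminator call its samples real
        opt_g.zero_grad()
        g_loss = bce(D(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

    print(G(torch.randn(1000, latent_dim)).mean().item())  # should drift toward ~3.0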

Stability is the biggest obstacle to training a GAN. If the discriminator is more powerful than its generator counterpart, the generator will fail to train effectively. Conversely, if the discriminator is too forgiving, it will accept any generated image, and the GAN is useless.
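
One hand-rolled heuristic for that balance problem is to monitor the discriminator's accuracy and skip its update when it is already winning, giving the generator a chance to catch up. The 0.8 threshold and the accuracy() helper below are assumptions rather than a standard recipe, and the commented lines refer to the variables in the previous sketch.

    def accuracy(d_real_scores, d_fake_scores):
        """Average of: fraction of real samples scored above 0.5, and fake samples scored below 0.5."""
        correct = (d_real_scores > 0.5).float().mean() + (d_fake_scores < 0.5).float().mean()
        return correct.item() / 2.0

    # inside the training loop of the previous sketch:
    #     if accuracy(D(real), D(fake.detach())) < 0.8:
    #         opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    #     # otherwise, train only the generator this step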

Another approach to AI is imitation learning, which is related to observational learning, a behavior exhibited by infants and toddlers. Imitation learning complements reinforcement learning (the challenge of getting a robot to act in the world so as to maximize its rewards) by letting the robot learn directly from demonstrations. It has become essential in robotics, where mobile operation in settings such as construction, agriculture, search and rescue, and the military makes it hard to program robotic solutions by hand.
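
The simplest form of imitation learning, behavioral cloning, treats demonstrations as a supervised dataset: the robot learns to map observed states to the actions the demonstrator took in those states. The tiny network and synthetic demonstration data below are assumptions made only for illustration.

    import torch
    import torch.nn as nn

    # hypothetical demonstrations: state = 4 sensor readings, action = 2 motor commands
    demo_states = torch.randn(500, 4)
    demo_actions = demo_states @ torch.tensor([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2], [0.2, 0.1]])

    policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for epoch in range(300):
        opt.zero_grad()
        loss = loss_fn(policy(demo_states), demo_actions)  # imitate the demonstrator's actions
        loss.backward()
        opt.step()

    print(policy(torch.randn(1, 4)))  # the trained policy now mimics the demonstrations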

Tesla CEO Elon Musk's $1 billion non-profit, OpenAI, has unveiled a new program to train robots to perform a specific task after watching a person demonstrate it just once. The artificial intelligence research company developed an algorithm called one-shot imitation learning that lets researchers communicate a task to an AI by first performing it in virtual reality; the robot is then taught to duplicate the physical action.

A robotic vision network uses information from a camera to interpret the environment and determine the positions of objects. The network is trained on hundreds of simulated images with different lighting, textures, and objects. After training, it can find a set of blocks in the physical world.
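
That training strategy is often called domain randomization: many simulated scenes are generated with randomized lighting, textures, and object positions so that the network generalizes to real camera images. The sketch below only builds randomized scene parameters; render_scene() is a hypothetical stand-in for a simulator, not a real API.

    import random

    def random_scene_params():
        """Randomize the properties the article mentions: lighting, textures, objects."""
        return {
            "light_intensity": random.uniform(0.2, 1.0),
            "light_angle_deg": random.uniform(0, 360),
            "table_texture": random.choice(["wood", "metal", "cloth"]),
            "block_positions": [(random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3))
                                for _ in range(random.randint(1, 5))],
        }

    def render_scene(params):
        """Hypothetical: call a simulator here and return (image, block_positions)."""
        raise NotImplementedError

    training_scenes = [random_scene_params() for _ in range(500)]
    # images, labels = zip(*(render_scene(p) for p in training_scenes))
    # ...then train the vision network on (images, labels) as usual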

A second network, the imitation system, then takes over. It observes the demonstrated task and deduces the intent of the action and the steps a human would have taken in the same circumstances to complete it. After learning from the demonstration, the robot stacks a real set of blocks even though the blocks are positioned differently than in the demo.

Scientists are seeking new ways to structure networks so that robots can learn a specific task. For a neural network, function is structure. That structure is not hard-coded; it emerges from simple computational units, initially connected between inputs and outputs, that can modify their connections. It is by modifying the overall structure of the network that it learns a specific function.

AI researchers and cognitive scientists have a tangible definition of transfer learning: it is the process that allows an AI system to use the knowledge acquired in one task to perform another task, provided the two tasks share a common structure. Cognitive science distinguishes near and far transfer, depending on how much the two tasks appear to differ. But from a broader perspective, such as a noisy and complex environment, all learning is a form of transfer learning, and the difference between very near and very far transfer is only a matter of shared information: a matter of scale, not of nature.
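
A small sketch of that idea in code: a network trained on one task keeps its feature layers, and only a new output layer is trained for a second, related task. The layer sizes and toy data below are assumptions, and PyTorch is again assumed.

    import torch
    import torch.nn as nn

    features = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
    head_a = nn.Linear(32, 3)   # task A: 3-way classification
    # ...suppose features and head_a have already been trained on task A...

    # task B shares structure with task A, so reuse the feature layers,
    # freeze them, and train only a new output head
    for p in features.parameters():
        p.requires_grad = False
    head_b = nn.Linear(32, 5)   # task B: 5-way classification

    opt = torch.optim.Adam(head_b.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x_b, y_b = torch.randn(200, 10), torch.randint(0, 5, (200,))

    for epoch in range(50):
        opt.zero_grad()
        loss = loss_fn(head_b(features(x_b)), y_b)  # knowledge from task A is reused via the shared features
        loss.backward()
        opt.step()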

In the future, humans might be integrated with machines by loading their minds into sturdy robot bodies and living for thousands of years. Science fiction comes first; then real life follows.

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
Len Calderone - Contributing Author

Len contributes to this publication on a regular basis. Past articles can be found with an Article Search. He also writes short stories that always have a surprise ending, and he has written a book on wedding photography on a budget. These can be found at http://www.smashwords.com/profile/view/Megalen
