
How Will Robots Learn?

Len Calderone for | RoboticsTomorrow

A truly functional robot must have the ability to learn on its own from interactions with the physical and social environment. It should not rely on human programming but be trainable.

 

It’s well known that a robot can work on an assembly line, but can a robot ever be intelligent? Artificial intelligence (AI) is hard to define. Ultimate AI would be a recreation of the human thought process. This means that a robot would be able to learn, reason, use language, and formulate original ideas.

The basic idea of AI problem-solving is very simple, but implementing it is not. An AI robot collects information about a situation through its sensors. The computer compares this information to stored data and decides what the information suggests. It then runs through various possible actions and predicts which action will be most successful based on the collected information.

Modern robots can learn in a limited capacity. A learning robot recognizes that a certain action accomplished an anticipated result. The robot stores this information and attempts the successful action the next time it confronts the same situation.
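The loop described above — try an action, check the result, store the success, reuse it — can be sketched in a few lines. This is a minimal illustration, not a real robot controller; the "door/drawer" situations and the `attempt` environment are hypothetical stand-ins for sensor input and the physical world.

```python
import random

class LearningRobot:
    """Sketch of the learn-by-outcome loop: remember which action
    accomplished the anticipated result in a situation, and reuse
    it the next time that situation comes up."""

    def __init__(self, actions):
        self.actions = actions
        self.memory = {}  # situation -> action known to succeed

    def choose(self, situation):
        # Reuse a remembered success; otherwise explore at random.
        if situation in self.memory:
            return self.memory[situation]
        return random.choice(self.actions)

    def observe(self, situation, action, succeeded):
        # Store the action that produced the anticipated result.
        if succeeded:
            self.memory[situation] = action

# Hypothetical environment: doors open when pushed, drawers when pulled.
def attempt(situation, action):
    return (situation, action) in {("door", "push"), ("drawer", "pull")}

robot = LearningRobot(["push", "pull"])
for situation in ("door", "drawer"):
    for action in robot.actions:  # trial and error over the options
        robot.observe(situation, action, attempt(situation, action))

print(robot.choose("door"), robot.choose("drawer"))  # push pull
```

After the trials, the robot no longer explores in familiar situations; it replays the stored success — exactly the limited learning described above.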

We know that the brain contains billions of neurons, and that we think and learn by establishing electrical connections between different neurons. What we don’t know is exactly how these connections combine to produce higher reasoning. The complex circuitry seems to be beyond understanding.

AI can be categorized in two ways. A weak or narrow AI is a system that is designed and trained for a certain task. Virtual personal assistants are a form of weak AI.

Artificial general intelligence is an AI system with human cognitive facilities so that the robot has sufficient intelligence to find a solution when given an unfamiliar task.

 

Training AI often uses reinforcement learning, an approach modeled on animal behavior. Using it, a robot can learn to navigate a maze by trial and error and then associate the positive outcome with the actions that led up to it. Combining trial-and-error methods with large neural networks delivers the power required to make this work on complex problems.
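The maze example above can be made concrete with tabular Q-learning, a standard reinforcement-learning algorithm. The "maze" here is a deliberately tiny one-dimensional corridor invented for illustration: five states, a goal at one end, and two actions (step left, step right). The credit-assignment step — associating the positive outcome with the actions that led up to it — is the Q-value update.

```python
import random

random.seed(0)

# A tiny corridor "maze": states 0..4, goal (positive outcome) at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# Q-table: expected value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):  # trial-and-error episodes
    s = 0
    while s != GOAL:
        # Mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Credit assignment: propagate the outcome back to this action.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
print(policy[:4])  # learned policy steps right, toward the goal
```

After a few hundred episodes the reward at the goal has propagated back through the table, and the greedy policy heads toward the goal from every state.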

Neural networks have made great progress: robots can now recognize images and speech at near-human levels, and they can understand natural language with reasonable accuracy. Everyday tasks remain difficult to automate, but Generative Adversarial Networks (GANs) are helping make them possible.

If an animal wants to catch its prey, it tries a maneuver. If it attacks and loses, it explores what it did wrong and what the quarry did right, then considers what it could have done to win. The animal repeats this over and over until it catches its prey. This concept can be incorporated into AI.

 

A GAN has two main components: a generator neural network and a discriminator neural network. The generator takes random input and tries to generate a sample of data. The discriminator takes input either from the real data or from the generator and tries to predict whether the input is real or generated. This sets up a contest between generator and discriminator.
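The generator-versus-discriminator contest can be shown in miniature. The sketch below is a deliberately stripped-down, hypothetical GAN: the "real data" is just numbers drawn near 4 (standing in for images), the generator is a single shifted-noise parameter, and the discriminator is a one-feature logistic classifier, with gradients written out by hand. The alternating update loop is the point, not the model sizes.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples centered at 4 (a stand-in for real images).
def real_sample():
    return random.gauss(4.0, 1.0)

theta = 0.0       # generator: G(z) = theta + z, with noise z
a, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(a*x + b)

lr, batch = 0.05, 64
for _ in range(3000):
    reals = [real_sample() for _ in range(batch)]
    fakes = [theta + random.gauss(0.0, 1.0) for _ in range(batch)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    da = db = 0.0
    for x in reals:
        d = sigmoid(a * x + b)
        da += (1 - d) * x
        db += (1 - d)
    for x in fakes:
        d = sigmoid(a * x + b)
        da -= d * x
        db -= d
    a += lr * da / batch
    b += lr * db / batch

    # Generator step: shift theta so the fakes fool the discriminator
    # (non-saturating loss: increase log D(G(z))).
    dtheta = sum((1 - sigmoid(a * x + b)) * a for x in fakes)
    theta += lr * dtheta / batch

print(round(theta, 2))  # should settle near 4, the mean of the real data
```

The contest is visible in the two opposing updates: the discriminator's step lowers its score on fakes, while the generator's step chases whatever the discriminator currently rewards, dragging the fake distribution toward the real one.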

Stability is the biggest obstacle to training a GAN. If the discriminator is more powerful than its generator counterpart, the generator will fail to train effectively. Conversely, if the discriminator is too forgiving, it will let any generated image pass, and the GAN is useless.

Another approach to AI is imitation learning, which is related to observational learning, a behavior exhibited by infants and toddlers. It is closely related to reinforcement learning, the challenge of getting a robot to act in the world so as to maximize its rewards. Imitation learning has become essential in robotics, where the demands of mobility in settings like construction, agriculture, search and rescue, and military operations make it challenging to program robotic solutions manually.
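One simple form of imitation learning is behavioral cloning: record (state, action) pairs from a demonstrator, then fit a policy to copy them. The sketch below is a toy illustration on an invented one-dimensional "lane keeping" task — the `demonstrator` expert and the nearest-neighbor policy are assumptions for the example, not a real robotic system.

```python
# Hypothetical expert: keeps the robot near position 0 by steering
# against its current offset.
def demonstrator(position):
    return "left" if position > 0 else "right"

# Record demonstrations as (state, action) pairs.
demos = [(p / 10.0, demonstrator(p / 10.0)) for p in range(-20, 21)]

def cloned_policy(position):
    # Nearest-neighbor imitation: copy the expert action observed
    # in the most similar recorded state.
    state, action = min(demos, key=lambda sa: abs(sa[0] - position))
    return action

print(cloned_policy(1.37))   # imitates the expert: left
print(cloned_policy(-0.42))  # right
```

Note that the cloned policy never receives a reward signal; unlike reinforcement learning, it learns purely from watching, which is what makes imitation attractive when hand-programming or reward design is impractical.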

 

Tesla CEO Elon Musk's $1 billion non-profit, OpenAI, has unveiled a new program to train robots to perform a specific task after watching a person demonstrate it just once. The artificial intelligence research company developed a new algorithm called one-shot imitation learning that lets researchers communicate a task to an AI by performing it first in virtual reality; the robot is then taught to duplicate the physical action.

A robotic vision network uses information from a camera to interpret the environment and determine the positions of objects. This network is trained on hundreds of simulated images with different lighting, textures, and objects. After training, the network can find a set of blocks in the physical world.

A second robotic imitation network then takes over: it observes the demonstrated task and deduces the intent of the action and the steps a human would take in the same circumstances to complete it. After learning from the demonstration, the robot stacks a real set of blocks even when the blocks are positioned differently than in the demo.

Scientists are seeking new ways to structure networks so robots can learn specific tasks. For a neural network, function is structure. This structure is not hard-coded; it is the result of atomic computational units, initially connected between inputs and outputs, that are able to modify their structure and connections. It is by modifying the overall structure of the network that it learns a specific function.

AI researchers and cognitive scientists have a tangible definition of transfer learning: it is the process that allows an AI system to use the knowledge acquired on one task to perform another task that shares a common structure. Cognitive science distinguishes near and far transfer, depending on how different the two tasks appear. But from a broader perspective, such as a noisy and complex environment, all learning is a form of transfer learning, and the difference between very near and very far transfer is only a matter of shared information: a matter of scale, not of nature.
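The definition above — reusing knowledge from one task on another that shares structure — has a standard concrete form: keep a representation learned on a source task frozen, and retrain only a small task-specific head on the target task. The sketch below is a toy illustration; the `features` function stands in for a representation pretrained on some source task, and the digits-versus-letters target task is invented for the example.

```python
# Shared representation (pretend it was learned on a source task):
# count digit characters and letter characters in a string.
def features(text):
    return [sum(c.isdigit() for c in text), sum(c.isalpha() for c in text)]

def train_head(examples, epochs=20):
    # Transfer step: the features stay frozen; only this small
    # perceptron head is trained on the new task.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in examples:  # label: +1 or -1
            f = features(text)
            if label * (w[0] * f[0] + w[1] * f[1] + b) <= 0:
                w[0] += label * f[0]
                w[1] += label * f[1]
                b += label
    return w, b

# Target task: does the string contain more digits than letters?
target = [("1234", 1), ("abcd", -1), ("12a", 1), ("ab1", -1)]
w, b = train_head(target)

def predict(text):
    f = features(text)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else -1

print(predict("9876"))   # 1  (digit-heavy)
print(predict("hello"))  # -1 (letter-heavy)
```

Because the representation already encodes what both tasks share, the head needs only four labeled examples — the "near transfer" case, where most information carries over and little new learning is required.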

In the future, humans might be integrated with machines, loading their minds into sturdy robots and living for thousands of years. Science fiction comes first; then real life follows.

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow

