Researchers at the University of Birmingham Develop a Way of Teaching Robots to Grasp Objects Without Dropping Them
A way of 'teaching' robots to pick up unfamiliar objects without dropping or breaking them has been developed by researchers at the University of Birmingham. The research paves the way for robots to be used in more flexible ways and in more complex environments.
These could include the manufacturing and packaging industries, where a wide variety of tasks have to be undertaken, and especially where humans and robots need to be able to work together.
It is already fairly commonplace to programme robots to pick up particular objects and move them around - factory production lines are a good example of this. But when those objects vary in size or shape, robots tend to get clumsy.
In the University's School of Computer Science, researchers have produced a solution to this problem. They have designed a way of programming a robotic hand to be able to pick up an object and then use information learned in that first grip to grasp and move a whole range of similar objects.
The researchers taught the robot a specific grasp type - for example, a power grip, in which the whole hand curves around an object, or a pinch grip, which uses two or three fingers. The robot was then able to generalise that grip and adapt it to other objects.
Alta Innovations, the University of Birmingham's technology commercialisation office, is currently looking for partners interested in licensing the technology. The University is already working with several companies which are keen to incorporate the technology into their processes.
"Current robot manipulation relies on the robot knowing the exact shape of the object," explains Jeremy Wyatt, Professor of Robotics and Artificial Intelligence at the University of Birmingham. "If you put that robot into an unstructured environment, for example if it is trying to pick up an object amongst clutter, or an object for which it doesn't already have an exact model, it will struggle.
"The programming we have developed allows the robot to assess the object and generate around 1000 different grasp options in about 5 seconds. That means the robot is able to make choices in real time about the best grasp for the object it has been told to pick up and it doesn't need to be continually retrained each time the object changes."
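The generate-and-rank loop Professor Wyatt describes can be sketched in outline. This is a toy two-dimensional illustration only: the candidate sampler and the quality metric below are hypothetical placeholders standing in for the team's learned models, which are not detailed in this article.

```python
import random
import math

def centroid(points):
    """Mean position of the observed object points."""
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def generate_candidates(object_points, n=1000, seed=0):
    """Sample n candidate grasp poses around the object.
    Illustrative only: a real system samples from learned grasp models."""
    rng = random.Random(seed)
    cx, cy = centroid(object_points)
    candidates = []
    for _ in range(n):
        angle = rng.uniform(0, 2 * math.pi)   # approach direction
        offset = rng.uniform(0.0, 0.05)       # lateral offset, metres
        candidates.append((cx + offset * math.cos(angle),
                           cy + offset * math.sin(angle),
                           angle))
    return candidates

def grasp_score(candidate, object_points):
    """Toy quality metric (hypothetical): prefer grasps near the centroid."""
    cx, cy = centroid(object_points)
    x, y, _ = candidate
    return -math.hypot(x - cx, y - cy)

def best_grasp(object_points, n=1000):
    """Generate ~n candidates, rank them, and return the best one."""
    candidates = generate_candidates(object_points, n)
    return max(candidates, key=lambda c: grasp_score(c, object_points))

# Example: a few observed surface points on an object
points = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)]
grasp = best_grasp(points)
```

The key property the article highlights is that the candidate set is generated per object at run time, so the robot does not need an exact object model or retraining when the object changes.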
The robotic hands used by the team look very similar to human hands, with five jointed fingers. However, the programming would also work with robots that have other types of hand, such as pincer grippers.
Professor Wyatt's research will be presented at the International Conference on Robotics and Automation, organised by the Institute of Electrical and Electronics Engineers (IEEE), in May 2014. It was carried out within the PaCMan (Probabilistic and Compositional Representations for Object Manipulation) Consortium, funded by the European Union. The consortium is led by Birmingham and also includes the Università di Pisa, in Italy, and the Universität Innsbruck, in Austria.
For further information please contact:
Beck Lockwood, Campus PR, email: beck@campuspr.co.uk; tel: 0121 451 1321; mobile: 0778 3802318.