Capturing and processing camera and sensor data, recognizing shapes, and turning that into a set of robotic actions sounds conceptually easy; in practice it is anything but. Amazon challenged the industry to perform a selecting-and-picking task robotically, and 28 teams from around the world rose to the competition.

Amazon Challenges Robotics' Hot Topic: Perception

Frank Tobe | The Robot Report

Perception isn't just about cameras and sensors. Software has to convert the data and infer what it "sees". In the case of the Amazon Picking Challenge, held last week at the IEEE International Conference on Robotics & Automation (ICRA), each team's robot had to pick from a shopping list of consumer items of varying shapes and sizes - from pencils and toys to tennis balls, cookies and cereal boxes - which were haphazardly stored on shelves, and then place the selected items in a bin. Teams could use any robot, mobile or not, and any arm and end-of-arm grasping tool or tools to accomplish the task.

Robots rely on sensors to identify and locate objects, and those sensors are easily confused by plastic packaging within the shelf or storage area. Rodney Brooks, of iRobot, MIT and Rethink Robotics fame, often describes an industry-wide aspirational goal for perception in robotics: "If we were only able to provide the visual capabilities of a 2-year-old child, robots would quickly get a lot better." That is what this contest is all about.

The software first has to identify the item to pick and then figure out the best way to grasp it and move it out of the storage area. Amazon, with its acquisition of Kiva Systems, has mastered bringing goods to the picker/packer and now wants to automate the remaining process of picking the correct goods from the shelves and placing them in the packing box - hence its Amazon Picking Challenge.
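The identify-then-grasp pipeline can be illustrated with a toy sketch. None of this is any team's actual code; the data class, the scoring weights, and the occlusion heuristic are all hypothetical, meant only to show how a picker might rank wanted items by how likely they are to be grasped cleanly.

```python
# Illustrative sketch only: identify the wanted items on the shelf,
# then choose the one the robot is most likely to grasp cleanly.
# All names and scoring heuristics here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # recognized item name
    confidence: float   # classifier confidence, 0..1
    occlusion: float    # fraction of the item hidden by packaging/neighbors

def choose_pick(detections, shopping_list):
    """Return the wanted detection with the best visibility-weighted score."""
    candidates = [d for d in detections if d.label in shopping_list]
    if not candidates:
        return None
    # Prefer confident detections of mostly-visible items.
    return max(candidates, key=lambda d: d.confidence * (1.0 - d.occlusion))

shelf = [
    Detection("tennis_balls", 0.92, 0.1),
    Detection("cereal_box",   0.88, 0.6),   # half-hidden behind packaging
    Detection("pencils",      0.40, 0.0),   # not on the shopping list
]
best = choose_pick(shelf, {"tennis_balls", "cereal_box"})
```

In a real entry, the detections would come from a trained vision system and the score would feed a motion planner; the point of the sketch is only the two-stage structure the article describes.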

The top three finishers were Technical University (TU) Berlin with 148 points; MIT in 2nd place with 88 points; and a joint Oakland University/Dataspeed team in 3rd place with 35 points. Teams were scored on how many items were correctly selected, picked and placed.

Many commercial companies with proprietary software for exactly this type of application chose not to enter because the terms of the challenge required that the software be open sourced. These include Tennessee-based Universal Robotics with its Neocortex Vision System, Silicon Valley startup Fetch Robotics, which was premiering its new Fetch and Freight system at the same ICRA conference, and earlier-stage startups such as Harvard spin-out RightHand Robotics.

The winning group, Team RBO from TU Berlin, built a mobile manipulator for the contest. They wrote their own vision system software and will soon be working on a paper on the subject. They chose a Barrett WAM arm because it offered the best combination of responsiveness, geometry, agility, dexterity and backdrivability to work with their end-of-arm tool: a suction cup powered by an ordinary vacuum cleaner. For a base, they upgraded an old Nomadic Technologies mobile platform to fit the needs of the contest. [Nomadic no longer exists; it was acquired by 3Com in 2000.] [TU photo at right: the robot's camera view and the random placement of items in cubby holes.]

Noriko Takiguchi, a Japanese reporter for RoboNews.net who was at the contest, observed the TU team and said that their approach combined torque-based and position control of the arm and the mobile base. The resulting torque control gave them flexibility in choosing where to place the suction cup and how much suction to apply.
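Why torque control matters here can be sketched in a few lines. The idea (gains and limits below are made up, not Team RBO's values) is that a pure position controller would try to drive through the shelf if its target is slightly wrong, while an impedance-style controller caps the contact force so the suction cup presses gently against the item.

```python
# Hypothetical sketch of impedance-style contact control: a spring law
# toward the target position, with the commanded force clamped to a
# limit so the end effector presses gently instead of ramming the shelf.
def contact_force(stiffness, target, actual, force_limit):
    """Spring-like force command, saturated at +/- force_limit (newtons)."""
    f = stiffness * (target - actual)
    return max(-force_limit, min(force_limit, f))

# 5 cm position error at stiffness 100 N/m would ask for 5 N;
# the cap keeps the contact gentle at 2 N.
gentle = contact_force(100.0, 0.05, 0.0, 2.0)
# A 1 cm error stays inside the limit and behaves like plain position control.
nominal = contact_force(100.0, 0.01, 0.0, 2.0)
```

This is the flexibility the reporter describes: the same controller lets the team trade off where the cup lands against how hard it pushes.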

Team RBO received a 1st place prize of $20,000 plus travel costs for the equipment and team members.

About Frank Tobe

Frank Tobe is the owner and publisher of The Robot Report. After selling his business and retiring from 25+ years in computer-based direct marketing, materials and consulting for the Democratic National Committee and for major presidential, senatorial, congressional and mayoral campaigns and initiatives across the U.S., Canada and internationally, he has energetically pursued a new career researching and investing in robotics.

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow

