
Robot plays "Rock, Paper, Scissors" - Part 2/3

Case Study from IDS Imaging

Read Part 1/3 here.

 

A vision app as a simulated computer opponent

How do you get a robot to play "Rock, Paper, Scissors"? Sebastian Trella - robotics fan and blogger - has now come a decisive step closer to solving the puzzle. On the camera side, he used IDS NXT, a complete system for working with intelligent cameras. It covers the entire workflow, from capturing and labelling training images to training the networks, creating apps for evaluation and actually running applications. In part 1 of our follow-up story, he had already implemented gesture recognition using AI-based image processing and trained the neural networks accordingly. The recognized gestures could then be processed further by a specially created vision app.

 

Further processing of the analyzed image data

The app forms the second phase of the project and - generally speaking - is intended to enable players to play against a simulated computer opponent. It builds on the trained AI and processes its results further. It also provides the AI opponent, which randomly "plays" one of the three predefined hand gestures and compares it with the player's. The app then decides who has won or whether the round is a draw. The vision app is therefore the interface to the player on the computer monitor, while the camera is the interface for capturing the player's gestures.
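The game logic itself is compact: pick a random gesture, compare it with the recognized one, then declare a winner or a draw. The following Python sketch illustrates that decision step; the function name decide_round and the label strings are illustrative assumptions, not part of the actual IDS NXT vision app.

import random

GESTURES = ("rock", "paper", "scissors")
# which gesture beats which
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def decide_round(player_gesture):
    """Pick a random computer gesture and decide the round."""
    computer_gesture = random.choice(GESTURES)
    if player_gesture == computer_gesture:
        result = "draw"
    elif BEATS[player_gesture] == computer_gesture:
        result = "player wins"
    else:
        result = "computer wins"
    return computer_gesture, result

# Example: in the real app, the label would come from the camera's gesture classification
computer, outcome = decide_round("rock")
print(f"Computer played {computer}: {outcome}")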

Like the training of the neural networks, the app was created in the cloud-based AI vision studio IDS lighthouse. The block-based code editor, which resembles the free graphical programming environment Scratch, made it easy for Sebastian Trella: "I was already familiar with vision app programming with Scratch/Blockly from LEGO® MINDSTORMS® and various other robotics products and I found my way around immediately. The programming interface is practically identical and I was therefore already familiar with the required way of thinking. Whether I'm developing an AI-supported vision app on an IDS NXT camera or a motion sequence for a robot, the programming works in exactly the same way."

 

"Fine-tuning" directly on the camera

Displaying text on images, however, was new to Trella: "The robots I have programmed so far have only ever delivered output via a console. Integrating the output of the vision app directly into the camera image was a new approach for me." He was particularly surprised by the ability to edit the vision app both in the cloud and on the camera itself - but also by how convenient it was to develop on the device and how well the camera hardware performed: "Small changes to the program code can be tested directly on the camera without having to recompile everything in the cloud. The programming environment runs very smoothly and stably." He still sees room for improvement when debugging errors on the embedded device, especially with regard to synchronizing the embedded device and the cloud system after adjustments have been made on the camera.
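Conceptually, overlaying the result means drawing text onto the current frame before it is displayed. The IDS NXT vision app does this with its own building blocks; the short OpenCV sketch below is only an assumed stand-in to illustrate the idea, not the actual app code.

import cv2

def annotate_frame(frame, player, computer, result):
    # Draw the round result directly onto the camera image
    text = f"You: {player} | Camera: {computer} -> {result}"
    cv2.putText(frame, text, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2, cv2.LINE_AA)
    return frame

# Example with a webcam as a stand-in for the industrial camera
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cv2.imwrite("annotated.png", annotate_frame(frame, "rock", "paper", "computer wins"))
cap.release()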

Trella discovered a real plus point, which he called "great", in the camera's web interface. This is where the Swagger UI can be found - a collection of open-source tools for documenting and testing the integrated REST interface - including examples. That made his work easier. In this context, he also formulated some suggestions for future developments of the IDS NXT system: "It would be great to have switchable modules for communication with third-party robot systems so that the robot arm can be "co-programmed" directly from the vision app programming environment. This would save cabling between the robot and the camera and simplify development. Apart from that, I missed being able to import image files directly via the programming environment - so far, this has only been possible via FTP. In my app, for example, I would have displayed a picture of a trophy for the winner."
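A REST interface documented via Swagger UI can be exercised with any HTTP client. The Python snippet below sketches such a request; the address, credentials and endpoint path are placeholders assumed purely for illustration - the real routes and payloads are the ones listed in the camera's Swagger UI.

import requests
from requests.auth import HTTPBasicAuth

# Address, credentials and endpoint path are assumed placeholders,
# not the documented IDS NXT REST API - check the Swagger UI for the real routes.
CAMERA = "http://192.168.0.10"
AUTH = HTTPBasicAuth("admin", "password")

# Query a (hypothetical) endpoint exposing the latest classification result
response = requests.get(f"{CAMERA}/vapps/rps/result", auth=AUTH, timeout=5)
response.raise_for_status()
print(response.json())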

 

What's next?

"Building the vision app was great fun and I would like to thank you very much for the great opportunity to "play" with such interesting hardware," says Sebastian Trella. The next step is to take a closer look at the options for communicating between the vision app and the robot arm and try them out. The virtual computer opponent should not only display his gesture on the screen - i.e. in the camera image - but also in real life through the robot arm. This step is also the last step towards the finished "rock, paper, scissors" game: The robot is brought to life.

 

To be continued...

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
IDS Imaging Development Systems Inc.

IDS Imaging Development Systems Inc.

IDS is a leading manufacturer of industrial cameras "Made in Germany" with USB or GigE interfaces. Equipped with state-of-the-art CMOS sensors, the extensive camera portfolio ranges from low-cost project cameras to small, powerful models with PoE functionality and robust cameras with housings that fulfill the requirements of protection classes IP65/67. For quick, easy and precise 3D machine vision tasks, IDS offers the Ensenso series. With the novel vision app-based sensors and cameras of IDS NXT, the company opens up a new dimension in image processing. Whether in an industrial or non-industrial setting: IDS cameras and sensors assist companies worldwide in optimizing processes, ensuring quality, driving research, conserving raw materials, and serving people. They provide reliability, efficiency and flexibility for your application.



Featured Product

REIKU's Cable Saver™ - The Most Versatile Modular Robotic Cable Management Solution

REIKU's Cable Saver™ - The Most Versatile Modular Robotic Cable Management Solution

REIKU's Cable Saver™ Solution eliminates downtime, loss of revenue, expensive cable and hose replacement costs, maintenance labor costs. It's available in three sizes 36, 52 and 70 mm. All of the robots cables and hoses are protected when routed through the Cable Saver™ corrugated tubing.The Cable Saver™ uses a spring retraction system housed inside the Energy Tube™ to keep this service loop out of harms way in safe location at the rear of the Robot when not required. The Cable Saver™ is a COMPLETE solution for any make or model of robot. It installs quickly-on either side of the robot and has been tested to resist over 15 million repetitive cycles. REIKU is committed to providing the most modular, effective options for ensuring your robotic components operate without downtime due to cable management. www.CableSaver.com