The app forms the second phase of the project: it lets players compete against a simulated computer opponent, building on the trained AI and processing its results further.

Robot plays "Rock, Paper, Scissors" - Part 2/3

Case Study from IDS Imaging

Read Part 1/3 here.

 

A vision app as a simulated computer opponent

How do you get a robot to play "Rock, Paper, Scissors"? Sebastian Trella - robotics fan and blogger - has now come a decisive step closer to solving the puzzle. On the camera side, he used IDS NXT, a complete system for working with intelligent cameras. The system covers the entire workflow, from capturing and labelling training images to training the networks, creating apps for evaluation and running the finished application. In part 1 of our story, he implemented gesture recognition using AI-based image processing and trained the neural networks. The further processing of the recognized gestures is handled by a specially created vision app.

 

Further processing of the analyzed image data

The app forms the second phase of the project: it enables players to compete against a simulated computer opponent, building on the trained AI and processing its results. It also provides the AI opponent, which randomly "outputs" one of the three predefined hand gestures and compares it with the player's. The app then decides who has won or whether the round is a draw. The vision app is therefore the interface to the player on the computer monitor, while the camera is the interface for capturing the player's gestures.
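The decision logic described above - pick a random gesture, compare it with the player's, declare a winner or a draw - can be sketched in a few lines. This is a minimal illustration of the game rules, not the actual block-based vision app code; the function and constant names are my own:

```python
import random

GESTURES = ("rock", "paper", "scissors")
# Each gesture beats exactly one other: rock blunts scissors,
# paper covers rock, scissors cut paper.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def computer_move() -> str:
    """Randomly 'output' one of the three predefined gestures,
    like the simulated AI opponent in the vision app."""
    return random.choice(GESTURES)

def decide(player: str, computer: str) -> str:
    """Compare both gestures and return 'player', 'computer', or 'draw'."""
    if player == computer:
        return "draw"
    return "player" if BEATS[player] == computer else "computer"
```

In the real app, `player` would come from the AI gesture classifier running on the camera, and the result would be drawn into the camera image rather than returned as a string.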

The app was created in the cloud-based AI vision studio IDS lighthouse, where the neural networks had also been trained. The block-based code editor, which resembles the free graphical programming environment Scratch, made things easy for Sebastian Trella: "I was already familiar with block-based programming with Scratch/Blockly from LEGO® MINDSTORMS® and various other robotics products, so I found my way around immediately. The programming interface is practically identical, and I was therefore already familiar with the required way of thinking. Whether I'm developing an AI-supported vision app on an IDS NXT camera or a motion sequence for a robot, the programming works in exactly the same way."

 

"Fine-tuning" directly on the camera

However, displaying text on images was new to Trella: "The robots I have programmed so far have only ever delivered output via a console. Integrating the output of the vision app directly into the camera image was a new approach for me." He was particularly surprised that the vision app can be edited both in the cloud and on the camera itself - and ultimately also by how convenient it was to develop on the device and how well the camera hardware performed: "Small changes to the program code can be tested directly on the camera without having to recompile everything in the cloud. The programming environment runs very smoothly and stably." However, he still sees room for improvement in debugging on the embedded device - especially regarding the synchronization between the embedded device and the cloud system after adjustments are made on the camera.

Trella discovered a real plus point, which he called "great", in the camera's web interface. This is where the Swagger UI can be found - a collection of open-source tools for documenting and testing the integrated REST interface - including examples, which made his work easier. In this context, he also formulated some suggestions for future development of the IDS NXT system: "It would be great to have switchable modules for communication with third-party robot systems, so that the robot arm could be 'co-programmed' directly from the vision app programming environment. This would save cabling between the robot and camera and simplify development. Apart from that, I missed being able to import image files directly via the programming environment - so far, this is only possible via FTP. In my app, for example, I would then have displayed a picture of a trophy for the winner."

 

What's next?

"Building the vision app was great fun, and I would like to say a big thank you for the opportunity to 'play' with such interesting hardware," says Sebastian Trella. The next step is to take a closer look at the options for communication between the vision app and the robot arm and try them out. The virtual computer opponent should then display its gesture not only on the screen - i.e. in the camera image - but also in real life through the robot arm. That is also the final step towards the finished "Rock, Paper, Scissors" game: the robot is brought to life.

 

To be continued...

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow
IDS Imaging Development Systems Inc.

IDS is a leading manufacturer of industrial cameras "Made in Germany" with USB or GigE interfaces. Equipped with state-of-the-art CMOS sensors, the extensive camera portfolio ranges from low-cost project cameras to small, powerful models with PoE functionality and robust cameras with housings that meet the requirements of protection classes IP65/67. For quick, easy and precise 3D machine vision tasks, IDS offers the Ensenso series. With the novel vision-app-based sensors and cameras of IDS NXT, the company opens up a new dimension in image processing. Whether in an industrial or non-industrial setting, IDS cameras and sensors assist companies worldwide in optimizing processes, ensuring quality, driving research, conserving raw materials and serving people. They provide reliability, efficiency and flexibility for your application.


