
Using FPGA Supercomputers to Build a High Precision Robot

Kent Gilson | Haddington Dynamics Inc.

Tell us about Haddington Dynamics Inc.

Haddington is the classic neighborhood garage start-up, literally. The original "aha" moment was using FPGA supercomputers to build high-precision, signal-processing-based robots. We went through the typical start-up phases: excitement, an injection of a little money, prototype development, progress, fear and greed, and so on. Eventually we ran a Kickstarter, and after reaching our target goal we settled down to fulfilling our robot kit orders. A year and a half later we are going to automatica in Munich to show the world that high-precision parts are not needed to build precision robots. Removing the need for precision parts greatly reduces the cost of the robot, allowing us to sell a precise haptic robot for under $5,000. We can do this because of our experience and expertise in FPGA supercomputers, which is our background. We've been making award-winning FPGA supercomputers since the late '80s. In fact, we invented the concept and coined the term reconfigurable computing.

 

What are the key differentiators between Dexter/HD and other robots/cobots in the market today?

We solve the motion platform problem with a combination of innovative techniques. These include: an extremely accurate sensor system, high speed computing, optimized power to weight ratio, and 3D printed parts.

The robot actually has an ultra-high-resolution sense of touch, as good as, if not better than, the human sense of touch. That opens an entirely new range of possibilities, like trainability. It is also such high fidelity that it approaches human dexterity in speed, precision, and force, which is where Dexter got its name: human DEXTERity. Because we can capture all that information at very high resolution, we can train Dexter, rather than just program it, to perform to the best of its capability. We've solved the programming problem.

 

Given that Dexter/HD is still fairly new to the market, how has your product been received and how do you feel about the future of Dexter/HD?

We were actually concerned about that when we launched the Kickstarter. We didn't know how we would be received. We hoped people would understand what we were doing and get involved, and not think it was too early. When we launched, though, we were amazed at the quality of the people who joined up and supported us. It was a real leap of faith: back a $2,000 robot kit and hope this company can deliver on the huge promise of a robot to help cure scarcity. The productive collaboration with our Kickstarter backers started even before we reached the Kickstarter goal; people were reaching out wanting to set up deals. We now have manufacturing and license deals out of it. It's been overwhelming.

 

Why use FPGAs for a control system?

The reason is simple: speed, unlimited speed. FPGAs offer an enormous amount of computational density. People don't realize how much you can get done inside an FPGA in just a few clock cycles. It's gigantic. So we have a control challenge, and we want to leverage that speed to get higher and higher fidelity in our control system. We do it by measuring faster and more precisely, and that becomes our vision into how the robot moves. It's just a simple optical encoder system: we turn a 3D printed code disk into an analog sine/cosine wave to get a circle and do interpolation from it. We record a table and then compare against that table to normalize, at 2 million samples per second. Spread over 10 converters, two at a time, that equals 200,000 samples per second per LED/phototransistor pair, which gives us 100 kHz of frequency bandwidth per axis. From that we get loads of data, and we can then use signal processing to get process gain. All of this is done in real time. FPGAs are completely parallel, which is completely different from a processor where you have one thread or multiple threads running. Everything is happening simultaneously, all the time; you have to be explicit about what is synchronized because it is all happening at once.
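The sine/cosine interpolation step described above can be sketched in a few lines. This is an illustrative model of quadrature interpolation in general, not Haddington's actual FPGA firmware; the function name and normalization convention are assumptions.

```python
import math

def interpolate_angle(sin_v, cos_v, slit_index, slits_per_rev):
    """Estimate shaft angle from one normalized sine/cosine sample pair.

    sin_v, cos_v: analog samples in [-1, 1] from the phototransistor
    pair reading the code disk (after table-based normalization).
    slit_index: coarse count of which slit period we are in.
    slits_per_rev: total slits on the code disk.
    Returns the angle in degrees.
    """
    # Fine phase within the current slit period, recovered from the
    # point's position on the sine/cosine (Lissajous) circle.
    fine = math.atan2(sin_v, cos_v) / (2 * math.pi)
    fine %= 1.0  # fraction of one slit period, in [0, 1)
    return 360.0 * (slit_index + fine) / slits_per_rev
```

The `atan2` recovers phase anywhere on the circle, which is what lets a few hundred physical slits yield thousands of interpolated positions per slit.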

Because of this structure we can add numerous sensors to the FPGA.  It becomes a whole computational and sensor framework for doing any kind of robot brain processing. We’ve really invented the robot brain. Now you can tie that robot brain into lots of actuators and more inputs.

 

How are you getting sub-50-micron repeatability and 5-micron stepping using belts, pulleys, and 3D printed parts?

That's the beauty of this thing: it's all about the measuring system. We have 3D printed code disks with a couple hundred slits on them, and we get 8,000 points of resolution between each of those slits. That gives us an enormous amount of information about the angular position of each joint. Since we measure directly on each of the 5 joints, we use that information to close the servo loops, so we're controlling right at that point. That matters because we don't have transmission error or any of the traditional types of robot error.
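The resolution figures above can be sanity-checked with simple arithmetic. A sketch, assuming exactly 200 slits (the interview only says "a couple hundred") and an assumed arm reach of 0.4 m, which is not a figure from the article:

```python
import math

# Illustrative numbers: "a couple hundred slits" taken as 200,
# with 8,000 interpolated points between slits.
SLITS = 200
POINTS_PER_SLIT = 8_000

counts_per_rev = SLITS * POINTS_PER_SLIT        # 1,600,000 counts/rev
angular_res_deg = 360.0 / counts_per_rev        # degrees per count
angular_res_rad = 2 * math.pi / counts_per_rev  # radians per count

# Linear resolution at the end effector for an assumed reach.
reach_m = 0.4                                   # assumed, not from the article
linear_res_um = angular_res_rad * reach_m * 1e6

print(f"{counts_per_rev} counts/rev")
print(f"{angular_res_deg:.6f} deg per count")
print(f"~{linear_res_um:.2f} micron of arc at a {reach_m} m reach")
```

Under these assumptions a single encoder count corresponds to well under 5 microns at the end effector, which is consistent with the stepping claim in the question.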

There's a whole other set of inventions in the mechanical system as well. To make it 3D printable we had to invent the idea of reinforcing the Z dimension with carbon fiber strakes, and that became the latticework we built everything on top of. So it's really metal to carbon fiber to metal to carbon fiber, all the way through the robot, with the 3D printed parts acting as an external scaffolding system. Using the FPGA, we measure and calibrate out the mechanical errors of the system. We have total visibility into the angular displacement of the robot, and we have it at 100 kHz per axis.

 

Your robot can run on solar power. Tell us how this works.

That's one of the other things we can optimize. Because we don't have to optimize around the mechanical systems, we can optimize power consumption instead. We have been able to put all the mass of the robot down at its base. That acts as a counterbalance, so every joint contributes its portion to the end effector and all the energy of the motors goes into the payload. Because of this we can use very small NEMA 17 stepper motors, running at about 20 watts apiece, to lift 3 kilograms. So 100 watts gives us 3 kilograms of payload, which is quite efficient: a 6 kilogram robot carrying a 3 kilogram payload.
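A back-of-envelope check of the power figures quoted above. The motor count of five matches the five joints mentioned earlier in the interview; the solar panel output density is an outside assumption, not a figure from the article.

```python
# Figures from the interview: NEMA 17 steppers at ~20 W each,
# 100 W total, 3 kg payload. Five motors is inferred from the
# five joints; panel density is assumed.
MOTORS = 5
WATTS_PER_MOTOR = 20
PAYLOAD_KG = 3

total_watts = MOTORS * WATTS_PER_MOTOR        # 100 W peak draw
watts_per_kg = total_watts / PAYLOAD_KG       # ~33 W per kg of payload

PANEL_W_PER_M2 = 200                          # assumed full-sun panel output
panel_area_m2 = total_watts / PANEL_W_PER_M2  # ~0.5 m^2 of panel

print(total_watts, round(watts_per_kg, 1), panel_area_m2)
```

At these numbers, roughly half a square meter of panel in full sun would cover the robot's peak draw, which is what makes the solar claim plausible.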

Tell us about digital compliance and how you are able to achieve it.

Digital compliance is a term we came up with to describe what the high-precision measurement in our motor system makes possible. Most robots, these cobots, have series-elastic components, some kind of spring between the actuator and the end effector, and they measure the deflection of that spring. The problem is you can't have both precision and flexibility in your actuators: to get precision actuators you need very rigid materials, and to get compliance you need very elastic elements. What we call digital compliance comes from resolution: with motion resolution on the order of half the wavelength of blue light from our encoder system, we are able to measure the flexing of the steel in the transmissions themselves. Just the little elasticity already in the transmission system gives us our series-compliant mechanism, and we get 8 bits of resolution throughout that whole transmission.
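Series-elastic sensing of the kind described here reduces to Hooke's law. A minimal sketch, assuming angle measurements on both sides of the transmission and a stiffness constant found by calibration; both are assumptions for illustration, not confirmed Dexter specifics.

```python
def estimate_joint_torque(theta_motor_rad, theta_joint_rad, k_transmission):
    """Infer torque from elastic twist across the transmission.

    The tiny angular deflection between the motor side and the joint
    side of the belt/gear train, multiplied by the transmission
    stiffness (N*m/rad, assumed known from calibration), gives the
    transmitted torque via Hooke's law.
    """
    deflection = theta_motor_rad - theta_joint_rad
    return k_transmission * deflection
```

The point of "digital" compliance is that with fine enough encoders, `deflection` is measurable on an ordinary stiff transmission, so no dedicated spring element is needed.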

 

You claim to be “unhackable.” Tell us more about that claim and your security protocols.

Well, "unhackable" is a bold term. What we have is a security system that creates a ground truth. We have a key store and a PUF (a physically unclonable function) that can uniquely identify the robot, and a way to encrypt that into a piece of hardware that is not even connected to the processor; it's connected through the FPGA. We have a hardware link between the processor and our key store and PUF, and a very secure architecture for creating a ground truth for a device. You can authenticate and rely on the fact that a given signal is coming from this robot. Our robot can never lie to you. It's about trusted systems: you have to have trust as your basic mechanism of interacting, and it's no different with machines.
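The ground-truth idea can be illustrated with a standard message-authentication pattern. In this sketch an HMAC with a fixed secret stands in for the hardware key store and PUF; in the real architecture the per-device key would never be exposed to software, and all names here are illustrative.

```python
import hashlib
import hmac

# A fixed secret stands in for the per-device key that a PUF-backed
# key store would derive from physical device variation (assumption
# for this sketch; a real key never leaves the hardware).
DEVICE_KEY = b"per-device-secret-derived-from-PUF"

def sign_telemetry(payload: bytes) -> bytes:
    """Robot side: tag each outgoing message with proof of origin."""
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def verify_telemetry(payload: bytes, tag: bytes) -> bool:
    """Host side: accept only messages authenticated by this robot."""
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Any tampering with the payload invalidates the tag, which is the sense in which the receiver can "rely on the fact that this signal is coming from this robot."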

 

Where do you see collaborative robots in the next 5-10 years and where does Haddington Dynamics play a role?

This is probably the most exciting time for the whole transition we are going through. There is going to be a whole new definition for cobotics. Around the world, we are interacting with robots and are transmitting our dexterity, craft, and art. It is exciting to be able to reproduce and have our work effort spread around the world. Our technology is going to result in a new capability for people to physically interact.

 

KENT L. GILSON - INVENTOR

Kent Gilson is a prolific inventor, serial entrepreneur, and pioneer in reconfigurable computing (RC). Since programming his first computer game at age 12, Kent has launched eight entrepreneurial ventures, won multiple awards for his innovations, and created products and applications used in numerous industries across the globe. His reconfigurable recording studio, Digital Wings, won Best of Show at the National Association of Music Merchants and received the Xilinx Best Consumer Reconfigurable Computing Product Award. Kent is also the creator of Viva, an object-oriented programming language and operating environment that for the first time harnessed the power of field programmable gate arrays (FPGAs) for general-purpose supercomputing. With Haddington Dynamics and Dexter, Kent brings his globally unique expertise in reconfigurable computing to his passion for robotics, with a commitment to make robots more powerful and empowering than ever for their human users.

 

About Haddington Dynamics

Haddington Dynamics is an advanced research and development company applying the most advanced reconfigurable supercomputing technology to low-cost, ultra-precision motion control for robotics. To cut costs while increasing functionality, modern manufacturing is becoming increasingly automated. This presents the problem of keeping the cost of automated equipment down while improving its precision and flexibility. Making accurate, fast hardware is only half of the story: to get really precise, rapid feedback between vision and touch sensors is necessary, and this feedback must be applied instantly to recomputing real-time motion control. That taxes conventional software architecture, its customization, and its user interface. Haddington Dynamics is uniquely qualified to address these challenges: our expertise in programming FPGAs gives us supercomputer performance without supercomputer costs.

Currently robotic manufacturing gravitates toward two poles: high-cost, complex specialized systems; or low-cost, low-performance systems. Haddington Dynamics was created to empower a growing user/community base with robotic solutions that simultaneously break cost and performance barriers. The company accelerates adoption and collaboration by open-sourcing its core technology.

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow

