From the OpenAI team:
We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results.
OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community. Getting started: If you'd like to dive in right away, you can work through our tutorial... (full intro post)
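The core abstraction Gym standardizes is the agent-environment loop: `reset()` returns an initial observation, and `step(action)` returns an observation, a reward, a done flag, and an info dict. A minimal sketch of that loop, using a hypothetical stub environment (`ToyEnv` is illustrative only, standing in for what `gym.make('CartPole-v0')` would return):

```python
import random

# ToyEnv is a hypothetical stand-in that mimics the Gym environment
# interface; real environments come from gym.make(), e.g.
# gym.make('CartPole-v0').
class ToyEnv:
    def reset(self):
        self.steps_left = 10
        return 0.0                      # initial observation

    def step(self, action):
        self.steps_left -= 1
        observation = random.random()   # next observation
        reward = 1.0                    # constant reward per step
        done = self.steps_left == 0     # episode ends after 10 steps
        return observation, reward, done, {}

def run_episode(env, policy):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

print(run_episode(ToyEnv(), lambda obs: 0))  # prints 10.0
```

Because every environment exposes the same `reset`/`step` interface, the same agent code runs unchanged against robot simulators and Atari games alike, which is what makes cross-algorithm comparison practical.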
Jon Excell for The Engineer: Designed by a team at the Max Planck Institute for Intelligent Systems in Stuttgart, the new device is claimed to have considerable advantages over existing pneumatically-powered soft actuators as it doesn’t require a tether.
The device consists of a dielectric elastomer actuator (DEA): a membrane made of hyperelastic material like a latex balloon, with flexible (or ‘compliant’) electrodes attached to each side.
An electric field between the electrodes regulates the stretching of the membrane: when a voltage is applied, the electrodes attract each other and squeeze the membrane. By attaching multiple such membranes, the place of deformation can be shifted controllably through the system, displacing air between two chambers.
The membrane material has two stable states. In other words, it can have two different volume configurations at a given pressure without the need to minimize the larger volume. Thanks to this bi-stable state, the researchers are able to move air between a more highly inflated chamber and a less inflated one. They do this by applying an electric current to the membrane of the smaller chamber which responds by stretching and sucking air out of the other bubble. Cont'd...
From Vikash Kumar at University of Washington:
Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control - following pre-defined trajectories or planning online with reduced models - are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed "ADROIT" - a ShadowHand skeleton equipped with faster and more compliant actuation... (website)
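The scheme the abstract describes is receding-horizon control: roll a model forward over a short horizon, pick the cheapest action sequence, execute only its first action, then replan. A generic sketch of that loop, where the dynamics, cost, and exhaustive search are toy placeholders, not the authors' full-hand physics model or optimizer:

```python
from itertools import product

# Generic model-predictive control step: evaluate each candidate action
# sequence by rolling the model forward, return the first action of the
# cheapest rollout. All components here are illustrative placeholders.
def mpc_step(state, dynamics, cost, horizon, candidates):
    best_seq, best_cost = None, float('inf')
    for seq in candidates:
        s, total = state, 0.0
        for a in seq[:horizon]:
            s = dynamics(s, a)
            total += cost(s, a)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]  # execute only the first action, then replan

# Toy example: drive a 1-D state toward a target of 5.
dynamics = lambda s, a: s + a
cost = lambda s, a: (s - 5) ** 2
candidates = list(product([-1, 0, 1], repeat=3))
print(mpc_step(0, dynamics, cost, 3, candidates))  # prints 1
```

The exhaustive search over candidate sequences is what the motor synergies mentioned in the abstract address: by restricting optimization to a lower-dimensional action space, replanning stays fast enough to run online.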
MIT News via Larry Hardesty for RoboHub: In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.
The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. Cont'd...
Phys.org: Scientists have built a computer model that shows how bees use vision to detect the movement of the world around them and avoid crashing. This research, published in PLOS Computational Biology, is an important step in understanding how the bee brain processes the visual world and will aid the development of robotics.
The study, led by Alexander Cope and his coauthors at the University of Sheffield, shows how bees estimate the speed of motion, or optic flow, of the visual world around them and use this to control their flight. The model is based on honeybees, as they are excellent navigators and explorers and use vision extensively in these tasks, despite having a brain of only one million neurons (compared with the human brain's 100 billion).
The model shows how bees are capable of navigating complex environments by using a simple extension to the known neural circuits, within the environment of a virtual world. The model then reproduces the detailed behaviour of real bees by using optic flow to fly down a corridor, and also matches up with how their neurons respond. Cont'd...
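The corridor-flying behaviour the model reproduces is commonly explained by optic-flow balancing: translational flow magnitude scales with speed divided by distance, so steering to equalize flow on the two eyes centers the flier between the walls. A sketch of that control rule (the gain and geometry are illustrative, not taken from the published model):

```python
# Optic-flow balancing for corridor centering. Flow magnitude on each
# side is approximately forward speed / wall distance; steering toward
# the side with lower flow moves the agent away from the nearer wall.
def centering_command(speed, dist_left, dist_right, gain=1.0):
    flow_left = speed / dist_left     # translational flow, left eye
    flow_right = speed / dist_right   # translational flow, right eye
    # positive command steers right (away from a closer left wall)
    return gain * (flow_left - flow_right)

print(centering_command(1.0, 0.5, 2.0))  # prints 1.5 (steer right)
print(centering_command(1.0, 1.0, 1.0))  # prints 0.0 (already centered)
```

Note the rule needs no distance measurement at all: flow itself encodes the speed-to-distance ratio, which is why such a small brain can implement it.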
Keith Naughton for Bloomberg Technology: Brian Lesko and Dan Sherman hate the idea of driverless cars, but for very different reasons. Lesko, 46, a business-development executive in Atlanta, doesn’t trust a robot to keep him out of harm’s way. “It scares the bejeebers out of me,” he says.
Sherman, 21, a mechanical-engineering student at the University of Minnesota, Twin Cities, trusts the technology and sees these vehicles eventually taking over the road. But he dreads the change because his passion is working on cars to make them faster.
“It’s something I’ve loved to do my entire life and it’s kind of on its way out,” he says. “That’s the sad truth.”
The driverless revolution is racing forward, as inventors overcome technical challenges such as navigating at night and regulators craft new rules. Yet the rush to robot cars faces a big roadblock: People aren’t ready to give up the wheel. Recent surveys by J.D. Power, consulting company EY, the Texas A&M Transportation Institute, Canadian Automobile Association, researcher Kelley Blue Book and auto supplier Robert Bosch LLC all show that half to three-quarters of respondents don’t want anything to do with these models. Cont'd...
Sam Fleming for Financial Times: When Andy Puzder, chief executive of restaurant chains Carl’s Jr and Hardee’s, said in March that rising employment costs could drive the spread of automation in the fast-food sector, he tapped into a growing anxiety in the US.
From touchscreen ordering systems to burger-flipping robots and self-driving trucks, automation is stalking an increasing number of professions in the country’s service sector, which employs the vast majority of the workforce.
Two-fifths of US employees are in occupations where at least half their time is spent doing activities that could be automated by adapting technology already available, according to research from the McKinsey Global Institute. These include the three biggest occupations in the country: retail salespeople, store cashiers and workers preparing and serving food, collectively totalling well over 10m people.
Yet evidence of human obsolescence is conspicuous by its absence in the US’s economic statistics. The country is in the midst of its longest private-sector hiring spree on record, adding 14.4m jobs over 73 straight months, and productivity grew only 1.4 per cent a year from 2007 to 2014, compared with 2.2 per cent from 1953 to 2007. Those three big occupations all grew 1-3 per cent from 2014 to 2015. Cont'd...
Innovators offered chance to develop their ideas with world leading robotics manufacturer ABB Robotics
Full Press Release: The IdeaHub is once again recruiting robotics and software innovators worldwide to take on the challenge of improving the way we work and interact with the next generation of industrial robots. Working on behalf of ABB Robotics, IdeaHub will help successful applicants pitch their ideas and secure uniquely tailored support packages to maximise their venture's commercial potential, including investment, mentoring and access to cutting-edge hardware.
The IdeaHub is a cross-sector, open innovation platform that connects visionaries worldwide with funding and support from global corporations. In 2015 it ran its first programme for ABB Robotics, attracting over 130 applicants; 12 finalists were selected for a pitch day in London, and 6 entrepreneurs received an offer of support. For 2016 it is partnering with ABB Robotics once again to bring more solutions to three core challenges in the world of collaborative industrial robotics:
1.) Simplicity: How to simplify robotics
2.) Intelligence: How to enable robots to learn and apply that learning
3.) Digitalization: How smart connectivity will enhance digital factories.
Benedict for 3Ders.org: Tech startup ZeroUI, based in San Jose, California, has launched an Indiegogo campaign for Ziro, the “world’s first hand-controlled robotics kit”. The modular kit has been designed to bring 3D printed creations to life, and has already surpassed its $30,000 campaign goal.
It would be fair to say that the phenomenon of gesture recognition, throughout the wide variety of consumer electronics to which it has been introduced, has been a mixed success. The huge popularity of the Nintendo Wii showed that—for the right product—users were happy to use their hands and bodies as controllers, but for every Wii, there are a million useless webcam or smartphone functions, lying dormant, unused, and destined for the technology recycle bin. Full Article:
From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner:
We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)
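The "dense photometric consistency measure" in the abstract compares a rendered model image against the observed video frame pixel by pixel; tracking minimizes this residual over the model parameters. A minimal version of the residual itself, on flat grayscale pixel lists (the real system operates on color images within a full optimization loop):

```python
# Dense photometric error: sum of squared per-pixel differences between
# a rendered model image and the observed frame. Inputs here are plain
# lists of grayscale intensities, an illustrative simplification.
def photometric_error(rendered, observed):
    assert len(rendered) == len(observed)
    return sum((r - o) ** 2 for r, o in zip(rendered, observed))

print(photometric_error([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]))  # prints 0.0
print(photometric_error([0.0, 0.0], [1.0, 1.0]))            # prints 2.0
```

A perfectly tracked face drives this error toward zero, which is what lets the same measure serve both identity recovery (offline) and expression tracking (at run time).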
Erico Guizzo for IEEE Spectrum: Nearly four years ago, Dmitry Grishin launched a US $25 million fund to invest exclusively in consumer robots. Grishin, the co-founder, chairman, and CEO of Mail.ru, the Russian Internet giant, believed that robotics was going to be one of the next big technology revolutions, and he was willing to put his money where his mouth was.
Now the Russian investor is ready to double down on his vision. Or actually, double double down. Grishin Robotics has recently announced a second fund four times as large as the original one. The new $100 million fund will seek Series A and B deals and expand its focus to include startups in markets like connected devices, collaborative and material-handling robots, AI and data analytics, and industrial Internet of Things. Cont'd...
VEX Worlds 2016 kicks off this week! Presented by the Robotics Education and Competition (REC) Foundation and the Northrop Grumman Foundation, this culminating event brings together the top 1,000 teams from around the world in one city and under one roof for one incredible celebration of robotics engineering, featuring the world's largest and fastest growing international robotics programs - the VEX IQ Challenge, the VEX Robotics Competition and VEX U. On April 20-23, at the Kentucky Exposition Center in Louisville, Ky., over 16,000 participants from 37 nations will come together to put their engineering expertise to the test as they seek to be crowned the Champions of VEX Worlds. Follow the competition here:
Jason Baker for OpenSource: Open source isn't just changing the way we interact with the world, it's changing the way the world interacts back with us. Case in point: open source robotics.
Robots are playing an increasing role in our world, and while we perhaps haven't reached the utopian future with robotic housekeepers imagined for us in The Jetsons, robotics is making advances in fields that fifty years ago would have been completely unimaginable.
While undoubtedly manufacturing has been one of the biggest beneficiaries of the robot renaissance, we are seeing robots enter the mainstream as well. Many of us have robots that clean our floors, clear our gutters, mow our grass, and more.
And now, with the advances of self-driving cars, drones, and other transport technologies, the line between what is a robot and what is a vehicle is steadily blurring.
But let's be honest: a lot of us have an interest in robotics simply because it's fun! And the good news is you don't need to be an electrical engineer to enjoy robotics as a hobby. Fortunately, there are a number of open source projects out there that can help even the most novice beginner get started. Full Article:
Evan Ackerman for IEEE Spectrum: According to Chinese newspaper Workers’ Daily, two restaurants in Guangzhou, China, that gained some amount of notoriety for their use of robotic waiters have now been forced to close down. One employee said, “the robots weren’t able to carry soup or other food steady and they would frequently break down. The boss has decided never to use them again.” Yeah, we can’t say we’re surprised.
As far as I can tell, all of these waiter robots can do essentially one thing: travel along a set path while holding food. They can probably stop at specific tables, and maybe turn or sense when something has been taken from them, but that seems to be about it. “Their skills are somewhat limited,” a robot restaurant employee told Workers’ Daily. “They can’t take orders or pour hot water for customers.” Those are just two of the many, many more skills that human servers have, because it’s necessary to have many, many more skills than this to be a good server. Cont'd...
From Evan Ackerman at IEEE Spectrum: Right now, the New Economic Summit (NEST) 2016 conference is going on in Tokyo, Japan. One of the keynote speakers is Andy Rubin. Rubin was in charge of Google’s robotics program in 2013, when the company (now Alphabet) acquired a fistful of some of the most capable and interesting robotics companies in the world. One of those companies was SCHAFT, which originated at the JSK Robotics Laboratory at the University of Tokyo...
... SCHAFT co-founder and CEO Yuto Nakanishi climbed onstage to introduce his company’s new bipedal robot. He explains that the robot can climb stairs, carry a 60-kg payload, and step on a pipe and keep its balance. It can also move in tight spaces, and the video shows the robot climbing a narrow staircase by positioning its legs behind its body (1:22). In a curious part of the demo (1:36), the robot is shown cleaning a set of stairs with a spinning brush and what appears to be a vacuum attached to its feet... (article)