From Ross Goodwin on Medium:
To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing—actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement.
After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor’s delivery.
As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened in Sunspring, because the script itself has no objective meaning.
On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details)
Here is the stage direction that led to Middleditch’s character vomiting an eyeball early in the film:
I don’t know anything about any of this.
(to Hauk, taking his eyes from his mouth)
There’s no answer.
Klaus E. Meyer for Forbes: Midea, the Chinese household appliances (“white goods”) manufacturer, just made what analysts called an ‘incredibly high’ bid for German robot maker Kuka. This acquisition would take the Chinese investor right to the heart of Industry 4.0: Kuka is a leading manufacturer of multifunctional robots that represent an important building block for enterprises upgrading their factories with full automation, the latest human-machine interface functionality, and machine-to-machine communication. Midea wants a 30% stake in Kuka and has offered €115 per share. Kuka’s shares traded at €84 the day before and had already increased 60% since the beginning of the year. This offer values Kuka at €4.6 billion, which means Midea’s 30% stake would be worth €1.4 billion – on par with Beijing Enterprise’s February 2016 takeover of recycling company EEW, which was the largest Chinese acquisition of a German firm to date.
Midea’s takeover bid underscores Chinese interest in German Industry 4.0 technology; in January 2016, ChemChina paid €925 million for Munich-based KraussMaffei machine tools, in part because of their advances into Industry 4.0. Recent smaller Chinese acquisitions in the German machine tool industry, which include the partial acquisitions of H.Stoll by the ShangGong Group and of Manz by the Shanghai Electric Group are, in part, motivated by the objective to partake in the latest Industry 4.0 developments. Cont'd...
Jon Excell for The Engineer: Designed by a team at the Max Planck Institute for Intelligent Systems in Stuttgart, the new device is claimed to have considerable advantages over existing pneumatically-powered soft actuators as it doesn’t require a tether.
The device consists of a dielectric elastomer actuator (DEA): a membrane made of hyperelastic material like a latex balloon, with flexible (or ‘compliant’) electrodes attached to each side.
The stretching of the membrane is regulated by means of an electric field between the electrodes: the electrodes attract each other and squeeze the membrane when voltage is applied. By attaching multiple such membranes, the location of deformation can be shifted controllably within the system, displacing air between two chambers.
The membrane material has two stable states. In other words, it can have two different volume configurations at a given pressure without the need to minimize the larger volume. Thanks to this bi-stability, the researchers are able to move air between a more highly inflated chamber and a less inflated one. They do this by applying a voltage to the membrane of the smaller chamber, which responds by stretching and sucking air out of the other bubble. Cont'd...
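The squeezing effect described above can be approximated with the standard parallel-plate Maxwell stress model for dielectric elastomer actuators. The sketch below is illustrative only: the article gives neither the membrane's thickness, permittivity, nor drive voltage, so all values here are assumptions chosen to show the order of magnitude involved.

```python
# Maxwell stress on a dielectric elastomer membrane (parallel-plate approximation).
# All numeric values are illustrative assumptions, not the Max Planck device's specs.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r=3.0):
    """Electrostatic pressure squeezing the membrane: p = eps0 * eps_r * (V/d)^2, in Pa."""
    e_field = voltage / thickness        # electric field across the membrane, V/m
    return EPS0 * eps_r * e_field ** 2   # Pa

# A few kilovolts across a ~100-micron membrane yields pressures in the tens of kPa,
# enough to stretch a soft elastomer and push air between chambers.
p = maxwell_pressure(voltage=5e3, thickness=100e-6)
print(f"{p/1e3:.1f} kPa")  # → 66.4 kPa
```

This quadratic dependence on voltage is why DEAs need kilovolt drive electronics even though the currents involved are tiny.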
From Vikash Kumar at University of Washington:
Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control - following pre-defined trajectories or planning online with reduced models - are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed "ADROIT" - a ShadowHand skeleton equipped with faster and more compliant actuation... (website)
MIT News via Larry Hardesty for RoboHub: In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.
The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. Cont'd...
Alison E. Berman for Singularity Hub: If you've been staying on top of artificial intelligence news lately, you may know that the games of chess and Go were two of the grand challenges for AI. But do you know what the equivalent is for robotics? It's table tennis. Just think about how the game requires razor sharp perception and movement, a tall order for a machine.
As entertaining as human vs. robot games can be, what they actually demonstrate is much more important. They test the technology's readiness for practical applications in the real world—like self-driving cars that can navigate around unexpected people in a street.
Though we used to think of robots as clunky machines for repetitive factory tasks, a slew of new technologies are making robots faster, stronger, cheaper, and even perceptive, so that they can understand and engage with their surrounding environments. Consider Boston Dynamics’ Atlas robot, which can walk through snow, move boxes, endure a hefty blow from a hockey stick wielded by an aggressive colleague, and even get back on its feet when knocked down. Not long ago, such tasks were unthinkable for a robot.
At the Exponential Manufacturing conference, robotics expert and director of Columbia University’s Creative Machines Lab, Hod Lipson, examined five exponential trends shaping and accelerating the future of the robotics industry. Cont'd...
From Manuel Ruder, Alexey Dosovitskiy, Thomas Brox of the University of Freiburg:
In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handed was beyond imagination. Nowadays computers provide new possibilities. We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively... (pdf paper)
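The key video-specific ingredient the abstract mentions is a temporal consistency loss: the stylized frame is penalized for deviating from the previous stylized frame warped forward by optic flow, with occluded pixels masked out. The sketch below illustrates that idea only; the nearest-neighbor warp and uniform weighting are simplifications, not the paper's exact formulation.

```python
import numpy as np

# Sketch of a short-term temporal consistency loss for video style transfer.
def warp(frame, flow):
    """Backward-warp `frame` (H, W, C) by integer optic flow (H, W, 2), nearest neighbor."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 1].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - flow[..., 0].round().astype(int), 0, w - 1)
    return frame[src_y, src_x]

def temporal_loss(stylized_t, stylized_prev, flow, mask):
    """Mean squared difference between frame t and the warped previous frame,
    counted only where motion is reliable (mask == 1, i.e. not occluded)."""
    warped = warp(stylized_prev, flow)
    diff = (stylized_t - warped) ** 2
    return float(np.sum(mask[..., None] * diff) / max(mask.sum(), 1))
```

Minimizing this term alongside the usual content and style losses is what suppresses the flickering that per-frame stylization produces.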
Phys.org: Scientists have built a computer model that shows how bees use vision to detect the movement of the world around them and avoid crashing. This research, published in PLOS Computational Biology, is an important step in understanding how the bee brain processes the visual world and will aid the development of robotics.
The study, led by Alexander Cope and his coauthors at the University of Sheffield, shows how bees estimate the speed of motion, or optic flow, of the visual world around them and use this to control their flight. The model is based on honeybees, as they are excellent navigators and explorers and use vision extensively in these tasks, despite having a brain of only one million neurons (compared with the human brain's 100 billion).
The model shows how bees are capable of navigating complex environments by using a simple extension to the known neural circuits, within the environment of a virtual world. The model then reproduces the detailed behaviour of real bees by using optic flow to fly down a corridor, and also matches up with how their neurons respond. Cont'd...
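The corridor-centering behaviour mentioned above is classically explained by optic-flow balancing: the apparent angular speed of each wall scales with forward speed divided by distance, so steering toward the side with weaker flow equalizes the two distances. This sketch shows that control rule only, with an illustrative gain; it is not the Sheffield group's neural model.

```python
# Optic-flow balancing for corridor centering, a simplified control rule.
def lateral_flow(speed, distance):
    """Apparent angular speed of a wall (rad/s) seen at a given lateral distance."""
    return speed / distance

def steer(speed, dist_left, dist_right, gain=0.5):
    """Positive output steers right, away from the side producing stronger flow."""
    return gain * (lateral_flow(speed, dist_left) - lateral_flow(speed, dist_right))

# Closer to the left wall -> stronger left-side flow -> positive command, steer right.
print(steer(speed=2.0, dist_left=0.5, dist_right=1.5))
```

When the bee is centered, the two flow terms cancel and the steering command is zero, which is exactly the equilibrium the rule maintains.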
Keith Naughton for Bloomberg Technology: Brian Lesko and Dan Sherman hate the idea of driverless cars, but for very different reasons. Lesko, 46, a business-development executive in Atlanta, doesn’t trust a robot to keep him out of harm’s way. “It scares the bejeebers out of me,” he says.
Sherman, 21, a mechanical-engineering student at the University of Minnesota, Twin Cities, trusts the technology and sees these vehicles eventually taking over the road. But he dreads the change because his passion is working on cars to make them faster.
“It’s something I’ve loved to do my entire life and it’s kind of on its way out,” he says. “That’s the sad truth.”
The driverless revolution is racing forward, as inventors overcome technical challenges such as navigating at night and regulators craft new rules. Yet the rush to robot cars faces a big roadblock: People aren’t ready to give up the wheel. Recent surveys by J.D. Power, consulting company EY, the Texas A&M Transportation Institute, Canadian Automobile Association, researcher Kelley Blue Book and auto supplier Robert Bosch LLC all show that half to three-quarters of respondents don’t want anything to do with these models. Cont'd...
Sam Fleming for Financial Times: When Andy Puzder, chief executive of restaurant chains Carl’s Jr and Hardee’s, said in March that rising employment costs could drive the spread of automation in the fast-food sector, he tapped into a growing anxiety in the US.
From touchscreen ordering systems to burger-flipping robots and self-driving trucks, automation is stalking an increasing number of professions in the country’s service sector, which employs the vast majority of the workforce.
Two-fifths of US employees are in occupations where at least half their time is spent doing activities that could be automated by adapting technology already available, according to research from the McKinsey Global Institute. These include the three biggest occupations in the country: retail salespeople, store cashiers and workers preparing and serving food, collectively totalling well over 10m people.
Yet evidence of human obsolescence is conspicuous by its absence in the US’s economic statistics. The country is in the midst of its longest private-sector hiring spree on record, adding 14.4m jobs over 73 straight months, and productivity grew only 1.4 per cent a year from 2007 to 2014, compared with 2.2 per cent from 1953 to 2007. Those three big occupations all grew 1-3 per cent from 2014 to 2015. Cont'd...
Innovators offered chance to develop their ideas with world-leading robotics manufacturer ABB Robotics
Full Press Release: The IdeaHub is once again recruiting robotics and software innovators worldwide to take on the challenge of improving the way we work and interact with the next generation of industrial robots. Working on behalf of ABB Robotics, IdeaHub will help successful applicants pitch their ideas and secure uniquely tailored support packages to maximise their venture's commercial potential, including investment, mentoring and access to cutting edge hardware.
The IdeaHub is a cross-sector, open innovation platform that connects visionaries worldwide with funding and support from global corporations. In 2015 they ran their first programme for ABB Robotics, attracting over 130 applicants, with 12 finalists selected for a pitch day in London and 6 entrepreneurs receiving an offer of support. For 2016 they are partnering with ABB Robotics once again to bring more solutions to three core challenges in the world of collaborative industrial robotics:
1.) Simplicity: How to simplify robotics
2.) Intelligence: How to enable robots to learn and apply that learning
3.) Digitalization: How smart connectivity will enhance digital factories.
Lee Mathews for Geek: Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed.
Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along.
Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow.
For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces. Cont'd...
Benedict for 3Ders.org: Tech startup ZeroUI, based in San Jose, California, has launched an Indiegogo campaign for Ziro, the “world’s first hand-controlled robotics kit”. The modular kit has been designed to bring 3D printed creations to life, and has already surpassed its $30,000 campaign goal.
It would be fair to say that the phenomenon of gesture recognition, throughout the wide variety of consumer electronics to which it has been introduced, has been a mixed success. The huge popularity of the Nintendo Wii showed that—for the right product—users were happy to use their hands and bodies as controllers, but for every Wii, there are a million useless webcam or smartphone functions, lying dormant, unused, and destined for the technology recycle bin. Cont'd...
From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner:
We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)
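The "dense photometric consistency measure" in the abstract is, at its core, a per-pixel colour error between the rendered face model and the video frame, which tracking minimizes over model parameters. The sketch below illustrates only that energy term; rendering the face model is out of scope here, so `rendered` stands in for the model's synthesized image, and the plain sum-of-squares weighting is an assumption, not the paper's exact formulation.

```python
import numpy as np

# Sketch of a dense photometric consistency energy for model-based face tracking.
def photometric_energy(rendered, frame, mask):
    """Sum of squared RGB residuals over the pixels the face model covers (mask == 1).

    `rendered` and `frame` are (H, W, 3) float images; `mask` is (H, W)."""
    residual = (rendered - frame) * mask[..., None]
    return float(np.sum(residual ** 2))
```

An optimizer perturbs the model's expression parameters, re-renders, and keeps the parameters that drive this energy down, frame by frame.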
"We want to build on the spirit of innovation in the USA," said POTUS Barack Obama in his opening speech. This spirit has been driven by Germany and HANNOVER MESSE, especially over the past 70 years. Obama added that the USA has now created new production facilities, subsidy schemes and jobs in recent years to help reach this goal.
In what is likely his last visit to Germany as President, Obama spoke in particular about the TTIP free trade agreement. He believes that there are too many obstacles restricting trade between the EU and the USA. Different regulations and standards lead to higher costs. Therefore, one of TTIP's aims is to establish harmonized high standards.
Obama also promoted the USA as a production location for European companies. Angela Merkel gladly took the opportunity to respond: "We love competition. But we also like to win," replied the German Chancellor.
A challenge with a smile. In her speech, Merkel emphasized that cooperation is essential for the future of industrial production - in a transatlantic partnership. "We in the EU want to lead the way, together with the USA," said the Chancellor, referring above all to the development of global communication and IT standards for integrated industry.
However, the opening ceremony at HANNOVER MESSE 2016 was more than a meeting of Heads of State. Amidst musical numbers and dance performances by humans and machines, German Minister for Education and Research, Prof. Dr. Johanna Wanka, presented the coveted HERMES AWARD for industrial innovation. This year's winner is the Harting Group with its intelligent communication module, MICA. Cont'd...