Yuri Kageyama for News Factor: The U.S. robotics expert tapped to head Toyota's Silicon Valley research company says the $1 billion investment by the giant Japanese automaker will start showing results within five years.
Gill Pratt told reporters that the Toyota Research Institute is also looking further into the future, toward cars that anyone, including children and the elderly, can ride in on their own, as well as robots that help out in homes.
Pratt, a former program manager at the U.S. military's Defense Advanced Research Projects Agency, joined Toyota Motor Corp. first as a technical adviser when it set up its artificial intelligence research effort at Stanford University and MIT.
He said safety features will be the first AI applications to appear in Toyota vehicles. Some are already offered on models now being sold, including sensors that help cars brake or warn drivers before a possible crash, and cars that steer themselves into parking spaces or drive autonomously on certain roads.
"I expect something to come out during those five years," Pratt said recently at Toyota's Tokyo office, referring to the timeframe for the investment. Cont'd...
From MIT News: Video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: When shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers.
This “Turing Test for sound” represents much more than just a clever computer trick: Researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects’ properties... (full article) (full paper)
Graham Templeton for ExtremeTech: Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?
That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons. Cont'd...
Mary-Ann Russon for International Business Times: When former Boston Dynamics employees released video of the humanoid robot Atlas walking unassisted over difficult terrain, such as rocks and snow, Google was reportedly displeased, even though the research drew high praise from roboticists and wowed the public.
The real reason Google is selling off Boston Dynamics, insiders told Tech Insider, is that the robotics firm was unwilling to fall in line with the internet giant's vision of a consumer robot for the home.
Google reportedly envisioned the firm as one of nine in a division called Replicant. Initially, under the guidance of Android co-founder Andy Rubin, the firms would continue with existing research and Google would see what ideas and innovations they came up with. Cont'd...
Sony Joins Forces with Cogitai to Conduct Research and Development for the Next Wave of Artificial Intelligence
Alison E. Berman for Singularity Hub: If you've been staying on top of artificial intelligence news lately, you may know that the games of chess and Go were two of the grand challenges for AI. But do you know what the equivalent is for robotics? It's table tennis. Just think about how the game requires razor-sharp perception and movement, a tall order for a machine.
As entertaining as human vs. robot games can be, what they actually demonstrate is much more important. They test the technology's readiness for practical applications in the real world—like self-driving cars that can navigate around unexpected people in a street.
Though we used to think of robots as clunky machines for repetitive factory tasks, a slew of new technologies are making robots faster, stronger, cheaper, and even perceptive, so that they can understand and engage with their surrounding environments. Consider Boston Dynamics’ Atlas robot, which can walk through snow, move boxes, withstand a hefty blow from a hockey stick wielded by an aggressive colleague, and even regain its feet when knocked down. Not too long ago, such tasks were unthinkable for a robot.
At the Exponential Manufacturing conference, robotics expert and director of Columbia University’s Creative Machines Lab, Hod Lipson, examined five exponential trends shaping and accelerating the future of the robotics industry. Cont'd...
HANNOVER MESSE - President Obama and Chancellor Angela Merkel Experience Virtual Reality at ifm's Exhibit at Hannover Messe 2016, Germany
Benedict for 3Ders.org: Tech startup ZeroUI, based in San Jose, California, has launched an Indiegogo campaign for Ziro, the “world’s first hand-controlled robotics kit”. The modular kit has been designed to bring 3D printed creations to life, and has already surpassed its $30,000 campaign goal.
It would be fair to say that the phenomenon of gesture recognition, throughout the wide variety of consumer electronics to which it has been introduced, has been a mixed success. The huge popularity of the Nintendo Wii showed that—for the right product—users were happy to use their hands and bodies as controllers, but for every Wii, there are a million useless webcam or smartphone functions, lying dormant, unused, and destined for the technology recycle bin. Full Article:
Kirsten Korosec for Fortune: Toyota will expand the footprint of its artificial intelligence and robotics research center by adding a third facility in Ann Arbor, Mich.
Gill Pratt, CEO of the Toyota Research Institute, made the announcement on Thursday during his keynote speech at Nvidia’s GPU Technology Conference in San Jose. The Ann Arbor facility will be located near the University of Michigan, where it will fund research in artificial intelligence, robotics, and materials science.
Last year, the world’s largest automaker said it would invest $1 billion over the next five years in a research center for artificial intelligence to be based in Palo Alto, Calif. The institute aims to bridge the gap between research in AI and robotics in order to bring this technology to market. The technology is largely being developed for self-driving cars, but the institute is also researching and developing AI products for the home. Cont'd...