ASU Interactive Robotics Lab: The video shows a bi-manual robot that learns to throw a ball into a hoop using reinforcement learning. A novel reinforcement learning algorithm, "Sparse Latent Space Policy Search," allows the robot to learn the task in only about two hours.
The robot repeatedly throws the ball and receives a reward based on the distance of the ball to the center of the hoop. Algorithmic details about the method can be found here:
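The setup above — repeated throws scored by distance to the hoop's center — is a classic episodic policy-search loop. As an illustration only (this is not the lab's "Sparse Latent Space Policy Search" algorithm, and `HOOP_CENTER`, `landing_point`, and the cross-entropy-style update are all hypothetical stand-ins), the reward structure can be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(0)
HOOP_CENTER = np.array([2.0, 0.0])  # hypothetical hoop position, in meters

def landing_point(params):
    # Toy stand-in for a physical throw: parameters map to a noisy landing point.
    return params + rng.normal(scale=0.05, size=2)

def reward(params):
    # Higher reward the closer the ball lands to the center of the hoop.
    return -np.linalg.norm(landing_point(params) - HOOP_CENTER)

# Simple cross-entropy-style search over throw parameters (illustrative only).
mean, std = np.zeros(2), np.ones(2)
for _ in range(50):
    samples = rng.normal(mean, std, size=(32, 2))
    rewards = np.array([reward(s) for s in samples])
    elite = samples[np.argsort(rewards)[-8:]]   # keep the best-scoring throws
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print(np.linalg.norm(mean - HOOP_CENTER))  # distance of learned throw to hoop center
```

The actual method learns in a sparse latent space of the robot's high-dimensional motion, which is what makes two-hour learning on real hardware feasible; the sketch only mirrors the reward signal described above.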
From Phys.org: A new U.S. Robotics Roadmap released Oct. 31 calls for better policy frameworks to safely integrate new technologies, such as self-driving cars and commercial drones, into everyday life. The document also advocates for increased research efforts in the field of human-robot interaction to develop intelligent machines that will empower people to stay in their homes as they age. It calls for increased education efforts in the STEM fields, from elementary school through adult learners.
The roadmap's authors, more than 150 researchers from around the nation, also call for research to create more flexible robotics systems that accommodate the need for increased customization in manufacturing, for everything from cars to consumer electronics.
The goal of the U.S. Robotics Roadmap is to determine how researchers can make a difference and solve societal problems in the United States. The document provides an overview of robotics in a wide range of areas, from manufacturing to consumer services, healthcare, autonomous vehicles, and defense. The roadmap's authors make recommendations to ensure that the United States will continue to lead in the field of robotics, in terms of research innovation, technology, and policy.
Evan Ackerman for IEEE Spectrum: One of the biggest challenges with swarms of robots is manufacturing and deploying the swarm itself. Even if the robots are relatively small and relatively simple, you’re still dealing with a whole bunch of them, and every step in building the robots or letting them loose is multiplied over the entire number of bots in the swarm. If you’ve got more than a few robots to handle, it starts to get all kinds of tedious.
The dream for swarm robotics is to be able to do away with all of that, and just push a button and have your swarm somehow magically appear. We're not there yet, but we're getting close: at IROS this month, researchers from the Wyss Institute for Biologically Inspired Engineering at Harvard presented a paper demonstrating an autonomous collective robotic swarm that can be manufactured as a single flat composite sheet. On command, the robots rip themselves apart from one another, fold themselves up into origami structures, and head off on a mission en masse.
Liquid Robotics and Boeing Demonstrated Groundbreaking Autonomous Maritime Warfare Capabilities at the British Royal Navy's Unmanned Warrior Demonstration
From James Charles, Derek Magee, David Hogg:
The objective of this work is to build virtual talking avatars of characters fully automatically from TV shows. From this unconstrained data, we show how to capture a character's style of speech, visual appearance, and language in an effort to construct an interactive avatar of the person and effectively immortalize them in a computational model. We make three contributions: (i) a complete framework for producing a generative model of the audiovisual appearance and language of characters from TV shows; (ii) a novel method for aligning transcripts to video using the audio; and (iii) a fast audio segmentation system for silencing non-spoken audio from TV shows. Our framework is demonstrated using all 236 episodes of the TV series Friends (≈ 97 hrs of video) and shown to generate novel sentences as well as character-specific speech and video... (full paper)