HEBOCON is a robot contest for the technically ungifted. The first tournament was held in Tokyo on July 19, 2014... (Facebook page)
Evan Ackerman for IEEE Spectrum: A group of researchers including Michal Luria, Guy Hoffman, Benny Megidish, Oren Zuckerman, Roberto Aimi, and Sung Park from IDC Herzliya, Cornell, and SK Telecom have developed a prototype social robot called Vyo. Vyo is “a personal assistant serving as a centralized interface for smart home devices.” Nothing new there, but what sets Vyo apart is how you interact with it: it combines non-anthropomorphic design with anthropomorphic expressiveness and a tactile object-based control system into a social robot that’s totally, adorably different. But is it practical? Full Article:
Tina Amirtha for Benelux: In 2014, three software engineers decided to create a drone company in Wavre, Belgium, just outside Brussels. All were licensed pilots and trained in NATO security techniques. But rather than build drones themselves, they decided they would upgrade existing radio-controlled civilian drones with an ultra-secure software layer to allow the devices to fly autonomously. Their company, EagleEye Systems, would manufacture the onboard computer and design the software, while existing manufacturers would provide the drone body and sensors. Fast-forward to the end of March this year, when the company received a Section 333 exemption from the US Federal Aviation Administration to operate and sell its brand of autonomous drones in the US. The decision came amid expectations that the FAA will loosen its restrictions on legal drone operations and issue new rules to allow drones to fly above crowds. Cont'd...
From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing—actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor’s delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened in Sunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details) Here is the stage direction that led to Middleditch’s character vomiting an eyeball early in the film:
C (smiles) I don’t know anything about any of this.
H (to Hauk, taking his eyes from his mouth) Then what?
H2 There’s no answer.
From Vikash Kumar at University of Washington: Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control - following pre-defined trajectories or planning online with reduced models - are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed "ADROIT" - a ShadowHand skeleton equipped with faster and more compliant actuation... (website)
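The "online planning" (model-predictive control) idea in the abstract can be illustrated with a deliberately tiny sketch. This is not the authors' MuJoCo hand model or their synergy parameterization — it is a hypothetical 1D point mass with a random-shooting optimizer — but it shows the core loop: at every control step, candidate control sequences are rolled out through the model, only the first action of the best sequence is applied, and planning restarts from the new state.

```python
import random

# Toy MPC by "random shooting" on a 1D point mass (assumed example,
# not the paper's setup): state = (position, velocity), control = force.

DT = 0.1        # integration step
HORIZON = 10    # lookahead steps per plan
SAMPLES = 200   # candidate control sequences per replan
TARGET = 1.0    # desired position

def step(state, u):
    pos, vel = state
    vel = vel + u * DT
    pos = pos + vel * DT
    return (pos, vel)

def rollout_cost(state, controls):
    # Accumulate tracking error plus a small control penalty.
    cost = 0.0
    for u in controls:
        state = step(state, u)
        cost += (state[0] - TARGET) ** 2 + 0.01 * u * u
    return cost

def mpc_action(state, rng):
    # Sample random control sequences, keep the first action of the best.
    best_u, best_cost = 0.0, float("inf")
    for _ in range(SAMPLES):
        controls = [rng.uniform(-1.0, 1.0) for _ in range(HORIZON)]
        c = rollout_cost(state, controls)
        if c < best_cost:
            best_cost, best_u = c, controls[0]
    return best_u

rng = random.Random(0)
state = (0.0, 0.0)
for _ in range(100):           # closed loop: replan at every step
    state = step(state, mpc_action(state, rng))
print(round(state[0], 2))      # settles near TARGET
```

The paper's contribution is making this kind of loop fast enough to run online against a full 28-DoF physics model; the synergy augmentation mentioned above reduces the effective search space the optimizer has to sample.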
MIT News via Larry Hardesty for RoboHub: In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. Cont'd...
From Manuel Ruder, Alexey Dosovitskiy, Thomas Brox of the University of Freiburg: In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handed was beyond imagination. Nowadays computers provide new possibilities. We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively... (pdf paper)
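The "new loss functions applicable to videos" mentioned above center on temporal consistency: penalizing the stylized frame where it deviates from the previous stylized frame warped along the optical flow, with occluded or unreliable-flow pixels masked out. A minimal sketch of that term (assumed formulation over flat pixel lists, not the authors' code):

```python
# Hedged sketch of a temporal consistency loss for video style transfer.
# x_t: stylized frame t; warped_prev: stylized frame t-1 warped along the
# optical flow into frame t; mask: 1 where the flow is reliable, 0 in
# occluded or disoccluded regions (which should not be penalized).

def temporal_loss(x_t, warped_prev, mask):
    total, count = 0.0, 0
    for x, w, m in zip(x_t, warped_prev, mask):
        total += m * (x - w) ** 2   # masked squared deviation
        count += 1
    return total / count

# The masked pixel (index 1) differs, but contributes nothing:
print(temporal_loss([1.0, 2.0, 3.0], [1.0, 1.0, 3.0], [1, 0, 1]))  # → 0.0
```

Minimizing this alongside the usual style and content losses is what keeps the stylization stable from frame to frame instead of flickering.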
Lee Mathews for Geek: Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed. Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along. Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow. For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces. Cont'd...
From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)
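The "dense photometric consistency measure" used for tracking can be pictured as a per-pixel error between the synthesized (rendered) face and the observed frame, restricted to the visible face region. A toy sketch under that assumption (the real system minimizes this over face-model parameters on the GPU; the function below only evaluates the error):

```python
# Assumed illustration of a dense photometric consistency term.
# rendered / observed: flat pixel intensity lists for the synthesized
# face and the video frame; visible: 1 inside the rendered face region,
# 0 elsewhere (background pixels carry no constraint).

def photometric_error(rendered, observed, visible):
    return sum(v * (r - o) ** 2
               for r, o, v in zip(rendered, observed, visible))

print(photometric_error([1.0, 2.0], [1.0, 3.0], [1, 1]))  # → 1.0
```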
Geoff Dyer for CNBC: As it watches China build up its presence in the South China Sea, one reclaimed island at a time, the US military is betting on a new technology to help retain its edge — submarine drones. During the past six months, the Pentagon has started to talk publicly about a once-secret program to develop unmanned undersea vehicles, the term given to the drone subs that are becoming part of its plan to deter China from trying to dominate the region. Ashton Carter, US defense secretary, made special mention of drone subs in a speech about military strategy in Asia and hinted at their potential use in the South China Sea, which has large areas of shallower water. The Pentagon's investment in subs "includes new undersea drones in multiple sizes and diverse payloads that can, importantly, operate in shallow water, where manned submarines cannot", said Mr Carter, who visited a US warship in the South China Sea on Friday. Cont'd...
From Evan Ackerman at IEEE Spectrum: Right now, the New Economic Summit (NEST) 2016 conference is going on in Tokyo, Japan. One of the keynote speakers is Andy Rubin. Rubin was in charge of Google’s robotics program in 2013, when the company (now Alphabet) acquired a fistful of some of the most capable and interesting robotics companies in the world. One of those companies was SCHAFT, which originated at the JSK Robotics Laboratory at the University of Tokyo... ... SCHAFT co-founder and CEO Yuto Nakanishi climbed onstage to introduce his company’s new bipedal robot. He explained that the robot can climb stairs, carry a 60-kg payload, and step on a pipe while keeping its balance. It can also move in tight spaces, and the video shows the robot climbing a narrow staircase by positioning its legs behind its body (1:22). In a curious part of the demo (1:36), the robot is shown cleaning a set of stairs with a spinning brush and what appears to be a vacuum attached to its feet... ( article )
Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction
From Kaan Yücer, Alexander Sorkine-Hornung, Oliver Wang, Olga Sorkine-Hornung: Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from highly cluttered background in light fields. A key insight and contribution of our paper is that a significant increase of the available input data can enable the design of novel, highly efficient approaches. In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, e.g. captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation both in 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer... ( paper )
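The "purely local gradient information contained in slices" refers to epipolar-plane images (EPIs): with a dense camera sweep, each scene point traces a line through the EPI whose slope encodes its disparity (and hence depth), so foreground and background separate by slope. A toy sketch of that local cue, on an assumed synthetic EPI rather than the paper's data:

```python
# Toy epipolar-plane-image (EPI) disparity estimate. epi[s][u] holds the
# intensity at view index s, image column u. A scene point traces a line
# whose slope du/ds equals its disparity; locally that slope can be read
# off the gradient ratio du/ds = -E_s / E_u.

def epi_disparity(epi, s, u):
    e_u = (epi[s][u + 1] - epi[s][u - 1]) / 2.0   # spatial gradient
    e_s = (epi[s + 1][u] - epi[s - 1][u]) / 2.0   # view-direction gradient
    return -e_s / e_u if e_u != 0 else 0.0

# Synthetic EPI: an intensity ramp shifting by 2 pixels per view,
# i.e. a near object with disparity 2.
disp = 2.0
epi = [[float(u - disp * s) for u in range(12)] for s in range(6)]
print(epi_disparity(epi, 2, 5))  # → 2.0
```

Such local estimates are noisy and undefined in textureless regions, which is why the method then propagates them with edge-aware filtering and a global consistency step, as the abstract describes.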
From LAUNCH Festival 2016: CafeX unveiled a fully automated robotic cafe at Launch Festival. A companion iOS and Android app will allow users to order drinks prior to arrival; the company works with local coffee growers in each market; the cafe occupies roughly 60 square feet and is open 24 hours a day.
From OpenROV: OpenROV Trident Features:
Depth: capable of 100 m (will ship with a 25 m tether; longer tethers will be sold separately)
Mass: 2.9 kg
Top speed: 2 m/s
Run time: 3 hours
Connectivity: The data connection to Trident is a major evolution from the connection setup of the original OpenROV kit. It uses a neutrally buoyant tether to communicate with a towable buoy on the surface (radio waves don't travel well in water), and the buoy connects to the pilot using a long-range WiFi signal. Using a wireless towable buoy greatly increases the practical range of the vehicle while doing transects and search patterns, since a physical connection between the vehicle and the pilot doesn't need to be maintained. You can connect to the buoy and control Trident using a tablet or laptop from a boat or from the shore... ( preorder $1,199.00 )
Missy Cummings for Wired: Drones are a big business and getting bigger, a reality that comes with both economic opportunities and risks. The UAV market is set to jump from $5.2 billion in 2013 to $11.6 billion in 2023. Opportunities for delivery services, cinematography, and even flying cell towers could introduce thousands of jobs and reinvigorate an ailing aerospace market. At the same time, drone sales to hobbyists have exploded. Registered drone operators in the US now outnumber registered manned aircraft. In tandem with that growth, close calls with commercial aircraft have more than doubled in the past two years. An analysis of FAA reports by Bard College’s Center for the Study of the Drone counts 28 instances in which pilots changed course in order to avoid a collision. Cont'd...
Hexapod micro-motion robots are based on a very flexible concept that can easily solve complex motion and alignment problems in fields including optics, photonics, precision automation, automotive, and medical engineering.