Real-time behaviour synthesis for dynamic Hand-Manipulation

From Vikash Kumar at the University of Washington: Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control - following pre-defined trajectories or planning online with reduced models - are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies, which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high-level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed, "ADROIT" - a ShadowHand skeleton equipped with faster and more compliant actuation... (website)

Ingestible origami robot

MIT News via Larry Hardesty for RoboHub: In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT's Department of Electrical Engineering and Computer Science. Cont'd...

Artistic Style Transfer for Videos

From Manuel Ruder, Alexey Dosovitskiy, Thomas Brox of the University of Freiburg: In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handed was beyond imagination. Nowadays computers provide new possibilities. We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively... (pdf paper)  

Zero Zero Hover Camera drone uses face tracking tech to follow you

Lee Mathews for Geek:  Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed. Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along. Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow. For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces.   Cont'd...

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)  

US to sail submarine drones in South China Sea

Geoff Dyer for CNBC:  As it watches China build up its presence in the South China Sea, one reclaimed island at a time, the US military is betting on a new technology to help retain its edge — submarine drones. During the past six months, the Pentagon has started to talk publicly about a once-secret program to develop unmanned undersea vehicles, the term given to the drone subs that are becoming part of its plan to deter China from trying to dominate the region. Ashton Carter, US defense secretary, made special mention of drone subs in a speech about military strategy in Asia and hinted at their potential use in the South China Sea, which has large areas of shallower water. The Pentagon's investment in subs "includes new undersea drones in multiple sizes and diverse payloads that can, importantly, operate in shallow water, where manned submarines cannot", said Mr Carter, who visited a US warship in the South China Sea on Friday.   Cont'd...

SCHAFT Unveils Awesome New Bipedal Robot at Japan Conference

From Evan Ackerman at IEEE Spectrum: Right now, the New Economic Summit (NEST) 2016 conference is going on in Tokyo, Japan. One of the keynote speakers is Andy Rubin. Rubin was in charge of Google's robotics program in 2013, when the company (now Alphabet) acquired a fistful of some of the most capable and interesting robotics companies in the world. One of those companies was SCHAFT, which originated at the JSK Robotics Laboratory at the University of Tokyo... ... SCHAFT co-founder and CEO Yuto Nakanishi climbed onstage to introduce his company's new bipedal robot. He explained that the robot can climb stairs, carry a 60-kg payload, and step on a pipe while keeping its balance. It can also move in tight spaces, and the video shows the robot climbing a narrow staircase by positioning its legs behind its body (1:22). In a curious part of the demo (1:36), the robot is shown cleaning a set of stairs with a spinning brush and what appears to be a vacuum attached to its feet... (article)

Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction

From Kaan Yücer, Alexander Sorkine-Hornung, Oliver Wang, Olga Sorkine-Hornung: Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from highly cluttered background in light fields. A key insight and contribution of our paper is that a significant increase of the available input data can enable the design of novel, highly efficient approaches. In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, e.g. captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation both in 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer... ( paper )

Cafe X Robotic Barista

From LAUNCH Festival 2016: Cafe X unveils a fully automated robotic cafe at Launch Festival; a companion iOS and Android app will allow users to order drinks prior to arrival; the company works with local coffee growers in each market; the cafe occupies roughly 60 square feet and is open 24 hours a day.

OpenROV Trident Pre-orders

From OpenROV, the Trident's features:

Depth: capable of 100 m (will ship with a 25 m tether; longer tethers will be sold separately)
Mass: 2.9 kg
Top Speed: 2 m/s
Run Time: 3 hours

Connectivity: The data connection to Trident is a major evolution from the connection setup of the original OpenROV kit. It uses a neutrally buoyant tether to communicate with a towable buoy on the surface (radio waves don't travel well in water), and the buoy connects to the pilot using a long-range WiFi signal. Using a wireless towable buoy greatly increases the practical range of the vehicle while doing transects and search patterns, since a physical connection between the vehicle and the pilot doesn't need to be maintained. You can connect to the buoy and control Trident using a tablet or laptop from a boat or from the shore... (preorder $1,199.00)

America, Regulate Drones Now or Get Left Behind

Missy Cummings for Wired:  Drones are a big business and getting bigger, a reality that comes with both economic opportunities and risks. The UAV market is set to jump from $5.2 billion in 2013 to $11.6 billion in 2023. Opportunities for delivery services, cinematography, and even flying cell towers could introduce thousands of jobs and reinvigorate an ailing aerospace market. At the same time, drone sales to hobbyists have exploded. Registered drone operators in the US now outnumber registered manned aircraft. In tandem with that growth, close calls with commercial aircraft have more than doubled in the past two years. An analysis of FAA reports by Bard College’s Center for the Study of the Drone counts 28 instances in which pilots changed course in order to avoid a collision.   Cont'd...

Robotics expert: Self-driving cars not ready for deployment

Joan Lowy for PHYS.org:  Self-driving cars are "absolutely not" ready for widespread deployment despite a rush to put them on the road, a robotics expert warned Tuesday. The cars aren't yet able to handle bad weather, including standing water, drizzling rain, sudden downpours and snow, Missy Cummings, director of Duke University's robotics program, told the Senate commerce committee. And they certainly aren't equipped to follow the directions of a police officer, she said. While enthusiastic about research into self-driving cars, "I am decidedly less optimistic about what I perceive to be a rush to field systems that are absolutely not ready for widespread deployment, and certainly not ready for humans to be completely taken out of the driver's seat," Cummings said. It's relatively easy for hackers to take control of the GPS navigation systems of self-driving cars, Cummings said. "It is feasible that people could commandeer self-driving vehicles ... to do their bidding, which could be malicious or simply just for the thrill of it," she said, adding that privacy of personal data is another concern.   Cont'd...

Image Processing 101

Sher Minn Chong wrote a good introduction to image processing in Python: In this article, I will go through some basic building blocks of image processing, and share some code and approaches to basic how-tos. All code is written in Python and uses OpenCV, a powerful image processing and computer vision library... ... When we're trying to gather information about an image, we'll first need to break it up into the features we are interested in. This is called segmentation. Image segmentation is the process of partitioning an image into segments to make it more meaningful and easier to analyze. One of the simplest ways of segmenting an image is thresholding. The basic idea of thresholding is to replace each pixel in an image with a white pixel if a channel value of that pixel exceeds a certain threshold... (full tutorial) (iPython Notebook)
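The thresholding step described above can be sketched in a few lines. This is a minimal illustration using NumPy on a small made-up grayscale array (standing in for an image loaded with OpenCV's cv2.imread in grayscale mode); the tutorial itself uses OpenCV, whose cv2.threshold function performs the same operation:

```python
import numpy as np

# Hypothetical 4x4 grayscale "image" (values 0-255), used here in place
# of a real image loaded via cv2.imread(path, cv2.IMREAD_GRAYSCALE).
img = np.array([[ 10,  50, 200, 220],
                [ 30,  90, 180, 250],
                [ 20, 120, 160, 240],
                [  5,  60, 140, 255]], dtype=np.uint8)

# Basic thresholding: pixels above the threshold become white (255),
# everything else becomes black (0).
threshold = 127
binary = np.where(img > threshold, 255, 0).astype(np.uint8)
```

The result is a binary mask separating bright regions from dark ones, which is the starting point for picking out the image features of interest.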

Postdoc's Trump Twitterbot Uses AI To Train Itself On Transcripts From Trump Speeches

From MIT: This week a postdoc at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) developed a Trump Twitterbot that tweets out remarkably Trump-like statements, such as "I'm what ISIS doesn't need." The bot is based on an artificial-intelligence algorithm that is trained on just a few hours of transcripts of Trump's victory speeches and debate performances... ... (MIT article) (twitter feed)

How This New Drone Can Track Your Every Move

Lisa Eadicicco  for Time:  Drones can already follow professional snowboarders as they speed down a slope or keep up with mountain bikers racing through rocky terrain. But drone-equipped athletes are usually required to keep their phone nearby, since the aerial devices often rely on handheld devices’ GPS signal to track a person’s location. DJI’s newest drone, the Phantom 4, claims to eliminate that hassle. The company says the Phantom 4’s new ActiveTrack feature uses the drone’s front-facing sensors to see and track a target. “Being able to learn about the object, as it squats, as it rotates, as it turns, is really complicated,” says Michael Perry, DJI’s director of strategic partnerships. “When you’re flying toward something, you have to make a decision to fly around it, fly above it, or stop. And to train the system to learn those different functions is also a big challenge.”   Cont'd...


Featured Product

Schmalz Technology Development - Vacuum Generation without Compressed Air – Flexible and Intelligent


• Vacuum generation that's 100% electrical
• Integrated intelligence for energy and process control
• Extensive communication options through an IO-Link interface

Schmalz already offers a large range of solutions that can optimize handling processes, from single components such as vacuum generators to complete gripping systems. Particularly when used in autonomous warehouses, conventional vacuum generation with compressed air reaches its limits: compressed air is often unavailable in warehouses. Schmalz is therefore introducing a new technology development: a gripper with vacuum generation that does not use compressed air. The vacuum is generated 100% electrically, which makes the gripper both energy efficient and mobile. At the same time, warehouses need systems with integrated intelligence to deliver information and learn. This enables the use of mobile, self-sufficient robots that pick production orders at various locations in the warehouse. Furthermore, Schmalz provides various modular connection options from its wide range of end effectors in order to handle different products reliably and safely.