5D Robotics + Aerial MOB = Autonomy and Reliability

Marrying Aerial MOB's robust operational experience and IP portfolio with 5D's autonomy and behavioral technology bridges many of the gaps in delivering valuable products to industrial clients, such as those in oil and gas, utilities, and construction.

Real-time behaviour synthesis for dynamic Hand-Manipulation

From Vikash Kumar at the University of Washington: Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control - following pre-defined trajectories or planning online with reduced models - are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high-level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed, "ADROIT" - a ShadowHand skeleton equipped with faster and more compliant actuation... (website)
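
The synergy trick is easy to sketch. Below is a minimal, hedged illustration in Python (not the authors' code; every name and number is hypothetical) of augmenting a 48-actuator control space with motor synergies obtained by PCA over previously logged commands: an online optimizer can then search a handful of synergy coefficients, while a residual term keeps the full actuation space reachable, matching the abstract's claim of speed without lost flexibility.

    import numpy as np

    def build_synergies(recorded_controls, n_synergies=5):
        # Rows are time steps, columns the 48 actuator commands.
        mean = recorded_controls.mean(axis=0)
        _, _, vt = np.linalg.svd(recorded_controls - mean, full_matrices=False)
        return mean, vt[:n_synergies]  # dominant co-activation patterns

    def expand_controls(coeffs, residual, mean, synergies):
        # Map a low-dimensional search point back to all actuators; the
        # residual keeps every actuation reachable.
        return mean + coeffs @ synergies + residual

    rng = np.random.default_rng(0)
    history = rng.random((1000, 48))  # stand-in for logged hand controls
    mean, synergies = build_synergies(history)
    u = expand_controls(rng.random(5), 0.01 * rng.standard_normal(48),
                        mean, synergies)
    assert u.shape == (48,)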

Ingestible origami robot

MIT News via Larry Hardesty for RoboHub: In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. Cont'd...

Artistic Style Transfer for Videos

From Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox of the University of Freiburg: In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handedly was beyond imagination. Nowadays computers provide new possibilities. We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively... (pdf paper)
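
A minimal sketch of the kind of loss the paper proposes (an assumption for illustration, not the authors' implementation): the current stylized frame is penalized for deviating from the previous stylized frame warped forward along the optical flow, and a per-pixel mask switches the penalty off where the flow is unreliable, e.g. at occlusions.

    import numpy as np

    def warp(image, flow):
        # Nearest-neighbor backward warp of an (H, W, 3) image by an
        # (H, W, 2) dense flow field.
        h, w = flow.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        sx = np.clip((xs + flow[..., 0]).round(), 0, w - 1).astype(int)
        sy = np.clip((ys + flow[..., 1]).round(), 0, h - 1).astype(int)
        return image[sy, sx]

    def temporal_loss(stylized_t, stylized_prev, flow, mask):
        # Mean squared deviation, counted only where mask == 1; occluded
        # pixels and motion boundaries contribute nothing.
        diff = (stylized_t - warp(stylized_prev, flow)) ** 2
        return (mask[..., None] * diff).sum() / max(mask.sum(), 1)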

A 'pick-by-robot' solution using a perception-controlled logistics robot called TORU

Designed to navigate freely and dynamically amongst a human workforce, TORU operates between regular shelves, picking a wide range of objects.

Zero Zero Hover Camera drone uses face tracking tech to follow you

Lee Mathews for Geek:  Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed. Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along. Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow. For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces.   Cont'd...
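
Zero Zero has not published its algorithms, so the snippet below is purely illustrative of the general pattern the article describes: a stock face detector locks onto a subject, and the subject's offset from the image center becomes the steering signal a drone would feed to its flight controller. It uses OpenCV's bundled Haar cascade, with a webcam standing in for the aircraft.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # webcam stands in for the drone's camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            # Offset of the face from the image center; a drone would turn
            # these errors into yaw/altitude commands to keep the subject framed.
            dx = x + w / 2 - frame.shape[1] / 2
            dy = y + h / 2 - frame.shape[0] / 2
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("follow", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()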

Face2Face: Real-time Face Capture and Reenactment of RGB Videos

From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)  
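
One stage of that pipeline, the expression transfer itself, can be illustrated in a deliberately simplified form. The sketch below is an assumption for illustration only (the paper operates on a full parametric face model with dense photometric tracking): it carries the source actor's deviation from a neutral expression over to the target identity in a blendshape-coefficient space.

    import numpy as np

    def transfer_expression(src_coeffs, src_neutral, tgt_neutral):
        # Apply the source's expression delta to the target's identity.
        return tgt_neutral + (src_coeffs - src_neutral)

    rng = np.random.default_rng(1)
    src_neutral = rng.random(64)  # hypothetical 64-dim expression coefficients
    tgt_neutral = rng.random(64)
    src_now = src_neutral + 0.1 * rng.standard_normal(64)
    tgt_now = transfer_expression(src_now, src_neutral, tgt_neutral)
    assert tgt_now.shape == (64,)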

As Robots Are Fast Becoming Our Co-workers, We Take A Look At When And Where This Alliance Began

While physical robots such as drones and self-driving delivery vehicles are on the rise, software robots are becoming more and more common in the workplace, automating front- and back-office functions across a variety of industries and sectors.

US to sail submarine drones in South China Sea

Geoff Dyer for CNBC:  As it watches China build up its presence in the South China Sea, one reclaimed island at a time, the US military is betting on a new technology to help retain its edge — submarine drones. During the past six months, the Pentagon has started to talk publicly about a once-secret program to develop unmanned undersea vehicles, the term given to the drone subs that are becoming part of its plan to deter China from trying to dominate the region. Ashton Carter, US defense secretary, made special mention of drone subs in a speech about military strategy in Asia and hinted at their potential use in the South China Sea, which has large areas of shallower water. The Pentagon's investment in subs "includes new undersea drones in multiple sizes and diverse payloads that can, importantly, operate in shallow water, where manned submarines cannot", said Mr Carter, who visited a US warship in the South China Sea on Friday.   Cont'd...

Interested in UAVs? Visit AUVSI Xponential 2016

This show has all of the tools UAV developers need, along with innovative software to manage your entire commercial UAS operation. It's a show you don't want to miss.

AUVSI XPONENTIAL 2016 - What to Expect from MICROMO!

MICROMO has been part of XPONENTIAL/AUVSI since 2010. Our team of Application Engineers will be at the show.

Zora, The First Social Robot Already Widely Used In Healthcare

Controlled via a tablet by health professionals, Zora can lead a physical therapy class, read out TV programmes, weather forecasts or local news.

SCHAFT Unveils Awesome New Bipedal Robot at Japan Conference

From Evan Ackerman at IEEE Spectrum: Right now, the New Economic Summit (NEST) 2016 conference is going on in Tokyo, Japan. One of the keynote speakers is Andy Rubin. Rubin was in charge of Google’s robotics program in 2013, when the company (now Alphabet) acquired a fistful of some of the most capable and interesting robotics companies in the world. One of those companies was SCHAFT, which originated at the JSK Robotics Laboratory at the University of Tokyo... ... SCHAFT co-founder and CEO Yuto Nakanishi climbed onstage to introduce his company’s new bipedal robot. He explains that the robot can climb stairs, carry a 60-kg payload, and step on a pipe and keep its balance. It can also move in tight spaces, and the video shows the robot climbing a narrow staircase by positioning its legs behind its body (1:22). In a curious part of the demo (1:36), the robot is shown cleaning a set of stairs with a spinning brush and what appears to be a vacuum attached to its feet... (article)

Robots Helping Grads Get Jobs

Not many students can claim hands-on experience with automation and robotics going into an interview. Looking at the question with a macro lens, our students are offered job opportunities because they are well-rounded, even at the sophomore level, when many accept summer- or semester-long internships.

Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction

From Kaan Yücer, Alexander Sorkine-Hornung, Oliver Wang, Olga Sorkine-Hornung: Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from highly cluttered background in light fields. A key insight and contribution of our paper is that a significant increase of the available input data can enable the design of novel, highly efficient approaches. In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, e.g. captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation both in 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer... (paper)
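
The local cue at the heart of the method can be sketched compactly. In an epipolar-plane image (EPI) assembled from a densely sampled light field, each scene point traces a line whose slope encodes its disparity, so nearby foreground sweeps across the frames faster than distant background. The snippet below is an illustrative assumption, not the authors' code: it turns local EPI gradients into a crude per-pixel foreground score.

    import numpy as np

    def epi_foreground_score(epi):
        # epi: (n_frames, width) stack of one image scanline across frames.
        grad_u, grad_x = np.gradient(epi.astype(float))
        # Intensity is constant along a point's trace, so the trace slope
        # dx/du (its disparity) follows from the ratio of the gradients.
        disparity = np.abs(grad_u) / (np.abs(grad_x) + 1e-6)
        return disparity / (disparity + 1.0)  # squash into [0, 1)

    epi = np.random.default_rng(2).random((200, 640))  # stand-in data
    score = epi_foreground_score(epi)
    assert score.shape == epi.shape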

Featured Product

BitFlow Introduces 6th Generation Camera Link Frame Grabber: The Axion

BitFlow has offered Camera Link frame grabbers for almost 15 years. This latest offering, our 6th generation, combines the power of CoaXPress with the requirements of Camera Link 2.0. Enabling a single- or two-camera system to operate at up to 850 MB/s per camera, the Axion-CL family is the best choice for a CL frame grabber. Like the Cyton-CXP frame grabber, the Axion-CL leverages features such as the new StreamSync system, a highly optimized DMA engine, and expanded I/O capabilities that provide unprecedented flexibility in routing. There are two options available: the Axion 1xE and the Axion 2xE. The Axion 1xE is compatible with one base, medium, full or 80-bit camera, offering PoCL (Power over Camera Link) on both connectors. The Axion 2xE is compatible with two base, medium, full or 80-bit cameras, offering PoCL on both connectors for both cameras. The Axion-CL is the culmination of the continuous improvements and updates BitFlow has made to its Camera Link frame grabbers.