Weiss Enters the Octagon: Unique Shaped Dial Plate Accommodates Robotic Inertia Performance and Ergonomic Efficiencies
By collaborating with WEISS on their intricate medical part subassembly system, Jerit Automation was able to achieve its next-generation octagonal design and performance goals.
Tina Amirtha for Benelux: In 2014, three software engineers decided to create a drone company in Wavre, Belgium, just outside Brussels. All were licensed pilots and trained in NATO security techniques. But rather than build drones themselves, they decided they would upgrade existing radio-controlled civilian drones with an ultra-secure software layer to allow the devices to fly autonomously. Their company, EagleEye Systems, would manufacture the onboard computer and design the software, while existing manufacturers would provide the drone body and sensors. Fast-forward to the end of March this year, when the company received a Section 333 exemption from the US Federal Aviation Administration to operate and sell its brand of autonomous drones in the US. The decision came amid expectations that the FAA will loosen its restrictions on legal drone operations and issue new rules to allow drones to fly above crowds. Cont'd...
From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn’t believe what I was seeing—actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor’s delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that’s exactly what happened in Sunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details) Here is the stage direction that led to Middleditch’s character vomiting an eyeball early in the film:
C (smiles) I don’t know anything about any of this.
H (to Hauk, taking his eyes from his mouth) Then what?
H2 There’s no answer.
A true system doesn't only take the technology into account, but also the processes and human aspects.
Marrying Aerial MOB's deep operational experience and IP portfolio with 5D's robust autonomy and behavioral technology bridges many of the gaps in delivering valuable products to industrial clients, such as those in oil and gas, utilities, and construction, among others.
From Vikash Kumar at University of Washington: Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control - following pre-defined trajectories or planning online with reduced models - are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed "ADROIT" - a ShadowHand skeleton equipped with faster and more compliant actuation... (website)
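The core idea in the abstract above, re-planning online with a model at every control step, can be illustrated with a minimal random-shooting model-predictive control loop. This is a sketch only: the 1-D point-mass dynamics, cost weights, and sampling scheme below are illustrative assumptions, not the authors' full-physics hand model or optimizer.

```python
import numpy as np

def simulate(state, controls, dt=0.05):
    """Toy stand-in for the physics model: a 1-D point mass
    (position, velocity) driven by bounded acceleration."""
    pos, vel = float(state[0]), float(state[1])
    for u in controls:
        vel += float(u) * dt
        pos += vel * dt
    return np.array([pos, vel])

def mpc_step(state, target, horizon=10, samples=256, rng=None):
    """Random-shooting MPC: sample candidate control sequences, roll each
    out through the model, return the first action of the cheapest one."""
    rng = rng if rng is not None else np.random.default_rng(0)
    candidates = rng.uniform(-1.0, 1.0, size=(samples, horizon))

    def cost(seq):
        end = simulate(state, seq)
        # Reach the target position and settle (small terminal velocity).
        return abs(end[0] - target) + 0.1 * abs(end[1])

    best = min(candidates, key=cost)
    return float(best[0])

# Online planning: re-plan from the current state at every step,
# execute only the first action, then repeat.
state = np.array([0.0, 0.0])
for step in range(100):
    u = mpc_step(state, target=1.0, rng=np.random.default_rng(step))
    state = simulate(state, [u])
```

Only the high-level task (the target) is specified; the trajectory emerges from re-solving the optimization at every step, which is the "online planning without simplifications" the abstract describes, scaled down to one degree of freedom.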
MIT News via Larry Hardesty for RoboHub: In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. Cont'd...
From Manuel Ruder, Alexey Dosovitskiy, Thomas Brox of the University of Freiburg: In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handed was beyond imagination. Nowadays computers provide new possibilities. We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively... (pdf paper)
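The stabilizing ingredient the abstract mentions, loss functions applicable to videos, can be sketched as a temporal consistency penalty: the current stylized frame should match the previous stylized frame warped along the optical flow, except where occlusion makes the flow unreliable. The function names, nearest-neighbour warping, and mask convention below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def warp(frame, flow):
    """Warp a frame backwards along an optical-flow field
    (nearest-neighbour sampling, for brevity)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys - flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - flow[..., 0]).round().astype(int), 0, w - 1)
    return frame[src_y, src_x]

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Penalize deviation from the warped previous stylized frame,
    ignoring pixels where the mask is 0 (occluded / unreliable flow)."""
    warped = warp(stylized_prev, flow)
    diff = (stylized_t - warped) ** 2
    return float((occlusion_mask[..., None] * diff).mean())

# A static scene (identical frames, zero flow) incurs no temporal loss.
prev = np.zeros((4, 4, 3))
cur = prev.copy()
flow = np.zeros((4, 4, 2))
mask = np.ones((4, 4))
loss = temporal_loss(cur, prev, flow, mask)
```

In practice a term like this is added to the usual per-frame style and content losses, which is what discourages the flicker that makes naive frame-by-frame stylization unstable.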
Designed to navigate freely and dynamically amongst a human workforce, TORU operates between regular shelves, picking a wide range of objects.
Lee Mathews for Geek: Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed. Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along. Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow. For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces. Cont'd...
From Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination... (full paper)
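The dense photometric consistency measure the abstract relies on for tracking can be sketched as a per-pixel energy: the squared color difference between the re-rendered face model and the observed video frame, restricted to pixels the model actually covers. The function name, visibility mask, and normalization below are illustrative assumptions, not the authors' energy formulation.

```python
import numpy as np

def photometric_energy(rendered, observed, visibility):
    """Mean squared color difference between a re-rendered face model
    and a video frame, over pixels where the model is visible."""
    diff = (rendered.astype(float) - observed.astype(float)) ** 2
    masked = visibility[..., None] * diff
    covered = max(float(visibility.sum()), 1.0)   # avoid divide-by-zero
    return float(masked.sum() / (covered * rendered.shape[-1]))

# A perfect re-render of the observed frame has zero photometric energy;
# a tracker adjusts expression parameters to drive this energy down.
frame = np.full((8, 8, 3), 0.5)
vis = np.ones((8, 8))
energy = photometric_energy(frame, frame, vis)
```

Minimizing an energy of this shape over the face model's expression parameters, for both the source and target streams, is what lets the system track expressions densely rather than from sparse landmarks alone.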
While there are an increasing number of 'physical' robots such as drones and self-driving delivery vehicles, software robots are becoming more and more common in the workplace, automating front and back office functions across a variety of industries and sectors.
Geoff Dyer for CNBC: As it watches China build up its presence in the South China Sea, one reclaimed island at a time, the US military is betting on a new technology to help retain its edge — submarine drones. During the past six months, the Pentagon has started to talk publicly about a once-secret program to develop unmanned undersea vehicles, the term given to the drone subs that are becoming part of its plan to deter China from trying to dominate the region. Ashton Carter, US defense secretary, made special mention of drone subs in a speech about military strategy in Asia and hinted at their potential use in the South China Sea, which has large areas of shallower water. The Pentagon's investment in subs "includes new undersea drones in multiple sizes and diverse payloads that can, importantly, operate in shallow water, where manned submarines cannot", said Mr Carter, who visited a US warship in the South China Sea on Friday. Cont'd...
This show has all of the tools UAV developers need, along with innovative software to manage your entire commercial UAS operation. It's a show you don't want to miss.
MICROMO has been part of XPONENTIAL/AUVSI since 2010. Our team of Application Engineers will be at the show.
The ATI Robotic Tool Changer provides the flexibility to automatically change end-effectors or other peripheral tooling. These tool changers are designed to function reliably for millions of cycles at rated load while maintaining extremely high repeatability. For this reason, the ATI Tool Changer has become the number-one tool changer of choice around the world. ATI Tool Changer models cover a wide range of applications, from very small payloads to heavy payload applications requiring substantial moment capacity.