Robot wars: Boston Dynamics fell out with Google over humanoid Atlas

Mary-Ann Russon for International Business Times:  When former Boston Dynamics employees released video of humanoid robot Atlas walking unassisted over difficult terrain, such as rocks and snow, Google was reportedly displeased, despite the research receiving high praise from roboticists and wowing the public. According to insiders speaking to Tech Insider, the real reason Google is selling off Boston Dynamics is, by and large, that the robotics firm was unwilling to fall in line with the internet giant's vision of a consumer robot for the home. Google reportedly envisioned the firm as one of nine in a division called Replicant. Initially, under the guidance of Android co-founder Andy Rubin, the firms would continue with existing research and Google would see what ideas and innovations they came up with.   Cont'd...

Chrysalix partners with Dutch RoboValley on €100 million robotics fund

Terry Dawes for Cantech Letter:  Vancouver-based Chrysalix Venture Capital has announced a €100 million fund aimed at driving the global robotics revolution, in partnership with RoboValley, a centre for robotics commercialization based at the Delft University of Technology in the Netherlands. The RoboValley Fund is Chrysalix's first robotics fund, and will concentrate on disbursing seed and Series A rounds of funding to early-stage companies developing component technology, intelligent software, and other breakthrough robotics technologies. “Robotics is predicted to be the next big step in the digital revolution, having an unprecedented impact on the way that we live, and provides an answer to some of the grand challenges of the 21st Century,” said RoboValley managing director Arie van den Ende. “Together with Chrysalix's long-standing expertise in commercializing early-stage industrial innovations, the RoboValley Fund will bring much-needed capital and accelerated paths to market for our most promising next-generation robotics technologies.”   Cont'd...

OpenAI Gym Beta

From the OpenAI team: We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games), and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community.  Getting started: If you'd like to dive in right away, you can work through our tutorial... (full intro post)
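What makes Gym environments interchangeable is a small shared reset()/step() interface: one agent loop can drive any environment. As an illustration, here is a minimal sketch of that loop in plain Python, using a toy stand-in environment (the ToyWalk class below is invented for this example and is not part of Gym; real Gym environments are created with gym.make and expose the same interface):

```python
class ToyWalk:
    """Illustrative stand-in for a Gym environment: walk from 0 to 10.

    Real Gym environments (e.g. gym.make("CartPole-v0")) expose the
    same reset()/step() interface used below.
    """
    def __init__(self):
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position  # initial observation

    def step(self, action):
        # action: 0 = step left, 1 = step right
        self.position += 1 if action == 1 else -1
        done = self.position >= 10          # episode ends at position 10
        reward = 1.0 if done else 0.0
        return self.position, reward, done, {}  # obs, reward, done, info

def run_episode(env, policy, max_steps=100):
    """Generic agent loop: works with any object exposing reset()/step()."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        obs, reward, done, info = env.step(policy(obs))
        total_reward += reward
        if done:
            break
    return total_reward

always_right = lambda obs: 1
print(run_episode(ToyWalk(), always_right))  # reaches 10, total reward 1.0
```

Because the loop only touches the reset()/step() contract, swapping the toy environment for a real Gym one changes nothing in the agent code, which is the point of the shared interface.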

Foxconn Replaces 60,000 Labourers With Robots in China

Subhrojit Mallick for GIZMODO India:  Apple and Samsung phone manufacturer Foxconn has already taken a step towards the dystopian future. The South China Morning Post reported that the manufacturing giant has replaced 60,000 labourers with robots, cutting the workforce at one Foxconn factory from 110,000 to 50,000 and marking a huge shift towards automation of routine jobs. The Foxconn Technology Group confirmed to the BBC that it is automating many of the manufacturing tasks associated with its operations by introducing robots, but maintained that the move will not lead to long-term job losses.   Cont'd...

​Forget self-driving cars: What about self-flying drones?

Tina Amirtha for Benelux:  In 2014, three software engineers decided to create a drone company in Wavre, Belgium, just outside Brussels. All were licensed pilots and trained in NATO security techniques. But rather than build drones themselves, they decided they would upgrade existing radio-controlled civilian drones with an ultra-secure software layer to allow the devices to fly autonomously. Their company, EagleEye Systems, would manufacture the onboard computer and design the software, while existing manufacturers would provide the drone body and sensors. Fast-forward to the end of March this year, when the company received a Section 333 exemption from the US Federal Aviation Administration to operate and sell its brand of autonomous drones in the US. The decision came amid expectations that the FAA will loosen its restrictions on legal drone operations and issue new rules to allow drones to fly above crowds.   Cont'd...

SUNSPRING by 32 Tesla K80 GPUs

From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn't believe what I was seeing: actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor's delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that's exactly what happened in Sunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details) Here is the stage direction that led to Middleditch's character vomiting an eyeball early in the film:

C (smiles) I don't know anything about any of this.

H (to Hauk, taking his eyes from his mouth) Then what?

H2 There's no answer.

China's Big Bid For Germany's Industry 4.0 Technology

Klaus E. Meyer for Forbes:   Midea, the Chinese household appliances (“white goods”) manufacturer, just made what analysts called an ‘incredibly high’ bid for German robot maker Kuka. This acquisition would take the Chinese investor right to the heart of Industry 4.0: Kuka is a leading manufacturer of multifunctional robots that represent an important building block for enterprises upgrading their factories with full automation, the latest human-machine interface functionality, and machine-to-machine communication. Midea wants a 30% stake in Kuka and has offered €115 per share. Kuka's shares traded at €84 the day before and had already risen 60% since the beginning of the year. This offer values Kuka at €4.6 billion, which means Midea's 30% stake would be worth €1.4 billion, on par with Beijing Enterprise's February 2016 takeover of recycling company EEW, the largest Chinese acquisition of a German firm to date. Midea's takeover bid underscores Chinese interest in German Industry 4.0 technology; in January 2016, ChemChina paid €925 million for Munich-based KraussMaffei machine tools, in part because of their advances into Industry 4.0. Recent smaller Chinese acquisitions in the German machine tool industry, which include the partial acquisitions of H.Stoll by the ShangGong Group and of Manz by the Shanghai Electric Group, are in part motivated by the objective to partake in the latest Industry 4.0 developments.   Cont'd...

Tether-free actuator hailed as soft robotics breakthrough

Jon Excell for The Engineer:  Designed by a team at the Max Planck Institute for Intelligent Systems in Stuttgart, the new device is claimed to have considerable advantages over existing pneumatically powered soft actuators because it doesn't require a tether. The device consists of a dielectric elastomer actuator (DEA): a membrane made of hyperelastic material, like a latex balloon, with flexible (or ‘compliant’) electrodes attached to each side. The stretching of the membrane is regulated by means of an electric field between the electrodes: when a voltage is applied, the electrodes attract each other and squeeze the membrane. By attaching multiple such membranes, the place of deformation can be shifted controllably in the system, displacing air between two chambers. The membrane material has two stable states; in other words, it can hold either of two volume configurations at a given pressure, with no energy input needed to maintain the larger one. Thanks to this bi-stability, the researchers are able to move air between a more highly inflated chamber and a less inflated one. They do this by applying a voltage to the membrane of the smaller chamber, which responds by stretching and sucking air out of the other bubble.   Cont'd...

Real-time behaviour synthesis for dynamic Hand-Manipulation

From Vikash Kumar at University of Washington: Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control, following pre-defined trajectories or planning online with reduced models, are both inapplicable. Dexterous manipulation is so sensitive to small variations in contact force and object location that it seems to require online planning without any simplifications. Here we demonstrate for the first time online planning (or model-predictive control) with a full physics model of a humanoid hand, with 28 degrees of freedom and 48 pneumatic actuators. We augment the actuation space with motor synergies which speed up optimization without removing flexibility. Most of our results are in simulation, showing nonprehensile object manipulation as well as typing. In both cases the input to the system is a high-level task description, while all details of the hand movement emerge online from fully automated numerical optimization. We also show preliminary results on a hardware platform we have developed, "ADROIT": a ShadowHand skeleton equipped with faster and more compliant actuation... (website)
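The online planning (model-predictive control) loop described above follows a simple pattern: at every control step, re-optimize an action sequence over a short horizon against a physics model, apply only the first action, then replan from the new state. The sketch below illustrates that loop at toy scale using random-shooting optimization on a 1D point mass; this is purely illustrative, as the actual work optimizes trajectories over a full MuJoCo hand model with far more capable solvers.

```python
import random

def simulate(state, actions, dt=0.1):
    """Toy 1D point-mass model: roll the state forward under an action sequence."""
    pos, vel = state
    for a in actions:
        vel += a * dt
        pos += vel * dt
    return pos, vel

def mpc_step(state, target, horizon=10, samples=200):
    """Random-shooting MPC: sample candidate action sequences, score each
    rollout by terminal distance to target (plus a velocity penalty), and
    return only the FIRST action of the best sequence."""
    best_cost, best_action = float("inf"), 0.0
    for _ in range(samples):
        actions = [random.uniform(-1, 1) for _ in range(horizon)]
        pos, vel = simulate(state, actions)
        cost = (pos - target) ** 2 + 0.1 * vel ** 2
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

random.seed(0)
state, target = (0.0, 0.0), 1.0
for _ in range(100):                  # replan at every control step
    a = mpc_step(state, target)
    state = simulate(state, [a])      # apply only the first action
print(state[0])                       # ends near the target position 1.0
```

The key property, as in the hand-manipulation work, is that no trajectory is fixed in advance: the behaviour emerges online from repeated short-horizon optimization against the model.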

Ingestible origami robot

MIT News via Larry Hardesty for RoboHub:  In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT's Department of Electrical Engineering and Computer Science.   Cont'd...

These Five Exponential Trends Are Accelerating Robotics

Alison E. Berman for Singularity Hub:  If you've been staying on top of artificial intelligence news lately, you may know that the games of chess and Go were two of the grand challenges for AI. But do you know what the equivalent is for robotics? It's table tennis. Just think about how the game requires razor-sharp perception and movement, a tall order for a machine. As entertaining as human vs. robot games can be, what they actually demonstrate is much more important. They test the technology's readiness for practical applications in the real world, like self-driving cars that can navigate around unexpected people in a street. Though we used to think of robots as clunky machines for repetitive factory tasks, a slew of new technologies are making robots faster, stronger, cheaper, and even perceptive, so that they can understand and engage with their surrounding environments. Consider Boston Dynamics' Atlas robot, which can walk through snow, move boxes, endure a hefty blow with a hockey stick from an aggressive colleague, and even regain its feet when knocked down. Not too long ago, such tasks were unthinkable for a robot. At the Exponential Manufacturing conference, robotics expert and director of Columbia University's Creative Machines Lab, Hod Lipson, examined five exponential trends shaping and accelerating the future of the robotics industry.   Cont'd...

Artistic Style Transfer for Videos

From Manuel Ruder, Alexey Dosovitskiy, Thomas Brox of the University of Freiburg: In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time. Doing this for a video sequence single-handedly was beyond imagination. Nowadays computers provide new possibilities. We present an approach that transfers the style from one image (for example, a painting) to a whole video sequence. We make use of recent advances in style transfer in still images and propose new initializations and loss functions applicable to videos. This allows us to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. We show that the proposed method clearly outperforms simpler baselines both qualitatively and quantitatively... (pdf paper)
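The central addition over per-frame style transfer is a temporal consistency loss: the current stylized frame is penalized for deviating from the previous stylized frame warped into its coordinates by optical flow, with a per-pixel weight that switches off at occlusions and motion boundaries. Here is a simplified numpy sketch of that loss term; the warping step and the paper's exact normalization are omitted, so this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np

def temporal_loss(current, warped_prev, mask):
    """Simplified temporal consistency loss in the spirit of Ruder et al.

    current, warped_prev: H x W x C float arrays; warped_prev is the
    previous stylized frame warped by optical flow into the current
    frame's coordinates (warping not shown here).
    mask: H x W array, 1 where the flow is reliable, 0 at occlusions
    and motion boundaries (those pixels contribute nothing).
    """
    diff = (current - warped_prev) ** 2      # squared per-pixel deviation
    per_pixel = diff.sum(axis=-1)            # sum over colour channels
    d = max(mask.sum(), 1)                   # normalize by reliable-pixel count
    return float((mask * per_pixel).sum() / d)

# toy example: one disagreeing pixel contributes only while the mask
# marks it as reliable
a = np.zeros((4, 4, 3))
b = np.zeros((4, 4, 3))
b[0, 0] = 1.0
mask = np.ones((4, 4))
print(temporal_loss(a, b, mask))   # 3 / 16 = 0.1875
mask[0, 0] = 0                     # mark that pixel as occluded
print(temporal_loss(a, b, mask))   # 0.0
```

Masking the loss at occlusions is what lets the method stay stable under large motion: newly revealed regions are free to take whatever stylization fits, instead of being dragged toward a meaningless warped value.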

Scientists develop bee model that will impact the development of aerial robotics

Phys.org:  Scientists have built a computer model that shows how bees use vision to detect the movement of the world around them and avoid crashing. This research, published in PLOS Computational Biology, is an important step in understanding how the bee brain processes the visual world and will aid the development of robotics. The study, led by Alexander Cope and his coauthors at the University of Sheffield, shows how bees estimate the speed of motion, or optic flow, of the visual world around them and use this to control their flight. The model is based on honeybees, as they are excellent navigators and explorers and use vision extensively in these tasks, despite having a brain of only one million neurons (in comparison to the human brain's 100 billion). Run within a virtual world, the model shows how bees are capable of navigating complex environments using a simple extension to the known neural circuits. It reproduces the detailed behaviour of real bees using optic flow to fly down a corridor, and also matches how their neurons respond.   Cont'd...
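Insect optic-flow models of this kind typically build on correlation-type elementary motion detectors, in which the delayed signal from one photoreceptor is multiplied with the undelayed signal from a neighbouring one, producing a direction-selective response that varies with image speed. A minimal sketch of that classic Hassenstein-Reichardt detector follows; it illustrates the general principle only and is not the Sheffield group's model.

```python
def reichardt_output(signal_a, signal_b, delay=1):
    """Correlation-type elementary motion detector.

    signal_a, signal_b: brightness over time at two neighbouring
    photoreceptors. The delayed signal from one receptor is multiplied
    with the undelayed signal from the other; subtracting the
    mirror-image term makes the summed output direction-selective.
    """
    out = 0.0
    for t in range(delay, len(signal_a)):
        out += (signal_a[t - delay] * signal_b[t]
                - signal_b[t - delay] * signal_a[t])
    return out

# a bright edge passes receptor A first, then receptor B (rightward motion)
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
print(reichardt_output(a, b))  # positive for rightward motion: 1.0
print(reichardt_output(b, a))  # negative for leftward motion: -1.0
```

Arrays of such detectors, pooled across the visual field, give an estimate of optic flow magnitude that a flight controller can balance between left and right eyes, which is the behaviour the corridor experiment probes.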

Billions Are Being Invested in a Robot That Americans Don't Want

Keith Naughton for Bloomberg Technology:  Brian Lesko and Dan Sherman hate the idea of driverless cars, but for very different reasons.  Lesko, 46, a business-development executive in Atlanta, doesn’t trust a robot to keep him out of harm’s way. “It scares the bejeebers out of me,” he says. Sherman, 21, a mechanical-engineering student at the University of Minnesota, Twin Cities, trusts the technology and sees these vehicles eventually taking over the road. But he dreads the change because his passion is working on cars to make them faster. “It’s something I’ve loved to do my entire life and it’s kind of on its way out,” he says. “That’s the sad truth.” The driverless revolution is racing forward, as inventors overcome technical challenges such as navigating at night and regulators craft new rules. Yet the rush to robot cars faces a big roadblock: People aren’t ready to give up the wheel. Recent surveys by J.D. Power, consulting company EY, the Texas A&M Transportation Institute, Canadian Automobile Association, researcher Kelley Blue Book and auto supplier Robert Bosch LLC all show that half to three-quarters of respondents don’t want anything to do with these models.   Cont'd...

The US service-sector jobs at risk from a robot revolution

Sam Fleming for Financial Times:  When Andy Puzder, chief executive of restaurant chains Carl’s Jr and Hardee’s, said in March that rising employment costs could drive the spread of automation in the fast-food sector, he tapped into a growing anxiety in the US. From touchscreen ordering systems to burger-flipping robots and self-driving trucks, automation is stalking an increasing number of professions in the country’s service sector, which employs the vast majority of the workforce. Two-fifths of US employees are in occupations where at least half their time is spent doing activities that could be automated by adapting technology already available, according to research from the McKinsey Global Institute. These include the three biggest occupations in the country: retail salespeople, store cashiers and workers preparing and serving food, collectively totalling well over 10m people. Yet evidence of human obsolescence is conspicuous by its absence in the US’s economic statistics. The country is in the midst of its longest private-sector hiring spree on record, adding 14.4m jobs over 73 straight months, and productivity grew only 1.4 per cent a year from 2007 to 2014, compared with 2.2 per cent from 1953 to 2007. Those three big occupations all grew 1-3 per cent from 2014 to 2015.  Cont'd...

Featured Product

BitFlow Introduces 6th Generation Camera Link Frame Grabber: The Axion

BitFlow has offered Camera Link frame grabbers for almost 15 years. This latest offering, our 6th generation, combines the power of CoaXPress with the requirements of Camera Link 2.0. Enabling a single- or two-camera system to operate at up to 850 MB/s per camera, the Axion-CL family is the best choice in CL frame grabbers. Like the Cyton-CXP frame grabber, the Axion-CL leverages features such as the new StreamSync system, a highly optimized DMA engine, and expanded I/O capabilities that provide unprecedented flexibility in routing. There are two options available: the Axion 1xE and the Axion 2xE. The Axion 1xE is compatible with one Base, Medium, Full or 80-bit camera, offering PoCL (Power over Camera Link) on both connectors. The Axion 2xE is compatible with two Base, Medium, Full or 80-bit cameras, offering PoCL on both connectors for both cameras. The Axion-CL is the culmination of the continuous improvements and updates BitFlow has made to its Camera Link frame grabbers.