Google's developing its own version of the Laws of Robotics

Graham Templeton for ExtremeTech:  Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like? That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons.   Cont'd...
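To make the "blind spot" idea concrete, here is a toy sketch only, not the DeepMind/FHI paper's actual formulation: a tabular Q-learning agent that simply excludes interrupted transitions from its updates, so interruptions never shape what it learns to value. The `interrupted` flag and the learning constants are assumptions for illustration.

```python
# Toy illustration of a learning "blind spot" (assumed setup, not the paper's method):
# transitions caused by a human interruption are excluded from learning, so the
# agent never learns to anticipate or resist being switched off.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = [0, 1, 2, 3]
Q = defaultdict(float)                      # Q[(state, action)] -> estimated value

def choose_action(state):
    # Standard epsilon-greedy action selection.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn_step(state, action, reward, next_state, interrupted):
    # The "blind spot": skip the update entirely when this step was an
    # interruption, so interruptions never bias the value estimates.
    if interrupted:
        return
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```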

OpenAI Gym Beta

From the OpenAI team: We're releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. It consists of a growing suite of environments (from simulated robots to Atari games) and a site for comparing and reproducing results. OpenAI Gym is compatible with algorithms written in any framework, such as TensorFlow and Theano. The environments are written in Python, but we'll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community. Getting started: If you'd like to dive in right away, you can work through our tutorial... (full intro post)
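For a sense of the API, a minimal random-agent loop against one of the bundled environments (CartPole-v0) looks roughly like this in the beta-era Gym interface:

```python
# Minimal OpenAI Gym example: run a random policy on CartPole-v0.
import gym

env = gym.make('CartPole-v0')
observation = env.reset()
for _ in range(1000):
    env.render()
    action = env.action_space.sample()            # random action from the action space
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()                 # start a new episode when this one ends
```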

Forget self-driving cars: What about self-flying drones?

Tina Amirtha for Benelux:  In 2014, three software engineers decided to create a drone company in Wavre, Belgium, just outside Brussels. All were licensed pilots and trained in NATO security techniques. But rather than build drones themselves, they decided they would upgrade existing radio-controlled civilian drones with an ultra-secure software layer to allow the devices to fly autonomously. Their company, EagleEye Systems, would manufacture the onboard computer and design the software, while existing manufacturers would provide the drone body and sensors. Fast-forward to the end of March this year, when the company received a Section 333 exemption from the US Federal Aviation Administration to operate and sell its brand of autonomous drones in the US. The decision came amid expectations that the FAA will loosen its restrictions on legal drone operations and issue new rules to allow drones to fly above crowds.   Cont'd...

SUNSPRING by 32 Tesla K80 GPUs

From Ross Goodwin on Medium: To call the film above surreal would be a dramatic understatement. Watching it for the first time, I almost couldn't believe what I was seeing—actors taking something without any objective meaning, and breathing semantic life into it with their emotion, inflection, and movement. After further consideration, I realized that actors do this all the time. Take any obscure line of Shakespearean dialogue and consider that 99.5% of the audience who hears that line in 2016 would not understand its meaning if they read it on paper. However, in a play, they do understand it based on its context and the actor's delivery. As Modern English speakers, when we watch Shakespeare, we rely on actors to imbue the dialogue with meaning. And that's exactly what happened in Sunspring, because the script itself has no objective meaning. On watching the film, many of my friends did not realize that the action descriptions as well as the dialogue were computer generated. After examining the output from the computer, the production team made an effort to choose only action descriptions that realistically could be filmed, although the sequences themselves remained bizarre and surreal... (medium article with technical details) Here is the stage direction that led to Middleditch's character vomiting an eyeball early in the film:
C (smiles) I don't know anything about any of this.
H (to Hauk, taking his eyes from his mouth) Then what?
H2 There's no answer.

Scientists develop bee model that will impact the development of aerial robotics

Phys.org:  Scientists have built a computer model that shows how bees use vision to detect the movement of the world around them and avoid crashing. This research, published in PLOS Computational Biology, is an important step in understanding how the bee brain processes the visual world and will aid the development of robotics. The study, led by Alexander Cope and his coauthors at the University of Sheffield, shows how bees estimate the speed of motion, or optic flow, of the visual world around them and use this to control their flight. The model is based on honeybees, as they are excellent navigators and explorers and use vision extensively in these tasks, despite having a brain of only one million neurons (in comparison to the human brain's 100 billion). Within the environment of a virtual world, the model shows how bees are capable of navigating complex environments by using a simple extension to the known neural circuits. The model then reproduces the detailed behaviour of real bees by using optic flow to fly down a corridor, and also matches up with how their neurons respond.   Cont'd...
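The corridor-following behaviour the model reproduces rests on a well-known control principle: bees balance the optic flow seen by each eye to stay centred, and hold total flow roughly constant to regulate speed. Below is a hedged sketch of that control law only, not the Sheffield model itself; the gains and flow setpoint are made-up illustration values.

```python
# Bee-inspired corridor centring (generic sketch, not the published model):
# steer away from the side with stronger optic flow, slow down when total flow rises.
def corridor_control(flow_left, flow_right, k_turn=0.5, k_speed=0.2, flow_setpoint=1.0):
    """flow_left / flow_right: average translational optic-flow magnitude (rad/s)
    seen by each eye. Returns (turn_rate, speed_adjustment)."""
    # Stronger flow on one side means that wall is closer; positive turn_rate here
    # means steering toward the weaker-flow (more distant) side.
    turn_rate = k_turn * (flow_right - flow_left)
    # Slow down when overall flow exceeds the setpoint, e.g. in a narrowing corridor.
    speed_adjustment = k_speed * (flow_setpoint - (flow_left + flow_right) / 2.0)
    return turn_rate, speed_adjustment
```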

Billions Are Being Invested in a Robot That Americans Don't Want

Keith Naughton for Bloomberg Technology:  Brian Lesko and Dan Sherman hate the idea of driverless cars, but for very different reasons.  Lesko, 46, a business-development executive in Atlanta, doesn’t trust a robot to keep him out of harm’s way. “It scares the bejeebers out of me,” he says. Sherman, 21, a mechanical-engineering student at the University of Minnesota, Twin Cities, trusts the technology and sees these vehicles eventually taking over the road. But he dreads the change because his passion is working on cars to make them faster. “It’s something I’ve loved to do my entire life and it’s kind of on its way out,” he says. “That’s the sad truth.” The driverless revolution is racing forward, as inventors overcome technical challenges such as navigating at night and regulators craft new rules. Yet the rush to robot cars faces a big roadblock: People aren’t ready to give up the wheel. Recent surveys by J.D. Power, consulting company EY, the Texas A&M Transportation Institute, Canadian Automobile Association, researcher Kelley Blue Book and auto supplier Robert Bosch LLC all show that half to three-quarters of respondents don’t want anything to do with these models.   Cont'd...

Zero Zero Hover Camera drone uses face tracking tech to follow you

Lee Mathews for Geek:  Camera-toting drones that can follow a subject while they’re recording aren’t a new thing, but a company called Zero Zero is putting a very different spin on them. It’s all about how they track what’s being filmed. Zero Zero’s new Hover Camera doesn’t require you to wear a special wristband like AirDog. There’s no “pod” to stuff in your pocket like the one that comes with Lily, and it doesn’t rely on GPS either. Instead, the Hover Camera uses its “eyes” to follow along. Unlike some drones that use visual sensors to lock on to a moving subject, the Hover Camera uses them in conjunction with face and body recognition algorithms to ensure that it’s actually following the person you want it to follow. For now, it can only track the person you initially select. By the time the Hover Camera goes up for sale, however, Zero Zero says it will be able to scan the entire surrounding area for faces.   Cont'd...
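As a rough illustration of vision-based person following (not Zero Zero's actual algorithms), one could combine OpenCV's stock face detector with a simple proportional controller that keeps the detected face centred in the frame. The command scaling, the hypothetical flight-command outputs, and the choice to follow the largest detection are all assumptions for the sketch.

```python
# Hedged sketch: follow a face by keeping it centred in the camera image.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def track_face(frame, k_yaw=0.002, k_climb=0.002):
    """Returns (yaw_command, climb_command) from a BGR camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return 0.0, 0.0                       # hold position when no face is detected
    # Follow the largest detection (a stand-in for "the selected person").
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    frame_h, frame_w = gray.shape
    err_x = (x + w / 2) - frame_w / 2         # horizontal offset from image centre
    err_y = (y + h / 2) - frame_h / 2         # vertical offset (image y grows downward)
    return k_yaw * err_x, -k_climb * err_y    # yaw toward the face, climb/descend to centre it
```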

Bring 3D printed robots to life with 'Ziro' hand-controlled robotics kit

Benedict for 3Ders.org:  Tech startup ZeroUI, based in San Jose, California, has launched an Indiegogo campaign for Ziro, the “world’s first hand-controlled robotics kit”. The modular kit has been designed to bring 3D printed creations to life, and has already surpassed its $30,000 campaign goal. It would be fair to say that the phenomenon of gesture recognition, throughout the wide variety of consumer electronics to which it has been introduced, has been a mixed success. The huge popularity of the Nintendo Wii showed that—for the right product—users were happy to use their hands and bodies as controllers, but for every Wii, there are a million useless webcam or smartphone functions, lying dormant, unused, and destined for the technology recycle bin.   Full Article:  

US to sail submarine drones in South China Sea

Geoff Dyer for CNBC:  As it watches China build up its presence in the South China Sea, one reclaimed island at a time, the US military is betting on a new technology to help retain its edge — submarine drones. During the past six months, the Pentagon has started to talk publicly about a once-secret program to develop unmanned undersea vehicles, the term given to the drone subs that are becoming part of its plan to deter China from trying to dominate the region. Ashton Carter, US defense secretary, made special mention of drone subs in a speech about military strategy in Asia and hinted at their potential use in the South China Sea, which has large areas of shallower water. The Pentagon's investment in subs "includes new undersea drones in multiple sizes and diverse payloads that can, importantly, operate in shallow water, where manned submarines cannot", said Mr Carter, who visited a US warship in the South China Sea on Friday.   Cont'd...

Over 1,000 Student-Led Robotics Teams Converge At VEX Worlds

VEX Worlds 2016 kicks off this week! Presented by the Robotics Education and Competition (REC) Foundation and the Northrop Grumman Foundation, this culminating event brings together the top 1,000 teams from around the world in one city and under one roof for one incredible celebration of robotics engineering, featuring the world's largest and fastest growing international robotics programs - the VEX IQ Challenge, the VEX Robotics Competition and VEX U. On April 20-23, at the Kentucky Exposition Center in Louisville, Ky., over 16,000 participants from 37 nations will come together to put their engineering expertise to the test as they seek to be crowned the Champions of VEX Worlds.   Follow the competition here:

Shockingly, Robots Are Really Bad at Waiting Tables

Evan Ackerman for IEEE Spectrum:  According to Chinese newspaper Workers’ Daily, two restaurants in Guangzhou, China, that gained some amount of notoriety for their use of robotic waiters have now been forced to close down. One employee said, “the robots weren’t able to carry soup or other food steady and they would frequently break down. The boss has decided never to use them again.” Yeah, we can’t say we’re surprised. As far as I can tell, all of these waiter robots can do essentially one thing: travel along a set path while holding food. They can probably stop at specific tables, and maybe turn or sense when something has been taken from them, but that seems to be about it. “Their skills are somewhat limited,” a robot restaurant employee told Workers’ Daily. “They can’t take orders or pour hot water for customers.” Those are just two of the many, many more skills that human servers have, because it’s necessary to have many, many more skills than this to be a good server.  Cont'd...

Toyota Expands AI, Robotics Research to Third Facility

Kirsten Korosec for Fortune:  Toyota  will expand the footprint of its artificial intelligence and robotics research center by adding a third facility in Ann Arbor, Mich. Gill Pratt, CEO of the Toyota Research Institute, made the announcement on Thursday during his keynote speech at Nvidia’s GPU Technology Conference in San Jose. The Ann Arbor facility will be located near the University of Michigan, where it will fund research in artificial intelligence, robotics, and materials science. Last year, the world’s largest automaker said it would invest $1 billion over the next five years in a research center for artificial intelligence to be based in Palo Alto, Calif. The institute aims to bridge the gap between research in AI and robotics in order to bring this technology to market. The technology is largely being developed for self-driving cars, but the institute is also researching and developing AI products for the home.   Cont'd...

Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction

From Kaan Yücer, Alexander Sorkine-Hornung, Oliver Wang, Olga Sorkine-Hornung: Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from highly cluttered background in light fields. A key insight and contribution of our paper is that a significant increase of the available input data can enable the design of novel, highly efficient approaches. In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, e.g. captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation both in 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer... ( paper )
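As a loose illustration of the "purely local gradient information contained in slices" that the abstract refers to, the sketch below estimates local line orientation in an epipolar-plane image (EPI) with a structure tensor. This is a generic technique, not the authors' algorithm; the smoothing scales are arbitrary, and the slope-to-depth relationship depends on the capture setup.

```python
# Generic EPI structure-tensor sketch (not the paper's method): the slope of the
# lines traced by scene points across views relates to their depth, and low
# coherence flags textureless regions where estimates would need propagation.
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_orientation(epi, sigma=1.0, rho=3.0):
    """epi: 2D array with angular axis s (rows) and spatial axis u (columns)."""
    # First-order Gaussian derivatives along each axis.
    d_s = gaussian_filter(epi, sigma, order=(1, 0))
    d_u = gaussian_filter(epi, sigma, order=(0, 1))
    # Smoothed structure-tensor components.
    J_uu = gaussian_filter(d_u * d_u, rho)
    J_ss = gaussian_filter(d_s * d_s, rho)
    J_us = gaussian_filter(d_u * d_s, rho)
    # Dominant local orientation; its slope encodes apparent motion across views.
    slope = np.tan(0.5 * np.arctan2(2.0 * J_us, J_uu - J_ss))
    # Coherence: close to 1 where a single orientation dominates, near 0 in flat regions.
    coherence = np.sqrt((J_uu - J_ss) ** 2 + 4.0 * J_us ** 2) / (J_uu + J_ss + 1e-8)
    return slope, coherence
```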

Robotics makes baby steps toward solving Japan's child care shortage

Roy Bishop for The Japan Times:  Child care is a hard job, but somebody, or something, has got to do it. Japanese researchers have developed androids to meet that need, which includes happily reading that fairy tale again and again and again. The androids, which were created by a team of education and robotics specialists at a research facility in Abiko, Chiba Prefecture, are part of a larger system called RoHo Care. Short for Robotic Hoikujo (day care center), RoHo is being touted as a high-tech solution to the staffing crisis that forced the Health, Labor and Welfare Ministry to announce emergency measures this week. “I never thought I’d see this day, but we’re now confident that RoHo could blaze a trail for child care worldwide,” said team leader Makoto Hara. At a briefing on Thursday, Hara introduced a “care-droid” prototype named Or-B, the core component of RoHo’s vision for day care assistance, and said it will undergo a trial run this summer before full-scale implementation in 2018.   Cont'd...

OpenROV Trident Pre-orders

From OpenROV: OpenROV Trident features:
Depth: capable of 100 m (will ship with a 25 m tether; longer tethers will be sold separately)
Mass: 2.9 kg
Top speed: 2 m/s
Run time: 3 hours
Connectivity: The data connection to Trident is a major evolution from the connection setup of the original OpenROV kit. It uses a neutrally buoyant tether to communicate with a towable buoy on the surface (radio waves don't travel well in water), and the buoy connects to the pilot using a long-range WiFi signal. Using a wireless towable buoy greatly increases the practical range of the vehicle while doing transects and search patterns, since a physical connection between the vehicle and the pilot doesn't need to be maintained. You can connect to the buoy and control Trident using a tablet or laptop from a boat or from the shore... (preorder $1,199.00)

Featured Product

Bota Systems - The SensONE 6-axis force torque sensor for robots

Our Bota Systems force torque sensors, like the SensONE, are designed for collaborative and industrial robots. The SensONE enables human-machine interaction, provides force, vision and inertia data, and offers "plug and work" operation for all platforms. The compact design is dustproof and water-resistant. The ISO 9409-1-50-4-M6 mounting flange makes integrating the SensONE sensor with robots extremely easy. No adapter is needed, only fasteners! The SensONE is a one-of-a-kind product and, at its price, the best solution for force feedback applications and collaborative robots. The SensONE is available in two communication options and includes software integration with TwinCAT, ROS, LabVIEW and MATLAB®.
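Since ROS integration is called out, here is a minimal sketch of consuming 6-axis force/torque readings on the ROS side, assuming the driver publishes standard geometry_msgs/WrenchStamped messages; the topic name /ft_sensor/wrench and the 20 N threshold are hypothetical, not taken from the Bota documentation.

```python
# Hedged ROS sketch: monitor force/torque readings from a (hypothetical) sensor topic.
import rospy
from geometry_msgs.msg import WrenchStamped

def on_wrench(msg):
    f = msg.wrench.force
    # React to measured contact forces, e.g. warn (or stop the robot) above a threshold.
    if abs(f.z) > 20.0:                      # newtons; example threshold only
        rospy.logwarn("Contact force %.1f N exceeds limit", f.z)

if __name__ == "__main__":
    rospy.init_node("ft_monitor")
    rospy.Subscriber("/ft_sensor/wrench", WrenchStamped, on_wrench)
    rospy.spin()
```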