Google's developing its own version of the Laws of Robotics

Graham Templeton for ExtremeTech: Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers, not the one chosen by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will be applied only in the ways we would like?

That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper, “Safely Interruptible Agents,” outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons. Cont'd...
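For the flavor of that “blind spot,” here is a minimal sketch, not the paper’s actual construction (which is worked out formally for off-policy reinforcement learners): a toy Q-learning agent that simply discards any transition during which a human interrupted it, so interruptions never feed back into what it learns. The environment, actions, reward values, and interruption signal here are all hypothetical.

import random
from collections import defaultdict

# Hypothetical toy setup: two actions, standard Q-learning hyperparameters.
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # maps (state, action) -> estimated value, default 0.0

def choose_action(state):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, interrupted):
    """Standard Q-learning update, except interrupted transitions are
    skipped entirely: the agent stays blind to whatever an interruption
    cost it, so interruption never becomes something to learn around."""
    if interrupted:
        return  # the blind spot: no learning from this transition
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Because an interrupted step never enters the update, the agent’s value estimates end up as if the interruption had never happened, which is the sense in which the mechanism is a blind spot rather than a killswitch: the off switch still works, but the agent never learns to resist it or to seek it out.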
