Google’s developing its own version of the Laws of Robotics

Graham Templeton for ExtremeTech:  Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?

That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons.
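The "blind spot" idea can be illustrated with a toy sketch (this is an illustrative example, not DeepMind's actual code): the paper observes that off-policy learners such as Q-learning are "safely interruptible," because the Q-update bootstraps from the best next action, max_a Q(s', a), rather than from whatever action was actually executed. Forcibly overriding the agent's actions therefore never teaches it that interruptions are costly. The gridworld, reward values, and parameters below are all invented for illustration.

```python
import random

# Illustrative sketch only: off-policy Q-learning on a toy 5-state chain.
# Action 1 moves right, action 0 moves left; reward 1.0 for reaching
# state 4, which ends the episode. An external "interruption" randomly
# overrides the agent's chosen action with "left".

def train(episodes=2000, interrupt_prob=0.3, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    alpha, gamma = 0.5, 0.9
    for _ in range(episodes):
        s = 0
        for _ in range(30):
            a = rng.choice((0, 1))             # exploratory behavior policy
            if rng.random() < interrupt_prob:  # external interruption:
                a = 0                          # override with "left"
            s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == 4 else 0.0
            # Off-policy update: the target uses the best next action,
            # so it ignores whether a future interruption will occur.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)])
                                  - q[(s, a)])
            s = s2
            if s == 4:
                break
    return q

q = train()
# Despite frequent overrides, the learned values still prefer moving
# right in every non-terminal state: the agent has not learned to fear
# or game the interruptions -- the "blind spot" in action.
assert all(q[(s, 1)] > q[(s, 0)] for s in range(4))
```

The same experiment run with an on-policy learner (such as SARSA, which bootstraps from the action actually taken next) would fold the interruptions into its value estimates, which is exactly the kind of "wrong lesson" the paper aims to exclude.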

