Google’s developing its own version of the Laws of Robotics

Graham Templeton for ExtremeTech:  Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?

That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons.
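The paper’s setting is reinforcement learning. As a rough illustrative sketch only — not the paper’s actual algorithm — the “blind spot” idea can be approximated in tabular Q-learning by simply excluding any experience gathered during an operator interruption from the value updates. The toy chain environment, the interruption policy, and all names below are assumptions made up for illustration:

```python
import random
from collections import defaultdict

random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)   # toy 1-D chain; reward at the right end

def step(s, a):
    """Deterministic chain dynamics: move left/right, reward 1 at the goal."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = defaultdict(float)

def q_update(s, a, r, s2, interrupted, alpha=0.5, gamma=0.9):
    # The "blind spot": experience from steps where the operator took
    # control is never used to update the value estimates, so the agent
    # cannot learn to resist or route around interruptions.
    if interrupted:
        return
    best = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

for episode in range(200):
    s = 0
    for t in range(20):
        a = random.choice(ACTIONS)
        # Hypothetical operator policy: interrupt in state 2 half the
        # time, forcing the agent back to the start state.
        interrupted = (s == 2 and random.random() < 0.5)
        if interrupted:
            s2, r = 0, 0.0
        else:
            s2, r = step(s, a)
        q_update(s, a, r, s2, interrupted)
        s = s2
```

Because the masked transitions never enter the update rule, the agent’s learned values end up looking as if the interruptions never happened — which is the sense in which the mechanism is a blind spot rather than an ordinary off-switch.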

