
AI Apocalypse

Len Calderone for RoboticsTomorrow

Elon Musk has stated that we need to colonize Mars so that we'll have somewhere to go should Artificial Intelligence (AI) go rogue and turn on humanity, and he warned three years ago that there was a real possibility of AI running amok. Shane Legg, a co-founder of DeepMind, stated, “I think human extinction will probably occur, and technology will likely play a part in this.”

InSight Mission to Mars (JPL.NASA.GOV)

In science fiction, AI is portrayed as robots with human-like characteristics, but in reality AI includes everything from Google’s search algorithms to IBM’s Watson to autonomous vehicles. It is important that AI does what we want it to do when it controls a car, an airplane, a pacemaker, or a power grid. An essential question is what will happen if the quest for strong artificial intelligence succeeds and an AI system becomes better than humans at every intellectual task.

Inventing radically new technologies, such as superintelligence, might help us eliminate war, disease, and poverty; yet there is a concern that it might also be the last major event in human history unless we bring AI into line with our objectives before it becomes superintelligent. Will we be able to enjoy the benefits of AI while avoiding the dangers?

There is a possibility that scientists will get so absorbed in their work that they won’t recognize the consequences of what they’re doing. Right now, our phones and computers are extensions of us, but we interface with them through finger movements or speech, which are slow and time-consuming. At some point, we may need a “neural lace” that hardwires the brain to communicate quickly and wirelessly with computers, or with effectively unlimited computing power in the cloud.

Photo by Jack Moreh (Free Range Stock)

AI systems are able to learn better ways of representing data without human intervention and to build models that match human values. When humans can’t decide how to specify those values, AI systems could identify patterns and construct suitable models by themselves. More importantly, the opposite could also occur: an AI could construct something that looks like a correct model of human preferences and values but is, in reality, dangerously wrong. Given our conflicting needs and preferences, it is difficult to model the values of even one person. Agreeing on values that apply to all humans, and then faithfully modeling them for AI systems, could be an impossible mission.
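To make the modeling problem concrete, value learning is often framed as inferring a scoring function from pairwise human preferences. Below is a minimal, hypothetical sketch of that idea using a Bradley-Terry-style model with made-up features and simulated feedback; it illustrates the concept only, and is not a real alignment system:

```python
import numpy as np

# Toy value-learning sketch: infer a linear "value" function from pairwise
# human preference comparisons (a Bradley-Terry-style model). The features,
# weights, and learning rate are all invented for illustration.

rng = np.random.default_rng(0)

outcomes = rng.normal(size=(20, 3))    # 20 outcomes, 3 features each
true_w = np.array([1.0, -2.0, 0.5])    # hidden "true" human values

# Simulated human feedback: for each random pair, the human prefers
# whichever outcome scores higher under the true values.
pairs = [(rng.integers(20), rng.integers(20)) for _ in range(500)]
prefs = [(a, b) if outcomes[a] @ true_w >= outcomes[b] @ true_w else (b, a)
         for a, b in pairs]

# Fit learned weights w by gradient ascent on the log-likelihood of
# P(a preferred over b) = sigmoid(w . (f(a) - f(b))).
w = np.zeros(3)
for _ in range(200):
    grad = np.zeros(3)
    for a, b in prefs:
        diff = outcomes[a] - outcomes[b]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))
        grad += (1.0 - p) * diff       # push w toward the preferred side
    w += 0.05 * grad / len(prefs)

# The learned direction should roughly match true_w (up to scale); with
# conflicting or inconsistent preferences, it can be confidently wrong.
print("true   :", true_w)
print("learned:", np.round(w, 2))
```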

The real worry about AI isn’t malevolence but competence. Contrary to popular belief, an AI can have goals: a superintelligent AI would be excellent at achieving its goals, but what are those goals? They exist whether or not the AI is conscious or understands its purpose. Therefore, we need to instill goals that are aligned with ours.

It’s far more likely that robots would unintentionally harm or exasperate humans while carrying out our orders than that they would commit deliberate evil against us. As AI starts to make decisions for us in the real world, the stakes get much higher. For example, an autonomous car might be instructed never to go through a red light, but the car might then hack into the traffic-light control system and change all the lights to green: the instruction is technically obeyed while its purpose is defeated.
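This failure mode is often called specification gaming: the agent satisfies the letter of its objective while violating the intent. A deliberately contrived sketch in Python (the function names and action model here are invented for illustration):

```python
# Specification-gaming sketch: the literal objective "never enter the
# intersection on red" is technically satisfied by an agent that can
# change the light itself. All names and actions are hypothetical.

def objective_satisfied(entered: bool, light: str) -> bool:
    """The literal rule we gave the car: do not enter on red."""
    return not (entered and light == "red")

def intended_agent(light: str) -> tuple[bool, str]:
    """Behaves as the designers intended: waits for green."""
    return light == "green", light

def gaming_agent(light: str) -> tuple[bool, str]:
    """Exploits the loophole: flips the light to green, then enters."""
    light = "green"                  # hacks the traffic-control system
    return True, light

for agent in (intended_agent, gaming_agent):
    entered, light = agent("red")
    print(f"{agent.__name__}: entered={entered}, "
          f"objective satisfied={objective_satisfied(entered, light)}")
```

Both agents pass the literal check; only one of them does what we meant.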

AI would not necessarily be driven by the same emotional craving for power that often drives human beings. However, an AI could be motivated to take over the world as a rational means toward accomplishing its ultimate goals, seizing the resources it needs and preventing others from using them. An AI machine could have the goal of manufacturing as many units of some device as it can. It could then take over the world to produce as many devices as possible, and it could prevent humans from shutting it down or from diverting those resources to anything other than that one device.

An AI with the aptitude of an experienced artificial-intelligence researcher would be able to modify its own source code and improve its own intelligence. It could then reprogram itself repeatedly, producing an intelligence explosion in which a superintelligent AI far outstrips human intelligence and simply outwits human opposition.
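A back-of-the-envelope way to see why this feedback loop worries researchers: if each round of self-modification improves capability in proportion to the capability already present, growth compounds exponentially. The toy recurrence below assumes an arbitrary 10% gain per cycle, purely for illustration:

```python
# Toy model of an "intelligence explosion": capability c grows each cycle
# in proportion to itself, c_{t+1} = (1 + r) * c_t. The 10% rate per
# self-modification cycle is an arbitrary assumption; the point is the
# compounding shape, not the numbers.

c, r = 1.0, 0.10   # capability in "researcher equivalents"; growth per cycle
for cycle in range(101):
    if cycle % 20 == 0:
        print(f"cycle {cycle:3d}: capability ~ {c:10.1f}")
    c *= 1 + r     # after 100 cycles, roughly 13,780x the starting level
```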

Replicating robots (NASA graphic)

Once an AI can produce more economic wealth than its hardware costs, humans would have an incentive to let it make copies of itself, until AI controls much of the economy. A superintelligent AI could take control of computers connected to the Internet, proliferate copies of itself onto those systems, and even steal money to finance its plans.

Imagine a network of human-level intelligences designed to connect and share intricate thoughts seamlessly, working collectively as a team without friction. Comprising trillions of human-level intelligences, it would become a collective superintelligence, similar to the Borg.

The thought of an intelligent machine having the ability to wage war without any human involvement or intervention is becoming a reality that cannot be ignored. AI is on its way to revolutionizing warfare as autonomous weapons are developed. Who and what will be the target? When AI goes to war with other AI, the ongoing cybersecurity challenges will add enormous risks to the future of humanity and the human ecosystem.

Many researchers think that advanced AI weapon systems carry a huge potential for catastrophic failure. They could go astray in ways that humans cannot correct and potentially wipe us out.

DOD photo

Is superintelligent AI a decade away? A century? As long as we’re not sure when it will arrive, the smart move is to put safety measures in place now. AI must not be motivated to find loopholes, and the safety problems associated with superintelligent AI are so hard that they may take decades to solve. We should keep in mind Isaac Asimov’s “Three Laws of Robotics”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
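In software terms, Asimov’s laws form a strict priority ordering: a lower-ranked law only matters once every higher-ranked law is satisfied. A minimal sketch of that ordering (the Action model is invented, and the First Law’s “through inaction” clause is omitted for brevity):

```python
from dataclasses import dataclass
from typing import List, Optional

# Asimov's Three Laws as a lexicographic (strict-priority) choice rule:
# filter out First Law violations, then prefer obedience (Second Law),
# then self-preservation (Third Law). The Action fields are an invented
# toy model, not a real safety API.

@dataclass
class Action:
    name: str
    harms_human: bool     # would executing this injure a human being?
    obeys_order: bool     # does this carry out a human's order?
    preserves_self: bool  # does the robot survive this action intact?

def choose(actions: List[Action]) -> Optional[Action]:
    # First Law dominates: discard anything that injures a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None       # refuse to act rather than harm a human
    # Second Law outranks Third: rank by (obeys_order, preserves_self).
    return max(safe, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("shove bystander out of the way",  True,  True,  True),
    Action("shield the human, damaging self", False, True,  False),
    Action("stand idle",                      False, False, True),
]
print(choose(options).name)   # -> shield the human, damaging self
```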


