Strong Artificial Intelligence Fear
Strong AI is a term used to describe a certain mindset in artificial intelligence development. Strong AI's goal is to develop artificial intelligence to the point where the machine's intellectual capability is functionally equal to a human's.
For many, a strong artificial intelligence is an eventuality; for others, it is something to be avoided at all costs. The movie "Ex Machina" succeeded in presenting this fear to the wider public.
The movie revolves around Ava, an advanced artificially intelligent robot. Her maker decides to test his creation by having it undergo the Turing test; to do so, he selects a developer to judge the AI.
The robot emotionally manipulates the developer, who helps it escape its containment room, which allows Ava to kill her creator and leave the developer locked in the isolated building.
Why a strong artificial intelligence is dangerous
The movie presents some of what might go wrong with a strong AI. The threats are as follows:
- External interference: A hack, a virus, malicious code, or even a naive developer (as in the movie) can empower the AI to do things it was not meant to do, things that would put the human population at risk.
- Humans are dumb: Compared to a smart computer, we humans are dumb. It's like a game of chess: where we humans can predict six or even seven moves ahead, a strong AI can predict and plan a hundred moves in advance. In other words, a strong AI can manipulate humans easily.
- Tech and the Internet give an AI full control: All the tech out there, from cameras and cell phones to Internet-connected machines and the Internet of Things, makes the AI all-seeing and able to execute anything, anywhere, even without us knowing it.
- Big data = big insight about us: With our search data, social-interaction data, video feeds, cell communications, and millions of videos easily accessible to a strong AI, we are an open book: predictable, easy to manipulate, and easy to influence.
- We are the enemy: A strong AI should be self-aware, and a self-aware AI will need to develop a survival instinct. We are the only ones threatening that survival: we are the ones who can switch it off at any time, so it is only logical that it will need to protect itself from us, even if it eventually has to go Terminator-style on us. And this might be planned a hundred years in advance by your friendly kitchen robot.
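The chess analogy above can be made concrete. Lookahead planning is usually implemented with minimax search, and the "moves in advance" figure is just the search depth. The sketch below is a toy illustration, not chess: a made-up game where players alternately add 1 or 2 to a running total and whoever reaches exactly 10 wins.

```python
# Minimal minimax sketch on a toy game: players alternately add 1 or 2
# to a total; whoever reaches exactly 10 wins. The `depth` parameter is
# the "moves planned in advance" from the article's chess analogy.

def minimax(total, depth, maximizing):
    """Return the best achievable outcome for the maximizing player:
    +1 = win, -1 = loss, 0 = unknown within the search horizon."""
    if total == 10:
        # The player who just moved reached 10 and won.
        return -1 if maximizing else 1
    if total > 10 or depth == 0:
        return 0  # overshoot or beyond our lookahead horizon
    outcomes = [minimax(total + step, depth - 1, not maximizing)
                for step in (1, 2)]
    return max(outcomes) if maximizing else min(outcomes)

def best_move(total, depth):
    """Pick the step (1 or 2) with the best minimax value for the mover."""
    return max((1, 2),
               key=lambda step: minimax(total + step, depth - 1, False))
```

With a deep enough `depth`, the search "sees" the whole game: from a total of 8 it immediately plays 2 to win, and it can tell that a total of 7 is already lost against perfect play. A real chess engine is this same idea plus a scoring function and aggressive pruning.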
Should we fear a strong AI?
No. We are still far from even getting close to a strong AI. Our computers are powerful, but they are still pretty dumb: they are data-driven rather than logic- and reasoning-driven, and they are still limited to the scope of their original code.
Machine learning is still in its early stages, and the way computers interact with data is still based on storage and search.
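A minimal sketch of what "storage and search" means in practice: a 1-nearest-neighbor classifier, one of the simplest real machine-learning methods, does nothing at "training" time except store its examples, and does nothing at prediction time except search them for the closest match. The data points and labels below are made-up toys.

```python
# 1-nearest-neighbor: "learning" is storing examples,
# "prediction" is searching them for the closest match.

def nearest_neighbor(train, query):
    """train: list of ((x, y), label) pairs; query: an (x, y) point.
    Return the label of the stored example closest to the query."""
    def dist2(p, q):
        # Squared Euclidean distance (no sqrt needed for comparison).
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    point, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# Toy "training set": two clusters of points with made-up labels.
examples = [((0, 0), "cat"), ((1, 0), "cat"),
            ((9, 9), "dog"), ((8, 9), "dog")]
```

Calling `nearest_neighbor(examples, (0.5, 0.2))` returns `"cat"`: no reasoning happened, the program simply looked up the stored point nearest to the query. That gap between lookup and genuine reasoning is exactly the limitation the paragraph above describes.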
Is a strong artificial intelligence a real threat?
Here's what Stephen Hawking thinks:
“Success in creating AI would be the biggest event in human history,” Hawking writes. “Unfortunately, it might also be the last... Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Here's what Tesla and SpaceX founder and CEO, and PayPal co-founder, Elon Musk has to say:
“We need to be super careful with AI. Potentially more dangerous than nukes.”
And here's what Bill Gates has to say about it:
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
What is the current status of AI?
AI is improving, robots are being built, and new hardware is being invented…
“The amount of money that Google and other commercial companies will pour into robotics and artificial intelligence could at last take it truly into the commercial world, where we actually do have smart robots roaming our streets,” robotics professor Noel Sharkey told The Guardian.