In brief
- Superintelligent AI could manipulate us rather than destroy us.
- Experts fear we’ll hand over control without realizing it.
- The future may be shaped more by code than by human intent.
Many experts say that at some point artificial intelligence won’t just get better; it will become superintelligent. That means it will be vastly more intelligent than humans, as well as strategic, capable, and manipulative.
What happens at that point has divided the AI community. On one side are the optimists, also known as Accelerationists, who believe that superintelligent AI can coexist peacefully with humans and even help humanity. On the other are the so-called Doomers, who believe it poses a substantial existential risk.
In the Doomers’ worldview, once the singularity takes place and AI surpasses human intelligence, it could begin making decisions we don’t understand. It wouldn’t necessarily hate humans, but if it no longer needed us, it might simply regard us the way we regard a Lego brick or an insect.
Author: Jason Nelson