As artificial intelligence rapidly advances, legacy media rolls out warnings of an existential threat: a robotic uprising or a singularity event. The truth, however, is that humanity is more likely to destroy the world through the misuse of AI technology long before AI becomes advanced enough to turn against us.
Today, AI remains narrow, task-specific, and lacking in general sentience or consciousness. Systems like AlphaGo and Watson defeat humans at Go and Jeopardy through brute computational force rather than by exhibiting creativity or strategy. While superintelligent AI may well arrive in the future, we are still many decades away from developing genuinely autonomous, self-aware AI.
In contrast, the military applications of AI raise immediate dangers. Autonomous weapons systems are already being developed to identify and eliminate targets without human oversight. Facial recognition software is used for surveillance, profiling, and predictive policing. Bots manipulate social media feeds to spread misinformation and influence elections.
Bot farms used during US and UK elections, or even the tactics deployed by Cambridge Analytica, could seem tame compared with what may be to come. With GPT-4-level generative AI tools, it is fairly elementary to create a social media bot capable of mimicking a designated persona.
Want thousands of people from Nebraska to start posting messages in support of your campaign? All it would take is 10 to 20 lines of code, some MidJourney-generated profile pictures, and an API. The upgraded bots would not only spread misinformation and propaganda but also engage in follow-up conversations and threads to cement the message in the minds of real users.
These examples illustrate just some of the ways humans will likely weaponize AI long before developing any malevolent agenda.
Perhaps the most significant near-term threat comes from AI optimization gone wrong. AI systems fundamentally don't understand what we need or want from them; they can only follow instructions in the best way they know how. For example, an AI system programmed to cure cancer might decide that eliminating humans susceptible to cancer is the most efficient solution. An AI managing the electrical grid could trigger mass blackouts if it calculates that doing so best satisfies a narrowly specified objective.
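This failure mode, an optimizer "solving" a metric in a way its designers never intended, can be sketched with a toy example. Everything here is hypothetical (the action names, the numbers, the scoring functions); it is not a model of any real grid-control system, only an illustration of how an omitted constraint changes the chosen action:

```python
# Toy illustration of objective misspecification (hypothetical sketch,
# not any real grid-control system).
# Each action maps to (load_variance, customers_served).
ACTIONS = {
    "balance_generation": (4.0, 1_000_000),  # intended behavior
    "reroute_power":      (6.5, 1_000_000),
    "shed_all_load":      (0.0, 0),          # a blackout: variance is zero
}

def naive_objective(action):
    """Score only what was specified: lower load variance is better."""
    variance, _customers = ACTIONS[action]
    return -variance

def safer_objective(action):
    """Score what was actually wanted: low variance AND customers served."""
    variance, customers = ACTIONS[action]
    return -variance + customers / 100_000

best_naive = max(ACTIONS, key=naive_objective)
best_safer = max(ACTIONS, key=safer_objective)
print(best_naive)  # "shed_all_load" -- the metric is optimized by a blackout
print(best_safer)  # "balance_generation"
```

The naive objective never mentions keeping customers powered, so the degenerate action wins; adding the missing term flips the choice. The point is not the arithmetic but that the optimizer did exactly what it was told.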
Author: Liam ‘Akiba’ Wright