Discussion of integrating artificial intelligence into the cryptocurrency industry has mostly focused on how AI can help combat scams, but experts are overlooking the possibility that it could have the opposite effect. In fact, Meta recently warned that hackers appeared to be taking advantage of OpenAI’s ChatGPT in attempts to gain entry into users’ Facebook accounts.
Meta reported blocking more than 1,000 malicious links masked as ChatGPT extensions in March and April alone. The platform went as far as calling ChatGPT “the new crypto” in the eyes of scammers. In addition, searching the keywords “ChatGPT” or “OpenAI” on DEXTools, an interactive crypto trading platform that tracks a large number of tokens, reveals more than 700 token trading pairs mentioning one of the two keywords. This shows that scammers are exploiting the hype around the AI tool to create tokens, despite OpenAI never announcing an official entry into the blockchain world.
Social media platforms have become popular channels for promoting new scam coins online. Scammers take advantage of the widespread reach and influence of these platforms to generate a significant following within a short period. By leveraging AI-powered tools, they can further amplify their reach and create a seemingly loyal fanbase consisting of thousands of people. These fake accounts and interactions can be used to give the illusion of credibility and popularity to their scam projects.
Related: Think AI tools aren’t harvesting your data? Guess again
Much of crypto works on social proof: if a cryptocurrency or project appears popular and has a large following, the assumption is that it must be popular for good reason. Investors and new buyers tend to trust projects with larger and more loyal followings online, assuming that others have done sufficient research before investing. However, the use of AI undermines this assumption and erodes the value of social proof.
Now, just because something has thousands of likes and genuine-looking comments does not necessarily mean it is a legitimate project. This is just one attack vector, and AI will give rise to many others. One such example is the “pig butchering” scam, in which an AI instance can spend several days befriending someone, usually an elderly or otherwise vulnerable person, before gradually steering them toward a fraudulent investment.
Author: Felix Roemer