ChatGPT has been the talk of the tech sector for the past few months, and its rise has prompted further developments in generative AI, including rival chatbots like Google’s Bard and Microsoft’s Bing Chat. But alongside its immense potential, AI has also caught the attention of scammers. According to a new report published by Meta, there has been a steep increase in malware disguised as ChatGPT and similar AI software and tools.
The social media giant has also taken down a network of more than 100 China-based accounts that posed as organizations in the US and Europe to push pro-China propaganda. This included “warnings against boycotting the 2022 Beijing Olympics; allegations of US foreign policy in Africa,” “claims of comfortable living conditions for Uyghurs in China,” as well as “negative commentary about Uyghur activists and critics of the Chinese state.” Meta could not directly link the network to the Chinese government, although it said it found ties to individuals in China associated with a technology company.
The report has significant implications for the security of AI software and the protection of personal data. As adoption of AI tools grows, their popularity gives cybercriminals a convenient disguise for malware, and companies that distribute AI software must take steps to ensure their products are safe to download and use. As the technology becomes more widespread, malware campaigns that use AI as a lure are likely to multiply.
The takeaway is clear: cybercriminals are becoming more sophisticated in their tactics, and users must be increasingly vigilant to protect themselves. The rise of malware posing as AI chatbots underscores the need for stronger cybersecurity measures and the importance of staying informed about the latest threats.
By spreading their attack infrastructure across multiple platforms, scammers make it harder for any single tech company to detect their malicious activity and respond. Meta’s report identifies around ten new malware strains, including Ducktail and NodeStealer, that pose as ChatGPT browser extensions and productivity tools. These campaigns exploit the chatbot’s newfound popularity to target individuals through malicious browser extensions, ads, and various social media platforms, and to steal their account credentials.
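Meta has not published the extensions’ code, but the pattern it describes, an add-on requesting far broader access than a chat helper would ever need, is a well-known red flag. Below is a minimal, hypothetical Python sketch of auditing an unpacked browser extension’s manifest for that pattern; the permission list is an illustrative assumption, not Meta’s actual detection logic.

```python
import json
import sys
from pathlib import Path

# Permissions a legitimate "ChatGPT helper" extension rarely needs;
# this list is an illustrative assumption, not an official taxonomy.
RISKY_PERMISSIONS = {"cookies", "webRequest", "clipboardRead", "history", "tabs"}

def audit_manifest(extension_dir: str) -> list[str]:
    """Flag broad or credential-adjacent permissions in manifest.json."""
    manifest = json.loads(Path(extension_dir, "manifest.json").read_text())
    findings = []

    requested = set(manifest.get("permissions", []))
    for perm in sorted(requested & RISKY_PERMISSIONS):
        findings.append(f"requests risky permission: {perm}")

    # "<all_urls>" host access lets an extension read every page a user
    # visits, including webmail and social-media sessions where session
    # cookies and credentials live.
    hosts = manifest.get("host_permissions", [])
    if any(h in ("<all_urls>", "*://*/*") for h in hosts):
        findings.append("requests access to all sites via host_permissions")

    return findings

if __name__ == "__main__":
    for finding in audit_manifest(sys.argv[1]):
        print(f"[!] {finding}")
```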
“As an industry, we’ve seen this across other topics that are popular in their time such as crypto scams fueled by the immense interest in digital currency,” Guy Rosen, Meta’s Chief Information Security Officer, said. “So from a bad actor’s perspective, ChatGPT is the new crypto.”
Since March 2023, Meta claims to have blocked more than 1,000 malicious links used in generative AI-themed lures, shared the URLs with other tech companies, and reported multiple browser extensions and mobile apps tied to these campaigns. It identified the malware operations at different stages of their lifecycle and, in response, added new controls for business accounts on Facebook.
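Meta has not described its blocking pipeline, but sharing malicious URLs between companies typically boils down to a normalized blocklist lookup. The sketch below illustrates the idea; the example domains and the suffix-matching rule are invented for illustration and are not Meta’s implementation.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of AI-themed lure domains, of the kind a platform
# might share with its peers. These domains are invented for illustration.
BLOCKED_DOMAINS = {"chatgpt-free-download.example", "openai-desktop.example"}

def is_blocked(url: str) -> bool:
    """Check a URL's host (and its parent domains) against the blocklist."""
    host = (urlparse(url).hostname or "").lower()
    # Match "evil.example" as well as subdomains like "cdn.evil.example".
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("https://cdn.chatgpt-free-download.example/installer.zip"))  # True
print(is_blocked("https://openai.com/blog"))                                  # False
```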
The new business-account controls will help companies manage, audit, and limit who can become an account administrator, who can add other administrators, and who can perform sensitive actions. Meta is also launching a step-by-step tool to help businesses flag and remove malware from their enterprise devices, and will even recommend third-party malware scanners. Finally, a new account type for businesses, called “Meta Work” accounts, will let users access Facebook’s Business Manager tools without a personal Facebook account; it is set to roll out later this year.
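Meta has not detailed how these controls are implemented, but limiting who can become an administrator and who can perform sensitive actions is classic role-based access control. A minimal sketch under that assumption follows; the role names and action list are hypothetical, not Facebook’s actual permission model.

```python
from enum import Enum, auto

class Role(Enum):
    EMPLOYEE = auto()
    ADMIN = auto()
    OWNER = auto()

# Hypothetical mapping of actions to the minimum role required; the
# action names are illustrative, not Facebook's real permission model.
REQUIRED_ROLE = {
    "post_content": Role.EMPLOYEE,
    "run_ads": Role.ADMIN,
    "add_admin": Role.OWNER,            # only owners may grant admin rights
    "change_payment_method": Role.OWNER,
}

def can_perform(user_role: Role, action: str) -> bool:
    """Allow an action only if the user's role meets the required threshold."""
    # Default-deny: unknown actions require the highest role.
    required = REQUIRED_ROLE.get(action, Role.OWNER)
    return user_role.value >= required.value

# A compromised admin account can run ads but cannot mint new admins,
# which is the kind of containment Meta's controls appear to aim for.
assert can_perform(Role.ADMIN, "run_ads")
assert not can_perform(Role.ADMIN, "add_admin")
```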