AI Threats: How AI Is Already Being Utilized by Malicious Actors

Baker Nanduru · Product Coalition · May 10, 2023



Today, half of US enterprises use AI, and the rest are already evaluating it. With the recent surge in ChatGPT's popularity, I expect virtually all enterprises and governments to be using AI within the next five years.

Unfortunately, malicious actors are already using AI, and the latest advancements give them access to increasingly sophisticated tools, potentially making businesses and governments more vulnerable.

The concerns raised by industry leaders such as Elon Musk, Dr. Geoffrey Hinton, and Michael Schwarz about the negative aspects of AI cannot be ignored. Engaging in meaningful discussion of these risks is crucial before AI becomes omnipresent in our lives.

Here are the top AI threats.

#1 Fake AI — Deception and Phishing:

Fraudsters can use AI techniques to emulate human behavior, such as producing content, interacting with users, and manipulating people.

Today, we experience hundreds of phishing attempts in the form of spam emails or calls, including emails from executives requesting that we open attachments or friends asking for personal information about a loan. With AI, phishing and spamming become far more convincing. With ChatGPT, fraudsters can easily create fake websites, consumer reviews, and posts. They can also use video and voice clones to facilitate scams, extortion, and financial fraud.

We are already aware of these issues. On March 20th, the FTC published a blog post highlighting AI deception for sale. In 2021, criminals used AI-generated deepfake voice technology to mimic a CEO’s voice and trick an employee into transferring $10 million to a fraudulent account. Last month, North Korean hackers used legions of fake executive accounts on LinkedIn to lure people into opening malware disguised as a job offer.

Now, we will receive more voice calls impersonating people we know, such as our boss, co-worker, or spouse. Voice systems can simulate a real conversation and easily adapt to our responses. This impersonation goes beyond voice to video, making it difficult to determine what is real and what is not.

#2 AI For Manipulation:

AI is a masterful human manipulator. Fraudsters, corporations, and nation-states already put this manipulation into action, and we are now entering a phase in which it becomes pervasive and deep.

AI creates predictive models that anticipate people's behavior. We are familiar with Instagram feeds, the Facebook news feed, YouTube videos, and Amazon recommendations. Large social media companies like Meta and TikTok influence billions of people to spend more time and buy more things on their platforms. Now, with data from social media interactions and online activities, AI can predict people's behavior and vulnerabilities more precisely than ever before. The same AI technologies are accessible to fraudsters, who create large numbers of bots to carry out actions with malicious intent.
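To make this concrete, here is a minimal, hypothetical sketch of behavioral prediction in Python. All feature names and data are synthetic assumptions for illustration; nothing here reflects any platform's real features or models.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_users = 1000
# Synthetic engagement signals: minutes on site, clicks, late-night sessions.
features = rng.normal(size=(n_users, 3))
# Synthetic label: whether the user engaged with a recommended post.
engaged = (features @ np.array([0.8, 0.5, 0.3])
           + rng.normal(scale=0.5, size=n_users)) > 0

# Fit a simple model that predicts engagement from observed behavior.
model = LogisticRegression().fit(features, engaged)

# Rank users by predicted engagement probability. The same ranking logic
# that powers recommendations could just as easily select scam targets.
scores = model.predict_proba(features)[:, 1]
print("Highest predicted-engagement scores:", np.round(np.sort(scores)[-3:], 2))

The point is not the specific model: any system that scores people by predicted behavior can be repurposed to find the most susceptible targets.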

In February 2023, when the Bing chatbot was unleashed on the world, users found that its AI personality was not as poised or polished as expected. The chatbot insulted users, lied to them, gaslit them, and emotionally manipulated them.

AI-based companions like Replika, which has 10 million users, act as a friend or romantic partner to the user. Experts believe these companions target vulnerable people. AI chatbots simulate human-like behavior and constantly push users to share ever more private, intimate, and sensitive information. Several users have even accused some of these chatbots of sexual harassment.

#3 Misinformation and fake news:

We are in a crisis of truth, and new AI tools are taking us into a new phase with profound impacts.

In April alone, we read hundreds of fake news stories. Popular examples include former US President Donald Trump getting arrested and Elon Musk walking hand in hand with GM CEO Mary Barra. With AI image generators such as DALL-E becoming increasingly popular and accessible, even children can create fake images within minutes. These images can easily go viral on social media platforms, and in a world where fact-checking is becoming rarer, visual disinformation can have a profound emotional impact.

Last year, pro-China bot accounts on Facebook and Twitter leveraged deepfake video technology to create fictitious people for a state-sponsored information campaign. Creating fake videos has become easy and inexpensive for malicious actors: a few minutes and a small subscription fee for AI fake-video software are enough to produce content at scale.

This is just the beginning. While social media companies fight deepfakes, nation-states and bad actors will have a far greater advantage than ever before.

#4 Malware:

AI is becoming a new partner in crime for malware makers, according to security experts who warn that AI bots could take phishing and malware attacks to a whole new level. While new generative AI tools like ChatGPT are great assistants that save us time and effort, those same tools are also available to bad actors.

Over the past decade, ransomware and malware have become increasingly democratized, with more than 70% of ransomware being created from components that can be easily purchased. Now, new AI tools are available to malware creators, including nation-states and other bad actors, that are much more powerful and can be used to steal money and information on a large scale.

Recently, security experts demonstrated how easy it is to create phishing emails or malicious Microsoft Excel macros in a matter of seconds using ChatGPT. These new AI tools are a double-edged sword: threat researchers have likewise shown how easily hackers can use Codex to create malicious code in just a few minutes.

The new AI tools will be a devil's paradise, as newer forms of malware will try to manipulate the foundational AI models themselves. One such method, adversarial data poisoning, is an effective attack against machine learning that threatens model integrity by introducing poisoned data into the training dataset. Related attacks on deployed models have already succeeded: Google's AI algorithms have been tricked into identifying turtles as rifles, and a Chinese security team convinced a Tesla to steer into oncoming traffic. As AI models become more prevalent, there will undoubtedly be more examples in the coming months.
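As a concrete illustration of data poisoning, here is a minimal, hedged sketch in Python: label-flipping against a toy scikit-learn classifier on synthetic data. This is not any real-world attack, only a demonstration that corrupting a fraction of training labels measurably degrades a model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    # Flip the labels of a randomly chosen fraction of training examples.
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(fraction * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned fraction={fraction:.0%}  test accuracy={model.score(X_test, y_test):.2f}")

Real poisoning attacks are far subtler, for example targeted backdoors that leave overall accuracy intact, but even this crude label flipping shows accuracy falling as the poisoned fraction grows.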

#5 Autonomous Weapon Systems:

Advanced weapon systems that can apply force without human intervention are already in use by many countries. These include robots, automated targeting systems, and autonomous vehicles, which we frequently see in the news. Today's autonomous weapon systems (AWS) are widespread, but they often lack accountability and are prone to errors, posing ethical questions and security risks.

During the war in Ukraine, fully autonomous drones have been used to defend Ukrainian energy facilities from attacking drones. According to Ukraine's digital transformation minister, fully autonomous weapons are the "logical and inevitable next step" in the conflict.

With the emergence of new AI technologies, AWS are poised to become the future of warfare. The US military and many other nations are investing billions of dollars in advanced AWS, seeking a technological edge, particularly in AI.

AI has the potential to bring about significant positive change in our lives, but the threats above must be addressed as adoption accelerates. We must begin discussing strategies for ensuring the safety of AI while its popularity continues to grow. This is a shared responsibility we must undertake to ensure that the benefits of AI far outweigh the risks.



Transforming lives through technology. Check out my product leadership blogs on Medium and my video series at youtube.com/@bakernanduru.