FBI warns of hackers using artificial intelligence to create malware

Malware refers to any software designed to harm or exploit computer systems, networks, or users by stealing information or disrupting operations.

After months of warnings from tech executives about the dangers of artificial intelligence, the Federal Bureau of Investigation has a new list of concerns.

The agency has issued a stark warning to Americans about cybercriminals using AI tools, like ChatGPT, to create malicious code and launch attacks that had previously required much more effort.

The agency detailed its concerns in a call with journalists, explaining that AI chatbots have been used to help criminals carry out various types of illicit activity. One FBI official said bad actors are treating AI as a tool to supplement their schemes, using technologies like AI voice generators to impersonate trusted individuals and defraud people.

The bottom line, according to officials, is that these threats now require fewer people, less expertise, and less time, ultimately lowering the barrier to entry. The agency also said it is working with private companies to identify synthetically generated content online that's made with the help of AI.

SEE MORE: Cybersecurity firm finds compromised ChatGPT accounts on dark web

It's not the first time an alarm has been sounded about the potential threats of AI. Cybersecurity firm Group-IB reported that more than 26,000 compromised ChatGPT accounts were detected on the dark web in May and were being offered for sale. According to the firm, more employees are using tools like ChatGPT to optimize their work, but entries typed into the chatbot can include sensitive or proprietary information that hackers could exploit.

However, some companies have been wary of jumping on the chatbot train. Apple has already restricted employees from using ChatGPT. Other companies, like Verizon and JPMorgan Chase, have taken similar measures.

SEE MORE: Musk, tech leaders call for pause on 'out-of-control' AI race

Earlier this month, President Joe Biden hosted seven major tech companies at the White House to discuss ways to protect the public from the potential harms of AI. The companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — all made voluntary commitments to ensure their products are safe for consumers prior to launch.

One of those promises is a commitment to internal and external security testing, meaning independent, third-party experts will be allowed to review AI cybersecurity.

"I think that's a very important step," said Dr. Arati Prabhakar, lead science adviser to the president. "There are other fields where that happens [where] it's been very helpful, but I think it's going to be very constructive for AI."
