Generative AI makes Chinese, Iranian hackers more efficient, report says

01/30/2025

A report issued Wednesday by Google found that hackers from numerous countries, particularly China, Iran and North Korea, have been using the company’s artificial intelligence-enabled Gemini chatbot to supercharge cyberattacks against targets in the United States.

The company found — so far, at least — that access to publicly available large language models (LLMs) has made cyberattackers more efficient but has not meaningfully changed the kind of attacks they typically mount.

LLMs are AI models that have been trained, using enormous amounts of previously generated content, to identify patterns in human languages. Among other things, this makes them adept at producing functional computer code.

“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” the report found.

Generative AI offered some benefits for low-skilled and high-skilled hackers, the report said.

“However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, [the Google Threat Intelligence Group] anticipates the threat landscape to evolve in stride as threat actors adopt new AI technologies in their operations.”

Google’s findings align with earlier research released by other major U.S. AI players, OpenAI and Microsoft, which similarly found that threat actors had not achieved novel offensive strategies for cyberattacks through the use of public generative AI models.

The report clarified that Google works to disrupt the activity of threat actors when it identifies them.

Game unchanged 

“AI, so far, has not been a game changer for offensive actors,” Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, told VOA. “It speeds up some things. It gives foreign actors a better ability to craft phishing emails and find some code. But has it dramatically changed the game? No.”

Whether that might change in the future is unclear, Segal said. Also unclear is whether further developments in AI technology will more likely benefit people building defenses against cyberattacks or the threat actors trying to defeat them.

“Historically, defense has been hard, and technology hasn’t solved that problem,” Segal said. “I suspect AI won’t do that, either. But we don’t know yet.”

Caleb Withers, a research associate at the Center for a New American Security, agreed that there is likely to be an arms race of sorts, as offensive and defensive cybersecurity applications of generative AI evolve. However, it is likely that they will largely balance each other out, he said.

“The default assumption should be that absent certain trends that we haven’t yet seen, these tools should be roughly as useful to defenders as offenders,” he said. “Anything productivity enhancing, in general, applies equally, even when it comes to things like discovering vulnerabilities. If an attacker can use something to find a vulnerability in software, so, too, is the tool useful to the defender to try to find those themselves and patch them.”

Threat categories

The report breaks down the kinds of threat actors it observed using Gemini into two primary categories.

Advanced persistent threat (APT) actors refer to “government-backed hacking activity, including cyber espionage and destructive computer network attacks.” By contrast, information operation (IO) threats “attempt to influence online audiences in a deceptive, coordinated manner. Examples include sock puppet accounts [phony profiles that hide users’ identities] and comment brigading [organized online attacks aimed at altering perceptions of online popularity].”

The report found that hackers from Iran were the heaviest users of Gemini in both threat categories. APT threat actors from Iran used the service for a wide range of tasks, including gathering information on individuals and organizations, researching targets and their vulnerabilities, translating language and creating content for future online campaigns.

Google tracked more than 20 Chinese government-backed APT actors using Gemini “to enable reconnaissance on targets, for scripting and development, to request translation and explanation of technical concepts, and attempting to enable deeper access to a network following initial compromise.”

North Korean state-backed APTs used Gemini for many of the same tasks as their Iranian and Chinese counterparts, but also appeared to be attempting to exploit the service in their efforts to place “clandestine IT workers” in Western companies to facilitate the theft of intellectual property.

Information operations

Iran was also the heaviest user of Gemini when it came to information operation threats, accounting for 75% of detected usage, Google reported. Hackers from Iran used the service to create and manipulate content meant to sway public opinion, and to adapt that content for different audiences.

Chinese IO actors primarily used the service for research purposes, looking into matters “of strategic interest to the Chinese government.”

Russian hackers, whose presence in the APT category was minimal, were more prominent when it came to IO-related use of Gemini, using it not only for content creation but also to gather information about how to create and use online AI chatbots.

Call for collaboration

Also on Wednesday, Kent Walker, president of global affairs for Google and its parent company, Alphabet, used a post on the company’s blog to note the potential dangers posed by threat actors using increasingly sophisticated AI models, and to call on the industry and federal government “to work together to support our national and economic security.”

“America holds the lead in the AI race — but our advantage may not last,” Walker wrote.

Walker argued that the U.S. needs to maintain its narrow advantage in the development of the technology used to build the most advanced artificial intelligence tools. In addition, he said, the government must streamline procurement rules to “enable adoption of AI, cloud and other game-changing technologies” by the U.S. military and intelligence agencies, and establish public-private cyber defense partnerships.
