
Threat Actors Exploit AI Tools for Cyberattacks


March 9, 2026

Cybersecurity researchers are warning of a surge in AI-powered cyberattacks as threat actors increasingly adopt generative AI tools to automate and enhance malicious operations. CyberStrikeAI, an AI security testing platform, has been observed in hacker campaigns conducting rapid reconnaissance, crafting phishing lures, and developing malicious code. Meanwhile, OpenAI has confirmed that Chinese-linked threat actor groups have leveraged ChatGPT to craft phishing campaigns and bypass traditional defenses. Researchers have also discovered critical infrastructure vulnerabilities, such as a recently exposed flaw in which Google Cloud API keys unintentionally grant attackers unauthorized access to sensitive Gemini AI endpoints.

Hackers Adopt CyberStrikeAI for Automated Attacks

In a report, Will Thomas, Senior Threat Intel Advisor at Team Cymru, found that CyberStrikeAI was used by cybercriminals to automate the most time-consuming parts of a cyberattack. With CyberStrikeAI, even less-experienced attackers can quickly scan networks for weak spots, write malicious code, and generate highly convincing phishing emails in multiple languages. By lowering the technical barrier to entry, this tool allows hackers to launch faster, more sophisticated attacks against businesses at a much larger scale.

OpenAI Confirms Chinese-Linked Threat Actors Using ChatGPT

In a separate trend, OpenAI confirmed that threat actors have been caught using ChatGPT to improve their cyber espionage operations. Rather than inventing entirely new exploits, these groups use AI to accelerate social engineering, translate bulk messaging, and debug malicious code. OpenAI also notes that cybercriminals are no longer relying on a single platform. Threat actors frequently use multiple AI models across different stages of an attack to evade detection and maintain long-term access to victim networks.

Unintended Privilege Escalation in Google Cloud API Keys via Gemini

Security researchers from cybersecurity company Truffle Security Co. discovered a flaw involving Google Cloud Application Programming Interface (API) keys. Public API keys, originally intended as safe-to-share identifiers for front-end services, can silently gain access to sensitive Gemini AI endpoints, enabling unauthorized data access. When developers enable the Gemini API in a Google Cloud project, existing public-facing API keys gain access to it by default, without warning. This unintentionally upgrades public-facing website keys into sensitive credentials capable of authenticating to backend AI models.
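To illustrate why this escalation matters, the sketch below builds the key-only request an attacker would try first against Google's public Generative Language (Gemini) REST endpoint. This is a minimal, hedged example: the endpoint path is Google's documented key-based API, but the key value is a placeholder, and a real audit would substitute a key actually found in front-end code.

```python
# Minimal sketch: check whether a "public" browser API key also
# authenticates to the Gemini (Generative Language) API.
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def gemini_probe_url(api_key: str) -> str:
    """Build the key-only request URL an attacker would try first."""
    return f"{GEMINI_MODELS_URL}?key={api_key}"

def key_reaches_gemini(api_key: str) -> bool:
    """Return True if the endpoint accepts the key (HTTP 200)."""
    try:
        with urllib.request.urlopen(gemini_probe_url(api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 400/403 response means the key is restricted or invalid --
        # the safe outcome for a key exposed in front-end code.
        return False

if __name__ == "__main__":
    # Placeholder key; no request is sent here.
    print(gemini_probe_url("AIza_EXAMPLE"))
```

If a key scraped from a public web page returns a successful response here, it has silently become a backend AI credential and should be restricted immediately.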

This creates significant financial and data privacy risks that are difficult to spot. Because companies were originally told these keys were safe to post publicly, standard security tools ignore them. Threat actors can simply copy exposed keys from a company's website and use its AI services for free, resulting in unexpected, massive cloud billing charges. Furthermore, anyone with the key can bypass security controls to read private files, AI chat prompts, and sensitive company data processed by Gemini.
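As a defensive counterpart, teams can scan their own pages for exposed keys before attackers do. Google Cloud API keys share a well-known format (an "AIza" prefix followed by 35 URL-safe characters), so a simple pattern match can flag candidates for review. The regex and sample HTML below are illustrative, not a complete secret-scanning solution:

```python
# Illustrative sketch: flag Google-style API keys embedded in page source
# so they can be reviewed and restricted before attackers copy them.
import re

# Standard Google API key shape: "AIza" followed by 35 URL-safe characters.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return every Google-style API key found in the given text."""
    return GOOGLE_KEY_RE.findall(text)

if __name__ == "__main__":
    # Hypothetical front-end snippet containing a fake key.
    html = ('<script src="https://maps.googleapis.com/maps/api/js?key=AIza'
            + "A" * 35 + '"></script>')
    print(find_google_api_keys(html))
```

Any key this surfaces should be checked for unintended API access and locked down with API restrictions in the Google Cloud console.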

Read the full technical report from Truffle Security Co. here. 

PSA notes the following risk management concerns surrounding this development:

  • Data and intellectual property theft: State-sponsored groups and cybercriminals use AI to speed up cyber espionage, increasing the likelihood that highly sensitive corporate data, trade secrets, and credentials will be exfiltrated.
  • Operational disruption and costs: AI-accelerated breaches increase the risk of severe business downtime, resulting in lost revenue, regulatory scrutiny, and high recovery expenses.
  • Endpoint vulnerabilities via fake productivity tools: Employees seeking productivity tools may inadvertently download malicious browser extensions disguised as legitimate AI assistants (like fake ChatGPT or DeepSeek tools), opening backdoors directly into corporate networks.