
Google’s latest threat intelligence report highlights how AI is being used by hackers


Google has released a deep dive into how artificial intelligence is being integrated into the world of cyber attacks. The report, published by Google Cloud’s threat intelligence team, outlines the transition from experimentation to actual deployment by adversarial actors.

As we see AI tools like Gemini and ChatGPT become part of our daily workflows, it was only a matter of time before the “bad guys” started using them too. The findings show that while we aren’t seeing “super-malware” just yet, the efficiency of traditional attacks is being significantly boosted.

This shift represents a new frontier for digital security that businesses and individuals globally need to be aware of. As the technology matures, the barrier to entry for sophisticated cybercrime continues to lower.

The shift from experimentation to integration

For the past year, much of the talk around AI and hacking was theoretical or limited to basic testing. Google’s latest intelligence suggests we have moved past that phase into a period of active integration.

Threat actors are now using Large Language Models (LLMs) to refine their existing workflows. This isn’t necessarily about creating brand new types of attacks, but making the old ones much more effective and harder to spot.

By using AI to automate the boring parts of a cyber attack, hackers can scale their operations like never before. This means a higher volume of attacks hitting inboxes and networks every single day.

Phishing and social engineering get a massive upgrade

One of the most immediate impacts of AI is the elimination of the “tell-tale” signs of a phishing scam. We are all used to looking for poor grammar, weird spelling, or awkward phrasing in suspicious emails.

AI has essentially fixed that problem for attackers. LLMs allow non-native speakers to craft perfectly written, professional emails in any language, including highly localised versions of English.

“Generative AI (gen AI) helps adversaries at various stages of the attack lifecycle, from reconnaissance and initial access to data exfiltration and impact.”

Google Cloud Threat Intelligence Research Team, Google.

This makes it incredibly difficult for the average user to distinguish between a legitimate corporate communication and a malicious attempt to steal credentials. The level of personalisation now possible at scale is a significant concern for IT departments everywhere.
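Because grammar and spelling are no longer reliable warning signs, defensive tooling has to lean on technical signals instead. As a minimal, hypothetical sketch (the header checks and thresholds here are illustrative, not part of Google's report), a filter might flag an email whose Reply-To domain doesn't match its From domain, a mismatch that flawless AI-written prose can't hide:

```python
from email import message_from_string
from email.utils import parseaddr


def header_red_flags(raw: str) -> list[str]:
    """Flag technical mismatches that survive even perfectly written phishing text."""
    msg = message_from_string(raw)
    flags = []
    # Extract the bare domain from each address header.
    from_dom = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_dom = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    if reply_dom and reply_dom != from_dom:
        flags.append(f"Reply-To domain ({reply_dom}) differs from From domain ({from_dom})")
    return flags


suspicious = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo@evil.example\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process this payment today."
)
print(header_red_flags(suspicious))
```

Real mail gateways combine dozens of such signals (SPF, DKIM, sender reputation); the point is simply that verification has to move from "does this read oddly?" to "do the technical details check out?".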

Speeding up the development of malicious code

Beyond just writing emails, hackers are using AI to assist in writing and debugging code. While many AI platforms have safeguards to prevent the creation of malware, attackers are finding ways to bypass these or use the tools for “dual-use” purposes.

An attacker might ask an AI to help write a script that automates a legitimate administrative task. That same code can then be easily repurposed for malicious actions after the attacker gains access to a system.

This “distillation” of complex tasks means that less-skilled individuals can now perform actions that previously required advanced coding knowledge. It is effectively democratising high-level cybercrime.

Reconnaissance and vulnerability research

Before an attack even begins, there is a lot of homework to do. Attackers need to research their targets, identify the software they use, and find unpatched vulnerabilities.

AI is proving to be a powerhouse for this type of data processing. It can scan through massive amounts of public data, social media profiles, and technical documentation to find the weak links in a company’s armour.

By feeding technical manuals or code snippets into an AI, researchers can identify potential flaws much faster than manual review. This gives the defenders less time to patch systems before they are exploited.

The defensive side of the AI war

It isn’t all bad news, however, as the same technology is being used to build better shields. Google is heavily investing in using AI to detect patterns of malicious behaviour that would be impossible for a human to spot in real-time.

The concept of “AI for security” is about shifting the advantage back to the defenders. By using AI to analyse network traffic, security teams can identify an intrusion in seconds rather than days.
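The report doesn't publish Google's detection internals, but the core idea behind automated traffic analysis can be sketched very simply: establish a statistical baseline of normal activity and flag anything that deviates sharply from it. This toy example (the data and threshold are invented for illustration) flags minutes whose request counts sit far outside the baseline:

```python
import statistics


def flag_anomalies(counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of samples that deviate more than `threshold`
    standard deviations from the mean of the whole series."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat traffic, nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]


# Requests per minute: steady baseline, then a sudden spike.
traffic = [10, 12, 11, 9, 10, 200]
print(flag_anomalies(traffic))  # the spike at index 5 is flagged
```

Production systems replace the z-score with learned models over far richer features, but the principle is the same: machines watch the baseline continuously so humans only review the outliers.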

In an era where the cost of data breaches continues to rise, these automated defensive tools are becoming essential. The speed of AI-driven attacks can only be countered by the speed of AI-driven defence.

Looking ahead at the adversarial landscape

The report suggests that we are still in the early innings of how AI will change the security landscape. We should expect to see more “deepfake” technology being used in business email compromise attacks.

Imagine receiving a voice note or even a video call from your CEO asking for an urgent transfer of funds. With AI, these types of high-stakes scams are becoming increasingly realistic and cheap to produce.

Staying safe in this new environment requires a mix of better technology and better education. We have to move past the idea that we can spot a scam just by looking at it; we need verified processes for everything.

Practical steps for everyone

The best defence remains a multi-layered approach to security. First and foremost, enabling multi-factor authentication (MFA) across all accounts is non-negotiable in 2026.

Secondly, businesses should be looking at “Zero Trust” architectures. This assumes that a breach is always possible and ensures that even if one account is compromised, the attacker can’t move freely through the entire network.
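In practice, Zero Trust means every single request is evaluated against identity, device health, and policy, with no implicit trust granted for being "inside" the network. This hypothetical sketch (the roles, users, and policy table are invented for illustration) shows the shape of such a per-request check:

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user: str
    device_compliant: bool  # e.g. patched OS, disk encryption on
    mfa_passed: bool
    resource: str


# Hypothetical policy: least-privilege grants per role.
ROLE_GRANTS = {"finance": {"invoices"}, "engineering": {"repos"}}
USER_ROLES = {"alice": "finance", "bob": "engineering"}


def authorize(req: AccessRequest) -> bool:
    """Evaluate every request; never trust network location alone."""
    if not (req.device_compliant and req.mfa_passed):
        return False  # unhealthy device or missing MFA: deny outright
    role = USER_ROLES.get(req.user)
    return role is not None and req.resource in ROLE_GRANTS.get(role, set())


print(authorize(AccessRequest("alice", True, True, "invoices")))  # allowed
print(authorize(AccessRequest("bob", True, True, "invoices")))    # wrong role, denied
```

The payoff is containment: even if an attacker compromises one account, every lateral move triggers a fresh policy check instead of a free pass.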

Finally, keep your software updated. AI might find the holes faster, but the developers are also using AI to find and patch those holes just as quickly.

Threat actors are using AI for phishing, scams, malware, and something called model extraction — when bad actors repeatedly prompt an AI model to understand and copy its inner workings. Our Threat Intelligence Group published a report on this growing threat — and steps we took to…

— News from Google (@NewsFromGoogle) February 12, 2026

Conclusion

The integration of AI into the hacker’s toolkit is a natural evolution of the digital arms race. While it makes the threats more sophisticated, it also forces the industry to innovate at a faster pace.

Google’s research serves as a timely reminder that the tools we use for productivity are the same ones being used against us. Awareness is the first step in ensuring that your digital life stays secure.

We will continue to monitor how these AI-driven threats evolve and what new tools become available to help users stay protected. The key is to remain vigilant and not get complacent just because your current systems feel safe.

For more information, head to https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use
