AI Worms Target Asia's Connected AI Systems
Security researchers have created Morris II, an AI worm capable of spreading between generative AI agents and stealing sensitive data. The proof-of-concept arrives as Asia's rapidly expanding AI infrastructure faces mounting cybersecurity challenges, with risky AI prompts reportedly surging 97% in 2025.
Named after the infamous 1988 Morris worm that infected 10% of internet-connected computers, Morris II represents a new breed of threat specifically designed for AI-powered systems. The worm demonstrated its ability to attack generative AI email assistants, successfully breaking security protections in both OpenAI's ChatGPT and Google's Gemini during controlled laboratory testing.
By The Numbers
- 87% of respondents identified AI-related vulnerabilities as the fastest-growing cyber risk in 2025-2026
- 53% of security leaders say AI-powered attacks will be their biggest challenge in 2026
- India faces 3,195 cyber attacks per organisation weekly, 62% higher than the global average
- South Asia reports 85% of organisations noting increased risks of cyber-enabled fraud and phishing
- 77% of organisations have adopted AI for cybersecurity, mainly for phishing detection at 52%
Asia's AI Infrastructure Under Siege
The timing of this research proves particularly concerning for Asia, where AI adoption continues accelerating across industries. India alone lost ₹19,813 crore ($2.18 billion) to cyber fraud in 2025, highlighting the region's vulnerability to sophisticated digital attacks.
"2026 may mark the return of fast-moving AI-powered worms, as zero-day exploitation and island hopping become more prevalent," warns Tom Kellerman, cyber strategist at VMware.
The threat extends beyond individual organisations. Advanced persistent threat groups aligned with countries including China, North Korea, and Russia are increasingly shifting to homegrown AI tools to evade monitoring systems. These groups deploy AI-operated attacks and personalised social engineering campaigns using tools like WormGPT to target Asian enterprises.
Singapore's approach to AI regulation reflects growing regional awareness of these risks. The city-state has implemented comprehensive frameworks addressing AI misuse, recognising the dual nature of artificial intelligence as both opportunity and threat.
How AI Worms Exploit Connected Systems
AI worms operate differently from traditional malware. Rather than exploiting software vulnerabilities, they manipulate the language models themselves, using carefully crafted prompts to trigger unintended behaviours. The researchers demonstrated several attack vectors:
- Email-based propagation through AI-powered assistants that automatically process attachments
- Data exfiltration by instructing compromised AI agents to send sensitive information to external servers
- Lateral movement between connected AI systems within organisational networks
- Payload delivery through compromised AI agents executing malicious instructions
The interconnected nature of modern AI systems amplifies these risks. As organisations deploy AI-powered solutions across multiple business functions, a single compromised agent could potentially access vast amounts of corporate data.
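The self-replication step only succeeds if a compromised assistant echoes the injected prompt back into its output, which suggests a simple defensive heuristic: flag responses that reproduce long verbatim chunks of their input. A minimal sketch in Python (the function name and 40-character threshold are illustrative choices, not drawn from the Morris II research):

```python
def looks_self_replicating(prompt: str, output: str, min_overlap: int = 40) -> bool:
    """Heuristic: flag outputs that reproduce a long verbatim chunk of the input.

    Self-replicating prompts only spread if the model repeats them back, so a
    long shared substring between input and output is a red flag. The
    40-character threshold here is an illustrative default, not a vetted one.
    """
    prompt, output = prompt.lower(), output.lower()
    # Slide a fixed-size window over the prompt, checking for verbatim reuse.
    for start in range(max(1, len(prompt) - min_overlap + 1)):
        if prompt[start:start + min_overlap] in output:
            return True
    return False
```

A check like this would sit between the model and the email-sending layer, blocking any reply that carries the worm payload forward.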
| Attack Vector | Traditional Malware | AI Worms |
|---|---|---|
| Entry Point | Software vulnerabilities | Language model manipulation |
| Propagation | File infection | Prompt injection |
| Detection | Signature-based | Behaviour analysis |
| Mitigation | Patches and updates | Input validation and monitoring |
Regional Response and Defence Strategies
Asian governments and enterprises are responding to these emerging threats with increased vigilance and investment. Safeguarding the region's AI infrastructure has become a critical priority for regional stakeholders.
"Developments in AI are reshaping multiple domains, including cybersecurity. However, they can also pose serious risks such as data leaks and cyberattacks if they malfunction or are misused," states Josephine Teo, Singapore's Minister for Digital Development and Information, and Minister-in-Charge of Cybersecurity.
Defence strategies against AI worms require a multi-layered approach combining traditional security measures with AI-specific protections. Organisations must implement robust input validation, continuous monitoring of AI agent behaviours, and strict access controls for connected systems.
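The strict access controls described above can be sketched as a per-agent action allowlist enforced at the tool-dispatch layer, so a compromised agent cannot invoke capabilities it was never granted. All names here are hypothetical:

```python
# Hypothetical per-agent allowlists: every tool call is checked before execution.
ALLOWED_ACTIONS = {
    "email-assistant": {"read_inbox", "draft_reply"},  # cannot send or browse
    "report-bot": {"read_docs", "write_summary"},
}

class ActionDenied(Exception):
    """Raised when an agent requests a tool outside its allowlist."""

def dispatch(agent: str, action: str, payload: dict) -> None:
    """Refuse any tool call outside the agent's allowlist and surface it for review."""
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        raise ActionDenied(f"{agent} attempted disallowed action: {action}")
    # ... hand off to the real tool implementation here ...
```

Denied calls become an audit signal in their own right: an email assistant repeatedly requesting `send_email` is exactly the behaviour a worm would exhibit.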
The education sector faces particular vulnerability, with India reporting a 6% increase in ransomware attacks targeting educational institutions. This trend highlights the need for comprehensive AI governance frameworks across the region.
Industry Impact and Future Implications
The emergence of AI worms has significant implications for Asia's digital economy. AI-driven cyberattacks represent a new threat landscape that traditional security measures cannot fully address.
Financial services, healthcare, and manufacturing sectors face the highest risk due to their extensive AI adoption and valuable data assets. The interconnected nature of supply chains across Asia means that a successful AI worm attack could cascade across multiple organisations and countries.
Agentic phishing attacks, powered by AI worms, are projected to account for more than 42% of global breaches by 2026. These sophisticated campaigns use personalised social engineering tactics that adapt based on target responses, making them significantly more dangerous than traditional phishing attempts.
What are AI worms and how do they differ from traditional malware?
AI worms are malicious programs that exploit language models rather than software vulnerabilities. They spread through prompt injection attacks, manipulating AI systems to execute unintended actions like data theft or lateral movement across connected networks.
Which industries in Asia face the highest risk from AI worms?
Financial services, healthcare, and manufacturing sectors face elevated risks due to extensive AI adoption and valuable data. Educational institutions also show vulnerability, with India reporting increased ransomware attacks in this sector.
How can organisations protect against AI worm attacks?
Protection requires input validation for AI systems, continuous behaviour monitoring, access controls for connected agents, and traditional security measures. Human oversight and clear boundaries for AI actions remain crucial defensive elements.
Are current cybersecurity tools effective against AI worms?
Traditional signature-based detection methods prove insufficient. Organisations need behaviour analysis tools and AI-specific security solutions that can identify prompt injection attempts and unusual AI agent activities in real time.
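As a toy illustration of behaviour analysis, an agent whose action rate suddenly spikes far above its own baseline can be flagged with a simple z-score check. The window size and threshold below are illustrative; production tools model far richer features such as targets, payload sizes, and tool-call sequences:

```python
from collections import deque
import statistics

class AgentActivityMonitor:
    """Flag an AI agent whose per-minute action count jumps far above its baseline."""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent actions-per-minute samples
        self.threshold = threshold           # z-score cut-off for an alert

    def observe(self, actions_per_minute: int) -> bool:
        """Record one sample; return True if it is anomalous versus the window."""
        anomalous = False
        if len(self.history) >= 5:  # wait for a minimal baseline
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            anomalous = (actions_per_minute - mean) / stdev > self.threshold
        self.history.append(actions_per_minute)
        return anomalous
```

A worm instructing an email assistant to mass-forward messages would show up as exactly this kind of spike against an otherwise steady baseline.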
What role do governments play in addressing AI worm threats?
Governments develop regulatory frameworks, share threat intelligence, and coordinate response efforts. Singapore and other Asian nations are implementing comprehensive AI governance policies to address these emerging security challenges.
The rapid advancement of AI technology across Asia brings tremendous opportunities alongside significant risks. As AI continues transforming industries throughout the region, the cybersecurity community must stay ahead of emerging threats like AI worms.
How prepared is your organisation for the next generation of AI-powered cyber threats? Drop your take in the comments below.
Latest Comments (5)
Morris II infiltrating ChatGPT and Gemini to steal data is wild. If this hits K-pop agencies, imagine the leaks! We're already so focused on keeping unreleased tracks and show scripts secure. This whole AI worm thing makes me seriously consider how we integrate generative AI for localization. Need to look into this more.
this Morris II thing is seriously concerning, especially for us working with generative AI in e-commerce. we're already dealing with so many data scraping bots on Tokopedia, adding AI worms to the mix feels like another level of headache. the article mentions traditional security measures, but when you're moving fast to deploy new AI features, sometimes those steps get overlooked. finding that balance is really tough here in Indonesia where infrastructure can be a bit behind too. definitely need to keep an eye on this.
The Morris II research was a wake-up call, demonstrating how easily a malicious prompt could propagate. What I wonder now, a few months on, is how many of those "traditional security measures" and human oversight protocols have actually been implemented across the region beyond the headlines. We need more than just design principles; we need audited, verifiable adherence, especially given the rate of adoption.
Morris II sounds like a serious threat, no doubt. But how many "connected, autonomous AI ecosystems" are we actually running in Malaysia right now that this type of worm can jump between? Most of our telco's AI is still pretty siloed, with plenty of human review. The focus on widespread, autonomous AI might be a bit ahead of our current market reality.
Morris II breaking security on ChatGPT and Gemini is a big concern, especially since so much of our on-device AI work at Samsung is moving towards integrating with larger generative models. The implication for edge computing is huge. If a worm like that can jump between agents, what happens when it hits a fleet of devices? We already have enough trouble with sandboxing on-device models, this just adds another layer of complexity. Have to factor this into our future security architecture discussions.