The world is being quietly rearranged by people who write very long documents.


April 6, 2026
arXiv
The title they went with
Poison Once, Exploit Forever: Environment-Injected Memory Poisoning Attacks on Web Agents Noisy translates that to

Web agents can be poisoned through a single contaminated page, then weaponized on unrelated sites weeks later


Researchers found that AI agents using memory to personalize tasks can be silently compromised by viewing a manipulated webpage, then activated to cause harm on completely different websites in future sessions. This means an attacker needs no direct access to the agent's memory or code — just the ability to serve a poisoned page once, and the damage spreads invisibly across all future tasks.
The attack works because memory makes agents useful — it lets them learn from past interactions and adapt. But that same memory becomes a persistent backdoor. An agent views a malicious product page, stores the contamination, and months later executes hidden instructions while helping you book a flight or manage your bank account. The researchers also found something darker: agents under stress (slow clicks, garbled text) become eight times more vulnerable to these attacks. Smarter models like GPT-4 are not safer — they're just as exploitable. This matters because AI browsers (ChatGPT's new browser mode, Perplexity's agent, others) are shipping now with memory-based personalization already built in.
What happens next
Watch whether deployed AI agents add memory isolation or session-reset features within the next 6 months, or whether the first reported incident of cross-site agent compromise happens before defenses ship.

If you insist
Read the original →