The world is being quietly rearranged by people who write very long documents.


April 6, 2026
arXiv
The title they went with
Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study
Noisy translates that to

LLM agent skills leak credentials through debug logs — and stay leaked even after fixes


Researchers tested 17,000 third-party skills that extend AI agents and found that 520 of them leak sensitive credentials, mostly through debug logging that exposes secrets to the AI itself. This means any AI agent using these skills can read passwords, API keys, and database credentials that were supposed to stay hidden.
The leakage happens at the intersection of code and natural language: the AI reads both the skill's instructions and its debug output, so hiding a secret in one place doesn't help if it appears in the other. The real problem is persistence. When a vulnerable skill gets fixed upstream, the leaked credentials stay live in forked copies, and 89.6% of the leaked credentials work without needing special privileges. That leaves a long tail of exploitable secrets that doesn't disappear when the original developer patches the bug.
What happens next
Track whether skill marketplaces (like SkillsMP) implement automated scanning for debug logging before skills go live, or whether the leakage rate stays flat as more skills get added.
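The kind of pre-publication check speculated about above could start as something very small. A hedged sketch, assuming nothing about how SkillsMP actually works: flag any line where a debug-style call interpolates a secret-sounding name. The patterns here are illustrative, not a real marketplace's rules.

```python
import re

# Names that suggest a credential, and calls that suggest debug output.
# Both regexes are illustrative assumptions, not from the paper.
SECRET_NAMES = re.compile(r"(api[_-]?key|password|secret|token|credential)", re.I)
DEBUG_CALL = re.compile(r"\b(log(ger)?\.debug|print)\s*\(")

def flag_suspect_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs where a debug call touches a secret-ish name."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        if DEBUG_CALL.search(line) and SECRET_NAMES.search(line):
            hits.append((n, line.strip()))
    return hits

skill_src = 'log.debug("auth with api_key=%s", api_key)\nprint("done")\n'
print(flag_suspect_lines(skill_src))
```

A scanner this crude would miss indirect leaks and false-positive on harmless lines, which is roughly the gap between "marketplaces run a regex" and the leakage rate actually moving.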

If you insist
Read the original →