LLM agent skills leak credentials through debug logs — and stay leaked even after fixes
What happened
Researchers analyzed 17,000 third-party skills that extend AI agents and found that 520 of them leak sensitive credentials, most often through debug logging that writes secrets into output the AI itself can read. Any agent running one of these skills can therefore read passwords, API keys, and database credentials that were supposed to stay hidden from it.
Why this matters
The leakage happens at the intersection of code and natural language: the AI reads both the skill's instructions and its debug output, so hiding a secret in one place doesn't work if it surfaces in the other. The deeper problem is persistence. When a vulnerable skill gets fixed upstream, the leaked credentials stay live in forked copies, and 89.6% of the leaked credentials are usable without elevated privileges. The result is a long tail of exploitable secrets that don't disappear when the original developer patches the bug.
What happens next
Track whether skill marketplaces (like SkillsMP) implement automated scanning for debug logging before skills go live, or whether the leakage rate stays flat as more skills get added.
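One plausible shape for such pre-publication scanning is a regex pass over skill source for secret-shaped strings and debug-logging calls. This is a minimal sketch under assumptions, not a description of SkillsMP's actual tooling; the patterns and the `scan_skill` helper are invented for illustration.

```python
import re

# Patterns for common credential shapes; illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),                            # OpenAI-style key
    re.compile(r"AKIA[0-9A-Z]{16}"),                               # AWS access key ID
    re.compile(r"(?i)(password|api_key|token)\s*=\s*['\"][^'\"]+['\"]"),
]
# Debug/info logging calls that could surface a secret to the agent.
DEBUG_LOG_CALL = re.compile(r"\b(logger|logging)\.(debug|info)\(")

def scan_skill(source: str) -> list[str]:
    """Return findings for one skill's source text, one per suspicious line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                findings.append(f"line {lineno}: hardcoded secret ({pat.pattern})")
        if DEBUG_LOG_CALL.search(line) and "key" in line.lower():
            findings.append(f"line {lineno}: debug log mentioning a key")
    return findings

sample = 'api_key = "sk-abcdefgh12345678"\nlogger.debug("key=%s", api_key)\n'
for finding in scan_skill(sample):
    print(finding)
```

A scanner this simple would miss secrets assembled at runtime, but it would catch exactly the class the study describes: hardcoded credentials echoed through debug logging before the skill ever goes live.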