The world is being quietly rearranged by people who write very long documents.


March 30, 2026
arXiv
The pretty good title they went with
The Competence Shadow: Theory and Bounds of AI Assistance in Safety Engineering

Noisy translates that to

AI safety tools are causing engineers to stop thinking, and the math now proves it makes things worse

Adding a second expert to the analysis removes entire categories of hazards from consideration.

A formal analysis shows that when engineers use AI to help spot safety problems, the AI doesn't just miss things — it actively prevents engineers from considering hazards the AI didn't suggest. This means AI assistance in safety work is not a tool problem, it's a workflow design problem. The same AI makes safety analysis better or worse depending entirely on how it's used.
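To see why the same AI cuts both ways, here's a minimal sketch in Python; the hazard sets and function names are illustrative, not the paper's formal model. The only thing that changes between the two workflows is which step comes first.

```python
# Illustrative toy example: tiny hazard sets standing in for a real analysis.
AI_LIST = {"sensor dropout", "actuator stall"}        # hazards the AI suggests
EXPERT_LIST = {"sensor dropout", "operator fatigue"}  # hazards the expert finds alone

def ai_first(ai_list):
    # The expert vets the AI's suggestions but anchors on them:
    # hazards the AI never surfaced fall into the competence shadow.
    return set(ai_list)

def human_first(expert_list, ai_list):
    # The expert enumerates hazards independently first, then merges
    # the AI's list: the AI can only widen the search, never narrow it.
    return set(expert_list) | set(ai_list)

print(sorted(ai_first(AI_LIST)))                  # ['actuator stall', 'sensor dropout']
print(sorted(human_first(EXPERT_LIST, AI_LIST)))  # 'operator fatigue' survives
```

Same AI list in both cases; only the workflow order determines whether "operator fatigue" is ever on the table.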
assumed The field believed AI assistance in safety analysis improves outcomes by augmenting human judgment, with risk concentrated in what the AI misses.
found The paper shows that AI assistance systematically narrows the space of hazards human experts consider, that the degradation compounds multiplicatively, and that the collaboration structure, not the tool, is the primary determinant of safety quality.
For years, the assumption was straightforward: give engineers better tools, get better safety analysis. This paper shows the mechanism is inverted. The AI doesn't fail by being wrong; it fails by being plausible enough that engineers stop thinking independently. The competence shadow compounds multiplicatively: a 10% narrowing of reasoning doesn't produce 10% worse analysis; it produces cascading degradation (see the back-of-envelope sketch below). This matters because Physical AI systems (robots, autonomous vehicles, industrial control) are already being designed with AI assistance in the safety workflow. If the workflow is wrong, the blind spots are baked into the system before it ships.
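A back-of-envelope illustration of that compounding, with an assumed per-stage narrowing ε and stage count k (both numbers are ours for illustration, not the paper's actual bounds):

$$\text{coverage after } k \text{ stages} = (1-\varepsilon)^{k}, \qquad (1-0.10)^{5} \approx 0.59$$

Five AI-assisted passes, each trimming 10% of the hazard space, leave roughly 59% coverage. That's a 41% loss, not a 10% one.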
A spell-checker that, every time you use it, makes you stop re-reading your own sentences. Except the document is a nuclear plant.
who wins AI safety tool vendors, who can now sell certified products that quietly narrow the analysis they are supposed to expand.
who loses Organizations that bought a safety tool, checked the box, and are now counting on that box to mean something in a post-incident review.
also Anyone who rides in an autonomous vehicle, and the certification bodies who just discovered their frameworks are auditing the wrong thing.
Why this hasn't landed yet
The finding contradicts the optimistic additive framing that AI assistance makes experts better. That story is easier to sell, and no accident has yet been formally attributed to this mechanism.
What happens next
Expect UNECE's GRVA working group and ISO/PAS 8800 revision teams to start fielding requests for workflow-level audit criteria within 18 months, once a regulator cites this paper after the next high-profile autonomous vehicle incident.
The catch
AI tool vendors whose products already carry ISO 26262 Part 8 qualification will argue that workflow design is the customer's responsibility, which is technically true and solves nothing.
The longer arc
Automation bias as a failure mode has been documented since at least Skitka et al.'s 1999 research, and the Boeing 737 MAX crashes of 2018 and 2019 killed 346 people through a structurally identical mechanism. The contribution here is not the observation but the formal proof.
Part of a pattern
A 2023 field experiment by Dell'Acqua and colleagues with 758 consultants found AI assistance degraded performance by 19 percentage points on tasks outside the model's competence frontier. The pattern is consistent: AI tools help reliably inside their competence boundary and degrade performance just outside it, which is precisely the region where safety analysis lives.

If you insist
Read the original →

The Sendoff
The paper warns that AI creates blind spots engineers miss until accidents happen. The paper was written with AI assistance.