Your pipeline is probably covered. Is your endpoint?
Between March 19 and March 31, 2026, five software ecosystems were compromised in a cascading campaign that security teams will be studying for years. Trivy, Checkmarx KICS, LiteLLM, Telnyx, and Axios - each one a trusted name, each one turned into a weapon against the people who depended on it.
The response from the security community was impressive. Within hours, every major vendor had published detailed technical analysis, IOCs, and remediation guides. If you've read any of them - and by now you've probably read twelve - you know the playbook by heart: pin GitHub Actions to SHA hashes, enforce lockfiles, use `npm ci`, check for provenance attestations, block the C2 domains.
Good advice. Genuinely. I mean it.
But almost all of it is about the CI/CD pipeline. And while everyone was busy hardening runners and verifying provenance, the actual payload was settling into a very different kind of machine - the developer's laptop. The one with twelve months of cached credentials, a coding agent with terminal access, and no forensic logging of what got installed at 3am.

The Axios RAT shipped with purpose-built binaries for macOS, Windows, and Linux - each with persistence mechanisms designed for machines that stay on, not for ephemeral CI containers. It beaconed every 60 seconds, supported a remote shell, and then cleaned up so thoroughly that a post-infection `npm audit` returned nothing. The `node_modules` directory looked perfectly normal.
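If `npm audit` can be blinded by metadata cleanup, the filesystem is the more honest witness. A minimal sketch - the `on_disk_versions` helper name and the deliberately crude grep are mine, not a vendor tool:

```shell
# Sketch: don't trust audit metadata, read what is actually on disk.
# Helper name and the crude grep are illustrative, not a vendor tool.
on_disk_versions() {
  # every vendored copy of axios, with the version it declares
  find "${1:-node_modules}" -path '*/axios/package.json' \
    -exec grep -H '"version"' {} + 2>/dev/null
  return 0
}
on_disk_versions   # run from the project root
```

Diffing that output against your lockfile catches the case where the tree on disk no longer matches what the registry says you installed.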
The Trivy payload did something similar on non-CI systems - it installed systemd user services that polled for additional payloads every 50 minutes. That interval is tuned for patience: for a laptop that sleeps and wakes up, not a runner that lives for minutes.
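That persistence layer is easy to check by hand. A minimal sketch, assuming systemd's default user-unit paths (the function name is mine):

```shell
# Sketch: enumerate systemd *user* units, the persistence layer the
# Trivy payload used on non-CI Linux machines. Default paths assumed.
list_user_units() {
  for dir in "$HOME/.config/systemd/user" /etc/systemd/user; do
    [ -d "$dir" ] && find "$dir" \
      \( -name '*.service' -o -name '*.timer' \) 2>/dev/null
  done
  return 0
}
list_user_units   # diff against a known-good baseline for your fleet
```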
And the Trivy stealer's target list is revealing: SSH keys, cloud credentials across AWS, GCP, and Azure, Kubernetes tokens, Docker configs, npm tokens, database passwords, cryptocurrency wallets, VPN configs, Slack webhooks. I've seen a lot of credential harvesting payloads designed for CI runners. This one went after _a person's entire digital life_.
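A useful exercise is to walk that same target list on your own machine and see what an attacker would have found. A minimal sketch - the path list is an illustrative subset, not the campaign's full IOC set:

```shell
# Sketch: the kinds of paths the stealer targeted. Illustrative
# subset, not the campaign's full target list.
audit_cred_paths() {
  for p in "$HOME/.ssh" "$HOME/.aws/credentials" "$HOME/.config/gcloud" \
           "$HOME/.azure" "$HOME/.kube/config" "$HOME/.docker/config.json" \
           "$HOME/.npmrc"; do
    [ -e "$p" ] && ls -ld "$p"
  done
  return 0
}
audit_cred_paths   # anything listed was in scope for the stealer
```

Every line of output is a credential an infected laptop would have exfiltrated - which is also your post-incident rotation list.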
Four Ways a Package Reaches a Developer Machine Without Anyone Deciding to Install It
Every remediation guide assumes a developer consciously typed `npm install axios`. That's one scenario. Here are four others that got almost no coverage, each arguably more common and harder to detect.
Your AI tools are pulling packages behind the scenes. This is the one that surprises people. When you ask an AI assistant to generate a spreadsheet, create a presentation, or process a PDF, many of these tools run in sandboxed environments that install npm and pip packages from public registries at runtime - silently, as part of their execution. The user asked for a chart. The tool ran `npm install` to make it happen. Nobody reviewed what got pulled in, and in many of these environments the tool also has access to whatever files the user uploaded for processing. A compromised package in that chain gets access to user data and outbound network - and the user never even knew a package was involved.
Vibe coding installs it as a side effect. When a developer uses Cursor, Windsurf, Claude Code, or Codex to build something, the agent doesn't just write code - it resolves and installs the packages needed to make that code run. Axios sits in the dependency tree of thousands of packages: the agent picks a framework, that framework depends on Axios three layers deep, and the install happens autonomously. The developer sees working code. The dependency was a means to an end, never a decision anyone evaluated - and the postinstall script ran before the developer saw the first line of output.
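One blunt but effective control is to stop lifecycle scripts from running at install time, and to audit which installed packages declare them. A sketch - npm's `ignore-scripts` setting is real, the helper name and the crude grep are mine:

```shell
# npm's ignore-scripts setting is real: it blocks install hooks for
# every install, including ones a coding agent triggers:
#   npm config set ignore-scripts true
# Sketch (helper name is mine): flag packages declaring install hooks.
scan_install_scripts() {
  find "${1:-node_modules}" -maxdepth 3 -name package.json \
    -exec grep -lE '"(pre|post)?install"[[:space:]]*:' {} + 2>/dev/null
  return 0
}
scan_install_scripts   # every hit ran code the moment it landed
```

The grep will over-match (any `"install"` key counts), but as a triage pass that's the right direction to err in.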
IDE extensions and editor plugins vendor it silently. The Trivy and Checkmarx campaigns also compromised OpenVSX extensions. An extension that bundles a compromised dependency delivers the payload the moment it updates - no terminal, no `npm install`, just a routine background update in VS Code. The developer didn't install a package. They had an extension that did.
A fresh clone resolves `latest`. A developer clones a repo and runs `npm install`; if there is no lockfile, the resolver pulls whatever `latest` is at that moment. During those three hours on March 31, `latest` meant a RAT. This is also one of the most common scenarios of the four.
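The defense here is mechanical: refuse to resolve when nothing pins the resolution. `npm ci` already behaves this way - it fails fast if the lockfile is missing or out of sync. A sketch of the same rule as a standalone pre-install guard (function name is mine):

```shell
# `npm ci` installs exactly what the lockfile pins and fails without
# one, unlike `npm install`, which happily resolves `latest`.
# Sketch of the same rule as a guard run in the project directory:
require_lockfile() {
  if [ -f package-lock.json ] || [ -f npm-shrinkwrap.json ]; then
    echo "lockfile present: safe to run 'npm ci'"
  else
    echo "no lockfile: 'npm install' here would resolve latest" >&2
    return 1
  fi
}
```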

What Now?
The CI/CD hardening advice has been covered well. Here are three questions worth asking about the other half of the attack surface:
Can you tell which packages are installed on your developers' machines right now? Not what's in the repo - what's in `node_modules` on the actual laptops across your fleet. When the next Axios happens, scoping exposure means knowing which endpoints have which packages. If your only visibility is committed lockfiles, you're missing every local experiment, every AI tool runtime, **every vibe-coded script that hasn't been pushed yet**.
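Even without fleet tooling, the per-machine half of that question is answerable today. A minimal sketch that finds every install location under a home directory - the depth limit is an arbitrary choice to keep it fast:

```shell
# Sketch: locate every node_modules tree under a home directory,
# including ones that never made it into a repo or a lockfile.
find_installs() {
  find "${1:-$HOME}" -maxdepth 6 -type d -name node_modules -prune \
    2>/dev/null
  return 0
}
find_installs   # each hit is surface your lockfiles don't show
```

Run across a fleet and shipped somewhere central, this is the raw material for answering "which endpoints have Axios" in minutes instead of days.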
Do you have any signal when a coding agent installs a dependency? The moment where a human used to evaluate a package before installing it is quietly disappearing. Agents resolve dependencies as a means to an end. If you only see what arrives in a pull request, you're seeing it after the postinstall script has already executed on someone's machine.
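One cheap way to recreate that signal is a shim ahead of the real `npm` on the PATH that records every install before handing off. A hypothetical sketch - the log path and function names are mine:

```shell
# Hypothetical sketch: log every npm install before the real one runs.
# Log path and names are illustrative; ship as a PATH shim in practice.
log_install() {
  printf '%s npm %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" \
    >> "${NPM_AUDIT_LOG:-$HOME/.npm-install.log}"
}
npm() {
  case "$1" in
    install|i|ci|add) log_install "$@" ;;   # record agent-driven installs
  esac
  command npm "$@"   # hand off to the real npm
}
```

With that in place, "which packages did an agent install last Tuesday" is a grep, not a forensic reconstruction.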
Do you know what's actually running on your developers' machines? The VS Code extensions, brew packages, the MCP servers, the coding agents and their configurations. The Trivy and Checkmarx campaigns compromised OpenVSX extensions alongside the GitHub Actions. These tools live on the endpoint, they run with the developer's permissions, and most security teams have zero inventory of them.
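A first pass at that inventory doesn't need an agent or a product. A sketch using the tools' own CLIs where they exist - the flags shown are real; MCP and agent config locations vary per tool, so that part stays a comment:

```shell
# Sketch: a first-pass endpoint inventory using the tools' own CLIs.
# Flags shown are real; output format varies by version.
inventory() {
  code --list-extensions --show-versions 2>/dev/null \
    || echo "VS Code CLI not on PATH"
  brew list --versions 2>/dev/null || echo "Homebrew not installed"
  # MCP servers and agent configs live in per-tool files (locations
  # vary); inventory those paths once you know which tools are present.
}
inventory
```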
The industry has great tools for securing the pipeline. What's missing is the same depth of visibility and analysis for the endpoint - understanding not just what's installed, but what it does, what it can reach, and what risk it creates in the context of everything else on that machine. That's the layer that didn't exist during these twelve days. And it's the one that would have mattered most.