Two weeks ago, a North Korean state actor compromised the lead maintainer of Axios and published malicious versions to npm. The library has roughly 100 million weekly downloads. The poisoned packages were live for about three hours before anyone noticed.
Three hours. That is all it took to potentially compromise tens of thousands of development environments worldwide.
The attacker did not find a bug in the code. They targeted the person. Social engineering, a compromised machine, stolen npm credentials. The malicious versions looked identical to legitimate releases. No code review would have caught it because the code never went through the project’s CI/CD pipeline at all.
This is the reality of open source security in 2026. The most popular packages in the world are maintained by small teams. Sometimes a single person. And that single person is now a target for nation-state attackers.
We need to talk about what we can actually do about this.
Independent release validators. Open source projects should introduce a validation layer: a small group of trusted reviewers, separate from the core contributors, whose only job is to confirm that a published release matches the repository source and contains nothing unexpected before it goes live. Versions could ship with a “validated” flag so consumers can decide whether to accept unvalidated releases. For the most critical packages, the first few hours after a new version drops are exactly when the risk is highest. A validation gate would buy time.
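Until registries offer something like that, teams can approximate a validation gate locally with a release cooldown: refuse any version published too recently. A minimal sketch of the decision logic in shell (the 72-hour window and the function name are illustrative policy choices, not an npm feature):

```shell
# Hypothetical validation gate: treat a release as untrusted until it has
# been public for a cooldown window. 72 hours is a policy choice.
release_is_mature() {
  published=$1   # Unix epoch seconds when the version was published
  now=$2         # current Unix epoch seconds
  cooldown=$((72 * 3600))
  [ $((now - published)) -ge "$cooldown" ]
}

# A version published one hour ago fails the gate.
if ! release_is_mature "$(( $(date +%s) - 3600 ))" "$(date +%s)"; then
  echo "release too new: hold for validation"
fi
```

The published-at timestamp is available from the registry itself (for npm, via the package's time metadata), so this check can run in CI before any install step.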
Hardware-based signing keys. Contributors on high-impact projects should self-enforce the use of hardware security keys for authentication and package signing. If the Axios maintainer’s credentials had been tied to a physical device, the attacker could not have published from a different machine. This is not theoretical. FIDO2 keys exist. They work. The friction is minimal compared to the blast radius of a compromised account.
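Git can be tied to such a device today: OpenSSH supports FIDO2-backed keys (generated with ssh-keygen -t ed25519-sk), and Git 2.34+ can sign commits with SSH keys. A sketch of the client-side configuration, assuming the key already exists; the key path is illustrative:

```shell
# Tell Git to sign with an SSH key instead of GPG (Git 2.34+).
git config --global gpg.format ssh

# Point at the public half of a FIDO2-backed key; signing then requires
# a physical touch on the security key. The path is an assumption.
git config --global user.signingkey ~/.ssh/id_ed25519_sk.pub

# Sign every commit by default.
git config --global commit.gpgsign true
```

This protects commits rather than the npm publish step itself, but it means a stolen password alone cannot produce commits that look like they came from the maintainer's machine.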
Corporate funding for open source security. This one is already starting to happen. The Linux Foundation recently announced $12.5 million in grants from Anthropic, AWS, Google, GitHub, Microsoft, and OpenAI, all directed at strengthening open source security through OpenSSF and Alpha-Omega. That is a start. But $12.5 million spread across an ecosystem with millions of packages is not going to be enough. The companies that profit most from open source need to treat its security as critical infrastructure, not charity.
Lock your dependencies. Stop using install commands that resolve versions. Most developers still run npm install out of habit. The problem is that npm install resolves version ranges and can pull in newer releases that still satisfy your package.json constraints. That is exactly how the Axios attack spread: if your package.json said "axios": "^1.14.0", a regular install would happily grab 1.14.1, the poisoned version. npm ci does none of that. It reads your package-lock.json and installs exactly the versions recorded there. No resolution, no surprises.
You can block npm install entirely with a preinstall script:
{
  "scripts": {
    "preinstall": "if [ \"$npm_command\" = \"install\" ] && [ \"$npm_config_ignore_scripts\" != \"true\" ]; then echo \"Error: Use npm ci instead of npm install\" && exit 1; fi"
  }
}
Add --ignore-scripts to prevent lifecycle scripts (preinstall, install, postinstall) from executing during installs. A malicious postinstall hook is the exact mechanism the Axios attacker used to deploy the RAT:
# GitHub Actions
- name: Install dependencies
  run: npm ci --ignore-scripts
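To make script-skipping the default for every install rather than just CI, the same setting can live in the project's .npmrc. One caveat: this also skips your own project's lifecycle scripts, such as prepare:

```ini
ignore-scripts=true
```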
This is not just a JavaScript problem. Every ecosystem has the same gap.
Ruby (Bundler). Never run bundle install without a committed Gemfile.lock. In CI, always use bundle install --frozen. This flag makes Bundler refuse to update the lockfile. If anything drifts from what is committed, the build fails:
# GitHub Actions
- name: Install gems
  run: bundle install --frozen --jobs 4
You can also add this to your .bundle/config so no one on the team accidentally resolves new versions locally:
BUNDLE_FROZEN: "true"
BUNDLE_DEPLOYMENT: "true"
Python (pip). Stop installing from requirements.txt with loose version specifiers. Pin every dependency to an exact version and hash. pip install --require-hashes will refuse to install anything that does not match a known hash, which means even if a package is compromised on PyPI, the install fails if the content changed:
# requirements.txt with hashes
axios-like-lib==2.1.0 \
    --hash=sha256:abc123...
In CI:
# GitHub Actions
- name: Install Python dependencies
  run: pip install --require-hashes -r requirements.txt
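The guarantee --require-hashes provides can be illustrated with plain sha256sum: the pin records the artifact's exact content, so anything that changes after pinning fails the comparison. A minimal simulation, with made-up filenames and contents (real pins come from the package index at pin time):

```shell
# Record a hash at pin time, then verify before "installing".
workdir=$(mktemp -d)
printf 'legitimate release\n' > "$workdir/pkg.tar.gz"
pinned=$(sha256sum "$workdir/pkg.tar.gz" | cut -d' ' -f1)

# Later, the artifact is swapped for a tampered one...
printf 'malicious payload\n' > "$workdir/pkg.tar.gz"
current=$(sha256sum "$workdir/pkg.tar.gz" | cut -d' ' -f1)

if [ "$current" != "$pinned" ]; then
  echo "hash mismatch: refusing to install"
fi
```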
For a more robust approach, use pip-compile from pip-tools to generate fully pinned and hashed requirements from a loose requirements.in file. This gives you the convenience of declaring top-level dependencies while locking the entire transitive tree.
None of these ideas are radical. Validators exist in other trust models. Hardware keys are standard in enterprise environments. Corporate security funding is table stakes for any supply chain you depend on. Lockfile enforcement is a one-line config change.
The radical part is that we still do not do any of this at scale.
Open source built the modern internet. The least we can do is stop pretending a single maintainer with a password is an acceptable security model for software that runs on every continent.
Final Words
If you agree with this, share it. If you disagree, I want to hear why. You can find me on LinkedIn, X, and Threads.
If you want to talk about open source security, software architecture, or anything else covered on this blog, reach out through my contact page.
What would it take to make you trust a new package version the day it drops?
If this post made you think, you'll probably like the next one. I write about what's actually changing in software engineering, not what LinkedIn wants you to believe. No spam, unsubscribe anytime.