When Every Hour Counts: How Preparation Turns Critical Vulnerabilities into Non-Events
Fri, December 05, 2025 - by Mark Beharrell
- 2 minute read
Last week, the security community issued warnings about vulnerabilities in React and Next.js that were being compared to Log4j. For context, Log4j caused billions in damages and took some organisations months to remediate. Our customer-facing applications use these frameworks. Our response time? Hours, not months. And our customers experienced precisely zero disruption.
Here's what made the difference.
The Threat Was Real
These weren't theoretical risks. The vulnerabilities scored 10.0 on the CVSS severity scale, the maximum possible rating. They could be exploited remotely, required no authentication, and affected default configurations. In plain terms: anyone running these frameworks was exposed, and attackers didn't need special access to take advantage.
For organisations hosting customer solutions, this is the scenario that keeps security teams awake at night.
Speed Comes from Preparation, Not Panic
The reality is that you can't respond quickly to something you didn't see coming. We'd anticipated exactly this type of event and built our processes accordingly.
Real-time monitoring flagged the vulnerability the moment it was disclosed. We weren't scrambling to understand the situation—we knew immediately that action was required.
More importantly, we'd already automated the remediation path. A Gulp task within our development environment automatically pulls secure package versions, eliminating the manual work that typically delays response. When the patched versions became available, applying them was straightforward.
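The version check at the heart of that automation can be sketched as follows. This is a minimal illustration, not our actual Gulp task: the function names and the minimum-safe version numbers are placeholders, and a real task would read `package.json` or the lockfile rather than a hard-coded list.

```javascript
// Sketch: flag dependencies still below their minimum patched release.
// The PATCHED versions below are illustrative placeholders, not real
// advisory data; the real task sources these from vulnerability feeds.

// Parse "major.minor.patch" into an array of numbers for comparison.
function parseSemver(version) {
  return version.split('.').map(Number);
}

// Returns true if `installed` is at or above the `patched` release.
function isPatched(installed, patched) {
  const a = parseSemver(installed);
  const b = parseSemver(patched);
  for (let i = 0; i < 3; i++) {
    if (a[i] > b[i]) return true;
    if (a[i] < b[i]) return false;
  }
  return true; // exact match counts as patched
}

// Illustrative minimum-safe versions (placeholders).
const PATCHED = { next: '14.2.10', react: '18.3.1' };

// Given the installed dependency set, list packages needing the update.
function vulnerableDeps(installed) {
  return Object.entries(installed)
    .filter(([name, ver]) => PATCHED[name] && !isPatched(ver, PATCHED[name]))
    .map(([name]) => name);
}
```

In a real pipeline, the flagged packages would feed an `npm install <pkg>@<patched-version>` step ahead of the build, which is what removes the manual delay when an advisory lands.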
Automation Plus Human Judgment
There's a temptation in security to automate everything. We resist it. Automation handles the predictable parts—pulling updates, triggering builds, running scans. But human oversight catches the edge cases that automation misses.
After the automated update, we performed manual sanity checks to confirm application stability. A hotfix was created, reviewed, and merged. Deployment pipelines were then triggered across environments. The combination of speed and verification meant we moved quickly without introducing new problems.
Defence in Depth Isn't Optional
This incident reinforced something we already believed: single points of protection aren't enough. Our security posture uses multiple overlapping layers.
Snyk continuously scans our dependencies for known vulnerabilities. SonarCloud performs static analysis to identify unsafe coding patterns before they reach production. Azure Defender for Cloud integrates directly into our CI/CD pipelines, monitoring infrastructure and configuration.
Each tool catches different things. Together, they create a net that's difficult to slip through.
What This Means for Your Data
If you're running transformation projects with us, this matters directly to you. The platforms hosting your data, the applications processing your insights, the interfaces your teams interact with—all of these depend on frameworks in which security issues are occasionally discovered.
The question isn't whether vulnerabilities will emerge. They will. The question is whether your technology partner has built the systems and discipline to respond before exposure becomes exploitation.
We have.
The Takeaway
Rapid vulnerability mitigation isn't a technical achievement to celebrate internally. It's a commitment to the people who trust us with their data and their business outcomes. When critical security events happen—and they will—the preparation you've done beforehand determines whether it's a crisis or a controlled response.
This time, preparation won.