
Can AI Fix What We Broke?
Rethinking the People–Process–Technology Equation in Cyber Security
For over two decades, we’ve been told that the strength of any cyber security program rests on three legs: People, Process and Technology, and on how well they work together. The problem is that one of those legs often wobbles, and we keep trying to balance it with expensive tech instead of fixing the structure.
Most practitioners know this story too well. You can buy the latest shiny platform, automate everything and feed dashboards until they glow, but if the process is broken, the outcomes remain pretty much the same. Technology can highlight inefficiency. It can’t heal culture.
The recent F5 BIG-IP breach is a textbook example. It reminds us that even world-class technology can fail when governance and process take a back seat.
When the Process Breaks, the Tech Follows
If you’ve spent enough time in this industry, you’ve seen the pattern:
- A clever engineer spots a shortcut.
- A change request slips through review.
- The next audit flags a “minor deviation”, one that might later become headline material.
In the F5 case, attackers reportedly obtained access to proprietary BIG-IP source code, sparking national security concerns and prompting an ASD advisory urging Australian organisations to isolate exposed devices immediately.
Was the flaw in F5’s technology? Not really. It was in how access, validation and code integrity were governed. The process, not the product, was compromised.
Here’s the uncomfortable truth: this pattern repeats everywhere. From exposed API keys on GitHub to misconfigured cloud policies, it’s rarely a lack of controls. It’s a lack of process discipline and cultural consistency.
People, Process, Technology – Why Harmony Still Escapes Us
The triangle was supposed to be balanced. Yet we keep leaning on technology because it’s tangible, measurable and purchasable.
People and processes are harder. They involve:
- Training and retention in a market starved for talent
- Building a culture that values consistency over heroics
- Incentives that reward doing it right, not doing it fast
Every maturity model, from NIST CSF 2.0 to the Essential Eight, reinforces that processes underpin resilience. Yet many programs still equate tool adoption with maturity.
We’ve created sophisticated telemetry that measures everything except whether people actually follow the process.
AI: A New Player or the Missing Conductor?
This brings us to the big question: Can AI bridge the gap?
AI isn’t a miracle worker. It can’t fix apathy, poor leadership, or broken culture. But it can do something profound – make invisible process drift visible.
Imagine AI systems that:
- Map how work actually happens across teams, not how flowcharts say it does
- Detect patterns of skipped steps or ignored alerts
- Predict where process bottlenecks will break before the next incident
- Translate endless logs into human-readable insight that says, “Hey, this isn’t just an anomaly, it’s a recurring failure mode”
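The last two capabilities can be sketched in a few lines. The snippet below is a hypothetical illustration, not a real product feature: it assumes a made-up incident-response playbook (`EXPECTED_STEPS`) and toy log data, and shows how comparing what teams actually did against what the playbook says would distinguish a one-off anomaly from a recurring failure mode.

```python
from collections import Counter

# Hypothetical playbook: the order incident-response steps are *supposed* to follow.
EXPECTED_STEPS = ["triage", "contain", "eradicate", "recover", "review"]

def skipped_steps(observed: list[str]) -> list[str]:
    """Return the expected steps that never appear in an incident's log."""
    seen = set(observed)
    return [s for s in EXPECTED_STEPS if s not in seen]

def recurring_failures(incidents: list[list[str]], threshold: int = 2) -> dict[str, int]:
    """A step skipped once is an anomaly; skipped across incidents, it's a failure mode."""
    counts = Counter(step for inc in incidents for step in skipped_steps(inc))
    return {step: n for step, n in counts.items() if n >= threshold}

# Toy data: three incidents, as recorded in (hypothetical) workflow logs.
incidents = [
    ["triage", "contain", "recover"],              # skipped eradicate, review
    ["triage", "eradicate", "recover"],            # skipped contain, review
    ["triage", "contain", "eradicate", "recover"], # skipped review again
]
print(recurring_failures(incidents))  # {'review': 3}
```

The interesting output is not the one-off skips but the step nobody ever performs: that is the “recurring failure mode” an AI layer would surface in plain language rather than bury in a dashboard.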
That’s where the opportunity lies. Not in replacing people, but in augmenting their awareness.
In many ways, AI could become the conductor ensuring People, Process, and Technology play in tune, each section aware of the other’s timing and tone.
The Fine Line Between Aid and Autonomy
There’s a temptation to over-automate. Some believe AI can fully replace decision-making: that we can hand over our runbooks, let AI run them for us, and sleep easy.
That’s wishful thinking.
AI models learn from data and our data often reflects the very inefficiencies we’re trying to fix. Feed a flawed process into an algorithm and it’ll optimise the flaws. Faster.
The goal isn’t autonomy; it’s augmented governance:
- AI that assists humans in spotting weak signals early
- Systems that translate process gaps into actionable insight
- Playbooks that adapt in real time, reflecting how incidents actually unfold
If we get this right, AI can do for governance what SIEM did for visibility: make process compliance observable and measurable in real time.
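What “observable and measurable” might mean in practice: a minimal sketch, assuming a hypothetical change-management log where each record notes whether the mandated review step actually ran. None of the field names or IDs come from a real system.

```python
# Hypothetical change-management log entries (illustrative data only).
changes = [
    {"id": "CHG-101", "reviewed": True},
    {"id": "CHG-102", "reviewed": True},
    {"id": "CHG-103", "reviewed": False},  # the shortcut that slips through review
    {"id": "CHG-104", "reviewed": True},
]

def compliance_rate(records: list[dict]) -> float:
    """Fraction of changes that followed the mandated review step."""
    return sum(r["reviewed"] for r in records) / len(records)

print(f"Change-review compliance: {compliance_rate(changes):.0%}")
# Change-review compliance: 75%
```

The point is not the arithmetic but the shift: once process adherence is emitted as telemetry, it can be trended, alerted on and governed just like any other security signal.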
The F5 Lesson and the Road Ahead
The F5 breach serves as a wake-up call for every organisation building or selling technology. The incident wasn’t about a single product. It was about trust in process, governance of code and control of access.
As more systems become AI-assisted, this trust boundary becomes even more fragile. The next generation of attacks won’t just target code; they’ll target how AI interprets and enforces policy.
That’s why the future of cyber resilience isn’t just AI-powered; it’s AI-aware. We must design systems that not only detect anomalies but also understand why they occurred and which process failed to prevent them.
What’s Next – From Shiny Tools to Smarter Systems
The next evolution of maturity will depend on how well we:
- Use AI to illuminate broken processes, not hide them.
- Align culture and accountability so that automation amplifies good habits, not bad shortcuts.
- Design technology around human decision-making, not the other way around.
AI will not fix culture, but it can coach it into clarity. It can help teams see what’s really happening behind the noise.
In the end, the harmony between People, Process and Technology won’t come from more dashboards. It will come from systems that think with us, not for us.
Key Takeaways
- The F5 breach highlights how process failure, not technology weakness, drives major incidents.
- AI can expose process drift and enforce discipline, but it can’t fix poor culture or leadership.
- The future lies in AI-assisted governance that connects human behaviour, process integrity and technology execution.