AI Strategy - What Problem Are We Actually Solving?

Thu, March 19, 2026
Most AI projects fail because organisations start with technology instead of a clear business problem. This article explains how to focus on outcomes, choose between AI and automation, and build the data foundations needed to deliver real impact.

Everyone wants to talk about AI. Agents, models, platforms, copilots. The technology is exciting, but there’s a more important question that often just gets skipped. What exactly are we trying to fix? What business advantage are we trying to achieve?

It sounds obvious, but it’s not. More than 80% of AI projects fail, twice the rate of IT projects that don’t involve AI. Gartner predicts that more than 40% of agentic AI projects will be cancelled by 2027. MIT researchers found that only 5% of integrated AI pilots deliver measurable impact on the bottom line. The remaining 95% produce nothing that shows up in the P&L.

The pattern behind these failures is remarkably consistent. Organisations start with the technology and work backwards to find a problem worth solving. The successful ones do the opposite: they define a business problem clearly, then pick the right approach to solve it. That reversal is the key insight.

AI vs Automation - Knowing the Difference

Not every problem needs AI. Some need better process automation. Getting this distinction wrong is expensive.

Automation follows fixed rules. If a customer submits a form, route it to the right department. If an invoice exceeds a threshold, flag it for approval. These are predictable tasks with predictable inputs and predictable outputs. They’re valuable to automate, but they don’t require reasoning.
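The two examples above can be sketched as a few lines of code, which is exactly the point: fixed rules need no model at all. This is a minimal illustration with hypothetical field names, queue names, and a made-up approval threshold, not any specific platform's API.

```python
# Rule-based automation: fixed conditions map to fixed actions.
# Queue names and the threshold below are illustrative assumptions.

APPROVAL_THRESHOLD = 10_000  # assumed value for illustration


def route_form(department_field: str) -> str:
    """Route a submitted form based on a fixed field value."""
    routes = {"billing": "finance-queue", "outage": "support-queue"}
    return routes.get(department_field, "general-queue")


def flag_invoice(amount: float) -> bool:
    """Flag invoices above a fixed threshold for manual approval."""
    return amount > APPROVAL_THRESHOLD
```

Every input maps deterministically to an output; there is no interpretation involved, which is why reaching for AI here adds cost without adding capability.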

AI earns its place when the task requires interpretation, context, or judgment. Consider a helpdesk where first-line support always escalates to expensive third-line specialists. A simple FAQ bot that matches keywords to pre-written answers is the right automation for predictable questions, but it can't diagnose an unfamiliar problem. A system that understands the user's problem in context, reasons through possible causes grounded in existing expert knowledge, and guides the first-line agent to a resolution is where AI excels. The business outcome is measurable: fewer escalations, faster resolution, and expert staff redeployed to higher-value work.

The distinction matters because the investment profiles are completely different. Automation projects are typically faster, cheaper, and lower risk. AI projects require data foundations, governance, and ongoing refinement. Labelling an automation problem as an AI problem doesn’t just waste budget; it sets expectations that the technology can’t meet.

Start With the Pain, Not the Platform

Organisations that succeed with AI share a common discipline: they define the problem in business terms before selecting any technology.

Vodafone’s Copilot deployment is a good example. The company didn’t start with “let’s deploy AI.” Their legal and compliance team was spending excessive time drafting, reviewing, and renegotiating contracts, a core activity for a regulated business. The problem was defined precisely: contract turnaround is too slow, and the work doesn’t scale. Vodafone ran a 300-person trial, measured the results with KPMG, and found legal staff were saving four hours per person per week. That data-driven confidence led to a rollout across 68,000 employees. The legal team became Vodafone’s biggest internal advocates for AI, not because they were excited about the technology, but because it solved a problem they cared about.

Air India followed the same logic. The airline’s passenger base was doubling, but its contact centre couldn’t scale proportionally without unsustainable cost increases. The question wasn’t “should we use AI?” It was “how do we handle twice the query volume without twice the headcount?” With that constraint defined, they built a generative AI assistant that now handles 40,000 queries daily at a 97% automation rate. Call centre volumes stayed flat at 9,000 per day despite the growth. The technology decision followed the problem definition.

When the Data Foundation Comes First

Octopus Energy’s AI story is worth examining closely because it illustrates what happens when an organisation builds the right foundations before AI even enters the conversation.

During the 2022 energy crisis, customer service demand surged across the sector. Octopus, now the UK’s largest electricity supplier, needed to handle significantly higher volumes without degrading the service quality that had differentiated them from competitors. That was the problem.

The reason they could move quickly is that their Kraken platform had been recording data holistically from day one. Every phone call transcribed, every email, every click, every meter reading, every payment, even every failed payment, all stored in a single, unified system. As CEO Greg Jackson put it, the company was able to implement AI so quickly and so effectively because it had really good data: stored in silos, data is exponentially less powerful than data stored holistically.

Their AI tool, Magic Ink, is fed continuous context around each customer’s history and product specifications. It summarises all interactions, generates draft responses, and suggests actions such as requesting a meter reading. Critically, a verification system annotates facts with their source and highlights anything that couldn’t be verified. Human agents review every AI-generated response before it reaches a customer.
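The verification step described above is worth dwelling on, because it is what makes the human review practical. A hedged sketch of the pattern, with a hypothetical data model and function name rather than anything from Octopus's actual system, might look like this:

```python
# Sketch of the verification pattern: annotate each claim in a draft with
# its source, and flag anything that can't be grounded in recorded data.
# The fact store and claims below are invented for illustration.

KNOWN_FACTS = {  # stands in for the customer's recorded history
    "last meter reading was on 3 March": "meter-readings",
    "direct debit failed in April": "payments",
}


def annotate(claims: list[str]) -> list[tuple[str, str]]:
    """Return (claim, source-or-flag) pairs for a human agent to review."""
    return [
        (claim, KNOWN_FACTS.get(claim, "UNVERIFIED - check before sending"))
        for claim in claims
    ]
```

The design choice matters: rather than trusting the generated draft, every statement is either traced to a source system or explicitly surfaced as unverified, so the reviewing agent knows exactly where to look.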

The results are measurable. AI now drafts over 40% of Octopus’s digital communications, achieving an 80% customer satisfaction rating compared to 65% for trained human agents. The AI handles the equivalent of 250 employees’ worth of customer service work, freeing agents to focus on complex cases that genuinely require human expertise.

The lesson here isn’t about the AI. It’s about the data. Kraken wasn’t built for generative AI; it was built to run an energy company. But because it stored data holistically rather than in silos, it became the perfect grounding layer when the technology arrived. The problem was defined first. The data foundation already existed. The AI was the last piece, not the first.

Extend What You Have Before Building From Scratch

Once the problem is defined, the next temptation is to build a bespoke solution from the ground up. It feels ambitious. It’s usually a mistake.

The smarter move is to extend existing platforms first and only build custom when you find a gap that absolutely cannot be filled otherwise. This isn’t a compromise; it’s a design principle that Microsoft’s certification track for AI solutions architects identifies as the foundational decision that can make or break an entire AI programme.

Vodafone extended its existing Microsoft 365 environment with Copilot rather than commissioning a purpose-built contract review system. The result was faster to deploy, lower risk, and immediately familiar to the people using it. Siemens took a similar approach when it faced a shortage of Surlyn, a specialised resin used in medical diagnostic packaging. Because Surlyn is patented, there were no alternative manufacturers. Rather than building a custom procurement intelligence system, Siemens used an AI-powered supplier discovery tool that searched import and shipping documents. Within days, it generated a list of 150 distributors, several of which had inventory available for immediate purchase.

Building custom makes sense when you’ve identified a genuine gap, a capability your existing platforms simply cannot deliver. But most organisations reach for custom too early, before they’ve explored what their current stack can do with the right extensions.

Connecting AI to Business Priorities

The most reliable filter for AI investment decisions is a simple one: can you articulate the non-AI alternative cost?

If Vodafone’s legal team continued reviewing contracts manually, the cost was quantifiable in hours per week per person. If Air India’s contact centre scaled linearly with passenger growth, the cost was quantifiable in additional headcount. If a manufacturer can’t identify alternative suppliers for a critical component within days, the cost is quantifiable in production downtime.
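As a back-of-envelope example, the Vodafone trial figures from earlier in this article (4 hours saved per person per week, across 300 people) can be turned into an annual cost. The hourly rate and number of working weeks below are assumptions for illustration, not sourced figures:

```python
# Illustrative non-AI alternative cost, using the Vodafone trial numbers
# cited in this article. Hourly cost and working weeks are assumptions.

HOURS_SAVED_PER_WEEK = 4    # per person, from the 300-person trial
PEOPLE = 300                # trial size
WORKING_WEEKS = 46          # assumed working weeks per year
HOURLY_COST = 60.0          # assumed fully loaded hourly cost

annual_hours = HOURS_SAVED_PER_WEEK * PEOPLE * WORKING_WEEKS
annual_value = annual_hours * HOURLY_COST
print(annual_hours)  # 55200 hours of legal time per year
```

Even with conservative assumptions, the manual alternative runs to tens of thousands of hours a year, which is the kind of number a business case can be built on.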

When the non-AI cost is clear, the business case writes itself. When it isn’t, when the justification is “everyone else is doing it” or “we need an AI strategy”, that’s a signal to stop and define the problem properly before committing resources.

Budget holders can stress-test any AI proposal with three questions:

  • What specific business outcome does this improve and how will we measure it?
  • Is this genuinely an AI problem, or would better automation or data consolidation solve it?
  • Are we extending what we already have, or building from scratch without good reason?

If the answers are vague, the project isn’t ready.

How 5Y Approaches This

At 5Y, we start every engagement with the problem, not the technology. When a client says, “we want AI,” our first questions are about their data architecture, their business processes, and the outcomes they’re trying to improve. We’ve found that the right answer is often not a standalone AI project; it’s a data foundation that makes AI reliable when it’s eventually applied.

Our AI products are layered on top of our solution accelerators, which means business users can perform complex analytics on their data using natural language straight out of the box. That’s the first stage. The next stage is understanding and replicating complex business processes through orchestrated agents, the kind of capability where decisions need to be made quickly, such as ordering emergency stock for a critical part when supply chain risks are detected. We’re actively investing in upskilling our solution architects to think this way: business problem first, then the right technology to solve it.

This approach isn’t just about avoiding failure. It’s about ensuring that when AI is deployed, it’s deployed against a problem worth solving, with data foundations that make it reliable and success criteria that make it accountable.

The Takeaway

AI projects don’t fail because the technology is immature. They fail because the problem was never properly defined. The organisations getting real value from AI, such as Vodafone, Air India and Octopus Energy, are the ones that started with a specific, measurable business pain. Not a technology ambition.

Before committing budget, define the problem in terms a CFO would recognise. Quantify the non-AI alternative cost. Distinguish between tasks that need reasoning and tasks that need rules. Extend what you have before building what you don’t. And if you want AI to work, build the data foundations first, because when the technology arrives, it’s the quality of your grounding data that determines whether it delivers or disappoints.

Next in this series: What’s the State of Our Data Foundations? Why clean data isn’t enough and what AI actually needs to understand your business.
