
Why AI investment fails without use cases and data
The organizations that absorb AI successfully are those whose leadership combines strategic clarity, data discipline, and organizational redesign.
Key points:
- Many organizations default to generative AI for every use case without understanding that established technologies like machine learning may be better suited, leading to misaligned deployments and eroded executive confidence.
- Barbara Cresti, Founder of StratEdge, argues that AI cannot compensate for missing business strategy or poor data governance, and that scaling requires top-down alignment with clear strategic objectives.
- Long-term resilience depends on embedding technology into business continuity planning, including digital sovereignty, dependency risk on external platforms, and structured feedback loops that protect workforce judgment.
Most organizations experimenting with AI are stuck. They run pilots, announce initiatives, and invest in tools. But the results rarely scale. The problem is not the models but the absence of a clear business strategy, clean data foundations, and executive leadership willing to define where AI fits and where it does not.
Barbara Cresti is the Founder of StratEdge, an advisory firm that helps boards and executives lead AI strategy and governance. Previously, she held marketing leadership roles at Amazon Web Services and Orange. With over 20 years of experience across regulated and tech-driven industries, Cresti now advises midsize and enterprise organizations working to close the gap between AI experimentation and strategic execution.
"AI cannot be adopted just because it's available. It has to be aligned with a clear business purpose; otherwise it remains an experiment instead of becoming a strategic capability," Cresti says. Across industries, organizations are allocating significant budgets to AI, yet many initiatives are detached from operational KPIs. Pilots are launched, but integration into revenue generation, cost optimization, or risk mitigation remains unclear. The confusion often starts with the technology itself.
The wrong AI for the job: Cresti recalls one client team that had defaulted to generative AI for a forecasting and logistics problem. "We told them that maybe generative AI is not the best AI for that. Maybe you need machine learning," she says. "They didn't realize that other technologies could be better suited for their use case."
For tasks like demand forecasting and logistics optimization, machine learning has a longer track record, greater reliability, and stronger auditability than generative alternatives. When the team understood that, confidence went up and perceived risk went down.
Strategy before tools: "The issue today is that AI is seen as a tool instead of an enabler to the business strategy. If you don't have a strategy, you don't know where you should focus first. AI cannot compensate for a missing strategy. It can only accelerate what is already defined," she explains. Organizations that lack clarity on growth priorities, cost structures, or performance indicators will struggle to identify viable AI use cases. The result is experimentation without scale and capital without return.
Garbage in, amplified risk out: "If the data is messy, there is nothing you can do. AI cannot fix garbage data, and without governance and visibility into what data is strategic, you cannot scale. AI can only be as reliable as the data it is trained and fed on. If governance is weak, the technology will reflect that weakness at scale," Cresti says.
She points to her time at AWS, where personalization efforts were constrained by inconsistent CRM data. If such challenges exist within a technology leader, the implications are greater in organizations where data architecture has evolved organically over decades.
Scaling AI requires identifying which datasets are strategically critical, assigning ownership, implementing KPIs, and ensuring traceability. The organizational side of the problem is equally unresolved. Cresti says adoption in many organizations has been bottom-up, with individual teams experimenting without executive sponsorship or strategic alignment. She points to AWS as a counterexample, where implementation was driven top-down with clear definitions of where each function should and should not apply AI.
Safe spaces for pushback: Organizations need formal mechanisms, such as structured feedback loops, escalation paths for questionable outputs, periodic validation reviews, and mandatory documentation of override decisions. "Leadership should install safe spaces and request mandatory feedback from employees on how it is going with AI," Cresti says. At AWS, designated champions collected concerns across teams and escalated them to central program leadership. That structure ensured visibility and supported institutional learning.
When AI begins to drive decisions, human oversight becomes a governance requirement, and the technology itself becomes part of business continuity. Yet many organizations treat AI vendor selection as a procurement decision rather than a resilience decision. Cresti draws a parallel to Europe's reliance on global payment networks: if a critical provider becomes unavailable, operational disruption is immediate, and dependency on a single provider directly affects operational continuity. Resilient AI deployment therefore requires diversified architectures, portable data, and clearly defined contractual and fallback mechanisms.
Organizations that integrate AI into core workflows without diversification and contingency planning risk building structural fragility into their operations.
"Think strategically about your competitiveness and your unique advantages," Cresti says. "How can you protect and augment them over the medium to long term?" Organizations that treat AI as a strategic enabler will convert investment into measurable impact. Those that do not risk embedding complexity, dependency, and exposure into their present and future operations.
AI Data press news, March 5
