Escaping "Pilot Purgatory": how to get past the POC stage in your AI journey

Written by Ewan Langford | Mar 13, 2026 3:55:02 PM

In our previous post, we explored the AI maturity scale and the seven steps required to move from basic automation toward Organisational General Intelligence (OGI). We identified that while 88% of businesses are using AI, around 62% are stuck in the experimental phase with AI agents [1].

Why? Because they are stuck in Pilot Purgatory – building countless Proofs of Concept (PoCs) but failing to generate sufficient confidence for further investment.


PoCs are a powerful way of showcasing your company's potential with agentic AI; they should represent an exciting kickstart to your AI journey, not the first nail in the coffin. Reaching escape velocity with your pilot requires proactively tackling your sticking points with a human-first culture and a suitable agent architecture to support your specific needs and goals.

 

Top 5 Sticking Points in the Experimental Phase

We’ve identified five key barriers in the experimental phase that many agentic AI initiatives navigate poorly:

1. Technical Reliability and Performance Gaps:

AI agents can “hallucinate” – generating false, illogical, or misleading information while presenting it confidently as fact. Furthermore, memory and reasoning constraints can compound errors during more complex, multi-step tasks. 45% of businesses reported this as a major concern for adoption [2].

2. Integration and Infrastructure Limitations:

Integration is often hampered by siloed legacy systems that are difficult to connect with, or low-quality "ROT" (redundant, outdated or trivial) data, both of which increase the likelihood of inaccuracies and hallucinations. Only 12% of organisations report that their data is of sufficient quality and accessibility to support AI initiatives [3].

3. Trust, Security, and Governance:

The "black box" nature of some AI systems poses significant security and compliance risks. Coupling this with autonomous access to tools, APIs, and data creates "digital insiders" that can be exploited to enable untraceable data leaks or "agent jailbreaking".

4. Cultural and Organisational Hurdles:

Cultural misunderstandings about the human value of work can lead to fear of job displacement and internal resistance to AI. Misinterpreting agents as a replacement for human workers rather than a force multiplier significantly limits a project’s ROI.

5. Ethical and Regulatory Challenges:

Difficulty defining legal liability for mistakes by autonomous systems can pose significant regulatory challenges under strict data privacy laws and industry-specific transparency standards. “Black box” systems often conflict with the "right to explanation" required by frameworks like the EU AI Act, creating a barrier to adoption in regulated sectors.

 

Preparing your Data

To operationalise autonomous AI agents, enterprises must transition from fragmented data silos to a cohesive, machine-readable data fabric. Legacy infrastructures relying on unstructured data lakes and isolated relational databases degrade LLM context windows, leading to poor retrieval recall and hallucinations.

Modernisation demands an Agentic RAG (Retrieval-Augmented Generation) architecture. To make this work, you need three things: vector databases that index your unstructured files for semantic search, knowledge graphs that connect entities and relationships across your business, and well-documented APIs that give the agent the "keys" to act within your systems. By establishing this continuous, normalised data pipeline, organisations ground AI agents in verifiable enterprise truth, enabling them to shift from stateless conversational interfaces to stateful systems capable of autonomous, multi-step workflow execution.
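To make the retrieval step concrete, here is a minimal sketch of the vector-search core of a RAG pipeline. It is deliberately simplified: the document names and three-dimensional embeddings are invented for illustration, and a production system would use a real vector database and learned embeddings rather than an in-memory dictionary.

```python
import math

# Toy in-memory "vector store". In production this would be a managed
# vector database, and the vectors would come from an embedding model.
DOCS = {
    "invoice_policy": [0.9, 0.1, 0.0],
    "holiday_policy": [0.1, 0.8, 0.3],
    "support_faq":    [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=2):
    """Return the k most similar documents, used to ground the agent's answer."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector close to the "invoice" region surfaces the invoice policy first.
print(retrieve([1.0, 0.2, 0.1]))
```

The retrieved documents are then injected into the agent's context, so its answers are grounded in enterprise data rather than the model's parametric memory.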

 

Defining Your Agent Expectations

Setting realistic expectations doesn’t mean lowering them; in fact, it’s the opposite. Today’s business leaders understand that AI is not only a productivity revolution, it’s also a cultural one. Making the shift may require some serious internal work, but, as we explored in the last blog, the impact on ROI will be exponential.

In “Sticking Point 4”, we explored how most companies treat AI as a "bolt-on" tool for isolated task automation, focusing purely on linear productivity gains such as faster drafting or quicker data retrieval. Successful Agentic AI implementation requires treating these systems as "digital teammates"—capable of autonomous reasoning and multi-step execution across your entire operation. Crucially, true autonomy demands intelligent oversight. Designing intentional 'human-in-the-loop' workflows ensures your domain experts remain the ultimate decision-makers, providing the necessary guardrails and allowing human and AI workers to amplify each other’s expertise.
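One way to picture a human-in-the-loop workflow is as a simple approval gate: low-risk agent actions execute automatically, while anything above a risk threshold is escalated to a domain expert. The sketch below is hypothetical; the threshold, action names, and `approve_fn` callback are stand-ins for whatever review mechanism your platform provides.

```python
# Hypothetical human-in-the-loop gate: actions above a risk threshold are
# held for human review instead of executing autonomously.
RISK_THRESHOLD = 0.7

def gate(action, risk_score, approve_fn):
    """Execute low-risk actions automatically; escalate the rest to a human."""
    if risk_score < RISK_THRESHOLD:
        return f"executed: {action}"
    # In a real system this would open a review ticket and block until a
    # domain expert signs off; approve_fn stands in for that step here.
    if approve_fn(action):
        return f"executed after approval: {action}"
    return f"rejected by reviewer: {action}"

print(gate("send draft email", 0.2, approve_fn=lambda a: True))
print(gate("issue refund", 0.9, approve_fn=lambda a: False))
```

The design choice worth noting is that the human is in the loop by construction, not as an afterthought: the agent cannot bypass the gate for high-risk actions.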

By shifting the focus from simple task-completion to building a governed, multi-agent orchestration layer, you move past "pilot paralysis" and begin to capture the exponential productivity gains that Organisational General Intelligence (OGI) can provide.

 

The Human Side

Changing your perspective on what AI can do is only one side of the coin; leading businesses realise that human roles must also evolve across the entire organisation. This transition is delicate, as job displacement anxiety remains high - a February 2025 Eurobarometer Survey found that 66% of Europeans believe that AI will replace more jobs than it creates.

However, attitudes vary widely across different job roles and regions: 75% of managers and other white-collar workers believe AI will have a positive impact, compared to 50% of stay-at-home husbands or wives, suggesting that those who are closer to the tech tend to have a more positive outlook. Leaders must act as both visionaries and educators to help their teams navigate the unpredictable business landscape.

At Growth Directors, we strongly believe in the Human Value of Work – that the AI revolution will empower professionals to grow into “goal stewards”, whose identities are no longer tied to the tasks they complete but rather the broader strategic goals they can now accomplish.

In 2025, Microsoft introduced the idea that every employee must, in some way, become an “agent boss”, aligning their identity with the broader goals they pursue rather than with individual tasks or skills. This will enable your company to focus on the correct KPIs to join the top 5% of companies that see actual returns from their AI projects [4], while fostering a more fulfilled, capable and productive workforce.

 

Choosing a Platform for Success

Once your company is ready for digital teammates, you need to find the right technical infrastructure to implement them. To decide on a suitable platform, you need to audit the needs and capabilities of your company:

Do you have the technical capacity to build your own agents? Or do you want them pre-made? Many platforms provide pre-built agents or workflows which are perfect for common business uses. This can be great for smaller companies that don’t have the time, money or technical capacity to build their own agents or employ engineers to do it for them. For a larger enterprise, this might not be a factor; instead, you might only need a platform license so that your engineers can build custom agents and workflows to suit your company’s individual needs.

What areas of your business do you want to deploy these agents in (first)? You must assess who will need to be able to build and manage their own agents. As mentioned earlier, the answer might soon be “everyone”. This makes accessibility another important factor: good platforms will enable less-technical workers to build their own agent teams using intuitive natural-language prompts rather than code. Workflows should be visually clear, intuitive to build and easy to track, like in this example from lyzr.ai.

How might you want your AI to develop in the future? In the rapidly evolving world of AI, it is important to ensure you choose a platform that is set up for growth to avoid your investment quickly becoming outdated. Selecting a platform that is designed to integrate new developments with its existing products means you don’t need to do a complete system overhaul every time a new agent or technology is released, helping to future-proof your business.

What are your data and security requirements? Whether it’s for protecting personal information, confidential strategies or regulatory restrictions, good data- and cyber-security is non-negotiable. The platform you choose must comply with your specific local industry regulations and ideally keep all your data within a secure private cloud. Full transparency and auditability will also greatly reduce stress for you and your IT department by enabling proactive tracking of security risks or data leaks. In fact, 52% of companies say auditability is their primary success metric for agentic AI [5].

How will you measure performance and ROI? With a staggering 95% of pilots unable to see any ROI [4], you need to ensure that your platform provides the analytics and visibility necessary to refine workflows, optimise output and demonstrate clear value. This again requires clear auditability from your system, but you’ll also need the engineering capacity to build these factors into your workflows. Comprehensive analytics and clear action plans can make the difference between a useful AI architecture and a transformative one. You can see how we structured these analytics in this PoC we built for a Zurich-based telecommunications company.

Other features to consider are:

  • Memory and Orchestration: Does the system need to have long- or short-term memory of user preferences or client data? If your roadmap involves moving towards Organisational General Intelligence (OGI), then the platform you choose must be able to seamlessly share state memory between agents.
  • Context Standardisation: Look for platforms that support standardised protocols (like the Model Context Protocol) to ensure your agents can securely query your internal data repositories without constantly rebuilding custom API connections.
  • Hallucination Management & Self-Regulation: For enterprise deployments, erratic behaviour is a massive liability. Look for platforms that support built-in "critique" agents or self-reflection loops, allowing the system to evaluate its outputs against your ground-truth data before it ever reaches a client or updates a critical record.
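The critique-agent pattern in the last bullet can be sketched as a simple draft-review-revise loop. This is a hypothetical illustration: `call_worker` and `call_critic` stand in for real LLM calls, the ground-truth data is invented, and a production loop would carry richer feedback and logging.

```python
# Hypothetical self-reflection loop: a "worker" drafts an answer, a "critic"
# checks it against ground-truth data, and the draft is revised until it
# passes or the retry budget runs out.
GROUND_TRUTH = {"refund_window_days": 30}

def call_worker(task, feedback=None):
    # Stand-in for an LLM call; with critic feedback the draft is corrected.
    if feedback:
        return "Refunds are accepted within 30 days."
    return "Refunds are accepted within 60 days."  # initial hallucinated draft

def call_critic(draft):
    """Check the draft against verifiable enterprise data."""
    ok = str(GROUND_TRUTH["refund_window_days"]) in draft
    return ok, (None if ok else "Refund window must match policy data.")

def answer(task, max_retries=2):
    draft, feedback = call_worker(task), None
    for _ in range(max_retries):
        ok, feedback = call_critic(draft)
        if ok:
            return draft
        draft = call_worker(task, feedback)
    return draft  # best effort after exhausting retries

print(answer("What is the refund policy?"))
```

The key property is that no output reaches a client or updates a record until the critic has verified it against enterprise truth, which directly targets Sticking Point 1.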

Once you have a good idea of what you need from the agent architecture, you will be in a better position to compare platforms and choose what’s right for you.

 

The Path out of Purgatory

Escaping pilot purgatory depends heavily on establishing the right expectations from your leaders, your employees (both digital and human) and your platform. With the right culture and architecture established, you will have overcome the five sticking points and be ready to develop a PoC that won’t disappoint. Instead of treating agentic AI as a series of isolated, risky experiments, you can begin building the governed, multi-agent orchestration layer necessary to capture exponential value.

Progressing further along the 7 Steps to AI Maturity and moving toward true Organisational General Intelligence (OGI) is a significant undertaking, but it is the only way to join the top tier of companies seeing measurable, transformative productivity gains.

 

 

Sources

  1. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  2. https://www.ibm.com/think/insights/ai-adoption-challenges#:~:text=The%205%20biggest%20AI%20adoption,Making%20headway
  3. https://www.lebow.drexel.edu/sites/default/files/2024-09/drexel-lebow-precisel-data-integrity-trends-insights-2025-outlook.pdf
  4. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
  5. https://www.dynatrace.com/info/reports/the-pulse-of-agentic-ai-in-2026/