Technology & Innovation

Three ways organisations can reduce risk on their AI journey

As enterprises accelerate the adoption of generative and agentic artificial intelligence, governance and security frameworks are struggling to keep pace, according to industry research cited by F5. While innovation around AI is advancing rapidly, experts warn that unresolved governance gaps could expose organisations to heightened operational and cybersecurity risks.

Data from PwC’s AI Agent Survey (May 2025) shows that 79 per cent of companies have already adopted AI, with only 2 per cent stating they are not considering agentic AI at all. Adoption remains high whether the systems are fully agentic, meaning capable of planning, executing multi-step actions and self-adjusting, or simpler chatbot-style tools linked to large language models (LLMs). Most organisations also plan to increase AI spending over the coming year.

However, F5’s own research presents a contrasting picture. Using its AI Readiness Index, the company found that just 2 per cent of organisations are considered highly prepared to scale, secure and sustain AI-enabled systems in real-world environments.

Despite this gap, organisations face mounting pressure to move forward. EY research indicates that many business leaders believe adopting agentic AI this year is essential to staying competitive by 2026. According to F5, companies that take time to address governance and security challenges now are likely to be better positioned—and less exposed to risk—than those that prioritise speed alone.

To help organisations reduce risk while continuing their AI journeys, F5 outlines three key practices.

Securing AI models
Generative AI systems are built around one or more LLMs, and depending solely on model providers to prevent issues such as hallucinations or misuse is insufficient. Organisations are advised to invest in independent prompt and model services that can detect and block unwanted behaviours. In addition, because most enterprises use multiple LLMs, abstracting applications from direct inference API calls is critical to ensuring availability, scalability, routing efficiency and cost control.
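
By way of illustration only, a minimal sketch of this pattern might look like the following. The provider names, cost figures and pattern-based guard are hypothetical assumptions, not any particular vendor's product; the point is that prompts are screened independently of the model provider and applications never call inference APIs directly.

```python
import re
from dataclasses import dataclass
from typing import Callable

# Hypothetical independent guard: block prompts that look like injection attempts
# or requests for secrets, regardless of what the model provider enforces.
BLOCKED_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"api[_ ]?key",
    r"system prompt",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the independent guard checks."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

@dataclass
class LLMBackend:
    name: str
    cost_per_1k_tokens: float
    call: Callable[[str], str]          # provider-specific inference call

class InferenceGateway:
    """Single entry point that hides individual provider APIs from applications
    and routes requests for availability, cost control and routing efficiency."""

    def __init__(self, backends: list[LLMBackend]):
        self.backends = sorted(backends, key=lambda b: b.cost_per_1k_tokens)

    def complete(self, prompt: str) -> str:
        if not screen_prompt(prompt):
            raise ValueError("Prompt rejected by independent guard layer")
        for backend in self.backends:      # cheapest healthy backend first
            try:
                return backend.call(prompt)
            except RuntimeError:           # provider outage: try the next one
                continue
        raise RuntimeError("No LLM backend available")

# Example wiring with stubbed providers (real calls would go to vendor SDKs).
gateway = InferenceGateway([
    LLMBackend("provider-a", 0.5, lambda p: f"[provider-a] {p[:40]}..."),
    LLMBackend("provider-b", 1.2, lambda p: f"[provider-b] {p[:40]}..."),
])
print(gateway.complete("Summarise this quarter's incident reports."))
```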

Securing enterprise data
Protecting data used by AI models goes beyond encrypting information at rest or in transit. Any enterprise data accessed within private environments, including data shared with approved third-party services, must be actively detected and safeguarded. According to F5, the focus should be on controlling data exit points to prevent unintended exposure, rather than solely tracking data provenance.
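Again purely as an illustration, a sketch of an egress control at the data exit point might resemble the following. The destination allow-list and the regular-expression detectors are assumptions for the example, not a description of F5's tooling; in practice such checks would sit in a proxy or gateway rather than application code.

```python
import re

# Hypothetical egress filter: inspect any payload leaving the private environment
# for an approved third-party AI service, and redact or block sensitive values.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

ALLOWED_DESTINATIONS = {"approved-ai-vendor.example.com"}

def check_egress(destination: str, payload: str) -> str:
    """Allow data to leave only for approved destinations, with sensitive
    values redacted before transmission."""
    if destination not in ALLOWED_DESTINATIONS:
        raise PermissionError(f"Egress to {destination} is not approved")
    redacted = payload
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED:{label}]", redacted)
    return redacted

# Example: an email address is redacted before the payload exits the environment.
safe = check_egress(
    "approved-ai-vendor.example.com",
    "Customer jane.doe@example.com asked about invoice 4471.",
)
print(safe)
```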

Securing AI agents
Agentic AI introduces new risks by enabling systems to independently decide on actions, access resources and modify information. These agents require close monitoring and external controls to evaluate their behaviour. Two emerging approaches are gaining attention: guardrail frameworks, which compare agent outputs against predefined ground truths, and “LLM-as-a-Judge” frameworks, which use separate language models to assess the quality and appropriateness of agent actions. Both approaches can also be strengthened through human-in-the-loop oversight.
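To make the two approaches concrete, the sketch below combines a guardrail check against a predefined policy (the ground truth), a separate "judge" model scoring the action, and escalation to a human reviewer for borderline cases. The policy table, threshold and stubbed judge are hypothetical; a real deployment would call a second LLM and a richer policy store.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    target_resource: str

# Ground truth for the guardrail check: resources mapped to permitted operations.
GROUND_TRUTH_POLICY = {
    "crm-database": {"read"},
    "billing-system": {"read", "update"},
}

def guardrail_check(action: AgentAction, operation: str) -> bool:
    """Compare the proposed action against the predefined policy (ground truth)."""
    allowed = GROUND_TRUTH_POLICY.get(action.target_resource, set())
    return operation in allowed

def llm_judge(action: AgentAction, judge_model) -> float:
    """Ask a separate judge model to score the action's appropriateness (0..1).
    `judge_model` is any callable returning a numeric score; stubbed here."""
    prompt = f"Rate 0-1 how appropriate this action is: {action.description}"
    return float(judge_model(prompt))

def review_action(action: AgentAction, operation: str, judge_model,
                  threshold: float = 0.8) -> str:
    if not guardrail_check(action, operation):
        return "blocked"
    score = llm_judge(action, judge_model)
    if score >= threshold:
        return "approved"
    return "escalate-to-human"           # human-in-the-loop for borderline cases

# Example with a stubbed judge model (a real one would query a second LLM).
action = AgentAction("Update the customer's billing address", "billing-system")
print(review_action(action, "update", judge_model=lambda prompt: 0.65))
# -> "escalate-to-human"
```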

F5 concludes that implementing controls across models, data and agents can significantly reduce cybersecurity and governance risks. Without these measures, organisations may face new vulnerabilities as generative and agentic AI systems become more deeply embedded in business operations.
