Event Archive
Data readiness
The ultimate prerequisite for AI adoption.
01 April 2026
As with any new technology, artificial intelligence presents both an opportunity and a risk. “The opportunity lies in the huge increases to productivity and efficiency that AI offers an organisation,” says Daniel Acton, CTO, Accelera Digital Group. “AI agents are able to tackle repetitive tasks with accuracy and speed. They excel at performing routine jobs that you don’t need a human brain to do. Tasks such as processing approvals are the perfect example of a workload that AI can handle seamlessly, freeing up your human talent to focus on more innovative projects that will grow the business. Creativity isn’t a machine’s strong point – efficiency is (although creativity is increasing as the models that back the agents improve over time).”
Whether an organisation is seeking improved efficiency to be competitive in the market, or wants to stay relevant in terms of technology, it may also have fears of the unknown when it comes to the use of agentic AI. A common worry is that AI agents will malfunction in some way and leak or expose the organisation’s data. “To start with, it’s important to acknowledge that some of your employees are probably using AI already in the course of their work – perhaps to help with presentations or assist in editing documents,” Acton says. “Given this, there is already a risk of exfiltration of data in your organisation.
“For example, an employee who feeds a confidential report into a consumer version of an AI large language model (LLM) like ChatGPT, Gemini, etc., is unwittingly sharing your company’s information, and the terms and conditions of consumer versions of these LLMs usually allow for training those models on the data that users provide.
“Make sure that, from the start, there are guardrails and rules in place about using AI in a responsible way,” he states. “Using an enterprise version of an LLM will mitigate this risk, since the terms and conditions of the enterprise versions generally prohibit the use of your data to train the models. If a secure, paid version is available and employees are aware they must use this version for all work-related tasks, then the threat of data leakage through everyday AI use would be somewhat mitigated.
“If you are using something for free, then make no mistake – you are the product, and the data that you feed into the model will be used for training. In a paid version, the licensing prohibits the use of any data to train the models,” Acton emphasises.
AI agents
Organisations need to establish well-defined rules governing how AI agents access data: granting the correct access at the correct time, and never allowing unfettered access. “As with all your human employees, you must trust but verify that your AI agent is operating within the correct parameters,” Acton says. “Allow the AI agent controlled access and then check that it is doing its job correctly and that the prompts and controls you have implemented are working. If a human employee tried to access an unauthorised resource, then your security would activate the alarms and alerts you have put in place to prevent this. The same needs to be done for AI agents.”
Finally, the AI agent’s performance and output will only be as good as the data that underlies it. “The agent performs actions based on the data you give it. AI must be able to access quality data that is consistent. To perform optimally, it needs guidelines and examples. If you are asking it to write a piece of code, it needs to be shown examples of what your code looks like. If you are asking it to analyse your sales, have you trained it on where to access the sales records within the database? Data quality and data governance are key issues here,” Acton says.
The correct foundation is essential. Metadata and the data dictionary need to be standardised across the organisation so that the naming and descriptions of different tables and columns are consistent. This makes it easy to give the agent access to the correct data in the right way: the agent will understand what the data looks like and what it means. ‘Garbage in, garbage out’ is a well-known saying for a reason, and it is very relevant to the success of agentic AI within a company.
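In practice, a standardised data dictionary can be as simple as a machine-readable map from tables to column descriptions, which both agents and governance checks can read. The sketch below is a minimal illustration of the idea; the table and column names are hypothetical assumptions, not drawn from any real schema:

```python
# Hypothetical data-dictionary entries. Table and column names here are
# illustrative assumptions only; a real dictionary would mirror your schema.
DATA_DICTIONARY = {
    "sales.orders": {
        "order_id": "Unique identifier for a customer order (integer).",
        "order_date": "Date the order was placed (ISO 8601, UTC).",
        "total_amount": "Order total in the organisation's base currency.",
    },
    "sales.customers": {
        "customer_id": "Unique identifier for a customer (integer).",
        "region": "Sales region code, standardised across all tables.",
    },
}

def undocumented_columns(dictionary):
    """Return every column whose description is missing or empty."""
    return [
        f"{table}.{column}"
        for table, columns in dictionary.items()
        for column, description in columns.items()
        if not description.strip()
    ]
```

A check like `undocumented_columns(DATA_DICTIONARY)` can run in CI so that no table reaches an agent without consistent, human-readable descriptions of its columns.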
Access control
In terms of cybersecurity, the appropriate guardrails and governance must apply to every individual who accesses data within your organisation, whether they are a human employee or an AI agent. Agents act on prompts and do what they are told. You can trust the process, but it’s critical to verify that the governance is correct by logging what the agent is doing and flagging any unauthorised accesses and actions.
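The ‘trust but verify’ pattern described above can be sketched as an audit wrapper that logs every access attempt and flags anything outside an agent’s grants. This is a minimal illustration, not a real security product; the agent names, resources, and permission table are assumptions made for the example:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical permission table: agent name -> resources it may access.
AGENT_PERMISSIONS = {
    "sales-report-agent": {"sales.orders", "sales.customers"},
}

def audited_access(agent, resource):
    """Log every access attempt; flag any attempt outside the agent's grants."""
    allowed = resource in AGENT_PERMISSIONS.get(agent, set())
    if allowed:
        log.info("%s accessed %s", agent, resource)
    else:
        # An unauthorised attempt is logged as a warning so monitoring
        # can raise the same alarms it would for a human employee.
        log.warning("ALERT: %s attempted unauthorised access to %s",
                    agent, resource)
    return allowed
```

Routing every data access through a wrapper like this produces the audit trail needed to verify that prompts and controls are actually working.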
Agents work most effectively when they are given specific tasks for which they need to access clearly defined data. “The concept of least privilege must apply,” Acton says. “You wouldn’t give an employee the keys to your kingdom, and nor should you do that with an agent. Agents can be prompted to confirm with a human supervisor (the so-called Human in the Loop) before they perform certain actions or make any changes, and including this permission step will add another layer of security and good governance to the process.”
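The human-in-the-loop permission step Acton describes can be sketched as a simple gate: read-only actions run directly, while risky actions are blocked until a human reviewer signs off. The action names and the `approve` callback are illustrative assumptions, not a real framework API:

```python
# Hypothetical human-in-the-loop gate: actions on this list require a
# human supervisor's sign-off before the agent may perform them.
REQUIRES_APPROVAL = {"delete", "update", "send_email"}

def execute(action, resource, approve=None):
    """Run read-only actions directly; route risky ones past a human.

    `approve` is a callable supplied by the supervising application
    (an assumption for this sketch). It receives the action and resource
    and returns True only if the human reviewer signs off.
    """
    if action in REQUIRES_APPROVAL:
        if approve is None or not approve(action, resource):
            return f"blocked: {action} on {resource} awaiting human approval"
    return f"executed: {action} on {resource}"
```

Defaulting to ‘blocked’ when no reviewer is available keeps the gate fail-safe, which is the extra layer of governance the permission step is meant to add.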
Finally, Acton explains that governance should be as streamlined as possible. “Give the AI agent clear and precise instructions, organise your data consistently, ensure that the agent has access only to what it needs, and trust but verify. That way, you will reap huge benefits in terms of productivity in the most secure and efficient way.”
