
Sponsored: GenAI in the workplace – let’s get real

Business advisory specialist CohesionX says organisations should develop a strategy that allows for access to and use of AI, but not at the expense of security.

01 March 2025

André Strauss, CEO, CohesionX


Generative AI is the most widely adopted technology in human history, despite the lack of regulation or policies governing its use. Yet there seems to be little concern about the security implications of such a casual approach to an emerging technology.

This is according to business advisory specialist CohesionX, which delivers transformative technology solutions to help organisations leverage emerging technologies, notably GenAI. Company CEO André Strauss cites IBM research which found that a whopping 96% of employees in the companies surveyed confirmed they use GenAI in their jobs.

There is real risk involved in GenAI adoption, adds Strauss: businesses are moving at breakneck speed to integrate AI-powered tools while unknowingly exposing themselves to data leaks, shadow AI risks and AI hallucinations.

The security risks linked with the random, unregulated use of this technology can be severe, says Strauss, warning of the dangers of unsanctioned use of AI, known as shadow AI. “This has become one of the most serious security risks facing enterprises today,” says Strauss. “There are various scenarios that can play out, such as employees turning to public AI models (like ChatGPT or Copilot) for assistance, unknowingly sharing company secrets with external systems. There is no oversight and no control. IT teams cannot know what information is being fed into these tools, making compliance nearly impossible.”
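
As a simple illustration of the kind of oversight Strauss describes, the Python sketch below screens a prompt for obviously sensitive content before it is allowed to leave the company network. The patterns, the is_safe_to_send function and the block-or-allow decision are illustrative assumptions for this article, not a CohesionX product feature.

```python
# Minimal sketch: screen a prompt for obviously sensitive content before it is
# sent to any external AI service. The patterns below are illustrative only.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,19}\b"),                         # possible card/account numbers
    re.compile(r"(?i)\b(confidential|internal only)\b"),  # obvious document markings
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),               # e-mail addresses
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive company data."""
    return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(is_safe_to_send("Summarise this INTERNAL ONLY board report"))  # False
    print(is_safe_to_send("Write a friendly out-of-office message"))     # True
```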

LLM data leaks

He points out that it’s the potential loss of IP that has business leaders worried.

“Proprietary product data, client records, or financial reports can inadvertently become training data for publicly available AI models. Many businesses manage this risk effectively by offering employees secure, company-approved AI tools. This approach prevents the need to use external, uncontrolled AI services.”

Many LLM products use their conversation data to learn. This means that if sensitive information is trained into an AI model, there is a real risk that the AI will recall and expose that data later, sometimes to competitors, clients, or even the public.

Strauss says mature organisations address this problem by offering dedicated, independent AI instances, ensuring data remains within a secure environment.
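
As a rough sketch of what a dedicated, independent AI instance can look like from the employee's side, the Python example below routes every prompt to a company-hosted endpoint and refuses to talk to public services. The endpoint URL, the INTERNAL_AI_TOKEN environment variable and the response format are hypothetical assumptions, not a description of any specific platform.

```python
# Minimal sketch: send prompts only to a dedicated, company-hosted AI instance.
# Endpoint, token variable and response shape are illustrative assumptions.
import os
import requests
from urllib.parse import urlparse

INTERNAL_AI_ENDPOINT = "https://ai.internal.example.com/v1/chat"  # hypothetical internal instance
PUBLIC_AI_HOSTS = {"api.openai.com", "chat.openai.com"}           # examples of endpoints to avoid

def ask_company_ai(prompt: str, user_id: str) -> str:
    """Send a prompt to the company-approved AI instance only."""
    host = urlparse(INTERNAL_AI_ENDPOINT).hostname
    if host in PUBLIC_AI_HOSTS:
        raise RuntimeError("Refusing to send company data to a public AI service")

    response = requests.post(
        INTERNAL_AI_ENDPOINT,
        json={"prompt": prompt, "user": user_id},
        headers={"Authorization": f"Bearer {os.environ['INTERNAL_AI_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]  # assumed response shape
```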

“This risk can be wholly mitigated with proper implementation and tools. This includes continuous monitoring, regular updates to security protocols, and employing advanced encryption techniques.”

Another dangerous aspect of AI is that it doesn’t just get things wrong; it makes things up, says Strauss. AI hallucinations occur when a model generates false yet highly convincing information. For industries that rely on absolute accuracy, AI hallucinations can result in legal liabilities, security vulnerabilities, and even financial losses.

“Companies must implement AI guard rails and anti-hallucination systems to ensure that context-aware AI systems cross-check multiple data sources before generating answers.”
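
The Python sketch below illustrates the general shape of such a guard rail: a generated answer is only released if it can be matched against a minimum number of approved sources. The word-overlap heuristic and the two-source threshold are deliberately crude, illustrative assumptions.

```python
# Minimal sketch of a guard rail that cross-checks an answer against multiple
# trusted sources before releasing it. Heuristic and threshold are illustrative.
def supported_by(answer: str, source: str, min_overlap: int = 5) -> bool:
    """Crude support check: does the source share enough key terms with the answer?"""
    answer_terms = {w.lower() for w in answer.split() if len(w) > 4}
    source_terms = {w.lower() for w in source.split()}
    return len(answer_terms & source_terms) >= min_overlap

def guarded_answer(answer: str, sources: list[str], required_sources: int = 2) -> str:
    """Only return the model's answer if enough independent sources support it."""
    supporting = sum(1 for source in sources if supported_by(answer, source))
    if supporting >= required_sources:
        return answer
    return "Unable to verify this answer against approved sources; please consult a human expert."
```

A production anti-hallucination system would typically rely on retrieval from approved knowledge bases or a second verification model rather than simple word overlap.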

Because AI learns from historical data, bias is an inherent risk; without proper systems, it can reinforce existing inequalities in business operations, Strauss adds.

Companies must ensure that they have a strong and clear ethical AI framework that tackles bias through bias-detection algorithms that flag and correct skewed decision-making.

But there is also good news, says Strauss. “With the proper safeguards in place, AI can be a transformative force rather than a security nightmare.”
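
To make the bias-detection idea concrete, the Python sketch below flags an AI-assisted decision process whose approval rates differ too sharply between groups. The four-fifths threshold and the data format are illustrative assumptions, not CohesionX's methodology.

```python
# Minimal sketch of a bias-detection check: flag a decision process whose
# approval rates differ too much between groups. The 80% ("four-fifths")
# threshold is a common rule of thumb used here purely as an illustration.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, approved) pairs from an AI-assisted process."""
    counts, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        counts[group] += 1
        approved[group] += int(ok)
    return {group: approved[group] / counts[group] for group in counts}

def flag_bias(decisions: list[tuple[str, bool]], threshold: float = 0.8) -> bool:
    """Flag if any group's approval rate falls below threshold x the best group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())
```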

CohesionX recommends:

• Combating shadow AI with company-specific AI platforms;

• Implementing full AI traceability (see the sketch after this list);

• Deploying AI guard rails;

• Implementing secure AI.
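
As a minimal illustration of the traceability recommendation, the Python sketch below appends an audit record for every prompt and response so that compliance teams can reconstruct who sent what, when, and to which model. The JSON-lines log file, the field names and the decision to store hashes rather than raw text are assumptions made for the example.

```python
# Minimal sketch of AI traceability: append an audit record for every
# prompt and response. File name and record fields are illustrative.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical central audit trail

def log_ai_interaction(user_id: str, model: str, prompt: str, response: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": model,
        # Store hashes rather than raw text so the audit trail itself
        # does not become another copy of sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```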

“AI isn’t just an opportunity - it’s an inevitability. Businesses that fail to secure their AI adoption today will face serious security, compliance, and operational risks tomorrow. Your employees are using AI - rather than trying to block it, your business needs a secure AI strategy that empowers them to use it safely,” says Strauss.

Contact André Strauss at info@cohesionx.co.za.