Features

Before diving into GenAI, get guardrails

Create a strategic roadmap for responsible enterprise AI adoption that balances innovation with security

01 March 2025

With apologies to Seymour R Goff

More and more businesses are restricting the use of GenAI, or banning it outright, on data privacy and security grounds. A global survey by Cisco found that one in four organisations has now banned its use. Employees may inadvertently leak sensitive company information, which is exactly what happened at Samsung. It was reported that an engineer fed glitchy source code from a semiconductor database into ChatGPT, asking the chatbot to fix the error. Later, another Samsung employee converted a smartphone recording of an internal company meeting into a document file and asked ChatGPT to generate the minutes. Both incidents are cause for concern because ChatGPT saves user inputs to train its models, meaning Samsung’s trade secrets ended up in the hands of OpenAI, which owns the AI service.

The South Korean tech giant isn’t against AI – it’s said to be developing its own AI software for internal use – but the episode highlights a growing challenge: how can businesses harness the power of GenAI while also protecting their data? As organisations rush to adopt GenAI tools, the security risks are multiplying, creating a complex landscape where innovation and security must coexist.
