Through the looking glass

There’s plenty of appetite to train LLMs, but how can you do this using confidential data?

03 June 2024

AI has heated the market way beyond boiling point and has propelled some companies to stratospheric earnings. It has been particularly lucrative for the companies that rent out the infrastructure to run AI services, rather than for the businesses actually providing those services.

Microsoft and Alphabet both reported double-digit growth in their first-quarter results towards the end of April, and their combined market value jumped by more than $250 billion. Shortly after, shares in Nvidia and Amazon wafted 2% higher. But for all this froth, it’s still not entirely clear, at least in the short term, what real business value AI is providing.

Beyond the hallucination problem, many IT leaders are hesitant to go all in on the technology because of privacy and security worries. Large language models (LLMs) need data, but how can proprietary or confidential data be protected? Financial and medical records are particularly sensitive; the latter may include diagnoses and treatment plans.
