
TECH TALK


We need observability in AI because the technology is starting to show its limitations at the precise moment that it is becoming indispensable, and for enterprises, these limitations are simply unacceptable.
For example, I teach a computer science course on Trustworthy Machine Learning at Stanford University and advise my students to treat LLMs’ answers as hallucinatory unless proven otherwise. Why? Because LLMs are trained to generalise from large bodies of text, generating new text modelled on the patterns in their training data. They are not built to memorise facts.
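To make that advice concrete, here is a minimal Python sketch of a "hallucinatory until proven otherwise" check: an answer is trusted only once each of its claims can be matched to supporting source text. The sentence-level claim splitting and the word-overlap threshold are illustrative assumptions, not a production fact-checking method.

```python
# A crude "verify before trusting" gate: an LLM answer stays flagged until every
# claim in it can be matched to supporting source text. The claim splitter and
# overlap threshold are illustrative assumptions, not a real fact-checker.

def is_supported(claim: str, sources: list[str], min_overlap: float = 0.6) -> bool:
    """Lexical check: does any source share enough words with the claim?"""
    claim_words = set(claim.lower().split())
    for source in sources:
        source_words = set(source.lower().split())
        if claim_words and len(claim_words & source_words) / len(claim_words) >= min_overlap:
            return True
    return False

def verify_answer(answer: str, sources: list[str]) -> dict:
    """Split the answer into sentence-level claims and flag unsupported ones."""
    claims = [c.strip() for c in answer.split(".") if c.strip()]
    unsupported = [c for c in claims if not is_supported(c, sources)]
    return {"trusted": not unsupported, "unsupported_claims": unsupported}

# The answer stays untrusted because one claim has no support in the source.
print(verify_answer(
    "The case was decided in 2019. The court awarded damages.",
    sources=["The court awarded damages to the plaintiff after trial."],
))  # {'trusted': False, 'unsupported_claims': ['The case was decided in 2019']}
```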
Anupam Datta, Principal Research Scientist and AI Research Team Lead, Snowflake
Observability helps companies evaluate and monitor the quality of inputs, outputs, and intermediate results of LLM-based applications and can help to flag and diagnose hallucinations, bias, and toxicity, as well as performance and cost issues.
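As an illustration of that idea (a sketch, not any particular product's API), the Python below wraps an LLM call so that the prompt, the retrieved context as an intermediate result, the response, the latency, and simple quality flags are all recorded per call; the evaluator functions are placeholder assumptions standing in for real hallucination, bias, or toxicity checks.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    prompt: str                     # input
    retrieved_context: list[str]    # intermediate result (e.g. RAG chunks)
    response: str                   # output
    latency_s: float                # performance signal
    flags: dict = field(default_factory=dict)  # quality flags per evaluator

class ObservedLLM:
    """Records a trace for every call and scores it with pluggable evaluators."""

    def __init__(self, llm_fn, retriever_fn, evaluators):
        self.llm_fn = llm_fn              # callable: (prompt, context) -> str
        self.retriever_fn = retriever_fn  # callable: prompt -> list[str]
        self.evaluators = evaluators      # name -> callable(TraceRecord) -> flag
        self.traces: list[TraceRecord] = []

    def ask(self, prompt: str) -> str:
        start = time.perf_counter()
        context = self.retriever_fn(prompt)      # capture the intermediate step
        response = self.llm_fn(prompt, context)  # capture the output
        record = TraceRecord(prompt, context, response,
                             latency_s=time.perf_counter() - start)
        record.flags = {name: fn(record) for name, fn in self.evaluators.items()}
        self.traces.append(record)
        return response

# Example with stub components (assumptions for illustration only).
observed = ObservedLLM(
    llm_fn=lambda prompt, ctx: "Paris is the capital of France.",
    retriever_fn=lambda prompt: ["Paris is the capital of France."],
    evaluators={
        "grounded": lambda r: r.response in " ".join(r.retrieved_context),
        "slow": lambda r: r.latency_s > 2.0,
    },
)
observed.ask("What is the capital of France?")
print(observed.traces[0].flags)  # {'grounded': True, 'slow': False}
```

Capturing the intermediate retrieval step alongside inputs and outputs is what makes diagnosis possible: an unsupported answer can be traced back to the context that produced it, and the accumulated traces can feed dashboards or alerts.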
But when LLMs are being used in place of search engines, some users approach them with the expectation that they will deliver accurate and helpful results. If AI fails to do that, it erodes trust. In one egregious example, two lawyers were fined for submitting a legal brief written by AI that cited nonexistent cases.
Hallucinations, security leaks, and incorrect answers undermine the trust businesses need to have in the technology.