Do GenAI Solutions “Hallucinate”? Let’s Stop Calling It That

GAI Insights Team

For the love of green beans and other good food everywhere, can we please stop using the term “hallucination” when talking about Generative AI? It’s confusing for executives and employees alike, that’s for sure. And trust me, when you’re trying to justify the budget for your latest AI project, especially when discussing AI Strategy or Enterprise AI, you don’t need to be the one in the room causing confusion.

AI and Deterministic vs. Probabilistic Thinking

Let’s get one thing straight, people: computer systems fall into two camps—deterministic and probabilistic.

It’s like comparing apples to, well, probabilistic oranges.

LLMs (that's Large Language Models for the uninitiated) are built to be probabilistic, meaning they process inputs (a.k.a. prompts) and churn out outputs, whether that’s words, numbers, you name it, based on a complex web of training data, algorithms, and more.

This isn’t your grandmother’s recipe for apple pie; it’s a whole different beast.

Take your standard accounting software—plug in the numbers, and voila, the same output every time. It’s rock-solid, predictable, and totally explainable.

Now, compare that to ChatGPT and what it does every day. You feed it a prompt like, “Create a safety training program,” and out comes a beautifully crafted document, most of the time. But here’s the kicker: there’s a small chance it slips up and hands you something confidently wrong.

That’s maybe 2-5% of the time, give or take.

Why?

Because it’s probabilistic, not because it’s “hallucinating.”
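To make the contrast concrete, here’s a minimal sketch in Python. The word list, probabilities, and function names are invented for illustration, and real LLMs apply temperature to logits inside the model rather than to a handful of hand-written probabilities, but the effect is the same in spirit: the deterministic function always returns one answer, while the sampler can return a different word each time you call it.

```python
import random

# Deterministic: same inputs, same output, every single time.
def monthly_total(entries):
    return sum(entries)

# Probabilistic: the model scores many possible next words and *samples* one,
# so the same prompt can produce different continuations on different runs.
# (Toy version: real models apply temperature to logits before a softmax;
# here we just rescale plain probabilities to show the effect.)
def sample_next_word(word_probs, temperature=1.0):
    words = list(word_probs.keys())
    weights = [p ** (1.0 / temperature) for p in word_probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# The accounting software never surprises you.
print(monthly_total([120, 76, 33]))  # 229, every single run

# The language model usually picks the likeliest word... but not always.
next_word = {"procedure": 0.55, "checklist": 0.30, "poem": 0.15}
for _ in range(5):
    print(sample_next_word(next_word, temperature=1.0))
```

Turn the temperature down and “procedure” wins almost every time; turn it up and the occasional “poem” sneaks in. That occasional low-probability pick is the 2-5% we just talked about, not a hallucination.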

Turing’s Outlook on Intelligent Systems

Remember Alan Turing? The father of modern computing didn’t coin terms like “hallucination” for systems that deviate from expected outcomes. Turing understood that variability in output can be a feature of a system designed to learn and adapt. GenAI’s probabilistic nature isn’t a flaw but a reflection of its complex design, rooted in principles that Turing himself explored.

The True Value of LLMs

In the context of AI Business Transformation and achieving Generative AI ROI, it's crucial to recognize that GenAI Solutions depend on their probabilistic nature to deliver unique value. So next time, for the love of everything, let’s use terms that make sense. In doing so, we not only clarify the true nature of GenAI but also uphold the integrity of our technological discussions.


FAQ

Q: Why do people use the term 'hallucination' in talking about LLMs?

A: The term "hallucination" is used to describe instances when a Large Language Model (LLM) generates factually incorrect or seemingly random content. While the term aims to convey the idea of outputs that don’t align with reality, it can be misleading, as it implies a cognitive process similar to human hallucinations, which LLMs do not have.

Q: What are some terms that would be better?

A: More accurate terms could include "inaccuracy," "misinformation," "fabrication," or "output error," which clearly describe the nature of the incorrect output without implying cognitive processes. Additionally, "probabilistic output variance" emphasizes the inherent variability in LLM outputs due to their probabilistic nature. 

Q: How can we work toward precision for these systems?

A: To achieve greater precision in LLMs, we can enhance training data quality, fine-tune models for specific domains, and implement post-processing verification systems to catch and correct errors. Communicating confidence levels in generated responses and incorporating ongoing monitoring and user feedback can also help improve accuracy. By combining these strategies, we can work towards reducing inaccuracies and enhancing the reliability of LLM-generated content.
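To make that last answer a bit more tangible, here’s a hedged sketch of a post-processing verification step, in Python, with hypothetical helper names (generate_draft, claim_is_supported) standing in for whatever model call and retrieval system you actually use. It splits a draft into claims, checks each one against a trusted source list, and reports a confidence figure instead of presenting the output as gospel.

```python
from dataclasses import dataclass

@dataclass
class CheckedAnswer:
    text: str
    supported: int
    total: int

    @property
    def confidence(self) -> float:
        return self.supported / self.total if self.total else 0.0

def generate_draft(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned draft so the sketch runs.
    return "Wear approved eye protection. Report incidents within 24 hours."

def claim_is_supported(claim: str, sources: list[str]) -> bool:
    # Naive stand-in: a claim counts as supported if one trusted source mentions
    # its leading key terms. Real systems use retrieval plus entailment checks.
    key_terms = [w.lower() for w in claim.split()[:3]]
    return any(all(term in src.lower() for term in key_terms) for src in sources)

def answer_with_verification(prompt: str, sources: list[str]) -> CheckedAnswer:
    draft = generate_draft(prompt)
    claims = [c.strip() for c in draft.split(".") if c.strip()]
    supported = sum(claim_is_supported(c, sources) for c in claims)
    return CheckedAnswer(text=draft, supported=supported, total=len(claims))

sources = [
    "All staff must wear approved eye protection in the lab.",
    "Safety incidents must be reported within 24 hours.",
]
result = answer_with_verification("Create a safety training program", sources)
print(f"{result.supported}/{result.total} claims supported "
      f"({result.confidence:.0%} confidence)")
```

Nothing here catches every error, of course. The point is structural: treat generated text as a probabilistic draft that gets verified, not as a deterministic answer that gets trusted.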
