What Happens When AI Gets It Wrong

Exploring trust and artificial intelligence.


In our previous articles, we explored different types of AI and their common uses. One of the more interesting — and important — issues is how much you should trust AI and the information it provides.

Editor's Note: Previous articles offered a definition of AI for equipment rental businesses and thoughts on using generative AI to improve equipment rental operations.

I grew up in a small town of about 30,000 people, surrounded by even smaller towns. As a teenager, I once told my parents that they seemed to know everything my brothers and I did, plus a whole lot more. It was that “whole lot more” that usually got us in trouble. My favorite life lesson from those years remains: Believe none of what you hear, and half of what you see.

With conversational AI tools (like ChatGPT, Copilot, and others), you ask a question and get an answer. How you ask — or “frame” — the question usually determines the quality of the answer. These tools are generally good at asking for clarification to refine responses, which often improves accuracy. But remember: their source of information is the internet, which contains a significant amount of misinformation. That can contaminate AI’s answers.

Coupled with AI’s ability to generate highly realistic images, audio, and video, it may now be wise to update the old saying to: Believe none of what you hear, read, or see.

That naturally raises the question: If I can’t fully rely on AI, why use it at all? 

The answer is that AI is powerful and often very accurate — if used wisely. For fact-based questions (e.g., “What’s the difference between an LLC and an S-Corporation?”), you’ll likely get excellent information. For opinion-based questions (e.g., “Who would be a great next U.S. president?”), approach with caution. You might get an entertaining answer, but not necessarily one you’d want to base a decision on.

The same rules apply to agentic AI (AI that not only answers questions but also takes actions). Know exactly where the data it’s using comes from. Is it solely your company’s internal data, or is it pulling from the broader internet? Is that data accurate and consistent?

James McKay, CEO of VEN, recently pointed out: 

While 92 percent of Fortune 500 companies are using AI, Gartner just revealed that 85% of AI initiatives fail to deliver their promised value. Nearly nine out of ten companies are striking out. The AI is fine. The problem is us. Our data is a mess — and data is the lifeblood of AI. AI amplifies what you already are. If your data is clean and your processes work, AI makes you unstoppable. If not, AI just helps you fail faster.

Think of it like driving a car. If your data is clean and reliable, AI hits the accelerator and you speed quickly, and safely, toward your goal. If your data is flawed, AI still hits the accelerator — but now you risk slamming into a tree at high speed.

The takeaway: Always know the source of the data your AI is using and be honest about its quality. The more confidence you have in the source, the more confidence you can have in the answer. If the source is questionable, lean heavily on human judgment, common sense, and intuition before acting.
