Artificial intelligence tools such as ChatGPT, Google Gemini, and Meta AI are becoming deeply embedded in daily life, with users relying on them for writing emails, answering questions, generating content, and supporting business decisions. However, Oracle founder Larry Ellison has warned that these systems have a critical limitation that many users fail to recognize.
Speaking about the rapid expansion of artificial intelligence, Ellison said that today’s most widely used AI models are not designed to understand the truth; they are built to predict language. According to him, AI systems do not actually know facts but generate responses by predicting the most statistically likely sequence of words, based on patterns in their training data. This allows AI to sound confident, detailed, and convincing even when the information it provides is incorrect.
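To make "predicting language" concrete, the short Python sketch below shows next-word prediction using the open-source Hugging Face transformers library and the small GPT-2 model; neither was named by Ellison, and the prompt is an invented example. The model simply ranks possible next tokens by probability, and nothing in the process checks whether the top-ranked continuation is true.

```python
# Illustrative sketch only (GPT-2 via Hugging Face transformers, chosen for
# this example): a language model scores every token in its vocabulary by
# likelihood. At no point does it consult a source of facts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the moon was"  # invented example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Convert the scores at the final position into probabilities and show the
# five most likely continuations. "Most likely" is not the same as "true".
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode(token_id)!r:>12}  p={p:.3f}")
```

A fluent but wrong continuation can outrank a correct one whenever the training data leaned that way, which is precisely the failure mode Ellison describes.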
Ellison pointed to the phenomenon known as AI hallucination, in which a system produces information that appears accurate but has no factual basis. He said many AI models developed by companies including Google, OpenAI, and Meta are trained on vast volumes of outdated, inconsistent, and unverified internet data, and these systems do not automatically verify whether their outputs are accurate, current, or reliable.
While this limitation may not pose serious risks for creative tasks such as writing poems or summarizing content, Ellison warned that the dangers increase significantly when AI is used in healthcare, finance, law, government policy, and corporate decision-making. In these areas, even minor errors can have serious real-world consequences. He noted that AI can confidently deliver incorrect answers, and many users may lack the expertise to identify those mistakes.
Ellison also said AI systems do not understand context, ethics, or consequences in the way humans do. They cannot distinguish between right and wrong or grasp why certain errors may be harmful. As a result, AI tools may provide flawed medical advice, fabricate legal references, or confidently describe events that never occurred. To non-expert users, such responses can appear entirely trustworthy.
To reduce these risks, Ellison said Oracle is focusing on building AI systems that rely on verified enterprise data, real-time information, and strict governance controls rather than solely on open internet sources. The objective, he said, is to ground AI systems in trusted data instead of letting them draw from unreliable online material.
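Ellison did not describe Oracle's architecture in detail, but grounding a model in verified records is broadly similar to what the industry calls retrieval-augmented generation: fetch vetted data first, constrain the model to answer only from it, and refuse when nothing verified exists. The Python sketch below is a hypothetical illustration of that pattern, not Oracle's implementation; the record store, the retrieve helper, and the prompt wording are all invented for the example.

```python
# Hypothetical grounding sketch (not Oracle's design): answer only from a
# governed store of verified records, and decline when no record matches.

# Stand-in for a curated, access-controlled store of verified enterprise data.
VERIFIED_RECORDS = {
    "q3 revenue": "Q3 revenue was $12.4M (source: audited ledger, 2024-10-01).",
    "refund policy": "Refunds are issued within 30 days of purchase (policy v7).",
}

def retrieve(query: str) -> str | None:
    """Return the verified record matching the query, or None if absent."""
    for key, record in VERIFIED_RECORDS.items():
        if key in query.lower():
            return record
    return None

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to trusted context, or
    refuse outright rather than let it guess from its training data."""
    record = retrieve(query)
    if record is None:
        return "No verified record found; decline to answer rather than guess."
    return (
        "Answer using ONLY the context below. If the context does not "
        f"contain the answer, say so.\n\nContext: {record}\n\nQuestion: {query}"
    )

print(grounded_prompt("What was Q3 revenue?"))        # grounded in a record
print(grounded_prompt("Who will win the election?"))  # no record -> refusal
```

The design choice that matters here is the refusal path: a grounded system treats "no verified data" as a reason to stop, whereas a model answering from statistical patterns alone will produce a plausible-sounding response regardless.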
Ellison’s comments serve as a reminder that despite rapid advancements, artificial intelligence still has fundamental limitations. Until AI systems become more accurate, transparent, and accountable, users are advised to remain cautious and avoid placing blind trust in AI outputs, especially when decisions could impact people’s lives.


