The Business & Technology Network
Helping Business Interpret and Use Technology

AWS Turns to Ancient Logic to Tackle Modern AI’s Hallucinations

DATE POSTED: February 13, 2025

Cloud giant Amazon Web Services (AWS) is using automated reasoning — a method rooted in centuries-old principles of logic — to combat one of artificial intelligence (AI)’s newest and most persistent challenges: hallucinations, or the propensity of generative AI models to make things up.

The technique marks a major advancement in making AI outputs more reliable, which is especially valuable for highly regulated industries like finance and health care, said Mike Miller, AWS’ director of product management and one of its leaders of automated reasoning initiatives.

While AWS has been using automated reasoning for a decade, it recently reinvented the technique by adding generative AI to make it much easier and faster to use, Miller told PYMNTS.

Miller said it used to take an “army” of scientists and engineers a year or more to deploy automated reasoning, so AWS mainly used it only for mission-critical tasks. Now, the revised technique takes minutes to deploy and can be applied to many more business use cases.

“This technology is really the first and only generative AI safeguard that helps prevent these factual errors due to hallucinations,” Miller said.

For businesses, the consequences of acting on hallucinated information can be severe: Inaccurate answers can lead to wrong decisions, financial losses and even damage a company’s reputation.

Last year, Air Canada was held liable after its chatbot gave a traveler wrong information, telling him he could buy a bereavement fare at full price and claim a partial refund later. The chatbot was wrong, and the airline refused the refund. The traveler sued and won.

How Plato Contributed to Automated Reasoning

Automated reasoning has its roots in symbolic logic, a branch of logic that uses symbols and mathematical notation to express rules in abstract form. While Plato and Socrates laid the philosophical foundation, symbolic logic was formalized in the 19th century by English mathematician George Boole.

For example, mathematical formulas such as the Pythagorean theorem — an equation relating the side lengths of a right triangle — will always hold true. “Automated reasoning gives you a 100% verifiable truth because you’re using mathematical logic to prove it,” Miller said.

In machine learning, by contrast, the answer has wiggle room to be wrong. A machine learning model is trained to make predictions, Miller said: after seeing hundreds of thousands or millions of right triangles, it predicts the length of the longest side or infers the formula. Because it is predicting rather than proving, this approach gets the answer wrong more often.
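The contrast can be sketched in a few lines of Python. Below, an exact logical check of the Pythagorean relation gives a verifiable yes-or-no answer, while an invented heuristic (standing in for a learned model, not anything AWS uses) is only ever approximately right:

```python
import math

# Exact logical check: a^2 + b^2 = c^2 holds for every right triangle,
# so verification is deterministic and provably correct.
def verify_hypotenuse(a: float, b: float, c: float, tol: float = 1e-9) -> bool:
    return abs(a**2 + b**2 - c**2) < tol

# Hypothetical ML-style stand-in: an approximate rule "learned" from
# examples can be close to the truth but carries no guarantee.
def predicted_hypotenuse(a: float, b: float) -> float:
    return 0.96 * (a + b) / math.sqrt(2)  # rough heuristic, not a proof

print(verify_hypotenuse(3, 4, 5))   # exact check: True
print(predicted_hypotenuse(3, 4))   # approximation near 5, but not exact
```

The first function either proves the relation or rejects it; the second can only be scored on how close it usually gets, which is the gap Miller is describing.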

Even AI reasoning models positioned as more accurate are not as verifiably certain as automated reasoning, he said. Hallucinations remain a danger in reasoning models because they still make predictions, even if they take longer to reason and check themselves before answering.

In automated reasoning, the process starts with setting the customer’s ground truth, whether these are regulations on money flows in finance or policies about insurance claims. These facts are converted from natural language into a mathematical model. The generative AI model’s responses are then measured against this mathematical model for accuracy.
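A minimal sketch of that workflow, with a made-up refund policy standing in for a real customer's ground truth (the names and rule here are illustrative assumptions, not part of AWS's actual Bedrock API):

```python
from dataclasses import dataclass

# Hypothetical ground truth, formalized from a natural-language policy:
# "refunds require a refundable fare AND a claim within 90 days."
@dataclass
class Claim:
    refundable_fare: bool
    days_since_purchase: int

def policy_allows_refund(claim: Claim) -> bool:
    # The formal rule: both conditions must hold.
    return claim.refundable_fare and claim.days_since_purchase <= 90

def check_model_answer(claim: Claim, model_says_refund: bool) -> str:
    # Measure the generative model's answer against the proven policy result.
    truth = policy_allows_refund(claim)
    return "VALID" if model_says_refund == truth else "CONTRADICTS POLICY"

# A chatbot promises a refund on a non-refundable fare — the check catches it.
claim = Claim(refundable_fare=False, days_since_purchase=10)
print(check_model_answer(claim, model_says_refund=True))  # prints CONTRADICTS POLICY
```

The point of the pattern is that the guardrail never guesses: once the policy is encoded as a rule, any model answer can be checked against it deterministically.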

However, Miller said one downside of automated reasoning is that it handles subjective use cases poorly. For example, an advertiser who wants a marketing campaign that is empathetic yet assertive about making a sale is a poor fit for automated reasoning.

Automated reasoning also does not remove all hallucinations. But Miller said the technique can be combined with other approaches to further reduce them, including Retrieval-Augmented Generation (RAG) and the chain-of-thought reasoning prevalent in reasoning AI models.

Automated Reasoning Coming to Amazon Q

AWS’ reinvention of automated reasoning makes deployment faster, allowing more companies to adopt this technique in their AI models. At AWS, it is offered through Amazon Bedrock Guardrails — configurable safeguards available on its Bedrock generative AI platform. Corporate clients use Bedrock to develop and deploy their AI applications.

Non-technical employees can use automated reasoning as well. For example, insurance claims workers who are not technical experts can use it to verify that the company’s customer service chatbots provide accurate information about coverage and claims processes.

The technology is particularly valuable for heavily regulated industries where accuracy is paramount. Miller cited pharmaceutical companies dealing with FDA regulations as an ideal use case, where there are clear policies about what can be claimed about drugs and therapies.

“These are the kind of requirements that lend themselves well to automated reasoning checks,” he said.

AWS is positioning itself as the first cloud provider to implement automated reasoning at scale to prevent AI hallucinations. “We have a world-class team of automated reasoning experts,” Miller said, noting that the cloud giant has made significant investments in the technology.

The company is planning to expand the technology’s applications. AWS is adding automated reasoning to its AI coding assistant, Q Developer, to scan software code for vulnerabilities and propose fixes. This represents just one of several future directions for the technology, Miller said.

As generative AI continues to advance, AWS sees automated reasoning playing an increasingly important role in ensuring the technology’s reliability and trustworthiness. “This is a really interesting field, and Amazon is well positioned to continue innovating for customers, leveraging automated reasoning,” Miller said.

The post AWS Turns to Ancient Logic to Tackle Modern AI’s Hallucinations appeared first on PYMNTS.com.