Chainlink’s Brilliant Solution to AI Hallucinations

In a welcome development, Chainlink has unveiled a novel approach to the vexing issue of AI hallucinations, in which large language models (LLMs) generate incorrect or misleading information.

Laurence Moroney, a distinguished Chainlink advisor and former AI Lead at Google, elucidated how Chainlink reduces AI errors by employing multiple AI models instead of relying on just one.

Chainlink ran into this problem when it needed AI to analyze corporate actions and convert them into a structured, machine-readable format (JSON). Rather than trusting a single model's response, the team ran multiple LLMs, each given a differently worded prompt, over the same source information.
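As a minimal sketch of that fan-out step (not Chainlink's actual code), the idea might look like the following in Python, where `models` is a list of hypothetical LLM client callables and the JSON field names are invented for illustration:

```python
import json

# Differently worded prompts that all ask for the same structured output.
# The field names below are illustrative, not Chainlink's real schema.
PROMPTS = [
    "Extract the corporate action below as JSON with keys "
    "'action_type', 'security', and 'effective_date':\n{text}",
    "Read this corporate-action notice and return only a JSON object "
    "with fields action_type, security, effective_date:\n{text}",
    "Convert the following announcement into machine-readable JSON "
    "(action_type, security, effective_date):\n{text}",
]

def query_models(models, source_text):
    """Send a differently phrased prompt for the same document to each LLM.

    `models` is a list of callables that take a prompt string and return
    raw text -- stand-ins for whatever LLM APIs are actually in use.
    """
    responses = []
    for model, prompt in zip(models, PROMPTS):
        raw = model(prompt.format(text=source_text))
        try:
            responses.append(json.loads(raw))
        except json.JSONDecodeError:
            responses.append(None)  # malformed output counts as no answer
    return responses
```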

The models produced independent responses, which were then compared. If all or most of them returned the same result, that result was treated as more reliable, reducing the risk of depending on a single, potentially flawed AI-generated answer.
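Continuing the sketch, a simple majority vote over the parsed responses could serve as the comparison step; the `threshold` parameter here is an assumption, since the article does not say how many models must agree:

```python
import json
from collections import Counter

def consensus(responses, threshold=2):
    """Return a result only if at least `threshold` models agree on it."""
    # Canonicalise each parsed JSON object (sorted keys) so equivalent
    # answers compare equal even when key order differs.
    canonical = [
        json.dumps(r, sort_keys=True) for r in responses if r is not None
    ]
    if not canonical:
        return None
    best, votes = Counter(canonical).most_common(1)[0]
    return json.loads(best) if votes >= threshold else None
```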

Once a consensus is reached, the validated information is recorded on the blockchain, ensuring transparency, security, and immutability.
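The article does not detail how the validated result reaches the chain; in practice that would go through Chainlink's oracle infrastructure. Purely to illustrate the idea of a tamper-evident record, one could pin the consensus JSON to a digest that a smart contract stores:

```python
import hashlib
import json

def onchain_payload(validated):
    """Bundle a consensus result with a digest a contract could record.

    Hypothetical helper: storing the SHA-256 of the canonical JSON lets
    anyone later verify that the published data was not altered.
    """
    canonical = json.dumps(validated, sort_keys=True).encode()
    return {"data": validated, "sha256": hashlib.sha256(canonical).hexdigest()}
```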

This approach was successfully tested in a collaborative project with UBS, Franklin Templeton, Wellington Management, Vontobel, Sygnum Bank, and other financial institutions, demonstrating its potential to reduce errors in financial data processing.

By combining AI with blockchain, Chainlink’s method enhances the reliability of AI-generated information in finance, setting a precedent for improving data accuracy in other industries as well.
