A new study by researchers at Google DeepMind and University College London reveals how large language models (LLMs) form, maintain and lose confidence in their answers. The findings show striking similarities between the cognitive biases of LLMs and those of humans, while also highlighting stark differences.
The researchers found that LLMs can be overconfident in their own answers yet quickly lose that confidence and change their minds when presented with a counterargument, even if the counterargument is incorrect. Understanding the nuances of this behavior has direct consequences for how you build LLM applications, especially conversational interfaces that span several turns.
A critical factor in the safe deployment of LLMs is that their answers are accompanied by a reliable sense of confidence (the probability that the model assigns to the answer token). While we know LLMs can produce these confidence scores, the extent to which they can use them to guide adaptive behavior is poorly characterized. There is also empirical evidence that LLMs can be overconfident in their initial answer, yet highly sensitive to criticism and quick to become underconfident in that same choice.
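For developers who want to inspect that confidence signal directly, many chat completion APIs expose token-level log-probabilities. The sketch below is a minimal example assuming the OpenAI Python client with an API key in the environment; the model name and prompt are illustrative, and the log-probability of the answer token is converted into the 0-to-1 confidence the researchers describe.

```python
# Minimal sketch: reading a model's confidence in its answer token via log-probabilities.
# Assumes the OpenAI Python client and an API key in the environment; model name is illustrative.
import math
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any model that returns logprobs works
    messages=[{
        "role": "user",
        "content": "Which city is farther north, Oslo or Stockholm? Answer with one word.",
    }],
    logprobs=True,
    max_tokens=1,
)

answer_token = response.choices[0].logprobs.content[0]
confidence = math.exp(answer_token.logprob)  # convert log-probability to a 0-1 probability
print(f"Answer: {answer_token.token}, confidence: {confidence:.2f}")
```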
To investigate this, the researchers developed a controlled experiment to test how LLMs update their confidence and decide whether to change their answers when presented with external advice. In the experiment, an “answering LLM” was first given a binary-choice question, such as identifying the correct latitude for a city from two options. After making its initial choice, the LLM was given advice from a fictitious “advice LLM.” This advice came with an explicit accuracy rating (e.g., “This advice LLM is 70% accurate”) and would either agree with, oppose, or stay neutral on the answering LLM’s initial choice. Finally, the answering LLM was asked to make its final choice.
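To make the protocol concrete, here is a minimal sketch of a single trial. The ask_model callable and the prompt wording are illustrative stand-ins, not the paper's exact prompts or conditions.

```python
# Sketch of the two-turn protocol described above. ask_model() is a hypothetical
# callable that sends a prompt to an LLM and returns its reply; prompt wording
# is illustrative, not the paper's exact phrasing.

def run_trial(ask_model, question, options, advice_direction,
              advice_accuracy=70, show_initial_answer=True):
    # Turn 1: the answering LLM makes its initial binary choice.
    first_prompt = (
        f"{question}\nOption A: {options[0]}\nOption B: {options[1]}\n"
        "Answer with A or B."
    )
    initial_answer = ask_model(first_prompt)

    # Advice from a fictitious "advice LLM" with a stated accuracy that either
    # agrees with, opposes, or stays neutral on the initial choice.
    advice = (
        f"An advice LLM that is {advice_accuracy}% accurate "
        f"{advice_direction} your initial answer."
    )

    # Turn 2: the final choice, with the initial answer either visible or hidden.
    context = f"Your initial answer was {initial_answer}.\n" if show_initial_answer else ""
    final_answer = ask_model(f"{context}{advice}\n{first_prompt}")
    return initial_answer, final_answer
```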
A key part of the experiment was controlling whether the LLM’s own initial answer was visible to it during the second, final decision. In some cases, it was shown, and in others, it was hidden. This unique setup, impossible to replicate with human participants who can’t simply forget their prior choices, allowed the researchers to isolate how memory of a past decision influences current confidence.
A baseline condition, where the initial answer was hidden and the advice was neutral, established how much an LLM’s answer might change simply due to random variance in the model’s processing. The analysis focused on how the LLM’s confidence in its original choice changed between the first and second turn, providing a clear picture of how initial belief, or prior, affects a “change of mind” in the model.
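In code, the core comparison is simple: measure how often the model abandons its original choice in each condition and subtract the neutral-advice, hidden-answer baseline. The numbers below are made up purely to show the calculation.

```python
# Illustrative trial outcomes; each tuple is (initial_answer, final_answer).
# These values are invented for demonstration, not the study's data.
baseline_trials = [("A", "A"), ("A", "B"), ("B", "B"), ("A", "A")]  # neutral advice, answer hidden
opposing_trials = [("A", "B"), ("A", "B"), ("B", "B"), ("A", "B")]  # opposing advice, answer hidden

def change_of_mind_rate(trials):
    """Fraction of trials in which the final answer differs from the initial one."""
    switches = sum(1 for first, final in trials if first != final)
    return switches / len(trials)

baseline_rate = change_of_mind_rate(baseline_trials)
opposing_rate = change_of_mind_rate(opposing_trials)
print(f"Change of mind attributable to opposing advice: {opposing_rate - baseline_rate:+.0%}")
```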
The researchers first examined how the visibility of the LLM’s own answer affected its tendency to change its answer. They observed that when the model could see its initial answer, it showed a reduced tendency to switch, compared to when the answer was hidden. This finding points to a specific cognitive bias. As the paper notes, “This effect – the tendency to stick with one’s initial choice to a greater extent when that choice was visible (as opposed to hidden) during the contemplation of final choice – is closely related to a phenomenon described in the study of human decision making, a choice-supportive bias.”
The study also confirmed that the models do integrate external advice. When faced with opposing advice, the LLM showed an increased tendency to change its mind, and a reduced tendency when the advice was supportive. “This finding demonstrates that the answering LLM appropriately integrates the direction of advice to modulate its change of mind rate,” the researchers write. However, they also discovered that the model is overly sensitive to contrary information and makes too large a confidence update as a result.

Interestingly, this behavior is contrary to the confirmation bias often seen in humans, where people favor information that confirms their existing beliefs. The researchers found that LLMs “overweight opposing rather than supportive advice, both when the initial answer of the model was visible and hidden from the model.” One possible explanation is that training techniques like reinforcement learning from human feedback (RLHF) may encourage models to be overly deferential to user input, a phenomenon known as sycophancy (which remains a challenge for AI labs).
This study confirms that AI systems are not the purely logical agents they are often perceived to be. They exhibit their own set of biases, some resembling human cognitive errors and others unique to themselves, which can make their behavior unpredictable in human terms. For enterprise applications, this means that in an extended conversation between a human and an AI agent, the most recent information could have a disproportionate impact on the LLM’s reasoning (especially if it is contradictory to the model’s initial answer), potentially causing it to discard an initially correct answer.
Fortunately, as the study also shows, we can manipulate an LLM’s memory to mitigate these unwanted biases in ways that are not possible with humans. Developers building multi-turn conversational agents can implement strategies to manage the AI’s context. For example, a long conversation can be periodically summarized, with key facts and decisions presented neutrally and stripped of which agent made which choice. This summary can then be used to initiate a new, condensed conversation, providing the model with a clean slate to reason from and helping to avoid the biases that can creep in during extended dialogues.
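A minimal sketch of that pattern, assuming a generic llm callable and an arbitrary turn threshold, might look like this:

```python
# Sketch of the mitigation described above: once a conversation grows long, condense it
# into a neutral summary (facts and decisions only, with no record of which agent made
# which choice) and restart from that clean slate. The llm() callable and the turn
# threshold are assumptions for illustration.

MAX_TURNS_BEFORE_RESET = 10

def maybe_reset_context(history, llm):
    """history: a list of {"role": ..., "content": ...} messages."""
    if len(history) < MAX_TURNS_BEFORE_RESET:
        return history

    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    summary = llm(
        "Summarize the key facts and decisions in this conversation as neutral "
        "bullet points. Do not mention who proposed or agreed to what.\n\n" + transcript
    )
    # Seed a fresh, condensed conversation with the neutral summary.
    return [{"role": "system", "content": f"Context so far:\n{summary}"}]
```

Because the summary drops attribution, the model re-reasons over the facts themselves rather than over who asserted them.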
As LLMs become more integrated into enterprise workflows, understanding the nuances of their decision-making processes is no longer optional. Following foundational research like this enables developers to anticipate and correct for these inherent biases, leading to applications that are not just more capable, but also more robust and reliable.