Could artificial intelligence change the way we think about decisions made by humans? Florian Stoeckel presents new research showing that when citizens are prompted to think about AI, they become more aware of the limitations of human decision-makers.
With the growing use of artificial intelligence, debates often focus on whether such systems are biased. Yet the rise of AI may also change how citizens judge human decision-makers.
As AI becomes a visible alternative capable of making decisions that were previously handled by humans, people may begin to evaluate human decisions more critically. In a new study, we provide evidence of this pattern in the context of AI in public administration.
Does AI raise the bar for human decision-makers?
One might assume that citizens would resist increased use of AI by public administrations. However, with the availability of AI as an alternative decision-maker, citizens may pay closer attention to the limitations and potential biases of human decisions.
This comparison can heighten concerns about human decision-making. In turn, it may increase public demand for AI-based decisions in some areas rather than simply producing resistance to AI.
Our research suggests that comparisons between AI systems and human decision-makers play an important role in shaping these evaluations. Specifically, we find that when people first think about AI decision-making, they tend to become more attentive to the limitations and potential biases of human decision-makers. The comparison with AI seems to raise the bar for how citizens evaluate human decisions.
Perceived discrimination in hiring decisions
To study this dynamic, we examined how people evaluate the risk of discrimination in public-sector hiring decisions. We asked respondents about a selection process that would either be conducted by AI or by human recruiters. Public-sector hiring provides a useful test case: these decisions involve public resources and affect services that citizens rely on, making non-discriminatory decision-making particularly important.
Respondents were asked how likely they believed they would face discrimination in hiring decisions made either by AI systems or by human recruiters. We randomly varied the order of the questions. Half of the respondents evaluated AI first, while the other half evaluated human recruiters first.
This simple variation allowed us to examine how the presence of AI as a reference point affects how people judge human decision-making. When respondents first evaluate AI-based decision-making, their concerns about discrimination by human recruiters increase. Thinking about algorithmic decision-making seems to make people more attentive to the potential limitations and biases of human decision-makers.
The shadow of AI
Our data come from a preregistered survey experiment with more than 11,000 respondents across eight European countries: Austria, Germany, Hungary, Italy, the Netherlands, Poland, Spain and the United Kingdom. The survey was conducted online by YouGov using samples that approximate national populations on key demographic characteristics.
Across countries, levels of concern about discrimination by human recruiters and AI systems are broadly similar. In most countries in our sample, citizens express roughly comparable levels of concern about discrimination from both sources. The Netherlands is an exception, where respondents are somewhat more concerned about AI, while in Hungary concerns about human recruiters are somewhat higher.
The key finding, however, is that the comparison itself matters. When respondents evaluate AI first, they subsequently judge human decision-makers more critically. Once AI becomes a salient reference point for people’s considerations, they seem to pay closer attention to the potential shortcomings of human decision-making. In this sense, evaluations of human decision-making may increasingly take place in the shadow of AI.
Human limitations
Research on what scholars call “algorithmic appreciation” offers one possible explanation. When algorithms are compared to imperfect human decision-makers, AI may seem more consistent, impartial or rule-based.
Human decision-making, by contrast, can be associated with subjective judgments, biases or fatigue. When these human limitations become salient, algorithmic systems may be viewed more favourably, even if people still have concerns about algorithmic bias.
Interestingly, this perspective differs somewhat from how experts often discuss AI. Debates among experts tend to emphasise the risks of algorithmic bias. Our findings suggest that many citizens do not approach AI primarily through that lens. Instead, views on AI seem to reflect more general assumptions about algorithms or computers, which are frequently associated with consistency and impartiality.
Our study also reveals a related comparison dynamic. When respondents are first prompted to think about the possibility that human hiring managers may discriminate, their concerns about discrimination by AI systems decrease. Thus, once people consider the potential biases of human decision-makers, AI tends to look less problematic by comparison.
The power of comparisons
These comparison effects have important implications for debates about citizens’ expectations of AI in public administration. As AI becomes a more visible alternative, citizens may increasingly evaluate human decision-makers in comparison with algorithmic systems. In such comparisons, human decisions may seem less consistent, slower or more prone to bias.
Citizens may initially prefer decisions to be made by humans, partly because human decision-makers have long been the default and may be seen as more trustworthy. But long waiting times, administrative costs for taxpayers or perceived inconsistencies in human decisions may draw particular attention to the limitations of existing systems. In such situations, the availability of AI as an alternative could increase demand for algorithmic decision-making.
Importantly, these dynamics point to a broader implication. Once AI becomes part of the reference point in citizens’ evaluations, they no longer assess human decision-making in isolation. One part of our experiment suggests that when AI becomes salient, people may evaluate the limitations of human decision-making more critically.
In practice, this may reduce support for exclusively human-based decision-making and increase openness toward AI-based decisions. The reciprocal finding from our experiments reinforces this point: AI can appear more acceptable when people first reflect on human limitations. In both cases, the comparison shifts perceptions in the same direction. Algorithmic decision-making may therefore come to be seen as a more viable or even preferable option relative to human decision-making.
Unanswered questions
Our results come from a specific context, namely decision-making in public-sector hiring. If similar dynamics emerge in other policy areas, the spread of AI could create new expectations of the state. Rather than viewing increased use of algorithmic decision-making sceptically, some citizens may even begin to expect governments to use more AI, especially in areas where it promises not only to be faster and cheaper but also to make more consistent decisions.
In practice, public-sector decision-making will likely combine human and algorithmic input, with many differences across countries, regions and levels of government. For policymakers and researchers, this raises several important questions.
In which domains might citizens support a greater role for AI, and where is the use of AI likely to face strong resistance? How much do citizens’ views depend on their own attitudes and experiences, and how much do they depend on the characteristics of the AI systems being used? And how can hybrid decision systems that combine human and algorithmic decision-making be designed in ways that are transparent, accountable and democratically legitimate?
Florian Stoeckel is the author of The Power of the Crowd (Cambridge University Press, 2025).
Note: This article gives the views of the author, not the position of LSE European Politics or the London School of Economics.
Image credit: fizkes / Shutterstock