A recent study reveals a curious phenomenon: people perform better on tasks when they believe artificial intelligence (AI) is assisting them, even when informed that the AI could degrade their performance. This "AI placebo effect" persists regardless of the system's purported reliability.

Researchers conducted an experiment involving two groups tasked with matching pairs of letters appearing at varying speeds on a screen.

One group was told they were being aided by a reliable AI, while the other was informed that an unreliable AI was hindering their performance. Both groups also completed the task without any AI assistance. "In reality, there was no AI at work; it was just a scenario we presented to the participants," explains Agnes Kloft, a doctoral researcher at Aalto University.

Contrary to expectations, both groups performed faster and more attentively when they believed an AI was involved. "This shows that people have high expectations for these systems, and simply telling them that the AI is faulty does not alter their expectations," says Assistant Professor Robin Welsch.

The experiments, which were later replicated online, included surveys where participants described their expectations and attitudes towards AI. Results indicated a generally positive outlook toward AI, and surprisingly, even typically skeptical individuals held positive expectations about AI capabilities.

This finding presents a significant challenge in evaluating the actual benefits of new AI programs. "Our study reveals that due to the placebo effect, it's incredibly challenging to ascertain whether AI tools truly offer us real assistance," notes Welsch.

While technologies like advanced language models undoubtedly enhance certain tasks, the placebo effect could exaggerate minor differences between versions of these programs, a detail marketers are unlikely to miss.

The results also highlight a major challenge in the field of human-computer interaction research. Conclusions drawn without controlling for the placebo effect may lead to misleading interpretations. "Our research suggests that many studies in the field might be biased in favor of AI systems," Welsch concludes.
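To make the methodological point concrete, here is a minimal, hypothetical sketch of the kind of sham-AI control condition the researchers describe: participants are told an AI is assisting when none is, and any improvement in that arm is placebo that must be subtracted before crediting a real tool. All names and numbers below are illustrative assumptions, not data from the study.

```python
import random
import statistics

random.seed(42)

def simulate_reaction_times(n, mean_ms, sd_ms):
    """Draw n simulated reaction times in milliseconds (illustrative only)."""
    return [random.gauss(mean_ms, sd_ms) for _ in range(n)]

# Baseline arm: task performed with no AI and no AI framing.
baseline = simulate_reaction_times(50, mean_ms=520, sd_ms=20)

# Sham-AI arm: identical task, but participants believe an AI is helping.
# We build in a made-up 30 ms speed-up to mimic a placebo effect.
sham_ai = simulate_reaction_times(50, mean_ms=490, sd_ms=20)

# The difference between these two arms estimates the placebo effect;
# a real AI tool would need to beat the sham arm, not just the baseline.
placebo_effect_ms = statistics.mean(baseline) - statistics.mean(sham_ai)
print(f"Estimated placebo speed-up: {placebo_effect_ms:.1f} ms")
```

In a real evaluation, the sham arm would be a third condition alongside "no AI" and "actual AI", so the tool's genuine contribution is the actual-AI improvement minus the sham-AI improvement.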