Description
The illusion of explanatory depth (IOED) is a cognitive bias in which people overrate their ability to explain phenomena. It manifests as the gap between a self-rating of explanatory skill given in advance and a later self-rating of the explanation actually produced. Prior experiments show that searching the Internet for explanations increases the IOED. We tested whether interacting with an AI chatbot has a similar effect. University students (N = 102) answered four general-knowledge questions (e.g., “How do zippers work?”). For each question, a detailed explanatory text was pre-generated with ChatGPT-4.5. Participants were randomly assigned to one of three conditions: GPT (they questioned a customized ChatGPT that returned the pre-generated materials), no-GPT (they read the same materials on screen without knowing their origin), and control (no materials provided). Participants then rated, on a 7-point scale, how good an explanation they could potentially give (pre-rating). They next wrote their explanations and rated them again (post-rating). A linear mixed-effects model (LMEM) predicting ratings from condition × time, with the GPT condition and pre-rating as reference levels, random intercepts for participants and items, and random slopes for time, confirmed a robust IOED. Crucially, the interaction was significant: the GPT group showed the largest pre–post decline, larger than in the no-GPT condition (β = 0.54, SE = 0.25, t = 2.21, p = .030) and the control condition (β = 0.90, SE = 0.25, t = 3.61, p < .001). Thus, interacting with ChatGPT magnified the IOED beyond simply reading the same AI-generated texts. Moreover, explanations written in the GPT condition were significantly shorter than those in the no-GPT condition.
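
For readers who want to see the model specification concretely, the following is a minimal sketch of the reported LMEM in Python using pymer4 (a wrapper around R's lme4). The data file, the column names, and the assumption that the random time slope varies by participant are illustrative choices, not details taken from the study.

# A minimal sketch of the reported LMEM, assuming a long-format table with
# columns `rating`, `condition`, `time`, `participant`, and `item`
# (all hypothetical names). pymer4 calls R's lme4 under the hood.
import pandas as pd
from pymer4.models import Lmer

df = pd.read_csv("ioed_ratings.csv")  # hypothetical data file

# Condition × time fixed effects; random intercepts for participants and
# items; the random slope for time is assumed here to vary by participant.
model = Lmer(
    "rating ~ condition * time + (1 + time | participant) + (1 | item)",
    data=df,
)

# Listing GPT and pre first makes them the reference levels, matching the
# parameterization described in the abstract.
results = model.fit(factors={"condition": ["GPT", "no-GPT", "control"],
                             "time": ["pre", "post"]})
print(results)  # fixed-effect estimates, SEs, t-values, p-values

Under this parameterization, the condition × time interaction terms directly estimate how much smaller the pre–post decline is in the no-GPT and control conditions than in the GPT condition, which is how the β values above would be read.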