Generative AI has a mixed impact on critical thinking. On the one hand, it streamlines routine tasks such as gathering and organizing information, freeing up time and attention. On the other, it can reduce the mental effort users invest in verifying and questioning its results. A recent Microsoft survey of knowledge workers found that the more confidence users placed in the AI's abilities, the less critical thinking they engaged in, while users confident in their own abilities were more likely to scrutinize its outputs. Taken together, these findings suggest that overtrusting the tool invites cognitive offloading.
In practice, generative AI shifts where critical thinking is applied. Instead of spending time searching for information, users now invest more effort in verifying facts, integrating AI-generated material with their own insights, and supervising the overall process. This shift can be beneficial if it prompts us to act as active overseers rather than passive recipients. But if the AI is trusted without proper checks, it can discourage independent problem-solving and weaken critical evaluation.
In educational settings, some teachers are adapting by teaching students how to generate strong prompts, evaluate AI responses, and refine their work through iterative feedback. This approach turns the technology into a tool for active learning rather than a shortcut that undermines skill development.
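To make that prompt, evaluate, refine cycle concrete, here is a minimal sketch of the loop in Python. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name and the `meets_rubric` check are hypothetical placeholders for whatever evaluation criteria a classroom would actually use.

```python
# A minimal sketch of an iterative prompt-evaluate-refine loop.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to the model and return the text of its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def meets_rubric(draft: str) -> bool:
    """Hypothetical evaluation step: in a classroom, this is the student
    checking the draft against sources and a rubric, not an automated test."""
    return "source:" in draft.lower()  # placeholder criterion

prompt = "Summarize the causes of the 1929 stock market crash, citing sources."
for attempt in range(3):  # bounded refinement: prompt, evaluate, re-prompt
    draft = ask(prompt)
    if meets_rubric(draft):
        break
    prompt += "\nYour previous answer lacked citations; add a 'Source:' line for each claim."
```

The point is the structure rather than the specific calls: the student, not the model, decides when the answer is good enough, which keeps the evaluation step in human hands.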
While generative AI can improve efficiency in everyday tasks, its impact on critical thinking depends on how we choose to use and supervise it. Balancing trust in AI with active, independent evaluation remains a key challenge.