Quick Thoughts on AI and Sloppy Thinking

The rise of generative AI (GAI) is reshaping how we engage with information. While GAI holds immense promise, I wonder if its excessive use erodes our critical thinking skills. The act of thinking critically demands a nuanced approach involving deep mental engagement and a questioning stance on the information we encounter. GAI, by design, minimizes this mental workload, offering us pre-packaged outputs that reduce the need for such rigorous engagement. While undeniably useful, could this convenience harm our ability to think deeply and critically?

While GAI is widening its modalities, let’s consider just one – text and the writing process. Critical thinking and composition are intrinsically linked. Writing isn’t merely about conveying information; it’s an intellectual exercise in constructing ideas, evaluating perspectives, and synthesizing insights. GAI’s tendency to produce complete text bypasses this process. When we allow AI to craft ideas on our behalf, the resulting detachment can numb our ability to wrestle with concepts, reducing our exposure to the mental rigor that builds critical thinking skills.

The analogy of the calculator loosely illustrates this point: calculators automated calculation, leading us to rely less on manual computation. Over time, many of us have lost confidence in our arithmetic skills and turn to the machine even for simple sums. Calculators are narrow-domain technologies: they excel at numerical computation alone, eliminating the cognitive friction of working by hand and reducing the incentive to keep those skills sharp.

Generative AI, by contrast, is a broad-domain technology spanning mathematics, language, communication, creativity, and knowledge synthesis. Because its applications reach into domains requiring judgment, analysis, and even empathy, the effects of habitual reliance on GAI are potentially far more pervasive. In a society where GAI is ubiquitous, individuals may lose the ability to distinguish high-quality arguments from low-quality ones, or fail to critically assess the provenance of information. By default, GAI delivers information as if it were truth, with no built-in mechanism to verify accuracy or provenance. This fosters passive consumption, where users accept AI-generated content without scrutinizing its origins, credibility, or context. The result is a potential rise in “sloppy thinking,” where people accept or pass along information without examining its implications or validity. Over time, this erosion of intellectual curiosity can weaken our capacity for critical inquiry.

Moreover, GAI has become a tool for forging relationships and expressing sentiments, from writing fan letters to crafting heartfelt messages. While AI can draft a grammatically correct and polished letter, it lacks the nuanced understanding of personal experiences and emotions that makes human communication unique. A letter generated by AI may be eloquent, but it lacks the individuality that makes human connection genuine. If we depend on AI to handle even these personal aspects of life, we risk reducing our capacity for empathy and diminishing the human touch that builds meaningful relationships.

Another way to think about this is in terms of passive and active intelligence. When we mindlessly scroll on our smartphones (e.g., through social media feeds) and respond to algorithmic cues, as many of us tend to do, we are engaged in passive intelligence: absorbing information while simultaneously training the models. Passive intelligence can be helpful because it extends our knowledge, but it demands little cognitive effort from us; the major benefit goes to the machine. Active intelligence, by contrast, means cognitively engaging with GAI as a tool in order to leverage it better: putting effort into formulating the problem well, asking the right questions, probing through conversational interaction, and applying critical thinking to evaluate the outputs. Active intelligence produces better alignment between inputs (what is being sought) and outputs (what the AI provides). It leads to deeper, more durable understanding and benefits humans at least as much as, if not more than, machines.

So, how should we engage critically with GAI output? Consider driving. Just as we don’t need to understand coefficients of friction or the complexities of torque to operate a car, we don’t need to be AI experts to use GAI responsibly. We do, however, need heuristics: a conceptual understanding of how GAI works and an awareness of when it might introduce biases or inaccuracies. Just as we know to slow down when driving in the rain, we should know when to question GAI outputs, understand their limitations, and stay vigilant about when reliance on AI might compromise our thinking.

As a society, we already suffer from susceptibility to misinformation, manipulation, and superficial understanding. Technology trends will increase the agency of AI, which can be immensely beneficial, but we need to remain cognitively alert. If we allow AI to take over too much of our cognitive labor, we risk exacerbating the problem. By fostering awareness, encouraging questioning, and preserving the human touch in our communications, we can ensure that GAI remains a tool for enhancement rather than a crutch that erodes our analytical and reflective faculties.

