Quick Thoughts on the Limits of Artificial Intelligence

The buzz around artificial intelligence and its potential is loud and, with visible demonstrations of general-purpose AI tools like ChatGPT, increasingly pervasive. The naysayers remain a vocal minority, and their claims center largely on the ethical issues of unbridled AI or biases in training data. But even if we overcome those issues (a big if), does AI have fundamental vulnerabilities that limit its ability to meet the ideal of the human mind?

Consider the following scenario. An adult drops a toy, and a baby crawling behind picks it up. An AI trained on large amounts of video data can learn to predict that the baby will pick up the toy. But if the adult drops the toy in anger, the baby does not pick it up. The AI can capture this nuance only if the anger can be recognized, categorized, and represented in the video data; if not, the prediction fails.

Categorization is the essential way humans make sense of the world (it is how we distinguish a bat from a phone), and AI likewise works only with data that can be categorized. Humans, however, also attach feelings and values to those categories, and they can self-reflect within a situation. These are not always visible to the AI or reflected in its training data, and the context (complex social situations) may likewise go uncategorized. So while AI picks up categories from data (labeled or unlabeled), a human can reflect not only on the data but also on what is not in the data, and can factor that unknown into a decision. Consider a person's "algorithmic health," in which every category that can be sensed and measured is computationally combined into a health score. Real health, however, may involve reflecting on the relative importance of those categories, on situational trade-offs between health and quality of life, and on one's own physiological and psychological well-being.
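To make the "algorithmic health" point concrete, here is a minimal sketch of how such a score might be computed. The categories, weights, and readings are entirely hypothetical, chosen only to illustrate the essay's point: the score can combine only what was measured.

```python
# Toy "algorithmic health" score: sensed categories are computationally
# combined into one number. All category names, weights, and values here
# are hypothetical, invented for illustration.

def algorithmic_health(measurements, weights):
    """Weighted average of measured categories (each on a 0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(measurements[k] * weights[k] for k in measurements) / total_weight

measurements = {"sleep": 80, "activity": 65, "heart_rate": 90, "diet": 70}
weights = {"sleep": 0.3, "activity": 0.3, "heart_rate": 0.2, "diet": 0.2}

print(round(algorithmic_health(measurements, weights), 1))  # → 75.5

# Whatever the number says, it cannot capture what is absent from the
# data: trade-offs, self-reflection, or quality of life.
```

The formula is just arithmetic over the observed categories; everything the essay calls "real health" lives outside its inputs by construction.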

Part of AI's power is its ability to perform unsupervised learning and identify its own categories in training data. But such automation may be suboptimal. To truly enhance AI's predictive ability, we would need humans to intervene in the data itself to categorize context and non-observables…and given the complexity of humans embedded in situations, even that may not be enough. So despite the buzz around AI and its tremendous benefits, for the foreseeable future the total-replacement theory is still fiction.
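The point about unsupervised learning can be seen in miniature with k-means clustering, a standard algorithm that discovers its own categories. The sketch below is a pure-Python, one-dimensional toy with made-up sensor readings; its limitation is exactly the essay's: clusters can only form along dimensions present in the data, so anything unobserved (context, intent, feeling) can never become a category.

```python
# Minimal 1-D k-means sketch: unsupervised learning finds its own
# categories, but only from observable features. The data points are
# hypothetical "sensor readings" invented for illustration.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            groups[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

points = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]  # two obvious modes
print([round(c, 2) for c in kmeans_1d(points, centers=[0.0, 5.0])])
# → [1.0, 10.0]
```

The algorithm happily produces two categories, but it has no notion of why the readings cluster, nor any access to whatever the data failed to record.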

