The ongoing discourse on generative AI seems to swing between the promises of AI (utopian visions of a world free of drudgery and want) and its dangers (dystopian visions of machines taking over the world). If history teaches us anything, it is that we have seen this story before; it is well chronicled for many technologies in the Gartner hype cycle. New technologies spawn overenthusiasm, followed by disillusionment, and eventually enlightenment as we learn to use them. In every case, the technology is heralded as incredible, and predictions, both constructive and destructive, run rampant. With the recent demonstrable usability of generative AI, we are bombarded with constant hype that inevitably causes us anxiety. Yet a retrospective look at the major technology revolutions, such as the industrial revolution, the computing revolution, and the Internet, suggests that some of this hype, like predictions of massive job losses and economic transformation, is par for the course. Why does this time seem different? Are these fears real, and should we be anxious?
Fears exist on both the supply and demand sides of AI. On the supply side, there is the fear of moving too fast, with tech companies providing a continuous stream of AI tools that are being embedded into new products and services. The call for a moratorium voiced by some tech leaders is undercut by the drive to stay one step ahead in the race for profitability. So we can expect the supply of AI products to advance at a furious pace, because despite their complexity, the software engines behind these products are small, and the data for training them is growing exponentially.
On the demand side, there is the fear of where this is all going, especially given our inability to understand the basis for generative outcomes. We may marvel at the outputs of these systems, but not even their developers can explain the black-box computations that produce a specific result. Just as it is tough to explain how a human chooses to say a certain word, a decision rooted in millions of neurological connections, explaining AI is a challenge. So the fear is that we are dealing with a generative technology we cannot predict and therefore cannot control. The fear of job loss also has a twist: it now includes many creative jobs, where it is easy to see replacement (rather than augmentation) and difficult to envision any new jobs being created. This leads to fears about how people will survive, with corresponding fears of economic overhaul (e.g., universal basic income) and of deepening economic disparity. There is also the great fear of bad actors using these products. Given the societal problems already caused by the inability (or unwillingness) to distinguish real from fake, what happens when fake pictures, videos, audio, and news can be churned out automatically at a quality and scale that make them impossible to detect and easy to spread? People with nefarious motives can weaponize anthropomorphism, creating familiar human-like interfaces to engender trust and swindle people. Further, with the unabated use of data by generative AI, we face issues of privacy, security, and copyright far more ominous than anything we have seen before. Moreover, one quick look at Congress raises the fear that it may not be up to the task of putting serious guardrails on this burgeoning technology.
However, the biggest difference between this revolution and the ones that preceded it is the fear that AI may devalue our humanness. What distinguishes humans (from animals, for instance) is that humans are aware of their own existence. This idea of consciousness, the ability to sustain a feedback cycle of both internal and external information, is what allows us to exist and position ourselves in the world. Isn't that where AI may be going?