Generative AI tools such as ChatGPT use models trained on Internet-scale text to predict the "next" word, in effect acting as a very sophisticated autocomplete. While some have criticized the mundane quality of their prose, the performance of these tools has generally been extolled.
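To make the "fancy autocomplete" idea concrete, here is a minimal sketch of next-word prediction using the Hugging Face transformers library and the small, openly available GPT-2 model. The model choice and prompt are illustrative assumptions; ChatGPT's own model is not publicly inspectable.

```python
# Minimal sketch: next-token prediction with a small open model (GPT-2).
# Illustrative only -- ChatGPT's actual model is proprietary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a time, a princess"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The distribution over the *next* word sits at the last position.
next_token_logits = logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most likely continuations -- the "autocomplete" at work.
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12s}  p={prob:.3f}")
```

Generating an entire story is just this step repeated: the chosen word is appended to the prompt, and the model predicts again.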
Ask ChatGPT, for instance, to write the story of Sleeping Beauty using 26 words that begin with consecutive letters of the alphabet, and it does so fabulously. The result is not particularly remarkable, though, because the task is highly specific: find the right words and string them together coherently. But give the AI a single sentence (say, "the boy went up the hill with his little dog") and ask it to write a story, then to rewrite that story for different age groups, even adults, and it produces remarkably different stories. Ask it to make the ending more interesting, or to change the voice or tone, and it does.

So, at some point, the tactical mechanics of predicting words give rise to an abstraction: the AI can "see" the story more holistically and manipulate it within a context. Isn't this what we call creativity? There is, then, some merit to the hype about AI's potential to be transformational. When the AI is no longer merely a tool but lives up to its promise of generating new ideas, that is when we begin to question how much "human" we need in the loop, especially since with time (that is, more data) the AI will only get better.
The problem is that the AI surprises even its creators, who cannot themselves untangle the logic of the innumerable linguistic connections being made; it is akin to tracing the neural connections in a human brain as it forms a thought. This stands in sharp contrast to conventionally programmed systems, where the logic linking problem to solution is explicit. The dilemma is that we need exactly that transparency to design an appropriate legal and ethical framework around AI before it runs away from us in undesirable directions.