ChatGPT has attracted more attention globally than Taylor Swift’s recent album launch. A good old Google search revealed that “Taylor Swift has 36.6 billion combined streams of her music and 22.4 million album-equivalent units to date in 2022.” Take a moment to digest that. So promising is its rise that ChatGPT is now being sold to us as a tool of liberation, one that could elevate the collective capacity of people by building a shared cloud of language. This, we are told, will aid the comprehension of information and knowledge, save time, and increase workers’ productivity. In my field of academic research, it is being touted as a silver bullet for weeding out research inconsistencies.
Researchers are delegating the work of synthesising their publications to the chatbot, which not only does that but also helps them find gaps in their theoretical explorations and, by mapping the existing work in the field, suggests possible pathways into new inquiry. This has caused a moral panic and posed an ethical conundrum around plagiarism, with Noam Chomsky (2023) denouncing “AI-assisted high-tech plagiarism.” Well-meaning people worry that ChatGPT will stunt the growth of individual skills and, in doing so, wither human intelligence. Humans, however, are more likely to imitate ChatGPT’s writing style in the future than to be bothered that their own intelligence is suffering. The science fiction writer Ted Chiang has made an observation that puts this fear in perspective: he compares ChatGPT to a blurry JPEG of some text. The JPEG, although blurry, communicates something of the original, and it is in the blurriness, the gap between artificial intelligence and human intelligence, that our subjective agency is activated to “catch” any mistakes in approximation. This observation sets the stage for this piece.