Several sources have gleefully asked me if I’m worried about ChatGPT (Chat Generative Pre-trained Transformer), which can supposedly take over many writing tasks and put us reporters and editors out of business. Released by OpenAI last November, ChatGPT is described as a large-language-model chatbot that lets users massage written material into preferred formats, styles, detail levels and lengths.
I had to say nope, I’m not worried. My response surprised me, too, because some coworkers and I used to speculate about how long it would take for shrink-wrapped software to kill off the few of us who hadn’t already been laid off. Part of my blasé attitude comes from getting older, but I’m also unconcerned because I’ve learned more about what machine learning (ML) and artificial intelligence (AI) are and what they aren’t.
I think some of my sources’ initial glee was the usual reaction to slapstick—we’re glad the Three Stooges’ injuries aren’t happening to us. However, I also believe many people just like name-dropping ChatGPT for the same reason they enjoy talking about AI, ML, Big Data, Industrial Internet of Things (IIoT), Industry 4.0 and all the other buzzwords that come down the pike: they’re novelties. Even if they’re not good for much else, we enjoy knowing about and citing the latest, fashionable hype because we hope it will make us appear smarter than our colleagues, neighbors and other members of our tribes. Because it’s used for one-upmanship and temporarily enhancing status, I suspect that ChatGPT, and even AI and the others, may not be the earth-shattering forces their many promoters claim them to be. ChatGPT and AI tools are proving to be useful, and more will doubtless follow. However, as I’ve mentioned before, beware of the 24-hour news vacuum that’s financially incentivized to keep audiences in crisis mode.
I’m also less concerned about ChatGPT and AI because I don’t think they’re genuinely intelligent. Heresy, I know, but I’ve been covering industrial computing for a long time, and I still haven’t been disabused of the notion that software is just a digital version of the recipes in the little metal box in Grandma’s kitchen. Oh sure, instructions and algorithms can reshuffle those index cards, and deliver previously inaccessible data in new and useful ways. However, all of this is still data processing that’s founded on precedents and benchmarks, and relies on preprogrammed configurations and change-of-state detection to perform many tasks.
Is this intelligence? I doubt it. Simply returning data faster isn’t intelligence, no matter how sleek and useful a format it’s presented in. I believe the same goes for ChatGPT and AI.
For instance, one often-mentioned example of AI is software that can mash together image concepts based on verbal or written queries.
The software smooths the interface between requested concepts and quickly provides a seamless image. While this is visually impressive, I don’t think it’s intelligence because it’s just recombining existing content, which really isn’t creating something new or innovative. It’s just rechewing cud that’s already been chewed.
I believe the same is true for ChatGPT, which takes existing content, parameters about how someone writes and other details, and digests them into passable text. However, I don’t think it can learn what’s going on from a source, grill them to determine what’s occurring below the superficial, and crystallize that input into original material that’s useful to readers.
I was taught that intelligence is the capacity for abstract analysis and critical thinking. All that’s needed is a brain and the willingness to use it. I’m sure ChatGPT and AI will strive to get there soon, but they still have a long way to go.
If you think I’m a little or completely off base, there’s always a good chance that I am. So, because the bar for proving me wrong isn’t very high, please chime in at any time. Or fire up your ChatGPT and tell it to write me some scathing criticism or even a poison pen letter—if you’re both up to it.