As the former editor-in-chief of Wired, Nicholas Thompson, CEO of The Atlantic, has long had a front-row seat to the technology and societal changes that impact us personally and professionally. This vantage point provides him with a unique perspective on what he called this “time of profound exponential change” and made him an intriguing keynote speaker at Rockwell Automation’s Automation Fair 2023.
Thompson is not a futurist who predicts what technologies will look like five or 10 years from now. Instead, he connects the dots between past and present technological change and highlights the current developments that will likely have the most impact on our lives.
A key technology transition Thompson pointed out occurred during COVID. “People underestimate what happened during COVID in terms of how much it changed the trajectory of artificial intelligence (AI). Suddenly we’re all at home and instead of having conversations at the water cooler, we’re communicating with Slack and Zoom and everything we do is recorded and transcribed and turned into data.”
This aggregation of data that’s being analyzed by AI creates a lot of fear, he said. The fear is that we’ll soon reach a point where machines are more intelligent than humans and the question then becomes: What will happen to my job?
While AI will, of course, impact certain professions, as nearly every form of technology has at some point, he said that technology generally increases the amount of work to be done by creating new opportunities. To illustrate with a few past examples, he pointed out that though travel agents have largely disappeared as a result of do-it-yourself online travel booking, the number of flight attendant jobs has remained stable and even increased. Likewise, with the advent of automated teller machines (ATMs), many expected the job of bank teller to disappear. Yet bank tellers still exist decades after the introduction of ATMs; their jobs have changed, increasing their value to banks’ operations.
The limitations of LLMs
The power of AI large language models (LLMs) is clearly not insignificant. Thompson pointed out, however, that LLMs are more like an advanced version of Mad Libs than a thought process that considers an array of experiences as a human would.
In addition, as LLMs get trained not just on past human inputs but also on synthetic data created by AI over the past year, there is a lot of concern about the results they can provide.
He showed a recent prompt entered into Google’s AI that asked: What is an African country beginning with the letter K? The answer from Google was: “While there are 54 recognized countries in Africa, none of them begin with the letter ‘K’. The closest is Kenya, which starts with a ‘K’ sound but is actually spelled with a ‘K’ sound.”
This nonsensical response was then posted in a discussion on the Hacker News site, which is indexed by Google, meaning the data will get fed back into its LLM. When an AI-generated nonsense answer is used to train AI, those mistakes compound into new mistakes.
So, while there are significant upsides and downsides to our increasing use of LLMs, Thompson said that, as good as LLMs can be in a number of areas, they are highly unlikely to approximate human capabilities because they are based only on text, and text is only a portion of our experience and of what influences our perceptions, decisions and actions.
That’s why he contends that “the more creative and/or social the job is, the more stable and resilient it will be to technological change.”
What to watch
Amid all the concerns and hopes for how AI could change our future trajectories, Thompson noted three areas that bear watching as we move forward: AI impersonations, the convergence of humans and AI, and legal impacts on open-source AI.
Regarding the increasing number of AI-generated deep fakes, Thompson said everyone doubts information now. He expects the internet to become an increasingly low-trust place as deep fakes proliferate, but high-trust sources remain available and could benefit from that shift.
On the subject of AI and human convergence, Thompson cited a study that compared radiologists’ ability to identify tumors to AI’s ability to do the same on its own and as an aid to humans. The results showed AI was better at image recognition than humans and that humans assisted by AI performed the worst of the three. “This is totally counterintuitive,” he said, noting that humans assisted by AI would be the expected winner in this contest.
“It turns out that there's something about the AI results that made the actual radiologists less confident in their correct decisions and more confident in their wrong decisions,” said Thompson. “So, when the humans disagreed with the AI, it led them to the wrong place, which suggests that there's some deep psychological issues and real complexities with the way we work with machines and the way machines augment us.”
As for open-source AI, where much of the innovation around AI has occurred, Thompson pointed out how the Biden Administration executive order on AI could have good and bad effects. By requiring audits of AI technology in development, the order will require AI developers to rely heavily on lawyers and lobbyists, which will make it hard for small companies in this space to compete. Thompson worries that this could lead all the value from AI developments to accrue to the biggest companies. “It could be that just a small number of companies get all the value,” he said.
Ultimately, a big factor we’ll have to deal with as we move forward is that AI companies made a huge mistake early on. This mistake, Thompson said, was to focus on making a technology that could pass the Turing test, in which a machine’s responses are so human-like that a person cannot tell that they’re interacting with a machine.
Though it seems like this would be the right idea for AI companies, it’s not the right goal because “what’s happened is that, by creating these systems that act like us, they missed the idea of figuring out slices of human intelligence and ingenuity that could be made much better with the help of AI instead of trying to be completely like us,” he said. “We should have a sphere of AI-ness and sphere of humanness.”