I'm breaking the traditions of scientific articles by writing about issues that go beyond science. The tradition of engineering articles is to present material that the author is fairly sure of, but when I write about artificial and super intelligence here, I'm in uncharted territory.
Automation opened a new chapter in human evolution, because while it started out as just another tool to make our lives easier, through the development of robots, instant communication and artificial intelligence (AI), it is becoming much more. This can be a great achievement, but can also become a slippery slope for human civilization.
Throughout the ages, humanity has struggled not only for survival, but also to understand the universe and its purpose. This search has followed two roads, the spiritual and the scientific. Those on the spiritual road assumed that understanding the universe is beyond human ability, while those on the road of science decided to try anyway. Scientists focused on learning the laws that guide the universe, and thereby learning something about its Creator.
The travelers on this second road included Aristotle, Copernicus, Galileo, Newton, Einstein, and now people like Stephen Hawking. It matters little whether Newton discovered gravity because an apple fell on his head, or whether Galileo climbed the Leaning Tower of Pisa and noticed that bodies of different weights gain velocity at the same rate. It matters little how the existence of relativity, black holes or the continuous expansion of the universe was proved. What matters is that by proving them, we have shown an ability to understand some of the laws that give order to the universe.
If we see a painting, we know that there was a painter who created it. Some might argue that the existence of the universe proves that it, too, had a Creator. We have to study the painting to learn something about the painter, and over the millennia, we have also studied and gained a bit of understanding of the universe through science. Over this same period, some have also developed the view that the spiritual and the scientific roads lead to different conclusions—that they do not merge, but contradict each other.
Well, it seems that they were wrong. Today it is science that has proven that neither time nor space existed before what we call the Big Bang, and just as Newton's apple and Einstein's relativity represented quantum leaps in our understanding of the universe, the Big Bang proves that the spiritual and the scientific roads can merge.
Automation, robots and AI
So what has all this to do with automation? Well, we might not realize it, but automation opened a new age for mankind. First it was just a tool that served our comfort as it substituted for our muscles, and later for the routine functions of our brains, but today we're beginning to realize that we've "created" something much more. When we designed the first gadgets that made industry safer and more efficient, we didn't realize where they would lead. Next, we designed smart instruments that could self-diagnose when something went wrong. This road eventually led to robots, and today we're beginning to realize that these human creations are more than mechanical slaves.
First, we valued robots because they could do things that are boring, or do things better and faster than we can. Later, we realized that they can also go to places too inhospitable for us to visit, such as Mars or war zones. And now, we're beginning to realize that AI can also change our lifestyles. Today, when our AI-brained robots can not only build cars but also drive them, we begin to ask: will this creation of ours make its creator unnecessary? And by this I do not mean only that it can create unemployment.
When we in the automation profession created these machines of practically unlimited memory and speed to analyze data and execute logic, we seem to have also created machines that could eventually improve their own intelligence. It seems that AI can not only build and drive cars, but can also invent better ones. If that's so, why could it not also design smarter robots? Why could it not improve its own software? Why could it not gain superhuman intelligence?
Naturally, we've only just started down this slippery slope, but we have started! We're beginning to become keyboard clickers and consumers of intellectual garbage, are we not? Do we know where this road leads us or the generations to come? We do know that AI robots can spread not only valuable information (or fertilizer), but also falsehoods (or the Ebola virus). To today's AI, they are all the same.
You might say that this is far-fetched. You might believe that AI will serve only to analyze our genetic code, unlock the secret of eternal youth and make inheritable improvements to the human genome. Well, this could turn out to be only wishful thinking. There is a big difference between us and them: we were created with a conscience, while machines have no "inner man." AI and robots are not necessarily beneficial.
We don't know if it would be necessary, or even possible, to instill morals in AI before we let it loose. What we do know is that people like Stephen Hawking and Bill Gates are worried. They consider AI that is uncontrolled and does not share human values to be more dangerous than global warming.
I don't know the answers to these questions. Obviously, I know robots could replace us in today's workforce, but that isn't necessarily bad, because technological unemployment could free us to spend more time improving the quality of life on this planet. Unfortunately, I'm also beginning to believe that AI could cause humans to become intellectually lazy and detached from culture itself. Obviously, AI in the wrong hands can do immense harm, but that doesn't worry me, because that danger is not new. Mankind has faced and survived many evils in the past, including tyrants, fanatics and the dangers posed by nuclear weapons.
One would hope that we will figure out a way to live with AI, stay in control, and keep it benevolent or at least neutral. But is hope good enough? Can we just keep developing a machine that has superior intelligence and no conscience? I don't think so.
Now that I'm working on the 5th edition of the Instrument and Automation Engineers' Handbook and see that it will probably grow by a full volume because of AI (if I live to finish it), I feel some responsibility to sound a warning. I have no idea where AI will lead, but in any case, we must realize that AI is much more than just a tool, and that it is up to us to overcome the potential risks it poses before we let it loose. It is up to us, the automation engineers, to give our "child" the upbringing it deserves, so that when it reaches the state of true super intelligence, it will also be sage and sapient.