Followers of this column might remember that one of my sons—we call him the good son—is an aerospace engineer, working for a company that often claims to “give our warfighters an unfair advantage” by supplying superior technology. He’s not allowed to tell us much about what he does, but assures us it’s a lot more advanced than we imagine. Interestingly, his superiors tell him the same thing about other company activities they’re not allowed to tell him about.
Our other son, Benjamin, studies politics. He tells me his guiding principle is that government should strive to maximize people's well-being—their satisfaction with life and with how they're governed. Democracy helps government serve this purpose by providing a feedback loop: the governed can influence the direction of government by voting.
It's an imperfect system for many reasons, mostly because people aren't perfect. Like operators in a process plant, they're inattentive, uninformed, misinformed, emotional and, well, human. It doesn't help that the extent and complexity of government activities far exceed those of even the largest industrial facility, company or conglomerate.
In the process control world, we’ve largely freed the operators from the tedious task of closing control loops, and have steadily raised their role to monitoring, optimizing and intervening only during alarms and abnormal operations. I like to irritate Benjamin by suggesting perhaps some degree of automation could similarly ease the burden of being an informed and effective voter, with better results.
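The loop-closing that automation took off the operators' hands can be sketched in a few lines. This is a toy proportional-integral (PI) controller acting on an invented first-order process; the gains, process response and setpoint are illustrative assumptions, not any real plant's tuning.

```python
# Toy PI feedback loop: the controller repeatedly measures the error
# between a setpoint and the process variable, and nudges the process
# toward the target, so no human has to close the loop by hand.
# Gains (kp, ki), process gain (0.2) and step count are made up for
# illustration.

def pi_control(setpoint, pv, kp=0.5, ki=0.1, steps=200):
    """Drive a simple first-order process variable `pv` toward `setpoint`."""
    integral = 0.0
    for _ in range(steps):
        error = setpoint - pv
        integral += error                    # accumulate error (I term)
        output = kp * error + ki * integral  # PI control action
        pv += 0.2 * output                   # crude first-order process response
    return pv

print(round(pi_control(100.0, 20.0), 1))
```

Run long enough, the loop settles at the setpoint without anyone watching it, which is the whole point of the analogy.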
Well-being could be assessed through universally understood and accepted standards, starting with Maslow's hierarchy of needs and including such things as per-capita income, wealth, health and longevity. Along with the usual statistics, measures would derive from algorithms like those currently used to forecast stock markets, applied to the ebb and flow of human expression found in conventional media, Facebook and Twitter. It should be straightforward to determine the aggregate condition of the electorate, and not too difficult for the computing power and artificial intelligence (AI) of today's cloud to make the cause-and-effect correlations needed to fine-tune a society that maximizes our satisfaction with life.
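In the simplest form, such an "aggregate condition of the electorate" is just a weighted average of normalized measures. The sketch below invents every detail: the metric names, their plausible ranges, the weights, and the sentiment score standing in for the "ebb and flow of human expression" are all hypothetical.

```python
# Toy well-being index: scale a few raw per-capita measures onto 0-1,
# then take a weighted average. All metrics, ranges and weights here
# are invented for illustration; a real index would need defensible,
# universally accepted standards.

def normalize(value, low, high):
    """Clamp and scale a raw measure onto the 0-1 interval."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def wellbeing_index(metrics, weights):
    """Weighted average of already-normalized metrics (weights sum to 1)."""
    return sum(weights[k] * metrics[k] for k in metrics)

metrics = {
    "income":    normalize(52_000, 0, 100_000),  # per-capita income, USD (hypothetical)
    "longevity": normalize(79, 50, 90),          # life expectancy, years (hypothetical)
    "sentiment": 0.61,  # share of sampled posts scored positive (hypothetical)
}
weights = {"income": 0.4, "longevity": 0.4, "sentiment": 0.2}

print(round(wellbeing_index(metrics, weights), 3))  # → 0.62
```

The hard part, of course, isn't the arithmetic; it's agreeing on what goes into `metrics` and `weights`, which is exactly the objection Benjamin raises below.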
Benjamin tries to smack me down by telling me that such a system would be opaque (because no one would fully understand it), authoritarian (because it would have to wield the power of the state to be effective) and unstoppable (or someone would stop it), and thus unacceptable to almost everybody. Worse, it would rob human beings of their essential need to assert power and influence (according to Aristotle).
I tell him he sounds like someone who engages in politics. Not everyone needs endless power struggles. Some of us would just like our government to run like a well-automated plant, with some advanced control where we need it and the right amount of simulation to help us make improvements. And by the way, nobody is asking me about my essential need to use a clutch pedal or craft a print magazine—maybe it’s time for automation to also put some politicians on the street.
“But who will decide what goes into the algorithms?” he says. “And in the event of an impasse, what will be an acceptable exercise of AI power?” One of the ways our current political system deals with impasses is by allowing a currently repressed faction to maintain hope of gaining power. The best, all-knowing AI couldn’t do that.
“What if many people think God values all human life and prohibits killing, but a staunch minority thinks God requires human sacrifice?” he asks. “Will your AI sanction genocide to solve that problem?”
With that, I realize I’m out of my depth. Don’t forget to vote.