While heading home from a weekend of autumn leaf gawking, Miguel and friends trekked across hundreds of miles of hinterland highways. Speed limits in the sparsely populated flatlands were generously high, but Miguel was vexed by an optimization problem. At maximum legal velocity—the fastest he could travel without risking a citation—his mileage and range suffered. Instead, it might have been better to maintain a slower speed that optimized his range, so a stop for fuel would be delayed. This algebra was a bit much to solve while dodging travel-trailers and would-be venison—but Miguel daydreamed of a future augmented by artificial intelligence (AI).
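Miguel's daydream is easy to state as arithmetic: total trip time is driving time plus time lost to fuel stops, and fuel economy falls off at higher speeds. The sketch below uses invented numbers (a 400-mile leg, a 14-gallon tank, a toy economy curve peaking at 55 mph, half an hour per fuel stop); none of these figures come from the column.

```python
# A minimal sketch of the speed-vs-range trade-off, with made-up numbers.
# Assumptions (illustrative only): fuel economy peaks at 55 mph and degrades
# quadratically above it; each fuel stop costs a flat half hour.

def miles_per_gallon(speed_mph):
    """Toy economy curve: best at 55 mph, worse as speed rises."""
    return max(5.0, 32.0 - 0.012 * (speed_mph - 55.0) ** 2)

def trip_hours(speed_mph, distance=400.0, tank_gal=14.0, stop_hours=0.5):
    """Driving time plus time spent on en-route fuel stops."""
    driving = distance / speed_mph
    range_miles = tank_gal * miles_per_gallon(speed_mph)
    stops = int(distance // range_miles)  # stops needed before arrival
    return driving + stops * stop_hours

# Search legal speeds up to a 75-mph limit.
best = min(range(55, 76), key=trip_hours)
print(best, round(trip_hours(best), 2))
```

With these numbers, the winning speed is 71 mph, not the 75-mph legal maximum: backing off just enough to stretch the tank over the whole leg beats driving flat-out and stopping for fuel. That is exactly the algebra Miguel could not do while dodging travel-trailers.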
A plant manager might ask, “Hey AI-engine, if I run my plant 5% harder, will I be forced to take a maintenance outage sooner than planned?” Large enterprises have individuals whose job is to manage such risks—forewarn us if we’re pushing some reactor or machinery too hard. There’s nuance to their perspective, as in “the compressor failed prematurely after running a few months on the rebuilt bearing.” How will AI be attuned to such phenomena that shade the likelihood of a failure? If the AI engine has been digesting past data dredged from a vast data lake, we may not have much confidence that every outcome’s cause has been adequately recorded.
Perhaps AI will be able to read procedures and work orders. How is your facility documenting the actual malfunction a work order seeks to correct? When a work order is completed, is it apparent what was done—if anything—to remedy the issue? Duplicate and cryptic entries, such as “replace broken hand wheel at east loading shed,” are common. Some work orders might languish for years. Do preventive maintenance records indicate whether anything was actually fixed?
A favorite programmers’ adage from the last century was, “garbage in, garbage out.” However, there’s hope that this century’s AI can learn what is garbage and what isn’t, and keep anything useful. If a plant is staffed and disciplined enough to have such things in order, can AI tell them anything they don’t already know?
An active AI engine may be better suited to monitor the noise emanating from smart devices. Device data and variations in noise are like the proverbial tree that falls in a forest—if no one’s listening, does it make a sound? Every switch and network card in a facility’s process control network tallies throughput data, dropped packets, collisions, retries and more. Modbus and fieldbus churn out statistics as well. Do we even know how one might access and integrate such data? Right-click dialog boxes don’t necessarily lend themselves to an OPC interface, but there can be useful forewarnings of degradation—or even nefarious exploits—if we can get an ever-awake silicon assistant to pay attention.
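Once those counters are being polled at all (via SNMP, for instance, which most managed switches expose), even a simple baseline comparison can surface degradation. The sketch below is a hypothetical illustration, not any vendor's product: it assumes per-interval samples have already been collected into plain dictionaries, and the threshold values are invented.

```python
# Hedged sketch: flag a switch port whose error rate jumps well above its own
# recent baseline. Assumes counter samples (e.g., from SNMP ifInErrors and
# packet counters) are already polled into dicts; names and thresholds are
# illustrative assumptions, not from any real device or library.

def error_rate(sample):
    """Errors per packet for one polling interval."""
    pkts = sample["packets"]
    return sample["errors"] / pkts if pkts else 0.0

def flag_degradation(history, latest, factor=5.0, min_rate=1e-4):
    """True if the latest error rate far exceeds the historical baseline."""
    baseline = sum(error_rate(s) for s in history) / len(history)
    return error_rate(latest) > max(baseline * factor, min_rate)

history = [{"packets": 100_000, "errors": 3} for _ in range(10)]
print(flag_degradation(history, {"packets": 100_000, "errors": 2}))    # quiet port
print(flag_degradation(history, {"packets": 100_000, "errors": 250}))  # spiking port
```

Nothing here requires an OPC interface or a data lake; the point is that the ever-awake silicon assistant only needs the counters and a memory of what “normal” looked like.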
An AI engine can apply analytics dispassionately, and potentially reveal blind spots or overturn long-held beliefs. AI also has the potential to liberate conservative operators from strenuously “staying between the lines”—an operations culture that abhors exploration or experimentation. If we say, “please try the moves recommended by Watson,” the board person feels less accountable if some rough consequences ensue. Machine learning must be equipped to “learn on the fly,” digesting and incorporating unforeseen results into its models. All models are starved by steady-state operations, and my DCS’s adaptive control platform even offers to inject variability until a valid model is derived. I recommend only doing this during day shifts, while you’re within earshot of your operators’ cursing.
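“Injecting variability until a valid model is derived” is a standard identification trick: dither the input with a small pseudo-random binary sequence (PRBS) and fit a model to the response. The sketch below is a bare-bones illustration with an invented first-order plant and no noise; a real adaptive control package does far more.

```python
# Hedged sketch: PRBS excitation plus a least-squares fit of a first-order
# model y[k+1] = a*y[k] + b*u[k]. The plant, gains, and sequence length are
# invented for illustration.
import random

random.seed(1)
a, b = 0.9, 0.5  # "true" plant parameters, unknown to the identifier
u = [random.choice((-1.0, 1.0)) for _ in range(200)]  # PRBS dither signal
y = [0.0]
for k in range(199):
    y.append(a * y[k] + b * u[k])

# Solve the 2x2 normal equations for a_hat, b_hat from (y[k], u[k]) -> y[k+1].
n = 199
syy = sum(y[k] * y[k] for k in range(n))
syu = sum(y[k] * u[k] for k in range(n))
suu = sum(u[k] * u[k] for k in range(n))
sy1y = sum(y[k + 1] * y[k] for k in range(n))
sy1u = sum(y[k + 1] * u[k] for k in range(n))
det = syy * suu - syu * syu
a_hat = (sy1y * suu - sy1u * syu) / det
b_hat = (sy1u * syy - sy1y * syu) / det
print(a_hat, b_hat)  # recovers roughly 0.9 and 0.5
```

The reason steady-state operations starve such models is visible in the math: with a constant input, `det` collapses toward zero and the fit is undetermined. The dither—the thing your operators will be cursing about—is what keeps the equations solvable.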
Miguel’s road trip optimization remained unsolved, and would have been foiled anyway, as the appetites, muscle cramps and biological needs of his cohorts proved more compelling than besting his fastest commute from northern latitudes. Multiple “intelligences” competing and negotiating to derive an outcome—perhaps this is the future of advice from diverse AI. So, for now, Miguel will stay at the helm.