Ever since industry latched onto digitalization as the way forward to more productive, sustainable and reliable operations, the potential of data analytics and artificial intelligence (AI) to automate repetitive tasks and optimize complex processes has captured the collective imagination.
But dating back to the excitement over neural networks in the early 1990s, automation industry veteran Brad Radl saw that too much emphasis was placed on the algorithms themselves, and not enough on what it would take for these technologies to gain acceptance among operators and engineers. So, as the technology needed to streamline the implementation and maintenance of AI solutions in the real world continued to evolve, Radl cofounded Griffin Open Systems in 2013 to bring to market a graphical programming toolkit for the rapid development and deployment of real-time solutions, including those based on AI principles.
Control recently caught up with Radl to discuss why early efforts to capitalize on closed-loop AI were less than fully successful, and how an open, low-to-no-code approach could dramatically streamline the time, effort and costs needed to implement and maintain such solutions.
Brad Radl, co-founder, Griffin Open Systems
Q: Griffin offers what you’ve described as a system-agnostic, fourth-generation AI Toolkit for implementing real-time optimization strategies. It sits just above the closed-loop regulatory strategies implemented in distributed control systems (DCS) and programmable logic controllers (PLC) and interfaces with other enterprise systems. As such, it’s reminiscent of a previous generation of model-based advanced control packages that required a lot of domain expertise to set up—and even more care and attention when it came to maintaining those models. How is the Griffin approach different?
A: With the advantage of hindsight, we saw that most early AI solutions were brought to market as black-box, monolithic applications: a single algorithm supported by a lot of code. With Griffin Open Systems, we decided the best way forward was an open toolkit for engineers and operators, one on which you could deploy any type of algorithm and trust the system to deliver real-time performance and reliability. We took away the need to worry about creating, maintaining or compiling code, and we allow on-the-fly changes through a graphical, no-code interface. That substantially lowers the barriers to implementation and lets engineers deploy many different types of algorithms on one platform.
The system architecture is also many-to-one and one-to-many. You may be talking to a DCS for certain regulatory control requirements, but you may be talking to PLCs in another area. You can talk to business systems, which perhaps are the source of goals that you'd like to have incorporated at the control layer. And, if one of our users has a proprietary concept or technique, then it can be added to the toolbar and managed in the same platform. We made it as open and essentially vendor agnostic as possible. In addition, reliability is a key aspect of our design. Once it’s booted up, it should run for years at a time, and you should never need to shut it down.
Q: You’ve coined the term Adivarent Control to describe what the toolkit enables. Can you explain where the term came from?
A: Adivarent is a take on the Latin word for assistant. When talking to customers, I find they keep trying to put it in either the DCS bin or the IT bin of data analytics, when in fact we operate in the gray space between those layers. What we're trying to do is make operators' lives easier by automating a lot of the mundane tasks that distract them from their main job of overseeing process operations. Meanwhile, we also assist process control engineers, who otherwise find themselves continually fine-tuning control models created using idealized assumptions.
Q: Adivarent Control fits nicely at level 2.5 of the venerable Purdue model for industrial automation—one step above regulatory controls, but still closing optimization loops in real time. What are the advantages of having computing resources at this level versus migrating them up or down the control pyramid?
A: Data analytics are great, but they're of very limited value if you just analyze the data without turning the results into actionable insights, such as biasing a control loop, or arbitrating among multiple goals or multiple islands of automation. DCSs are great at what they do, so you really don't want to risk messing with their regulatory control or safety functions. But if you're coordinating the actions of multiple DCSs, taking into account environmental or quality data that are normally stranded on some PLC somewhere, you open up a predictive, diagnostic capability that isn't practical to implement at a lower level.
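For illustration only, here is a minimal Python sketch of the kind of level-2.5 logic Radl describes: reading a goal from a business system, pulling quality data stranded on a PLC, arbitrating between the two, and handing a bounded bias down to a DCS setpoint. The tag names, values and arbitration rule are all hypothetical, and in the actual toolkit this logic would be configured graphically rather than coded.

```python
# Illustrative sketch only, not Griffin's toolkit or API. Tag names,
# values and the arbitration rule below are hypothetical.

# Simulated process image standing in for reads from the DCS, PLC and ERP
tags = {
    "plc/stack/nox_ppm": 97.0,      # quality data "stranded" on a PLC
    "erp/dispatch/load_mw": 450.0,  # goal sourced from a business system
    "dcs/unit1/load_mw": 442.0,     # current unit load from the DCS
}

BIAS_LIMIT = 2.0  # never push the regulatory loop more than +/-2 units

def supervisory_step() -> float:
    """One pass of a supervisory loop: arbitrate goals, bias a DCS setpoint."""
    load_error = tags["erp/dispatch/load_mw"] - tags["dcs/unit1/load_mw"]
    bias = 0.1 * load_error               # nudge firing toward the dispatch goal

    if tags["plc/stack/nox_ppm"] > 95.0:  # emissions goal takes priority
        bias = min(bias, -0.5)            # back off instead

    # Clamp and hand off; the DCS keeps full charge of regulatory and safety
    return max(-BIAS_LIMIT, min(BIAS_LIMIT, bias))

print(f"fuel setpoint bias: {supervisory_step():+.2f}")
```

Note the design point embedded in the clamp: the supervisory layer only writes a small, bounded bias, so the regulatory and safety functions in the DCS are never at risk of being overridden.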
Q: While optimization of controlled processes using AI is one key application of the toolkit, it can also be used to automate mundane and repetitive operator tasks. Can you describe a use case of this sort and the benefits delivered?
A: Returning to the interaction of emissions and process controls, powdered activated carbon (PAC) injection is often used to limit mercury emissions in coal combustion. But activated carbon, in turn, can increase the opacity of the flue gases, another undesirable outcome. It's not rocket science, but a human operator might spend a good portion of the day iteratively tweaking PAC injection rates to stay in compliance on both measures. But combine a few simple rules with a couple of predictive models, and both opacity and mercury emissions can be controlled automatically in the background. Something that used to require 10 or 20 manual tweaks a day now requires none. It's a big time savings and a real assist to the operator.
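To make that concrete, here is a minimal sketch in Python of the rules-plus-models pattern Radl describes. The two "predictive models" are toy linear stand-ins, and every coefficient, limit and step size is invented; the point is only to show how a few rules over model predictions can replace the operator's iterative tweaking.

```python
# Hypothetical sketch of the rules-plus-models pattern described above.
# Model coefficients, limits and step size are invented for illustration;
# a real application would use site-specific predictive models.

def predicted_mercury(pac_rate: float) -> float:
    """Toy linear model: more PAC injection captures more mercury."""
    return 6.0 - 1.0 * pac_rate        # ug/m3 at a given PAC rate (lb/hr)

def predicted_opacity(pac_rate: float) -> float:
    """Toy linear model: more PAC injection raises flue-gas opacity."""
    return 2.0 + 1.2 * pac_rate        # percent

HG_LIMIT, OPACITY_LIMIT = 1.2, 10.0    # invented compliance limits
STEP = 0.25                            # lb/hr adjustment per scan

def adjust_pac(pac_rate: float) -> float:
    """The simple rules an operator would otherwise apply by hand."""
    if predicted_mercury(pac_rate) > HG_LIMIT:
        return pac_rate + STEP         # too much mercury: inject more carbon
    if predicted_opacity(pac_rate) > OPACITY_LIMIT:
        return pac_rate - STEP         # too much opacity: back off
    return pac_rate                    # both in compliance: hold steady

# The background "tweaks" that used to be 10 or 20 manual moves a day
rate = 3.0
for _ in range(12):
    rate = adjust_pac(rate)
print(f"settled PAC injection rate: {rate:.2f} lb/hr")
```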
Q: When it comes to gauging return on investment (ROI), can you describe how your users have justified projects with you, and the feedback you've gotten from them once the toolkit's up and running?
A: As with any other product or service, potential customers are always looking at promised ROI, such as overall efficiency improvement, or savings on fuel use, heat rate or some other process consumable. But once you cross that hurdle and get the toolkit into use, they're fine-tuning the process and capturing knowledge from their smartest operators, making operational improvements that would have been hard to even imagine before the capability was available.
The first thing I do when installing our system is talk to the senior operator who knows how to run the plant best. Then I ask the engineers, "What's been driving you crazy for the last couple of years?" Or, "What recurring issues call you back to the control room again and again?" Use this input to guide your first projects, and you'll build acceptance in the long term.
While ROI was the immediate justification, the primary feedback we get after the fact is that life is now easier for the operators and engineers. Time spent on repetitive, frustrating problems is down, allowing them to use their skills on higher-order problems. They can pursue a culture of continuous improvement because their minds are freed up to work on more meaningful activities. Meanwhile, management is happy, too, because they’re getting their ROI and then some.