
Solutions Spotlight: No-code approach lowers AI roadblocks

May 21, 2021
Keith Larson speaks with Brad Radl, co-founder of Griffin Open Systems

Griffin Open Systems was founded in 2013 to bring to market a graphical programming toolkit for rapid application development and deployment of real-time solutions. In this episode, Keith Larson is joined by Griffin co-founder Brad Radl to discuss the delayed adoption of digitalization technologies like data analytics, artificial intelligence and Industry 4.0, and how Griffin's system-agnostic fourth-generation AI toolkit aims to help overcome the barriers to further digital transformation.

Transcription

Keith: Hello, this is Keith Larson, editor of Control magazine and ControlGlobal.com. Welcome to this Solution Spotlight episode of our Control Amplified podcast, sponsored today by Griffin Open Systems. With me today is Brad Radl, one of the automation industry veterans who in 2013 co-founded Griffin Open Systems to bring to market a graphical programming toolkit for rapid application development and deployment of real-time solutions.

Welcome, Brad. It's a real pleasure to talk with you today.

Brad: Oh, good morning, Keith. Thank you very much for the invite and opportunity to chat with you.

Keith: Well, back in 2013, eight years ago, but when you first started the company, data analytics, AI and Industry 4.0 were really just starting to get recognized as that new set of tools and models that would maybe allow the process industries to reach new levels of productivity, reliability and efficiency, but that digital transformation hasn't always gone, how should I say, as smoothly as some of the pundits predicted. Were they wrong, or what has stood in the way of broader adoption and success in the digital transformation arena in the process industry?

Brad: You know, that's an excellent question. I've been working in this industry space for many years (I think the term you used was "experienced"), utilizing neural nets since the early '90s. The challenge we've seen through the years is that there's been a heavy focus on, say, the algorithms themselves and what they can do, while sort of neglecting the operators and what it would take to gain acceptance from them, and the engineers and how best to incorporate their process knowledge into the solution.

And also just recognizing the limitations of a given algorithm. An algorithm needs to be fitted to the problem, rather than trying to cram your problem into the algorithm. So, there have been a lot of limitations built into those toolsets as they've been developed through the years. And ultimately, for the technology to be successful, you need to be minimizing the dollars and the time spent to support those solutions, and almost always the best technology is the simplest: maybe complex behind the scenes, but simple for the users to maintain and utilize.

Keith: Yeah, that makes a lot of sense. Griffin offers what you've described as kind of a system-agnostic, fourth-generation AI toolkit for implementing real-time optimization strategies, sitting just above the world of DCSs and PLCs and also interfacing with other enterprise systems. It certainly reminds me a lot of the previous generation of model-based advanced control packages that required a lot of domain expertise to set up, and even more care, effort and attention when it came to maintaining those models. Often they were turned off and not used because they were out of sync with the process. How is the Griffin approach different from the world of MPC that many of our listeners are maybe more familiar with?

Brad: Well, one of the big things we did with Griffin Open Systems is that we had sort of the advantage of hindsight about what had gone wrong through the years: why are automation solutions not getting adopted? And our viewpoint was that a lot of them became sort of monolithic-type applications. You're building an application around solving one problem, or you have one algorithm and you're building a lot of code around that.

With Griffin, we decided that the best way to go was to focus on a toolkit for engineers and make it an open-type design, so that you can put any type of algorithm into the system, and to focus on real-time performance and reliability, so that once you have a system up and running, you basically never have to worry about compiling or assembling code. Through a no-code interface, you can now make easy application changes. So, you're lowering the barriers to incorporating the engineer's or operator's knowledge into that solution.

And so, by having more of a platform or toolkit-type setup, it's a very different look than bringing, say, AI to one particular solution. You now have one platform that can do many solutions, versus a solution tailored to one specific problem, and it gives you a lot more flexibility and makes it a lot easier to support in the long term.
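A minimal sketch of the block-and-wiring, no-code pattern described above: reusable blocks wired into a graph and evaluated once per scan cycle, with no application code to compile. The Block class, the wiring and the tag name are hypothetical illustrations of the general pattern, not Griffin's actual toolkit or API.

```python
# Illustrative sketch only: a toy dataflow "block graph" in the spirit of a
# no-code toolkit, where logic is wired together from reusable blocks rather
# than compiled application code. Names are hypothetical, not Griffin's API.

class Block:
    """A processing node with wired inputs and a single output value."""
    def __init__(self, name, func, inputs):
        self.name = name
        self.func = func        # pure function of the upstream values
        self.inputs = inputs    # upstream Block objects (or constants)
        self.value = None

    def evaluate(self):
        args = [src.value if isinstance(src, Block) else src for src in self.inputs]
        self.value = self.func(*args)
        return self.value

def run_scan(blocks):
    """Evaluate blocks in wiring order, once per scan cycle."""
    for block in blocks:
        block.evaluate()

# Wire a tiny graph: read a tag, bias it, clamp it for write-back.
tag_in = Block("opacity_pv", lambda: 14.2, [])                       # stand-in for a live tag read
bias   = Block("bias",       lambda pv: pv + 0.5, [tag_in])          # engineer-entered offset
clamp  = Block("clamp",      lambda x: max(0.0, min(20.0, x)), [bias])

run_scan([tag_in, bias, clamp])
print(clamp.value)   # 14.7, the value that would go back to the control layer
```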

Keith: And you've mentioned also that it's kind of agnostic. So, it's agnostic on the control layer, but also to other enterprise systems where you may need to feed information or bring information in. Is that fair to say?

Brad: Yes. So yeah, one of the key things was that we didn't want to presume which vendor systems we're talking to, even how many different vendor systems we're talking to. So, the architecture is kind of a many to one, one to many. So, you may be talking to a DCS system for a certain regulatory control. You may be talking to PLCs on another area. You can talk to business systems that might be coming up with the goals that you'd really like to have incorporated at the control layers.

So, it's agnostic in that you just need to have a means of getting the data to flow, and certainly you're using many industry-standard links: OPC, Modbus, etc. You can get the data in there and then send it out where it needs to go. And with the open design, if somebody has proprietary concepts or techniques, they just add them to the toolbar, and then they can still use the same platform, but now they have their own unique capability built into it. So, we made it as open and essentially vendor-agnostic as possible.
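A minimal sketch of that vendor-agnostic idea, assuming a generic tag-link abstraction: the application logic reads and writes named tags without caring whether the underlying link is OPC, Modbus or something proprietary. The class and method names are hypothetical, not Griffin's API; a real driver would wrap an actual OPC UA or Modbus client library.

```python
# Illustrative sketch only: a protocol-agnostic tag interface so the same
# application logic can talk to a DCS, a PLC or a business system regardless
# of the link (OPC, Modbus, etc.). Class and method names are hypothetical,
# not Griffin's API; a real driver would wrap an actual OPC UA or Modbus client.

from abc import ABC, abstractmethod

class TagLink(ABC):
    """Minimal contract every data link must satisfy."""
    @abstractmethod
    def read(self, tag: str) -> float: ...

    @abstractmethod
    def write(self, tag: str, value: float) -> None: ...

class SimulatedLink(TagLink):
    """Stand-in driver so the example runs without any hardware."""
    def __init__(self, seed):
        self._tags = dict(seed)
    def read(self, tag):
        return self._tags[tag]
    def write(self, tag, value):
        self._tags[tag] = value

def apply_bias(source: TagLink, sink: TagLink, read_tag: str, write_tag: str, bias: float):
    """The application logic stays the same no matter which links are plugged in."""
    sink.write(write_tag, source.read(read_tag) + bias)

# One "DCS" link and one "PLC" link, both behind the same interface.
dcs = SimulatedLink({"UNIT1.O2_PV": 3.25})
plc = SimulatedLink({"UNIT1.O2_SP": 0.0})
apply_bias(dcs, plc, "UNIT1.O2_PV", "UNIT1.O2_SP", bias=-0.25)
print(plc.read("UNIT1.O2_SP"))   # 3.0
```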

Keith: And I think another aspect to emphasize is the reliability and robustness that you've been striving for. Obviously, you're working in real time, closing real loops and doing things. And with your background in the nuclear and power industries, where reliability is paramount, you've certainly brought that to the table as well.

Brad: Yes. You know, that definitely was a key design characteristic. There's nothing worse than working on a system and then getting the blue-screen-of-death type of thing. So, our chief software architect did a great job of tracking down a lot of the little nuances that one runs across in the real world. We designed our system to be what I call boot-and-go: once you've booted it, it should be able to run for years, and you should never have to shut it down. And to me, part of the user satisfaction is never having to worry about that kind of computer issue.

Keith: Yeah. So it's more on the traditional OT model versus the IT world in that sense anyway.

Brad: Yes, very much so.

Keith: I'd be remiss if we didn't talk about this term, Adivarent Control, that you use to describe what the toolkit enables. Can you explain a bit where that term came from and why you've chosen it?

Brad: Yeah. The term Adivarent Control is something I came up with mostly because, when talking to customers, I found they kept trying to put the platform either into the DCS bin or into the IT bin of data analytics. So it's like, okay, we have to come up with a way to differentiate the fact that we're really operating in the gray space between those layers.

And so, I guess, being an engineer, I just kind of went back through the Latin and Greek, trying to find a word that meant assistant, and found something reasonably close in Adivarent, which just sounded nice. Basically, Adivarent is a loose root of the word assist. What we're trying to do is assist the operators: make their lives easier and automate a lot of the mundane tasks that are distracting them from their main job of operating the units. And at the same time, it's an assistant to the DCS. You have your control curves in there, but the control curves are all based on sort of ideal assumptions, so there needs to be continual fine-tuning and biasing of the control system.

So we came up with the term Adivarent because we believe it's set up to assist operators on one side and assist the DCS on the other side. And it just kind of fits between the traditional layers of the Purdue model.

Keith: So kind of in between the levels 2 and 3 more or less?

Brad: Yes.

Keith: What are the other specific advantages of binning resources at that spot versus migrating them up or down the control pyramid?

Brad: Well, the big thing is that, as we have more and more, say, AI-type capabilities out there, we find that the data analytics are great, but in general they're of very limited value if you just analyze the data and aren't able to turn it into something actionable, such as biasing the control system or arbitrating between multiple goals or multiple islands of automation. And so it just fits better above the DCS. If you put it in the DCS, you have the challenges of all the steps and risk associated with modifying DCS controls. The DCSs do a great job at regulatory control, and for their safety functions it's best just to let them do their thing. So it's just a natural fit to bridge the tools on the OT side and the capabilities you have on the IT side.

Keith: And it would certainly allow you to bridge if you had, say, DCSs from multiple vendors. Having it in the DCS would cause problems, we'll just say. Having it above makes a lot of sense.

Brad: Yeah. Well, that's one interesting thing we find: quite often it's environmental systems running on PLCs out there that are kind of lost or forgotten about. We can bring those into the Griffin layer, bring in other data coming from the DCS, and look at three or four different things. Quite often, you find they're kind of chasing each other in circles. By putting us at that higher layer, we're able to look at all of them simultaneously and quite often use the tools to come up with a good solution.

Keith: So, really bringing in data points that aren't maybe traditionally the closed-loop data points, but things like environmental, other quality parameters, and things like that, and being able to loop them in to optimize operations.

Brad: Yes. And that's where you can add some nice sophistication. Say you have an environmental constraint that's a 30-day rolling average. Well, it's kind of hard to put a 30-day rolling average into the DCS. At the same time, they're trying to control a particular emission level within a 30-second PID loop. By having this layer in there, we can tie into that 30-day rolling average. In many cases, because of cybersecurity, you just sort of duplicate the calculation of that system from the data it has generated, but it's nice to know whether you're going to be rolling off high values or low values from those systems. It gives you a predictive capability that you wouldn't be able to put at the lower level.

You're very explicitly looking at the environmental constraints, keeping systems out of trouble, and you can build up margin. So instead of saying, "Oh, we're going to be running, say, a new fuel test in a couple of weeks," and then dealing with the problem then, you build up some margin ahead of time and let the system automatically put you in position for a good test run.
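A minimal sketch of the rolling-average bookkeeping Brad describes, assuming one averaged emission value per day: the supervisory layer tracks the 30-day average and projects what happens as old values roll off the window, which is how it knows whether margin is opening up or needs to be built ahead of, say, a fuel test. The numbers and handling are hypothetical.

```python
# Illustrative sketch only: tracking a 30-day rolling emissions average above
# the DCS and projecting the effect of old values "rolling off," which a
# 30-second PID loop cannot see. Values and handling are hypothetical.

from collections import deque

WINDOW_DAYS = 30

def rolling_average(values):
    return sum(values) / len(values)

def projected_average(values, assumed_next_day):
    """Average after the oldest day rolls off and a new day is appended."""
    window = list(values)
    if len(window) == WINDOW_DAYS:
        window = window[1:]              # oldest day leaves the 30-day window
    window.append(assumed_next_day)
    return sum(window) / len(window)

# Seed with 30 days of history: a run of high values early in the window.
daily_avg = deque([0.022] * 10 + [0.015] * 20, maxlen=WINDOW_DAYS)

current  = rolling_average(daily_avg)
tomorrow = projected_average(daily_avg, assumed_next_day=0.015)

# High values rolling off means margin is opening up and the supervisory layer
# can relax its bias; low values rolling off means start building margin now.
print(f"current 30-day average:  {current:.4f}")
print(f"projected for tomorrow:  {tomorrow:.4f}")
```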

Keith: Yeah. That makes a lot of sense. We've focused quite a bit on optimization of control processes so far, but you've also talked about using the toolkit to automate the mundane and repetitive on the operations side, so that operators can focus on more value-adding tasks. Can you describe maybe a few use cases of this sort and the benefits delivered?

Brad: Sure. I think we'll probably pick on the emissions and process interaction side again. One particular process they're dealing with is opacity alarms. They have an emission limit on opacity that they can't exceed, and it goes into alarm if it's high for six minutes. This is for a mercury control process, where they're putting in PAC injection. The problem is that as you increase PAC, you increase opacity. So it might help your mercury emissions, but it's not helping your opacity.

And for the operator, to some degree, it's easy: okay, I have an opacity alarm, I'm just going to turn back my PAC injection. The problem is, if they keep turning it back, eventually they get the mercury alarm, and then they might take another action. Overall, those are really kind of simple tasks; they're just adding and subtracting here and there. We call it somewhat of a mundane activity. It's not rocket science: opacity is high, I need to do this; this is low, I need to do that. But we can build that decision tree in there and then also build in a little more intelligence about exactly how much adjustment needs to be made. It's a nonlinear process, and it's affected by weather and by your fuel stock.

There are a lot of nuances in there, so even though it's a simple task for the operator, it's hard for them to get exactly right. So, we can just put a little decision tree in there, combine it with a little more analytics and with those 30-day averages, and now it's doing all those little tweaks, even ahead of time, because it can anticipate opacity as it's coming up.

You have things like soot blowers that cause an opacity spike. If you know a soot blower is about to run, you take a preemptive action, so the operators don't have to worry about taking that preemptive action. At the end of the day, when you put in a few simple rules and combine them with a couple of predictive models, that opacity or mercury control system just becomes something that's running in the background. Where they used to make 10, 15, 20 tweaks during the day, now they have to make none. It's a big savings, and that's where we get back to it being an assistant to the operator.
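A minimal sketch of the kind of decision tree described above: simple rules that trim PAC injection between an opacity limit and a mercury limit, plus a preemptive cut ahead of a scheduled soot-blower run. All limits, step sizes and names are hypothetical, not values from the application Brad describes.

```python
# Illustrative sketch only: a simple decision tree that trims PAC injection
# between an opacity limit and a mercury limit, with a preemptive cut ahead of
# a scheduled soot-blower run. All limits, step sizes and names are hypothetical.

OPACITY_LIMIT = 20.0        # percent; alarm if sustained above this
MERCURY_LIMIT = 1.2         # lb/TBtu, rolling-average target
PAC_STEP = 0.25             # lb/MMacf, size of one trim move
PAC_MIN, PAC_MAX = 0.5, 4.0

def trim_pac(pac_rate, opacity, mercury_avg, sootblower_due_soon):
    """Return an adjusted PAC injection rate from a few explainable rules."""
    if sootblower_due_soon:
        # Soot blowing will spike opacity anyway; back off PAC before it happens.
        pac_rate -= 2 * PAC_STEP
    elif opacity > 0.9 * OPACITY_LIMIT:
        # Approaching the opacity limit: ease off PAC.
        pac_rate -= PAC_STEP
    elif mercury_avg > 0.9 * MERCURY_LIMIT:
        # Approaching the mercury limit: add PAC back.
        pac_rate += PAC_STEP
    # Otherwise hold steady; the DCS loops handle everything else.
    return max(PAC_MIN, min(PAC_MAX, pac_rate))

# One pass of the rule set: opacity is creeping up, mercury has plenty of margin.
print(trim_pac(pac_rate=2.0, opacity=18.5, mercury_avg=0.8, sootblower_due_soon=False))  # 1.75
```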

Keith: That particular application sounds a lot like the promise of the neural networks that we were talking about 20 years ago and a lot easier to implement perhaps.

Brad: Well, actually, part of it is an outgrowth of that, because one of the biggest challenges with neural nets is that there's a strong tendency to let the neural net do everything. One of our mantras is: if something is just a simple if-then type of rule, put it in like that, and you make the life of the neural net much easier. That's where you start to get better acceptance of the systems, because they behave deterministically where you expect them to be deterministic. Then the neural net can do the fancy stuff when you're trying to solve a hundred-dimensional optimization question.
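A minimal sketch of that "rules first, model second" split: deterministic if-then rules handle the cases where operators expect deterministic behavior, and a fitted model (a trivial stand-in here) only shapes the remaining adjustment. The functions and coefficients are hypothetical.

```python
# Illustrative sketch only: the "rules first, model second" split. Deterministic
# if-then rules handle the cases where operators expect deterministic behavior;
# a fitted model (a trivial stand-in here) only shapes the remaining adjustment.
# Functions and coefficients are hypothetical.

def rule_layer(opacity, opacity_limit):
    """Hard, explainable rules. If one fires, its answer wins outright."""
    if opacity > opacity_limit:
        return -0.10            # always cut injection when over the limit
    return None                 # no rule fired; defer to the model

def model_layer(features):
    """Stand-in for a trained model (neural net, etc.) returning a trim."""
    return 0.5 * features["load_frac"] - 0.25 * features["fuel_moisture"]

def recommended_trim(opacity, opacity_limit, features):
    rule_action = rule_layer(opacity, opacity_limit)
    return rule_action if rule_action is not None else model_layer(features)

features = {"load_frac": 0.75, "fuel_moisture": 0.5}
print(recommended_trim(22.0, 20.0, features))   # -0.1: the hard rule wins
print(recommended_trim(15.0, 20.0, features))   # 0.25: the model shapes the trim
```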

Keith: That makes sense: use the right tool for the challenge. So, when it comes to return on investment, can you describe how your users have justified engaging with you on projects, and also the feedback you've gotten from them once they've put the toolkit to work and are up and running? What's the feedback then?

Brad: Well, it's kind of interesting, because to sell pretty much any product or service, they're always looking at the return on investment. So our customers quite often are looking at, okay, what's our savings on fuel usage? What's our savings on heat rate or some process consumable? They're basically trying to find that dollar savings, which is a reduction in something being used or an increase in the overall efficiency of the process.

Keith: True.

Brad: So you cross that hurdle, and then you get into the installation side, and then, using the platform and the toolkit, you're fine-tuning. You're basically trying to do knowledge capture from their smartest operators. To some degree, I suppose that can be considered cheating, but the first thing you want to do is talk to that senior operator who knows how to run the plant best and get that knowledge into the system.

And then you talk to the engineers and ask, "Well, what has been driving you crazy for the last couple of years?" Or, "Why are you constantly having to go into the control room?" And you deal with those issues, which may or may not be directly related to your ROI, but which are directly related to acceptance in the long term. You remove all those little nuisances, getting back to automating some of those mundane tasks. In doing so, the feedback after the fact is that what they're generally happiest about is that life is easier for the operators and life is easier for the engineers. They actually spend less time on things they used to find very frustrating, which lets them use their skillsets on the higher-order problems and move on to continual process improvement, because their minds are freed up to work on new activities. That savings in their time and unburdening of tasks is, I would say, by far the biggest positive after the fact from these types of projects.

Keith: So, it's maybe not the most quantitative thing you can estimate in advance, but once it's up and in place, they're finding new use cases and applying it in multiple different areas.

Brad: Yeah. Yeah. That's a very good point. It'd be nice to be able to sell it on that, but it's just really difficult to quantify those savings in advance.

Keith: But once it's up, the results speak for themselves, huh?

Brad: Yes, they do. And it is nice because then you have the champions in the house because they know if they do the next thing, their life gets better on the next project, and management's happy because they're getting their ROIs.

Keith: Okay. Well, great. We certainly wish you and the rest of the team at Griffin continued success, and continued success for your customers as well, obviously. Thank you also for taking the time to share your insights with us today.

Brad: Well, thank you very much for your time and thank you again for the invite.

Keith: For those of you listening, thanks also for tuning in today, and thanks also to Griffin Open Systems for sponsoring this Solution Spotlight episode. I'm Keith Larson, and you've been listening to a Control Amplified podcast. Thanks for joining us. And if you've enjoyed this episode, you can subscribe at the iTunes store and at Google Podcasts. Plus, you can find the full archive of past episodes at controlglobal.com. Signing off until next time. Thanks again, Brad, for joining us.

For more, tune into Control Amplified: The Process Automation Podcast.

About the Author

Control Amplified: The Process Automation Podcast

The Control Amplified Podcast offers in-depth interviews and discussions with industry experts about important topics in the process control and automation field, and goes beyond Control's print and online coverage to explore underlying issues affecting users, system integrators, suppliers and others in these industries.
