Though artificial intelligence (AI) has been available and used in the industrial sector for more than two decades, it's only relatively recently that industry has started to adopt the technology at larger scale. One reason for the slow adoption has been the computing resources required; another is the tools used to develop the models. Tools matter because the key word to remember in artificial intelligence is artificial: AI needs data scientists to add the intelligence.
Most data analytics (analyzing data sets to find trends, answer questions and draw conclusions) and AI (using data sets/analytics to make decisions) applications are in IT and the business world, including detecting changes in network traffic patterns as indicators of cybersecurity breaches, or monitoring electronic transactions to manage financial services risk.
For many AI installations, development is done using the large processing power of arrays of computers, and the results are then converted into a simpler algorithm that can run in the typical computing environment found in a field or end device. All of us are also familiar with AI in our everyday lives as voice and facial recognition proliferate on handheld devices, vehicles and smart speakers.
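That "train big, deploy small" pattern can be illustrated with a toy sketch: the curve-fitting below stands in for the offline heavy lifting, and the result is distilled down to two constants that a field device could evaluate cheaply. The variable names, data values, and the drift-versus-temperature scenario are all invented for illustration.

```python
# Hypothetical sketch: a model "trained" offline is reduced to a simple
# algorithm (two constants in a linear equation) for a field device.

def fit_line(xs, ys):
    """Ordinary least-squares fit -- the kind of work done offline."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training" on historical data (illustrative numbers only).
temps = [20.0, 25.0, 30.0, 35.0, 40.0]
drifts = [0.10, 0.22, 0.31, 0.42, 0.50]
slope, intercept = fit_line(temps, drifts)

# The distilled result: a calculation simple enough for an end device.
def predict_drift(temp):
    return slope * temp + intercept

print(round(predict_drift(28.0), 3))
```

The point is not the math but the split: the expensive fitting never leaves the server room, while the device only ever multiplies and adds.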
Industry has also been using AI for years. Some examples include visual pattern recognition for murky or unclear interfaces, diagnosing failure of pump seals from even small leaks, and optimizing the operation of a pipeline to balance throughput against energy consumption. Projects of this type all have large financial returns.
These examples are, in my mind, asset management at the “macro” level because they're deployed across systems of multiple components or subsystems to make processes or facilities run better or identify abnormal situations.
The next level of asset management is applying AI to a single machine or piece of equipment, perhaps for vibration analysis or electrical harmonics analysis of a pump or motor to predict when maintenance is required. These, too, are instances with significant financial returns because of the potential impact of unscheduled failures.
Most AI applications in the automation sector are presently deployed and used at this middle level, where machine learning (ML) has started to make inroads. ML is a subset of AI that uses computer algorithms that learn automatically, making predictions and decisions without being directly programmed to do so. As with most things digital, the algorithms are getting better even as the required processing power is available at ever lower levels of the Purdue model—closer and closer to sensors in the field.
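"Learning rather than being programmed" can be shown in miniature. In the sketch below, nobody hard-codes a vibration limit; instead, a trivial learning step derives a decision threshold from labeled example data. The readings, labels, and the threshold-between-class-means rule are all illustrative assumptions, not any vendor's method.

```python
# Minimal machine-learning sketch: the decision rule is learned from
# example data rather than written by a programmer. Data are invented.
from statistics import mean

# RMS vibration readings (mm/s) labeled by maintenance outcome.
healthy = [1.1, 0.9, 1.3, 1.0, 1.2]
worn    = [2.8, 3.1, 2.6, 3.4, 2.9]

# "Training": place a threshold midway between the two class means.
threshold = (mean(healthy) + mean(worn)) / 2

def needs_maintenance(rms):
    """Apply the learned rule to a new reading."""
    return rms > threshold

print(needs_maintenance(1.4))  # reading near the healthy cluster
print(needs_maintenance(3.0))  # reading near the worn cluster
```

A production system would use far richer features and models, but the principle is the same: feed it outcomes, and the rule falls out of the data.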
Indeed, AI is now being applied at the micro level, using even raw internal signals in sensors to provide added information on the health of the device or the quality of the signal, or to indicate a process abnormality. One example of a simple type of (nano) AI application is the plugged impulse line indicator for pressure transmitters that was introduced 15 years ago.
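A hedged sketch of the idea behind such an indicator: a healthy pressure measurement carries a small amount of process noise, and a plugged impulse line damps it, so a collapse in the signal's variability can flag the problem. The simulated readings and the fixed noise floor below are illustrative; a real transmitter would learn its own baseline.

```python
# Illustrative plugged-line check: compare signal noise in a window of
# readings against a noise floor. All values here are simulated.
from statistics import stdev

healthy_window = [100.0, 100.4, 99.7, 100.2, 99.8, 100.3, 99.6, 100.1]
plugged_window = [100.0, 100.01, 99.99, 100.0, 100.01, 100.0, 99.99, 100.0]

NOISE_FLOOR = 0.05  # assumed limit; a real device would learn its baseline

def line_plugged(window):
    return stdev(window) < NOISE_FLOOR

print(line_plugged(healthy_window))  # noisy signal: line is clear
print(line_plugged(plugged_window))  # damped signal: likely plugged
```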
AI techniques are also being used in control valves to continuously monitor valve responses to command changes, comparing how valves move and how much air is needed to make that move. Other valve AI measures include tracking cycles to predict wear, detecting "vibration" in the shaft when there's no motion as an indicator of cavitation, or monitoring the position of the valve's moving part relative to its seat as an indicator of needed maintenance.
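Two of those valve checks, cycle counting and stroke-time comparison, can be sketched in a few lines. The limits, baseline, and alert wording below are invented for the example; real diagnostics would learn baselines from the valve's own history.

```python
# Illustrative valve health checks: cycles toward a wear limit, and
# stroke time against a healthy baseline. Thresholds are assumptions.

CYCLE_LIMIT = 100_000      # assumed full-cycle wear-out count
BASELINE_STROKE_S = 2.0    # assumed healthy full-stroke time, seconds

def valve_alerts(cycle_count, last_stroke_s):
    alerts = []
    if cycle_count > 0.9 * CYCLE_LIMIT:
        alerts.append("approaching cycle wear limit")
    if last_stroke_s > 1.5 * BASELINE_STROKE_S:
        alerts.append("slow stroke: check air supply or packing friction")
    return alerts

# A well-worn, sluggish valve trips both checks.
print(valve_alerts(95_000, 3.4))
```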
Today’s field sensors have limited spare processing capacity for these types of analyses, hence the need to simplify the results to an algorithm. However, digital communications networks can pass raw signals along to another device or, for a short period, the controller for capture and subsequent analysis or processing.
Implementation of ML at the micro level isn't possible yet, but ML platforms for implementing AI are now available on single-board computers, which means it won't be too long before it can be done—and anyone will be able to use these small, low-cost processors to get started on implementing AI. More on that next month.