We all can be data analysts

June 7, 2018
Plants are finding ways to improve quality and productivity by simply automating the procedures of the past century and applying them to data collected from the same sensors used to control the process

Back in the 1980s, I was in charge of process definition and quality control in a factory making rare-earth magnets. It was a powder process involving milling, pressing, sintering and heat treatments, and it often did not go well. My job was to define specifications for powder size, pressing pressure, sintering cycle, etc., to maximize the amount of product that met specs.

The procedure was to guess what mattered, then use design of experiments (DOE) and a series of trials to find the sweet spots at which Production should set grain size, times, temperatures, etc. DOE is a powerful and relatively efficient way to explore multiple parameters—its Achilles’ heel is that it depends on all the uncontrolled variables being held constant.
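For illustration, here’s a minimal sketch of that kind of two-level, full-factorial DOE in modern form; the factors, levels and yield numbers are hypothetical stand-ins, not data from the magnet plant:

```python
# A two-level, full-factorial DOE sketch (all values are hypothetical).
from itertools import product

# Candidate process factors, each at a low and a high setting.
factors = {
    "grain_size_um": (3.0, 8.0),
    "press_psi": (8000, 12000),
    "sinter_temp_c": (1050, 1150),
}

# Build the 2^3 = 8 run matrix: every combination of low/high levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Suppose each run produced this percentage of in-spec product (made up).
yields = [62, 70, 58, 81, 64, 77, 55, 88]

# Main effect of each factor: mean yield at high minus mean yield at low.
for name, (low, high) in factors.items():
    hi = [y for run, y in zip(runs, yields) if run[name] == high]
    lo = [y for run, y in zip(runs, yields) if run[name] == low]
    print(f"{name}: effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f}")
```

The catch described above applies here, too: the effect estimates mean something only if everything not in the run matrix held still.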

So many uncontrolled variables. From unanalyzed minor composition variations and contaminants to technician skill level, ambient temperature and humidity, and whether a train might have shaken the building during a critical step, we were challenged to even know what really happened to our DOE samples. Then there was the simple labor of taking measurements, recording, plotting and analyzing the results, with its attendant possibilities for error: “Is this a breakthrough, or did someone just mislabel a sample? I guess we better repeat the experiment.”

Now we’re in the midst of a “digital revolution” in manufacturing, and the Industrial Internet of Things (IIoT) is connecting millions of sensors and providing Big Data. To get something from all that data, we’ll need a corps of Data Analysts, right?

Not necessarily. Plants are finding ways to improve quality and productivity simply by automating the procedures of the past century and applying them to data collected from the same sensors used to control the process, plus additional information on those uncontrolled variables (collected using IIoT technology) and maybe some information from business systems or the Web at large.

Instead of just a few DOE sample sets, readily available software can look at the natural (and unnatural) variations in everyday production; what’s going on in the environment (indoor/outdoor air temperature and humidity, line voltage, who’s in charge, train schedules, etc.); raw material lots; compositions; and more. It can analyze for correlations, express them with confidence levels, and present the information quickly and easily in charts and graphs you don’t have to draw yourself.
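As a sketch of what such a scan can look like, assume a hypothetical historian export, production_log.csv, with the column names shown; pandas and SciPy do the bookkeeping:

```python
# Scan everyday production data for correlations with a quality metric.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("production_log.csv")

quality = "pct_in_spec"
candidates = ["ambient_temp_f", "humidity_pct", "line_voltage", "mill_time_min"]

for col in candidates:
    valid = df[[col, quality]].dropna()       # ignore incomplete records
    r, p = pearsonr(valid[col], valid[quality])
    note = "  <- worth a look" if p < 0.05 and abs(r) > 0.5 else ""
    print(f"{col:>15}: r = {r:+.2f}, p = {p:.3f}{note}")
```

The p-value is the “confidence level” in raw form; commercial tools wrap the same idea in friendlier charts.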

At the ARC Industry Forum in February, analytics specialists from Falkonry, Seeq and others participated in multiple user presentations where collecting process control, plant and supply-chain information in real time led to solutions for persistent problems with unplanned downtime, quality and productivity. The examples I saw still had a significant human component—it takes people who understand the process to guide collection and analysis—so we’re not ready to toss a ton of data salad at an artificial intelligence and get magical results. But if a problem can be defined or an improvement targeted, the technology is impressively ready to help.

Next on this year’s event calendar is Rockwell Automation TechED. Last year, Rockwell Automation’s Phil Bush laid out the company’s plan to add Remote Monitoring and Analytics Services [Predictive services turn big data into reliability] to its offerings: “a scalable solution that has predictive capabilities, to help plants prevent failures and keep equipment running as long and continuously as possible,” he said.

The service works by connecting asset data and information in existing databases to analytics and engineering teams. The analytics focus on identifying data patterns and creating “agents” to recognize them in real time. “An agent is a little block of code that looks for a specific pattern in the data and recognizes it as an indication that something needs attention,” Bush said. For example, an agent could recognize that the pressure drop across a filter is increasing and trigger a work order.
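Bush’s filter example maps naturally onto a sketch like the one below; the window size, thresholds, units and work-order hook are my assumptions for illustration, not Rockwell’s implementation:

```python
# Sketch of an "agent": a small block of code watching for one pattern.
# This one flags a sustained rise in pressure drop across a filter.
from collections import deque

class FilterDPAgent:
    def __init__(self, window=60, limit_kpa=25.0, min_slope=0.05):
        self.readings = deque(maxlen=window)   # rolling window of dP samples
        self.limit_kpa = limit_kpa             # absolute dP alarm level
        self.min_slope = min_slope             # kPa per sample, sustained rise

    def update(self, dp_kpa):
        """Feed one reading per scan; returns a work order when triggered."""
        self.readings.append(dp_kpa)
        if len(self.readings) < self.readings.maxlen:
            return None  # not enough history yet
        slope = (self.readings[-1] - self.readings[0]) / len(self.readings)
        if self.readings[-1] > self.limit_kpa and slope > self.min_slope:
            # A real deployment would call the maintenance system's API here.
            return "WORK ORDER: filter dP rising; inspect/replace element"
        return None
```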

To create an agent, “We look at all the data and determine what is normal and what is not,” Bush said. “An abnormal pattern that we can associate with a failure is a failure agent.” An anomaly agent notifies on anything that’s abnormal.
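In the same hypothetical spirit, an anomaly agent can be as simple as “learn what normal looks like from history, flag everything else”; a z-score test stands in here for the neural-network and machine-learning methods mentioned below:

```python
# Sketch of an "anomaly agent": learn normal from history, flag the rest.
# A simple z-score threshold stands in for the service's actual methods.
import statistics

def make_anomaly_agent(history, z_limit=3.0):
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    def agent(value):
        # True means "something needs attention," in Bush's terms.
        return abs(value - mean) / std > z_limit
    return agent

# Example: train on a stretch of normal readings, then watch live values.
agent = make_anomaly_agent([20.1, 19.8, 20.3, 20.0, 19.9, 20.2])
print(agent(20.1), agent(27.5))  # False True
```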

The service is intended to work with any kind of connected asset. “It’s agnostic,” Bush said. “We’re not just talking about Rockwell Automation assets, but about any kind of asset where we can capture data and build on it using our neural network, machine learning and predictive algorithms.”

That was last June, when Bush said the plan was to roll out Remote Monitoring and Analytics Services over the following six months. We’ll be at TechED in San Diego, June 10-20, very interested to see how successful it’s been.