
Choose the right metrics to power improvement

Oct. 1, 2015
The right data and analytical tools can make invisible patterns visible at scale
About the author
Dave Perkon is technical editor for Control Design. He has worked with a wide variety of Fortune 500 companies in the medical, semiconductor, automotive, defense and solar industries.

"I've got to tell you, I have the coolest job in the world," began Ted Boyse, division chief for community radiology, Duke Medicine. As a radiologist, Boyse gets to use state-of-the-art imaging technologies to peer deep into the human body at work. But as fascinating as that work is, it's not what brought him to the GE Minds + Machines 2015 conference this week in San Francisco. "Instead, we are going to talk about the operational side," Boyse said. "And in that regard, I think we have a lot in common with industry."

Boyse discussed Duke Medicine's growing medical imaging operations and how metrics are used to manage equipment and staffing logistics for both computerized tomography (CT) scans and magnetic resonance imaging (MRI). "Our most expensive and valuable assets aren't the GE imaging equipment, but the doctors that interpret the images," Boyse said. Increasingly, Duke relies on data—rather than tradition—to make sure that its staff and assets are optimally deployed.

He cited one simple example in which analyzing demand data by urgency, time of day and day of week revealed the source of service delays; a few relatively simple staffing adjustments eliminated the backlog. And when Duke merged with a large regional practice, it added smart routing to optimize how physicians and content experts spend their time with patients. Data analysis also helped speed the slow turnaround of emergency room CT scans: understanding all the process steps and how long each took led to policy changes and to technology that monitors each step along the way.
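The demand breakdown Boyse describes is easy to sketch with ordinary analysis tools. The snippet below is a minimal illustration in Python, assuming a hypothetical log of scan requests with timestamp and urgency columns; it is not Duke's actual analysis, only the shape of it.

```python
import pandas as pd

# Hypothetical scan-request log; file and column names are illustrative only.
requests = pd.read_csv("scan_requests.csv", parse_dates=["requested_at"])

requests["hour"] = requests["requested_at"].dt.hour
requests["weekday"] = requests["requested_at"].dt.day_name()

# Demand by urgency, day of week and hour of day: the view that exposes
# where staffing lags behind incoming requests.
demand = (requests
          .groupby(["urgency", "weekday", "hour"])
          .size()
          .rename("request_count")
          .reset_index())

# Surface the busiest slots for urgent work so staffing can be shifted there.
print(demand[demand["urgency"] == "urgent"]
      .sort_values("request_count", ascending=False)
      .head(10))
```

A summary like this is often enough to justify the kind of simple staffing adjustments Boyse credits with eliminating the backlog.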

Boyse noted that even in operations, many healthcare organizations rely on prior training, past experience and intuition to make decisions. That approach works only in a predictable, stable environment, and the practice of healthcare today is neither. "Efficiency is the new priority," Boyse said. "To be cost effective, informed, creative and daring, we need to know the data."

Metrics help manage pipeline risk

Duke Medicine's Ted Boyse (left), together with Matt Fahnestock of Columbia Pipeline Group (center) and Deloitte's John Hagel III, discussed the importance of data analytics and metrics at this week's Minds + Machines 2015.

The importance of data collection was also discussed by Matt Fahnestock, senior vice president and chief information officer of Columbia Pipeline Group. Access to and analysis of data on its 15,000 miles of pipeline are what keep the business moving, Fahnestock said.

Two years ago, Columbia partnered with GE to develop data analysis and decision support tools to boost reliability, manage risk and prioritize operations of its pipelines.

"With our GE Integrated Pipeline Solution (IPS), the likelihood of failure and consequences of failure, using both static and dynamic factors, can be displayed, as can risk and leak analysis. Operating pressures, historical and active work orders, overall risk rating and how risk has changed over time are all important metrics."

Make the invisible visible

Stepping back from specific industry examples, John Hagel III, co-chair of Deloitte Consulting's Center for the Edge strategic research and analysis group, provided broader context for the need for data, analytics and appropriate metrics. "With intensifying competition, accelerating pace of change and extreme events, it's not a choice. If we don't apply this technology, we will be marginalized," he said.

Much of operations' time and effort goes to managing exceptions to the routine, such as pipeline leaks or missing inventory, Hagel said. "The analytical ability to predict when an exception is going to happen and to tell people what to do beforehand can lead to performance improvements. Finding these patterns of exceptions in the data will reduce them."
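One simple way to surface such patterns before an exception becomes a failure is to baseline a process signal and flag early deviations. The sketch below assumes a hypothetical stream of pipeline pressure readings and a basic rolling-statistics rule; a production system would use richer predictive models, but the principle of anticipating exceptions from the data is the same.

```python
import pandas as pd

# Hypothetical time series of pipeline pressure readings.
readings = pd.read_csv("pressure_readings.csv",
                       parse_dates=["timestamp"]).set_index("timestamp")

# Rolling baseline: mean and standard deviation over the previous 24 hours.
baseline = readings["pressure"].rolling("24h").mean()
spread = readings["pressure"].rolling("24h").std()

# Flag readings that drift more than three standard deviations from the
# baseline: candidate "exceptions" worth investigating before they escalate.
readings["exception"] = (readings["pressure"] - baseline).abs() > 3 * spread

print(readings[readings["exception"]].tail())
```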

Hagel also noted that seeing and understanding the frequency of exceptions helps organizations redesign their operations to reduce how often exceptions occur. "With the rich diversity of technology, one can target the driving metrics and find the pain points. Then, at an operational level, determine why these exceptions are happening. Regardless of the reason, find the target and make the invisible visible, anticipate and then redesign around the breakdowns focusing on what really matters."