Paul Whittaker is from Freeport-McMoRan Copper & Gold Inc. (after the merger with Phelps Dodge a couple of months ago). His paper won an award for best project at the awards dinner on Tuesday night. He began with a famous quote: Albert Einstein said that insanity is doing the same thing over and over again and expecting different results. So what do you call not knowing what you are doing and doing it over and over again? That's why the title of this talk is "Beyond Insanity."

The problem was inconsistent capture and reporting of circuit downtime events, difficulty performing valid cross-site comparisons of results and drilling down to influencing details, and a time-consuming inability to see analytical detail. We had robust CMMS processes within our ERP system, but things became less consistent and less clear when a maintenance event turned into a production event. How this was handled varied by site: spreadsheets, custom programming, and so forth. What we wanted to do was provide a link back to the initiating field event so that we could improve our discrete event resolution. Within the ERP, we know when the work order starts and ends, but does that coincide with the time the pump shut down and went back on line? What the ERP did not capture was the downtime component of the process. We had no idea what our effective downtimes were: times when the circuit is running but performing below target efficiency, so you still don't meet your production goals.

Our solution was to automate the creation of downtime events based on documented trigger conditions, plus a browser-based application for classification by the control room operator using a standardized methodology. We didn't want to overlap our existing functionality, either. Whittaker described in detail what a "downtime event" actually consists of, and how much data it takes from disparate sources to accurately describe the entire event, start to finish. You need to know WHO, WHERE, WHAT, HOW and WHY the event occurred and was cleared.
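Automating the creation of downtime events from documented trigger conditions amounts to turning a continuous process signal into discrete events. A minimal sketch of that idea follows; the threshold and debounce count are hypothetical illustrations, not Freeport-McMoRan's actual trigger configuration:

```python
def detect_downtime_events(samples, threshold=5.0, debounce=3):
    """Convert a continuous signal, given as a list of (timestamp, value)
    pairs, into discrete downtime events [(start, end), ...].

    A downtime event opens once the trigger condition (value below
    threshold, e.g. pump motor amps near zero) has held for `debounce`
    consecutive samples, and closes when the condition clears.
    """
    events = []
    run_start = None   # timestamp where the trigger condition first held
    run_len = 0        # consecutive samples matching the trigger
    open_start = None  # start of the currently open downtime event
    for ts, value in samples:
        if value < threshold:
            if run_len == 0:
                run_start = ts
            run_len += 1
            if open_start is None and run_len >= debounce:
                # Condition held long enough: open an event, backdated
                # to the first sample of the triggering run.
                open_start = run_start
        else:
            run_len = 0
            if open_start is not None:
                events.append((open_start, ts))  # event cleared
                open_start = None
    if open_start is not None:
        events.append((open_start, None))  # still down at end of data
    return events
```

The debounce requirement keeps short sensor blips from opening spurious events, while backdating the start to the first triggering sample keeps the recorded event aligned with the initiating field event rather than with when the software noticed it.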
He showed an interesting "process time model" that showed calculated time availability vs. utilized time and downtime. In order for benefits to be realized, however, Maintenance and Operations must use these analytical tools to review and develop activity planning. At Freeport-McMoRan, Maintenance and Ops perform this activity weekly. One of the tools with the biggest "wow factor" was the "tree map." Using the Pareto chart tool, we can determine which events we have control over, and concentrate on those. You can't really affect some things, like ore hardness, he noted.

There were other canned reports in the Matrikon package that were good, but they wanted more. They created an Enterprise Data Warehouse. Data from ProcessMORe is sourced to the EDW and available in Business Objects. Reports merging all sites/circuits have been developed and posted to InfoView. We've created KPI reports that use the ISA-95 compliant hierarchy of Enterprise-->Site-->Area-->Work Center-->Work Unit. That way we can see which work unit we have to go work on to make an impact on that higher-level KPI.

We did a pilot project with ProcessMORe and we found that there were things we needed to do. "We are currently allowing a dedicated two weeks per site to accomplish control room change management," he said. The Operations/Maintenance planning process is adopting the tool, and all sites have existing meetings to facilitate this adoption. Site departmental leadership is supporting consistent and complete coding of events, and this is identical to existing maintenance processes. We are converting continuous process data to discrete event data. Categorization uses a standard methodology and leverages existing systems of record (the ERP's Equipment Register and Maintenance Activities), giving a standardized method of capturing data with standard analysis tools. The implementation of ProcessMORe clearly produced value by making data into information.
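The drill-down Whittaker describes, from a high-level KPI down to the work unit that can move it, falls out naturally once events are keyed on the ISA-95 hierarchy path. A small sketch with hypothetical site and equipment names (the data and numbers are illustrative only):

```python
from collections import defaultdict

# Hypothetical coded downtime events:
# (site, area, work_center, work_unit, downtime_hours)
events = [
    ("SiteA", "Mill", "Grinding", "SAG-1", 12.0),
    ("SiteA", "Mill", "Grinding", "Ball-2", 3.5),
    ("SiteA", "Mill", "Flotation", "Cell-7", 1.0),
    ("SiteB", "Mill", "Grinding", "SAG-1", 6.0),
]

def rollup(events, depth):
    """Total downtime hours keyed by the first `depth` levels of the
    Site -> Area -> Work Center -> Work Unit hierarchy."""
    totals = defaultdict(float)
    for *path, hours in events:
        totals[tuple(path[:depth])] += hours
    return dict(totals)

# Roll up to site level, then drill into the worst performer's work units.
by_site = rollup(events, 1)
worst_site = max(by_site, key=by_site.get)
by_unit = {k: v for k, v in rollup(events, 4).items() if k[:1] == worst_site}
```

Because every event carries its full hierarchy path, the same aggregation works at any level, which is what lets a site-level KPI point straight at the work unit that is dragging it down.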
There is a statistically significant difference in downtime between the period before the ProcessMORe implementation and the period after.
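As a sketch of what backing such a before/after claim involves, here is Welch's t-test computed from scratch on hypothetical weekly downtime figures; the numbers are illustrative only, not data from the talk:

```python
import math
from statistics import mean, variance

# Hypothetical weekly circuit downtime hours before and after the rollout.
before = [42.0, 38.5, 45.2, 40.1, 44.0, 39.7]
after = [31.2, 29.8, 34.5, 30.0, 32.1, 28.9]

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom,
    appropriate when the two samples may have unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

t, df = welch_t(before, after)
# With these illustrative numbers, |t| lands well above the usual
# two-sided critical value (about 2.26 at nine degrees of freedom),
# i.e. the before/after difference would be statistically significant.
```

In practice one would compare the statistic against the t distribution (or use a library routine) to get a p-value, but the point is simply that "statistically significant" means the before/after gap is large relative to week-to-week variation.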