- Competitive bids difficult to compare due to different proposed analytical technology, in-house and third-party capabilities, grouping of line items in proposals/bids, etc.
- Parallel analyzer system integration suppliers don’t coordinate effectively
- Integrated analyzer system and associated purchased items don’t pass factory acceptance test (FAT) the first time (and possibly not the second time)
- Supplier delays because of back ordering, internal reorganizations, personnel changes, etc.
Construction and Constructability
- Overlooking required safety features until construction begins
- Construction delays caused by operations or maintenance that could have been anticipated during scope development
- Awkward accessibility to construction site due to other work in the area, lifting near hazardous piping, lack of elevator or hoist for heavy capital items and construction tools
- Changing on-site construction contractors during construction
- Failure to note the need for special construction safety precautions during scope development and estimating
- Multiple personnel changes of engineers, designers, and other key project team members through project construction, start-up, and commissioning
These lists aren’t exhaustive. The financial impact of each item varies by project, company, and analyzer engineer, so it wouldn’t be helpful to categorize their respective impacts on safety, cost, schedule, and quality. Looking at the cost impact of a given analyzer project’s problems by size, the important point is that in each case (small, medium, or large) there’s a considerable cost premium for discovering and correcting the problem late, during stages 3 and 4, instead of catching it early, during stages 1 and 2. Just one each of the small-, medium-, and large-impact items costs an additional $23,200 over the lifecycle of the project, and this doesn’t include the costs associated with their unpredicted occurrence during stage 4.
In addition, another $10,000 in capital costs can be depreciated at $1,000/year over a project’s 10-year lifetime. That $10,000 is justified if it prevents a $100-per-month service action, or $12,000 over the same 10-year lifecycle. You can perform a similar analysis that includes interest, inflation, cash flow, tax consequences, etc. The point is that “a penny saved now is not necessarily a penny earned later.”
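The break-even arithmetic above can be sketched in a few lines. This is only an illustration using the figures quoted in the text ($10,000 of extra capital versus a $100/month service action over a 10-year life); it ignores interest, inflation, and taxes, as the text notes a fuller analysis would include.

```python
# Break-even sketch using the article's example figures (not general rules).
capital_cost = 10_000          # extra up-front capital ($)
project_years = 10             # project lifetime (years)

annual_depreciation = capital_cost / project_years  # straight-line: $1,000/yr

monthly_service_cost = 100     # recurring service action avoided ($/month)
lifecycle_service_cost = monthly_service_cost * 12 * project_years  # $12,000

# Net lifecycle benefit of spending the capital now (undiscounted).
net_benefit = lifecycle_service_cost - capital_cost  # $2,000

print(f"Depreciation:            ${annual_depreciation:,.0f}/yr")
print(f"Avoided service cost:    ${lifecycle_service_cost:,.0f} over life")
print(f"Net lifecycle benefit:   ${net_benefit:,.0f}")
```

The same structure extends naturally to a discounted (net-present-value) comparison once an interest rate is chosen.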
The issue isn’t to ignore or minimize the legitimacy of these concerns. Instead, the task is to know, understand, and consider them for inclusion in the analyzer project scope as early as possible, and to decide how best to include them or to address them outside the project’s scope. Obviously, the correct metallurgy for an analyzer system must be in scope, while incidental work not required for the analyzer system can often be completed better, and at lower cost, outside the analyzer project’s scope. Nor is it practical to avoid all such oversights: sometimes a decision must be made for safety or financial reasons to change construction contractors, and sometimes it’s necessary to work around unscheduled shutdowns and turnarounds for process safety or economic reasons. However, many oversights can be avoided during stages 1 and 2 if the right questions are asked of the right personnel at the right time. It’s even better if the need for the information is recognized and the information volunteered early in the project lifecycle.
Historically, reliability was associated more with maintenance than with project engineering and management, as in the foregoing discussion. This notion is excusable because 1) an article or device that fails is judged to be unreliable and requires repair or replacement, which are the customary functions of maintenance, and 2) a large body of practical input to improving the reliability of devices and systems comes from organizations that perform maintenance. More formal references to “reliability” often refer to its esoteric tools, such as mean time between failures (MTBF), mean time to repair (MTTR), design of experiments (DOE), redundancy, and failure mode, effects, and criticality analysis (FMECA). These are only tools that can be used to measure the effectiveness of true reliability, which is lifecycle costing.
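To make the metrics named above concrete, here is a minimal sketch of computing MTBF, MTTR, and inherent availability from a failure log. The operating and repair intervals are invented for illustration; real figures would come from an analyzer’s maintenance history.

```python
# Hypothetical failure history for one analyzer (hours). Not real data.
uptimes_hours = [2100.0, 1850.0, 2400.0]  # operating intervals between failures
repair_hours = [6.0, 10.0, 8.0]           # time to restore after each failure

# Mean time between failures and mean time to repair.
mtbf = sum(uptimes_hours) / len(uptimes_hours)
mttr = sum(repair_hours) / len(repair_hours)

# Inherent (steady-state) availability: fraction of time the analyzer is up.
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.1f} h, availability = {availability:.4f}")
```

As the text argues, numbers like these are measurements, not ends in themselves; their value is as inputs to lifecycle costing.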
Unfortunately, reliability is sometimes substituted for “maintenance” just to achieve a favorable image, rather than to make real improvements in the reliability of installations such as analyzer systems. Fortunately, reliability is coming to refer to lifetime safety, quality, and costs as a means to safely and legitimately defer repair and replacement, rather than as post-failure data collection and analysis, or as a meaningless, dressy name.
- Dovich, Robert, and Bill Wortman, "The Certified Reliability Engineer Primer, Third Edition," The Quality Council of Indiana, West Terre Haute, IN, USA, 2002, pg. II-7.
- Ibid., pp. II-18 to II-20.
- Smith, Ricky, "Everybody's Got an Excuse: The Top Five Reasons Why Companies Don't Measure Reliability," Plant Services magazine, December 2005, pg. 54.
- Peterson, S. Bradley, "The Future of Asset Management," CONTROL magazine, November 2004, pg. 34.
- Clambaneva, Stephen, "Transforming Your Business With Asset Lifecycle Management," Plant Engineering magazine, August 2004, pg. 29.
- Merritt, Rich, "Back From the Grave: Concurrent Engineering Died in the '90s, Is Its Resurrection at Hand?" CONTROL magazine, October 2004, pg. 30.