Ensuring your PAC-based control system is an integrated, robust and flexible information producer helps improve business performance, lower costs and uncover unique opportunities for competitiveness.
All companies seek ways to make their businesses grow for the long term. Ask any manufacturer what he or she needs in today's increasingly challenging economy, and the answer is likely to include cutting costs, improving yield, increasing functionality and becoming more competitive in the global marketplace.
Manufacturing convergence helps companies meet these business drivers - globalization, innovation, productivity and sustainability - by more closely aligning manufacturing technologies and production system operations with the rest of the enterprise. This convergence is enabled throughout the manufacturing environment with the technologies of convergence - control, power, information and communication.
03/24/2010
The field instrumentation in process plants is beginning to come under more sophisticated metrological discipline. Most new field instruments are now smart digital instruments. One popular digital protocol is the HART (Highway Addressable Remote Transducer) protocol, which shares characteristics of both analog and digital control systems.
This white paper discusses the maintenance and calibration of HART field instruments. To properly service these instruments, precision analog source/measure capability and digital communication are both required. In the past, this work required two separate tools: a calibrator and a communicator. Now these capabilities are available in one HART documenting process calibrator. Download this white paper to learn more.
03/23/2010
Predicting Control Valve Noise in Gas and Steam Applications: Valve Trim Exit Velocity Head vs. Valve Outlet Mach Number
Predicting and managing control valve noise has long been an important consideration in gas and steam applications, with the dual goals of protecting workers from potential auditory damage and preventing excessive vibration that could destroy equipment and piping, possibly leading to a catastrophic failure.
At first glance, it may seem that a logical way to achieve these goals would be to limit valve trim exit velocity head to a maximum of 480 kilopascals (kPa), and this indeed is how some have addressed the issue. In practical application, however, it is an oversimplified approach that, in many cases, will not produce the desired results. First, it typically requires the use of expensive multi-stage or multi-turn trim designs, which can cost up to 30 percent more than a simpler solution. More importantly, it also can create a false sense of safety.
This article will explain why the focus should instead be on keeping the valve outlet Mach number low. Practical examples will be used to illustrate that:
- Even if the trim exit velocity head is kept below 480 kPa, valve noise can be unacceptably high if the valve outlet Mach number is high.
- Even if the trim exit velocity head is above 480 kPa, valve noise can be kept to acceptable levels - without using costly trim designs - if the valve outlet Mach number is kept low.
03/18/2010
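For background, the outlet Mach number is simply the mean outlet velocity divided by the local sonic velocity. The following is a minimal sketch, assuming an ideal-gas sonic speed and hypothetical operating values; real valve sizing and noise prediction follow the IEC 60534 standards:

```python
import math

def outlet_mach_number(q, d_outlet, gamma, molar_mass, temp_k):
    """Mean outlet velocity divided by sonic velocity.

    q: volumetric flow at outlet conditions (m^3/s)
    d_outlet: valve outlet bore (m)
    gamma: specific heat ratio of the gas
    molar_mass: kg/mol; temp_k: outlet temperature (K)
    Sonic speed from the ideal-gas relation c = sqrt(gamma * R * T / M).
    """
    R = 8.314  # universal gas constant, J/(mol*K)
    area = math.pi * (d_outlet / 2.0) ** 2
    velocity = q / area                              # mean outlet velocity, m/s
    c = math.sqrt(gamma * R * temp_k / molar_mass)   # sonic velocity, m/s
    return velocity / c

# Hypothetical steam-like case: 2 m^3/s through a 0.102 m (4 in) outlet at 450 K
mach = outlet_mach_number(q=2.0, d_outlet=0.102, gamma=1.3,
                          molar_mass=0.018, temp_k=450.0)
```

Note that the same valve trim can yield very different outlet Mach numbers depending on outlet bore and downstream pressure, which is why the article argues the outlet, not the trim, deserves the attention.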
Using video data to improve both safety and ROI.
Most companies are gathering trillions of bytes of data, day after day, at no small cost, and then doing very little with it. Worse still, the data often is not serving its primary function very cost-effectively.
The "culprit," so to speak, is video surveillance data, the information captured by the video cameras that are used throughout most modern facilities.
But the situation is changing rapidly, thanks to an application called Video Analytics. This white paper looks at the new software technology, and how it can be used to leverage video data for better security and business performance.
03/05/2010
This white paper argues strongly that efforts to meet the greenhouse gas emissions targets set within the Kyoto Protocol will fail unless Active Energy Efficiency becomes compulsory.
Active Energy Efficiency is defined as effecting permanent change through measurement, monitoring and control of energy usage. Passive energy efficiency is regarded as the installation of countermeasures against thermal losses, the use of low consumption equipment and so forth.
It is vital, but insufficient, to make use of energy-saving equipment and devices such as low-energy lighting. Without proper control, these measures often merely mitigate energy losses rather than making a real reduction in the energy consumed and in the way it is used.
Everything that consumes power - from direct electricity consumption through lighting, heating and most significantly electric motors, but also in HVAC control, boiler control and so forth - must be addressed actively if sustained gains are to be made. This includes changing the culture and mindsets of groups of individuals, resulting in behavioral shifts at work and at home, but clearly, this need is reduced by greater use of technical controls.
03/05/2010
Meeting the next great disruptive challenge of the 21st century.
Since the Industrial Revolution our society has been driven by an increasing pace of change in business and technology. Every decade or two we have faced a new and disruptive event that challenges business and creates opportunities: the locomotive, the electric light, the automobile, the airplane, the television and the computer, to name a few.
But the greatest disruptive event of the next 20 years may come not from a single invention but from the world around us: climate change.
How your business responds to the climate challenge can either differentiate you from the competition and launch new and successful products, or make you the focus of consumer backlash and eroding margins.
This paper will explore the environment as a disruptive force in business, examine the consequences of inaction, and propose the benefits of a proactive environmental policy. It will describe increasing levels of investment that a small company, an enterprise or an industry can make to address the challenge and develop a business case. The paper ends with a concrete roadmap to lead you from today's "business as usual" to a long-term sustainable approach to growing a Green corporation.
After reading this paper, business leaders in every industry will have an understanding of how the environment will impact their business, how to make changes to mitigate the negative impacts and how to explore business opportunities in this new and exciting sustainable world.
03/05/2010
As production runs ever closer to equipment and facility operating limits and new plants come on line in expanding and developing economies, the pressure to design and operate systems more safely and economically is increasing. A key to meeting this goal is having competent people who are knowledgeable and experienced in applying the IEC 61508 and IEC 61511 / ISA 84 functional safety standards. To develop and measure an individual's safety engineering competence, several personnel functional safety certification programs have been created. This paper will discuss why these programs are needed and the benefits they deliver to individuals and companies alike. It will also review the characteristics and differences of the various certification programs on the market today, things to watch out for, and some important questions to ask when selecting a certification program.
03/05/2010
Moore Industries believes it is of vital importance to have third-party SIS evaluation for plant safety provided by a company with global coverage and reputation. Earlier designs for process control and safety systems typically used "good engineering practices and experience" as their guidelines. As safety awareness grew, new standards evolved. International standards such as IEC 61508/61511 and U.S.-born standards like ANSI/ISA84 require the use of more sophisticated guidelines for implementing safety. Unfortunately for manufacturers, compliance with the IEC 61508 standards requires extensive documentation. In addition, more complex products require a greater depth of analysis. Software-based products such as those from Moore Industries, with their inherent programmable and flexible features, are complex, unlike previous-generation single-function analog circuits.
Some companies are actively attempting to bypass vital third-party certification by proclaiming self-certification to IEC 61508. This is not in the best interest of end users or the safety industry in general. Self-certification is analogous to claiming a hazardous-area approval (such as intrinsic safety) without third-party testing.
Moore Industries has been working for many years with customers who require products for safety systems, including those compliant with worldwide safety standards such as ANSI/ISA 84 and IEC 61508/61511. To assist customers in determining if their instruments are appropriate for specific safety systems, Moore Industries has been providing Failure Modes, Effects and Diagnostic Analysis (FMEDA) reports for key products, and has been involved in the evolution of the IEC 61508 standard. As this standard became more widely recognized and adopted by customers worldwide, it was clear that end users were looking for products that had been designed to IEC 61508 from their initial concept. Customers are demanding not only compliance with the standards but verification from an independent third-party agency such as TÜV Rheinland.
03/03/2010
Today's control system engineers face competing design demands: increase embedded system performance and functionality, without sacrificing quality or breaking the budget. It is difficult to meet these challenges using traditional design and verification approaches.
Without simulation it is impossible to verify a control design until late in the development process, when hardware prototypes become available. This is not an insurmountable problem for simpler designs with predictable system behavior, because there are fewer sources of error in simpler control algorithms, and those errors can often be resolved by tuning the controller on the hardware prototype.
Today's multidomain designs combine mechanical, electrical, hydraulic, control, and embedded software components. For these systems, it is no longer practical to delay verification until late in the development process. As system complexity grows, the potential for errors and suboptimal designs increases. These problems are easiest to address when they are identified early in the development process. When design problems are discovered late, they are often expensive to correct and require time-consuming hardware fixes. In some cases the hardware simply cannot be changed late in the development process, resulting in a product that fails to meet its original specifications.
Traditional verification methods are also inadequate for testing all corner cases in a design. For some control applications, it is impractical or unsafe to test the full operating envelope of the system on hardware.
03/02/2010
Enterprises with industrial operations typically utilize at least two types of computer networks: Information Technology (IT) - a network that supports enterprise information system functions like finance, HR, order entry, planning, email and document creation; and Operational Technology (OT) - a network that controls operations in real time. This second type of network supports real-time or control system products, generally referred to as Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), Energy Management Systems (EMS) or Manufacturing Execution Systems (MES), depending on the industry.
There has been much discussion and debate around the convergence between Information Technology (IT) and Operational Technology (OT). In an effort to provide better visibility and information flow between revenue-generating OT assets and enterprise applications, these systems have often been interconnected, in many cases without properly securing the control systems from cyber attack first. If the IT and OT networks are interconnected, yet not properly secured, a breach to one network can easily traverse to the other, leaving the entire computing infrastructure at risk.
At first glance, interconnected IT and OT networks appear to share similar technologies, and so a common approach to cyber-security might seem indicated. Deeper inspection, however, reveals many important differences between IT and OT networks. The unique characteristics of OT systems and networks preclude many traditional IT enterprise security products from operating safely without impairing operations; when introduced, they can cause significant disruption and downtime to these real-time, revenue-generating assets.
This paper is intended to educate IT professionals on the unique requirements of operational technology and what is required to properly secure these networks from cyber attack, so that organizations can ensure the security, reliability and safety of information and revenue-generating assets.
02/26/2010
Whitelisting is described by its advocates as "the next great thing" that will displace anti-virus technologies as the host intrusion prevention technology of choice. Anti-virus has a checkered history in operations networks and control systems; many people have horror stories of how they installed anti-virus and so impaired their test system that they simply couldn't trust deploying it in production.
While anti-virus systems detect "bad" files that match signatures of known malware, whitelisting technologies identify "good" executables on a host and refuse to execute unauthorized or modified executables, presumably because such executables may contain malware. This is a least privilege approach of denying everything that is not specifically approved.
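The deny-by-default idea can be illustrated with a minimal hash-based sketch. The function names here are hypothetical, and commercial whitelisting products enforce this at the operating-system kernel level rather than in application code:

```python
import hashlib

def file_sha256(path):
    """Hash a file's contents; the hash changes if the executable is modified."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authorized(path, whitelist):
    """Least privilege: only executables whose hash appears on the
    whitelist may run; anything unknown or modified is refused."""
    return file_sha256(path) in whitelist
```

A modified binary produces a different hash and is rejected even if its filename is unchanged, which is why whitelisting catches tampering that signature-based anti-virus may miss.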
In this paper the Industrial Defender team performs an independent analysis of a variety of whitelisting solutions for their applicability to control systems. The paper closes with some recommendations related to this technology and areas for further research.
02/26/2010
Training the Field Operator of the Future
Simulators are widely recognized as essential to process control training as they facilitate the propagation of a company's standard operating procedures (SOPs). This paper explores the use of process control simulators by Chevron Products Company to challenge existing corporate SOPs and to help achieve improvements in overall production performance.
Simulation software has proven highly valuable to modern computer-driven businesses. The growth of Computer-Aided Design technologies in the 1960s enabled engineering and architectural firms to quickly explore new products and novel approaches. The impact was a dramatic reduction in the time and cost associated with then-current best practices for product innovation and design. Computers became more affordable in the 1990s and software became more powerful. This facilitated widespread acceptance of simulation tools within educational spheres, particularly within universities. Simulators allow an instructional designer to construct realistic tasks or situations that elicit the behaviors a learner needs to function effectively within a domain (Mislevy, 2002). Simulation tools have been used as a means of exposing students to complex concepts and have inspired higher level learning activities including novel research. Through the use of two- and three-dimensional models, the theoretical was more easily examined and the proven more readily understood. Similarly, simulation models can be used for individual or team-based problem solving. In their research, Mislevy, Steinberg, Breyer, Almond, and Johnson (2002) describe the importance of capturing data from a simulator that directly relates to real-world performance and production. This helps instructors to connect the student's interactive simulation experiences with known best practices for advanced learning.
02/23/2010
The manner in which a measured process variable responds over time to changes in the controller output signal is fundamental to the design and tuning of a PID controller. The best way to learn about the dynamic behavior of a process is to perform experiments, commonly referred to as "bump tests." Critical to success is that the process data generated by the bump test be descriptive of actual process behavior. Discussed are the qualities required for "good" dynamic data and methods for modeling the dynamic data for controller design. Parameters from the dynamic model are not only used in correlations to compute tuning values, but also provide insight into controller design parameters such as loop sample time and whether dead time presents a performance challenge. It is becoming increasingly common for dynamic studies to be performed with the controller in automatic (closed loop). For closed loop studies, the dynamic data is generated by bumping the set point. The method for using closed loop data is illustrated. Concepts in this work are illustrated using a level control simulation.
02/23/2010
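As an illustration of how fitted model parameters feed tuning correlations, here is a minimal sketch assuming a first-order-plus-dead-time (FOPDT) model and the widely published IMC PI correlations; the parameter values are hypothetical, and the paper's own correlations may differ:

```python
def imc_pi_tuning(Kp, tau, theta, tau_c):
    """PI tuning values from FOPDT parameters fitted to bump-test data.

    Kp: process gain, tau: time constant, theta: dead time (all from the fit).
    tau_c: desired closed-loop time constant, the tuning "speed dial".
    IMC correlations: Kc = tau / (Kp * (tau_c + theta)), Ti = tau.
    A theta/tau ratio above 1 flags a dead-time-dominant (harder) loop.
    """
    Kc = tau / (Kp * (tau_c + theta))
    Ti = tau
    return Kc, Ti

# Hypothetical bump-test fit: Kp = 2.0 %/%, tau = 10 min, theta = 2 min
Kc, Ti = imc_pi_tuning(Kp=2.0, tau=10.0, theta=2.0, tau_c=3.0)
```

The fitted time constant also guides the loop sample time mentioned above; a common rule of thumb is sampling at one-tenth of the process time constant or faster.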
Real-time performance monitoring to identify poorly or under-performing loops has become an integral part of preventive maintenance. Rising energy costs and increasing demand for improved product quality are among the driving forces. Automatic process control solutions that incorporate real-time monitoring and performance analysis are fulfilling this market need. While many software solutions display performance metrics, it is important to understand the purpose and limitations of the various performance assessment techniques, since each metric conveys very specific information about the nature of the process.
This paper reviews performance measures from simple statistics to complicated model-based performance criteria. By understanding the underlying concepts of the various techniques, readers will gain an understanding of the proper use of performance criteria. Basic algorithms for computing performance measures are presented using example data sets. An evaluation of techniques with tips and suggestions provides readers with guidance for interpreting the results.
Over the past two decades, process control performance monitoring software has become an important tool in the control engineer's toolbox. Still, the number of performance tests and statistics that can be calculated for any given control loop can be overwhelming. The problem with controller performance monitoring is not the lack of techniques and methods. Rather, the problem is the lack of guidance as to how to turn statistics into meaningful and actionable information that can be applied to improve performance.
The performance analysis techniques discussed in this paper are separated into three sections. The first section details methods for identifying process characteristics using batches of existing data. The second section outlines methods used for real-time or dynamic analysis of streaming process data. These are vital techniques for the timely identification and interpretation of changing process behavior and deteriorating loop performance. The third section outlines techniques that aid in the identification of interacting control loops.02/23/2010
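At the "simple statistics" end of the spectrum described above, a batch analysis of historical loop data might look like the following sketch; the metric names and the tolerance-band approach are illustrative assumptions, not the paper's specific algorithms:

```python
from statistics import mean, pstdev

def loop_error_stats(sp, pv, tol):
    """Batch statistics on control error e = SP - PV.

    Returns the loop's bias (mean error), variability (population
    standard deviation of error), and the fraction of samples whose
    error falls outside a +/- tol band around set point.
    """
    errors = [s - p for s, p in zip(sp, pv)]
    out_of_band = sum(1 for e in errors if abs(e) > tol) / len(errors)
    return {"bias": mean(errors),
            "stdev": pstdev(errors),
            "out_of_band": out_of_band}

# Example data set: constant set point, PV oscillating around it
stats = loop_error_stats(sp=[10, 10, 10, 10], pv=[9, 10, 11, 10], tol=0.5)
```

Each metric answers a different question: a nonzero bias suggests a sticking valve or offset, high variability points to poor tuning or disturbances, and the out-of-band fraction ties performance to a quality specification, which is exactly why no single statistic suffices.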
Two of the most popular architectures for improving regulatory performance and increasing profitability are 1) cascade control and 2) feed forward with feedback trim. Both architectures trade off additional complexity in the form of instrumentation and engineering time for a controller better able to reject the impact of disturbances on the measured process variable. These architectures neither benefit nor detract from set point tracking performance. This paper compares and contrasts the two architectures and links the benefits of improved disturbance rejection to reduced energy costs, improved product quality and reduced equipment wear. A comparative example is presented using data from a jacketed reactor process.
The cost per barrel of crude oil has risen dramatically, increasing the burden on process facilities for both quality and profitable production. Adjusted for inflation, the cost of oil averaged $19.61 from 1945 through 2003. October 2004 saw the per-barrel cost of oil rise to $55.67, a 70% increase over a 10-month timeframe that negatively impacted the profitability of companies across the process industries. According to the U.S. Department of Energy, 43% of all energy consumed by the average pulp and paper mill is production related. This percentage is small when compared to other industry segments such as chemicals (74%), glass (89%), and aluminum (93%). In all cases, the higher cost of energy suggests that all process companies need to examine ways of curbing energy consumption and unnecessary increases to their cost of goods sold. Improving disturbance rejection through cascade control or feed forward with feedback trim provides one way of achieving those objectives.
Improved disturbance rejection is linked to increased product quality and decreased equipment wear. These are important benefits, indeed. Consider the market value of high-quality white paper produced by an average mill. On-spec production is sold at a premium of approximately $2,000 per ton whereas "seconds" are sold on the aftermarket at a discounted rate. Of the 6%-8% that fails to meet spec, only 2% is classified as "broke" and able to be re-pulped. Next, consider the investment in production facilities. With initial costs of $400-$500 million and annual maintenance budgets approaching 10%, mills must operate 24 x 7 in order to recoup the investment. Effective disturbance rejection provides a valuable means of achieving a return on those investments through increased quality and decreased equipment wear. Additionally, it offers significant value in terms of reduced energy consumption and lower cost of goods sold.
02/23/2010
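The feed forward with feedback trim architecture can be sketched as follows. This is a minimal illustration, not the paper's implementation: the PI feedback supplies the "trim," a static feedforward term counteracts the measured disturbance, and all gains (Kc, Ti, Kff) are hypothetical values that would come from process testing:

```python
class FeedforwardWithTrim:
    """Feed forward with feedback trim: PI feedback plus a static
    feedforward term computed from a measured disturbance variable."""

    def __init__(self, Kc, Ti, Kff, dt):
        self.Kc, self.Ti, self.Kff, self.dt = Kc, Ti, Kff, dt
        self.integral = 0.0  # running integral of error for the PI trim

    def update(self, sp, pv, disturbance):
        e = sp - pv
        self.integral += e * self.dt
        feedback = self.Kc * (e + self.integral / self.Ti)  # PI trim
        feedforward = -self.Kff * disturbance  # counteract measured disturbance
        return feedback + feedforward

# Hypothetical jacketed-reactor-style loop: one update step
ctl = FeedforwardWithTrim(Kc=1.0, Ti=10.0, Kff=0.5, dt=1.0)
out = ctl.update(sp=50.0, pv=48.0, disturbance=4.0)
```

The appeal of the architecture is visible in the last line of `update`: the feedforward term begins compensating the moment the disturbance is measured, before it ever shows up as error in the process variable.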
Is your company's electrical energy usage important to you? Whether still feeling the results of the recession or looking forward to competing as the global marketplace moves ahead, businesses are looking for ways to cut costs and increase revenues.
Trends in energy show utility companies raising rates and introducing more tiered rate structures that penalize high-energy consumers. And with all the talk about carbon footprints and cap and trade, energy becomes an important place to look for both savings and revenues.
So perhaps you've been formally tasked with improving energy efficiency for your company. Or maybe you've heard about the "Smart Grid" and are wondering how it will (or won't) impact your business. Perhaps you want to understand your corporate carbon footprint before regulatory pressures increase. Maybe you're a business owner or financial officer who needs to cut fixed costs. All of these and more are good reasons for finding out more about how you use electrical energy.
And you're not alone. A March 2009 article in the New York Times noted an increasing trend among large corporations to hire a Chief Sustainability Officer (CSO). SAP, DuPont and Flowserve are just a few of the companies mentioned that already have CSOs. These C-level officers are usually responsible for saving energy, reducing carbon footprints, and developing "greener" products and processes.
While CSOs in large corporations may have a staff of engineers and a chunk of the marketing or production budget to help them find energy solutions, small and medium-sized industrial and commercial businesses usually take on this challenge as an additional job for their already overloaded technical or facilities staff.
This white paper takes a look at electrical power in the United States today, investigates the nature of the Smart Grid, and suggests ways that small and medium-sized companies can, without waiting for future technological development, gather energy data and control electrical energy costs today.
02/23/2010
This paper presents a simple velocity control algorithm with output modification that has dynamic performance equivalent to a PI controller. The controller features a single control setting and can be easily configured in most distributed control systems (DCSs) and programmable logic controllers (PLCs). This paper describes the controller's structure and behavior, as well as how to calculate the gain setting and determine the control period. To test the controller on real processes, the algorithm was applied to level and temperature control loops in a laboratory pilot-plant setting.
A control algorithm presented by W. Steven Woodward describes a velocity temperature controller that modifies the output based on the previous output value when the process variable, PV, crosses the set point, SP. This modification is the algebraic mean of the current calculated output and the output value at the previous zero-error crossing. The term coined for this algorithm is "Take-Back-Half", TBH. This algorithm has gained some acceptance as an embedded application controller. In this paper we will demonstrate how this controller has applicability to the process control community. In section 2, we will describe how this simple controller functions and how to program the algorithm. Section 3 discusses the controller system design and how to determine the gain setting and closed-loop period. In section 4 we will present the results of the pilot-scale controllers' performance. In section 5 we will set forth the conclusions.
02/23/2010
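Based on the description above, the TBH logic can be sketched as follows. This is one illustrative reading of the published algorithm, not the authors' pilot-plant code, and the gain value in the example is hypothetical:

```python
class TakeBackHalf:
    """Take-Back-Half (TBH) controller: a single-gain integral (velocity)
    controller whose output is averaged with its value at the previous
    zero-error crossing whenever the PV crosses the SP."""

    def __init__(self, gain):
        self.gain = gain
        self.output = 0.0
        self.saved = 0.0       # output at the previous zero-error crossing
        self.prev_error = None

    def update(self, sp, pv):
        error = sp - pv
        self.output += self.gain * error  # integrate with the single gain
        if self.prev_error is not None and error * self.prev_error < 0:
            # PV crossed SP: replace output with the algebraic mean of the
            # current output and the output at the previous crossing
            self.output = self.saved = 0.5 * (self.output + self.saved)
        self.prev_error = error
        return self.output

tbh = TakeBackHalf(gain=0.1)
u1 = tbh.update(10.0, 8.0)    # error +2: pure integration
u2 = tbh.update(10.0, 11.0)   # error -1: PV crossed SP, half is taken back
```

The "take back half" step is what damps the oscillation an integral-only controller would otherwise sustain, which is why a single gain setting can suffice.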
Selecting the right MCC equipment leads to improved plant safety, helping protect people and capital investments.
Measures to increase equipment and personnel safety in manufacturing are reflected in new approaches and technologies designed to help minimize the risk of workplace dangers. One rapidly growing area of focus is reducing the potentially serious hazards associated with arc-flash events. This white paper examines the causes of arc flash, discusses the standards guiding arc-flash safety and details the role arc-resistant motor control centers (MCCs) play in helping contain arc energy. It also highlights the key features of an effective arc-resistant MCC design.
Managing safety hazards and reducing risks are top priorities for manufacturers across all sectors of industry. With a multitude of potential dangers and new ones continuously emerging, companies must be diligent in their ongoing efforts while considering new approaches and technologies to improve plant safety. One rapidly growing area of focus is implementing techniques and practices designed to reduce hazards and minimize risk for workers who must enter an area with an electrical arc-flash potential.
02/08/2010
We can tune PID controllers, but what about tuning the operator?
The purpose of tuning loops is to reduce errors and thus provide more efficient operation that returns quickly to steady-state efficiency after upsets, errors or changes in load. State-of-the-art manufacturers in process and discrete industries have invested in advanced control software, manufacturing execution software and modeling software to "tune" everything from control loops to supply chains, thus driving higher quality and productivity.
The "forgotten loop" has been the operator, who is typically trained to "average" parameters to run adequately under most steady-state conditions. "Advanced tuning" of the operator could yield even better outputs, with higher quality, fewer errors and a wider response to fluctuating operating conditions. This paper explores the issue of improving operator actions, and a method for doing so.
Over the past decade we've spent, as an industry, billions of dollars and millions of man-hours automating our factories and plants. The solutions have included adding sensors, networks and software that can measure, analyze and either act or recommend action to help production get to "Six Sigma" efficiency. However, few, if any, plants are totally automated. Despite a continuing effort to remove personnel costs and drive repeatability through automation, all plants and factories have human operators. These important human assets are responsible for monitoring the control systems, either to act on system recommendations, or override automated actions if circumstances warrant.
Most of the time, operators let the system do what it was designed and programmed to do. Sometimes, operators make errors of commission, with causes ranging from misinterpretation of data to poor training, or errors of omission attributed to lack of attention or speedy response. An operator's job has often been described as hours of boredom interrupted by moments of sheer panic. What the operator does during panic situations often depends on how well he or she has been trained, or "tuned."
02/08/2010
This paper gives an overview of some basic criteria for choosing lining material for the water/wastewater industry and provides a short description of the properties, strengths and weaknesses of EPDM, NBR, PUR and Ebonite, i.e., the four types of lining material most commonly used in the water/wastewater industry.
Basic criteria for choosing lining material
Due to the functionality of the flowmeter, a non-conductive lining material is imperative, but other requirements vary according to the specific features of the intended application.
01/25/2010