2010

  • Isolated Inputs Offer New Application Advantages

    Protection from noise and ground loops through ISO-Channel architecture.

    Precision measurement systems are often limited in that all inputs are connected to a single ground. Typically, multiplexer input configurations are set up this way, since all signal inputs are connected to the same return. Even differential input configurations use the same ground reference. The result is that measurement accuracy and flexibility can be severely compromised when noise or common-mode voltage is present.

    Crosstalk from one input signal can easily be reflected onto another input. The design trend toward an A/D converter per channel can help with this problem, but in many cases it is not sufficient.

    To minimize noise and ground loops, some newer systems offer isolation between the input signal ground reference and the computer ground. This effectively separates the computer ground from the measurement portion of the system. But still, there is no isolation between input sensor channels, which is a common source of error and frustration for user applications. Why?

    Data Translation
    01/06/2010
  • AMS2750D Temperature Uniformity Surveys Using TEMPpoint

    Industrial process furnaces and ovens require uniform temperature and heating; this is critical to repeatable product performance from batch to batch. These furnaces require periodic inspection for temperature uniformity.

    Electronic and Mechanical Calibration Services of Millbury, Massachusetts, characterizes temperature uniformity in industrial furnaces and ovens for its customers. This is accomplished by measuring temperature in several locations throughout the furnace and monitoring temperature with thermocouples over time according to AMS2750D specifications.

    The customer previously used chart recorders, which require constant monitoring while the survey is running. Surveys can run anywhere from 35 minutes to several hours, depending on the industry-specified requirements. With the TEMPpoint solution the operator can set it up and let it run unattended, freeing them to multitask and work more efficiently. The shipping TEMPpoint application required very little modification using Measure Foundry and now fulfills the customer's requirements.

    Data Translation
    01/06/2010
  • Avoid Pitfalls in Precision Temperature Measurement

    Everyone is familiar with the concept of temperature in an everyday sense because our bodies feel and are sensitive to any perceptible change. But for more exacting needs as found in many scientific, industrial, and commercial uses, the temperature of a process must be measured and controlled definitively. Even changes of a fraction of a degree Celsius can be wasteful or even catastrophic in many situations.

    For example, some biotech processes require elevated temperatures for reactions to occur and added reagents require exactly the right temperature for proper catalytic action. New alloys of metal and composites, such as those on the new Boeing 787 Dreamliner, are formed with high temperature methods at exacting degree points to create the necessary properties of strength, endurance, and reliability. Certain medical supplies and pharmaceuticals must be stored at exactly the desired temperature for transport and inventory to protect against deterioration and ensure effectiveness.

    These new applications have driven the hunt for more exacting temperature measurement and control solutions that are easy for novice users and experienced engineers alike to implement and use. This is a challenging task. However, new equipment and standards, such as LXI (LAN Extensions for Instrumentation), offer a methodology to perform these exacting measurements in test and control applications.

    Many LXI devices are available on the market today. But, what do you need to know to select the best temperature measurement solution for your test and control application? This paper describes the common pitfalls of precision temperature measurement and what you need to consider before selecting a temperature measurement solution.

    Data Translation
    01/06/2010
  • Safety & Automation System (SAS) - How the Safety and the Automation Systems Finally Come Together as an HMI

    Today we have clear guidelines on how Safety Instrumented Systems (SIS) and Basic Process Control Systems (BPCS) should be separated from a controls and network perspective. But what does this mean for the HMI and the control room design?

    Where do Fire & Gas Systems fit into the big picture and what about new Security and Environmental monitoring tasks?

    What does the Instrument Engineer need to know about operators and how systems communicate with them?

    The evolution of the control room continues as Large Screen Displays provide a big picture view of multiple systems. Do rules and guidelines exist for this aspect of independent protection layers? What are today's best practices for bringing these islands of technology together?

    This paper will review the topic and provide advice on a subject on which the books remain silent. Today's practices are haphazard and left to individuals without a systematic design or guidance.

    Over the past 20 years the safety system and the automation system have been evolving separately. They use similar technologies, but the operator interface needs to be just one system. Unfortunately, due to the nature of the designs, this is not the case.

    The automation system has been evolving since the introduction of the DCS, and many human factors mistakes have been made. As we move toward new standards such as ISA SP 101, a more formal approach to HMI design is being taken.

    The once-widespread black backgrounds, which cause glare issues in the control room and are solely responsible for control room lights being turned down to very low levels, or in some cases off, are being replaced with grey backgrounds and a new grayscale graphic standard that replaces bright colors with a plainer grayscale scheme, using color only to attract the operators' attention.

    With strong compliance schemes that restrict color usage to just a handful of colors, reserving certain colors for important information such as alarm status, the automation system is becoming standardized and is starting to take advantage of new technologies available to control room designers, such as large screen displays.

    Ian Nimmo
    01/06/2010
  • Electromagnetic Flowmeters: Lining Material for Water Applications

    This paper gives an overview of some basic criteria for choosing lining material for the water/wastewater industry and provides a short description of the properties, strengths, and weaknesses of EPDM, NBR, PUR, and Ebonite, the four types of lining material most commonly used in the water/wastewater industry.

    Basic criteria for choosing lining material


    Due to the functionality of the flowmeter, a non-conductive lining material is imperative, but other requirements vary according to the specific features of the intended application.
    Siemens
    01/25/2010
  • Flowmeters: Discussion of Flowmeter Accuracy Specifications

    Understanding the accuracy of a given flowmeter is important, but it can also be misleading, as different specifications are used to describe how accurately a flowmeter actually measures. This paper discusses the different specifications and interprets their impact.

    Why deal with accuracy?


    The reasons for dealing with flowmeter accuracy specifications are manifold. One important reason is economic: the more accurately a flowmeter can measure, the more money you will save, as the medium is measured with very little error.

    For example, if the medium is expensive, such as oil, it is important to know exactly how much is consumed. This ensures it is being consumed as efficiently as possible. Another reason is dosing, where a given amount of a medium is added. This must be done with a high level of precision, so accuracy is important in order to dose correctly. This is critical in certain industries, such as pharmaceuticals and chemicals.
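
    As a rough illustration of why the specification convention matters (the two conventions below, percent of rate and percent of full scale, and all figures are assumptions for this example, not quoted from the paper), the same nominal 0.5% specification implies very different error bands at low flow:

        # Illustrative only: compares two common flowmeter accuracy conventions.
        # The meter range and the 0.5% figures are hypothetical.

        def error_percent_of_rate(flow, spec_pct):
            """Error band scales with the actual flow rate."""
            return flow * spec_pct / 100.0

        def error_percent_of_full_scale(full_scale, spec_pct):
            """Error band is a fixed fraction of the meter's full-scale range."""
            return full_scale * spec_pct / 100.0

        full_scale = 100.0  # m3/h, hypothetical meter range
        for flow in (10.0, 50.0, 100.0):
            e_rate = error_percent_of_rate(flow, 0.5)            # 0.5% of rate
            e_fs = error_percent_of_full_scale(full_scale, 0.5)  # 0.5% of full scale
            print(f"flow={flow:5.1f} m3/h  0.5% of rate: +/-{e_rate:.2f} m3/h  "
                  f"0.5% of full scale: +/-{e_fs:.2f} m3/h")

    At 10 m3/h the full-scale specification allows five times the error of the percent-of-rate specification, which is why the convention behind a quoted accuracy figure matters.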

    Siemens
    01/25/2010
  • Low Voltage MCC Technology Helps Reduce Arc-Flash Hazards and Minimize Risks

    Selecting the right MCC equipment leads to improved plant safety, helping protect people and capital investments.

    Measures to increase equipment and personnel safety in manufacturing are reflected in new approaches and technologies designed to help minimize the risk of workplace dangers. One rapidly growing area of focus is reducing the potentially serious hazards associated with arc-flash events. This white paper examines the causes of arc flash, discusses the standards guiding arc-flash safety and details the role arc-resistant motor control centers (MCCs) play in helping contain arc energy. It also highlights the key features of an effective arc-resistant MCC design.

    Managing safety hazards and reducing risks are top priorities for manufacturers across all sectors of industry. With a multitude of potential dangers and new ones continuously emerging, companies must be diligent in their ongoing efforts while considering new approaches and technologies to improve plant safety. One rapidly growing area of focus is implementing techniques and practices designed to reduce hazards and minimize risk for workers who must enter an area with an electrical arc-flash potential.

    Rockwell
    02/08/2010
  • Tuning the Forgotten Loop

    We can tune PID controllers, but what about tuning the operator?

    The purpose of tuning loops is to reduce errors and thus provide more efficient operation that returns quickly to steady state after upsets, errors, or changes in load. State-of-the-art manufacturers in the process and discrete industries have invested in advanced control software, manufacturing execution software, and modeling software to "tune" everything from control loops to supply chains, thus driving higher quality and productivity.

    The "forgotten loop" has been the operator, who is typically trained to "average" parameters to run adequately under most steady-state conditions. "Advanced tuning" of the operator could yield even better outputs, with higher quality, fewer errors and a wider response to fluctuating operating conditions. This paper explores the issue of improving operator actions, and a method for doing so.

    Over the past decade we've spent, as an industry, billions of dollars and millions of man-hours automating our factories and plants. The solutions have included adding sensors, networks and software that can measure, analyze and either act or recommend action to help production get to "Six Sigma" efficiency. However, few, if any, plants are totally automated. Despite a continuing effort to remove personnel costs and drive repeatability through automation, all plants and factories have human operators. These important human assets are responsible for monitoring the control systems, either to act on system recommendations, or override automated actions if circumstances warrant.

    Most of the time, operators let the system do what it was designed and programmed to do. Sometimes, operators make errors of commission, with causes ranging from misinterpretation of data to poor training, or errors of omission attributable to a lack of attention or of a speedy response. An operator's job has often been described as hours of boredom interrupted by moments of sheer panic. What the operator does during panic situations often depends on how well he or she has been trained, or "tuned."

    Steve Rubin, President & CEO, Longwatch
    02/08/2010
  • Evolving Best-Practices Through Simulation-Based Training

    Training the Field Operator of the Future

    Simulators are widely recognized as essential to process control training as they facilitate the propagation of a company's standard operating procedures (SOPs). This paper explores the use of process control simulators by Chevron Products Company to challenge existing corporate SOPs and to help achieve improvements in overall production performance.

    Simulation software has proven highly valuable to modern computer-driven businesses. The growth of Computer-Aided Design technologies in the 1960s enabled engineering and architectural firms to quickly explore new products and novel approaches. The impact was a dramatic reduction in the time and cost associated with then-current best-practices for product innovation and design. Computers became more affordable in the 1990s and software became more powerful. This facilitated widespread acceptance of simulation tools within educational spheres, particularly within universities. Simulators allow an instructional designer to construct realistic tasks or situations that elicit the behaviors a learner needs to function effectively within a domain (Mislevy, 2002). Simulation tools have been used as a means of exposing students to complex concepts and have inspired higher level learning activities including novel research. Through the use of two- and three-dimensional models, the theoretical was more easily examined and the proven more readily understood. Similarly, simulation models can be used for individual or team-based problem solving. In their research, Mislevy, Steinberg, Breyer, Almond, and Johnson (2002) describe the importance of capturing data from a simulator that directly relates to real-world performance and production. This helps instructors to connect the student's interactive simulation experiences with known best-practices for advanced learning.

    Dennis Nash, Control Station, Inc.; & Ronald Smith, Chevron Products Company
    02/23/2010
  • Model-Based Tuning Methods for PID Controllers

    The manner in which a measured process variable responds over time to changes in the controller output signal is fundamental to the design and tuning of a PID controller. The best way to learn about the dynamic behavior of a process is to perform experiments, commonly referred to as "bump tests." Critical to success is that the process data generated by the bump test be descriptive of actual process behavior. Discussed are the qualities required for "good" dynamic data and methods for modeling the dynamic data for controller design. Parameters from the dynamic model are not only used in correlations to compute tuning values, but also provide insight into controller design parameters such as loop sample time and whether dead time presents a performance challenge. It is becoming increasingly common for dynamic studies to be performed with the controller in automatic (closed loop). For closed loop studies, the dynamic data is generated by bumping the set point. The method for using closed loop data is illustrated. Concepts in this work are illustrated using a level control simulation.
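
    A minimal sketch of that workflow is shown below, assuming a first-order plus dead time (FOPDT) model estimated from bump-test data and an IMC-style tuning correlation; both are common textbook choices assumed for illustration, not necessarily the correlations used in the paper.

        # Turn FOPDT parameters from an open-loop bump test into PI tuning values.
        # All numbers in the example are made up.

        def fopdt_from_bump(delta_pv, delta_co, t63, t_dead):
            """Process gain Kp, time constant tau_p, and dead time theta_p from a
            simple step (bump) test on the controller output."""
            kp = delta_pv / delta_co      # steady-state change in PV per change in CO
            tau_p = t63 - t_dead          # time to 63% of the response, less dead time
            return kp, tau_p, t_dead

        def pi_tuning_imc(kp, tau_p, theta_p, tau_c=None):
            """PI tuning from the FOPDT model; tau_c is the desired closed-loop
            time constant (a moderate default is used if none is given)."""
            if tau_c is None:
                tau_c = max(0.1 * tau_p, 0.8 * theta_p)
            kc = (1.0 / kp) * tau_p / (tau_c + theta_p)   # controller gain
            ti = tau_p                                    # reset (integral) time
            return kc, ti

        kp, tau_p, theta_p = fopdt_from_bump(delta_pv=4.0, delta_co=10.0, t63=35.0, t_dead=5.0)
        kc, ti = pi_tuning_imc(kp, tau_p, theta_p)
        print(f"Kp={kp:.2f}, tau_p={tau_p:.1f}, theta_p={theta_p:.1f} -> Kc={kc:.2f}, Ti={ti:.1f}")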

    Jeffrey Arbogast, Department of Chemical Engineering, University of Connecticut; Douglas J. Cooper, PhD, Control Station, Inc.; & Robert C. Rice, PhD, Control Station, Inc.
    02/23/2010
  • Performance Monitoring Fundamentals: Demystifying Performance Assessment Techniques

    Real-time performance monitoring to identify poorly or under-performing loops has become an integral part of preventative maintenance. Rising energy costs and increasing demand for improved product quality, among other factors, are driving forces. Automatic process control solutions that incorporate real-time monitoring and performance analysis are fulfilling this market need. While many software solutions display performance metrics, it is important to understand the purpose and limitations of the various performance assessment techniques, since each metric signifies very specific information about the nature of the process.

    This paper reviews performance measures from simple statistics to complicated model-based performance criteria. By understanding the underlying concepts of the various techniques, readers will gain an understanding of the proper use of performance criteria. Basic algorithms for computing performance measures are presented using example data sets. An evaluation of techniques with tips and suggestions provides readers with guidance for interpreting the results.

    Over the past two decades, process control performance monitoring software has become an important tool in the control engineer's toolbox. Still, the number of performance tests and statistics that can be calculated for any given control loop can be overwhelming. The problem with controller performance monitoring is not the lack of techniques and methods. Rather, the problem is the lack of guidance as to how to turn statistics into meaningful and actionable information that can be applied to improve performance.

    The performance analysis techniques discussed in this paper are separated into three sections. The first section details methods for identifying process characteristics using batches of existing data. The second section outlines methods used for real-time or dynamic analysis of streaming process data. These are vital techniques for the timely identification and interpretation of changing process behavior and deteriorating loop performance. The third section outlines techniques that aid in the identification of interacting control loops.
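
    As a hedged illustration of the simple-statistics end of that spectrum (the metrics and sample data below are assumptions for the example, not the paper's own algorithms), basic error statistics can be computed on a batch of controller error samples (SP - PV):

        # Summarize a batch of controller error data with simple statistics.
        import statistics

        def loop_performance_stats(errors):
            abs_errors = [abs(e) for e in errors]
            return {
                "mean_error": statistics.mean(errors),          # persistent offset suggests bias
                "error_std_dev": statistics.pstdev(errors),     # variability about the mean
                "mean_abs_error": statistics.mean(abs_errors),  # average deviation from set point
                "max_abs_error": max(abs_errors),               # worst-case excursion
            }

        # Example with made-up error samples from a level loop:
        errors = [0.2, -0.1, 0.4, 0.3, -0.2, 0.1, 0.5, -0.3, 0.0, 0.2]
        for name, value in loop_performance_stats(errors).items():
            print(f"{name}: {value:.3f}")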

    Robert C. Rice, PhD, Control Station, Inc.; Rachelle R. Jyringi, Department of Chemical Engineering, University of Connecticut; & Douglas J. Cooper, PhD, Control Station, Inc.
    02/23/2010
  • Reducing Energy Cost through Improved Disturbance Rejection

    Two of the most popular architectures for improving regulatory performance and increasing profitability are 1) cascade control and 2) feed forward with feedback trim. Both architectures trade off additional complexity in the form of instrumentation and engineering time for a controller better able to reject the impact of disturbances on the measured process variable. These architectures neither benefit nor detract from set point tracking performance. This paper compares and contrasts the two architectures and links the benefits of improved disturbance rejection with reducing energy costs in addition to improved product quality and reduced equipment wear. A comparative example is presented using data from a jacketed reactor process.
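
    A minimal sketch of the feed forward with feedback trim structure described above, using a hypothetical feed forward gain, PI trim settings, and a measured disturbance signal (all values are assumptions, not taken from the paper):

        # Feed forward counteracts the measured disturbance; a PI loop trims the rest.

        def make_pi(kc, ti, dt):
            integral = 0.0
            def pi(error):
                nonlocal integral
                integral += error * dt / ti
                return kc * (error + integral)
            return pi

        def make_ff_with_trim(kff, pi_trim):
            def controller(sp, pv, disturbance):
                feedforward = -kff * disturbance   # counteract the measured disturbance
                trim = pi_trim(sp - pv)            # feedback trims what feed forward misses
                return feedforward + trim
            return controller

        # Example usage with hypothetical values:
        controller = make_ff_with_trim(kff=0.8, pi_trim=make_pi(kc=1.2, ti=15.0, dt=1.0))
        co = controller(sp=50.0, pv=48.5, disturbance=2.0)
        print(f"controller output contribution: {co:.2f}")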

    The cost per barrel of crude oil has risen dramatically, increasing the burden on process facilities for both quality and profitable production. Adjusted for inflation, the cost of oil averaged $19.61 from 1945 through 2003. By October 2004 the per-barrel cost of oil had risen to $55.67, a 70% increase over a 10-month timeframe that negatively impacted the profitability of companies across the process industries. According to the U.S. Department of Energy, 43% of all energy consumed by the average pulp and paper mill is production related. This percentage is small when compared to other industry segments such as chemicals (74%), glass (89%), and aluminum (93%). In all cases, the higher cost of energy suggests that all process companies need to examine ways of curbing energy consumption and unnecessary increases to their cost of goods sold. Improving disturbance rejection through cascade control or feed forward with feedback trim provides one way of achieving those objectives.

    Improved disturbance rejection is linked to increased product quality and decreased equipment wear. These are important benefits, indeed. Consider the market value of high quality white paper produced by an average mill. On-spec production is sold at a premium of approximately $2,000 per ton, whereas "seconds" are sold on the aftermarket at a discounted rate. Of the 6%-8% that fails to meet spec, only 2% is classified as "broke" and able to be re-pulped. Next, consider the investment in production facilities. With initial costs of $400-$500 million and annual maintenance budgets approaching 10%, mills must operate 24 x 7 in order to recoup the investment. Effective disturbance rejection provides a valuable means of achieving a return on those investments through increased quality and decreased equipment wear. Additionally, it offers significant value in terms of reduced energy consumption and a lower cost of goods sold.

    Robert C. Rice, PhD, & Douglas J. Cooper, PhD, Control Station, Inc.
    02/23/2010
  • Automation and the Smart Grid: Energy Management Today

    Is your company's electrical energy usage important to you? Whether still feeling the results of the recession or looking forward to competing as the global marketplace moves ahead, businesses are looking for ways to cut costs and increase revenues.

    Trends in energy show utility companies raising rates and introducing more tiered rate structures that penalize high-energy consumers. And with all the talk about carbon footprints and cap and trade, energy becomes an important place to look for both savings and revenues.

    So perhaps you've been formally tasked with improving energy efficiency for your company. Or maybe you've heard about the "Smart Grid" and are wondering how it will (or won't) impact your business. Perhaps you want to understand your corporate carbon footprint before regulatory pressures increase. Maybe you're a business owner or financial officer who needs to cut fixed costs. All of these and more are good reasons for finding out more about how you use electrical energy.

    And you're not alone. A March 2009 article in the New York Times noted an increasing trend among large corporations to hire a Chief Sustainability Officer (CSO). SAP, DuPont, and Flowserve are just a few of the companies mentioned that already have CSOs. These C-level officers are usually responsible for saving energy, reducing carbon footprints, and developing "greener" products and processes.

    While CSOs in large corporations may have a staff of engineers and a chunk of the marketing or production budget to help them find energy solutions, small and medium-sized industrial and commercial businesses usually take on this challenge as an additional job for their already overloaded technical or facilities staff.

    This white paper takes a look at electrical power in the United States today, investigates the nature of the Smart Grid, and suggests ways that small and medium-sized companies can, without waiting for future technological development, gather energy data and control electrical energy costs today.

    Opto22
    02/23/2010
  • A Simple Single Setting Controller Yields PI Performance

    This paper presents a simple velocity control algorithm with output modification that has dynamic performance equivalent to a PI controller. The controller features a single control setting and can be easily configured in most distributed control systems (DCS) and programmable logic controllers (PLC). This paper describes the controller's structure and behavior, as well as how to calculate the gain setting and determine the control period. To test the controller on real processes, the algorithm was applied to level and temperature control loops in a laboratory pilot-plant setting.

    A control algorithm presented by W. Steven Woodward describes a velocity temperature controller [1] that modifies the output based on the previous output value when the process variable, PV, crosses the set point, SP. This modification is the algebraic mean of the current calculated output and the output value at the previous zero-error crossing. The term coined for this algorithm is "Take-Back-Half" (TBH). This algorithm has gained some acceptance as an embedded application controller. In this paper we demonstrate how this controller has applicability to the process control community. In Section 2, we describe how this simple controller functions and how to program the algorithm. Section 3 discusses the controller system design and how to determine the gain setting and closed-loop period. In Section 4 we present the results of the pilot-scale controller's performance. In Section 5 we set forth the conclusions.
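
    The following is a minimal sketch of the TBH logic as described above; the integration gain, the clamp, the simulated process, and the variable names are illustrative assumptions rather than the authors' implementation:

        # Take-Back-Half velocity controller: integrate the error each scan, and when
        # the PV crosses the set point, take the output back halfway toward its value
        # at the previous zero-error crossing.

        def tbh_controller(gain, dt):
            state = {"out": 0.0, "out_at_cross": 0.0, "last_err": 0.0}

            def step(sp, pv):
                err = sp - pv
                state["out"] += gain * err * dt                # velocity (integral-only) action
                if err * state["last_err"] < 0.0:              # PV just crossed the set point
                    state["out"] = 0.5 * (state["out"] + state["out_at_cross"])
                    state["out_at_cross"] = state["out"]
                state["last_err"] = err
                state["out"] = max(0.0, min(1.0, state["out"]))  # clamp to 0-100% output
                return state["out"]

            return step

        # Example: drive a toy first-order process toward a set point of 1.0.
        step = tbh_controller(gain=0.05, dt=1.0)
        pv, sp = 0.0, 1.0
        for k in range(200):
            out = step(sp, pv)
            pv += (2.0 * out - pv) / 20.0                      # illustrative process model
            if k % 40 == 0:
                print(f"t={k:3d}  out={out:.3f}  pv={pv:.3f}")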

    Robert L Heider, PE, & Zachary Wegmann
    02/23/2010
  • An IT Perspective of Control Systems Security

    Enterprises with industrial operations typically utilize at least two types of computer networks: Information Technology (IT), a network that supports enterprise information system functions like finance, HR, order entry, planning, email, and document creation; and Operational Technology (OT), a network that controls operations in real time. This second type of network supports real-time or control system products, generally referred to as Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), Energy Management Systems (EMS) or Manufacturing Execution Systems (MES), depending on the industry.

    There has been much discussion and debate around the convergence between Information Technology (IT) and Operational Technology (OT). In an effort to provide better visibility and information flow between revenue generating OT assets and enterprise applications, these systems have often been interconnected, in many cases without properly securing the control systems from cyber attack first. If the IT and OT networks are interconnected, yet not properly secured, a breach to one network can easily transverse to the other, leaving the entire computing infrastructure at risk.

    At first glance, interconnected IT and OT networks appear to share similar technologies, so a common approach to cyber-security might seem appropriate. However, upon deeper inspection, many important differences between IT and OT networks are revealed. The unique characteristics of OT systems and networks mean that many traditional IT enterprise security products cannot operate safely without impairing operations; when introduced, they can cause significant disruption and downtime to these real-time, revenue-generating assets.

    This paper is intended to educate IT professionals on the unique requirements of operational technology and what is required to properly secure these networks from cyber attack, so that organizations can assure security, reliability and safety of information and revenue generating assets.

    Learn more about Industrial Defender

    Andrew Ginter, ISP, CIPS, CISSP, Chief Security Officer, Industrial Defender, Inc.
    02/26/2010
  • An Analysis of Whitelisting Security Solutions and Their Applicability in Control Systems

    Whitelisting is described by its advocates as "the next great thing" that will displace anti-virus technologies as the host intrusion prevention technology of choice. Anti-virus has a checkered history in operations networks and control systems – many people have horror stories of how they installed anti-virus and so impaired their test system that they simply couldn't trust deploying it in production.

    While anti-virus systems detect "bad" files that match signatures of known malware, whitelisting technologies identify "good" executables on a host and refuse to execute unauthorized or modified executables, presumably because such executables may contain malware. This is a least privilege approach of denying everything that is not specifically approved.
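
    A minimal sketch of that least-privilege idea, assuming a simple hash-based whitelist; the file paths and the enforcement hook are hypothetical, and real products integrate with the operating system loader rather than a standalone check:

        # Only executables whose cryptographic hash appears on an approved list run;
        # modified or unauthorized binaries no longer match and are denied.
        import hashlib
        from pathlib import Path

        def sha256_of(path):
            return hashlib.sha256(Path(path).read_bytes()).hexdigest()

        def build_whitelist(approved_dir):
            """Record the hashes of the known-good executables on the host."""
            return {sha256_of(p) for p in Path(approved_dir).rglob("*") if p.is_file()}

        def may_execute(path, whitelist):
            """Deny everything that is not specifically approved (least privilege)."""
            return sha256_of(path) in whitelist

        # Example usage (paths are hypothetical):
        # whitelist = build_whitelist("/opt/scada/bin")
        # if not may_execute("/tmp/unknown.exe", whitelist):
        #     print("blocked: executable not on whitelist")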

    In this paper the Industrial Defender team performs an independent analysis of a variety of whitelisting solutions for their applicability to control systems. The paper closes with some recommendations related to this technology and areas for further research.

    Learn more about Industrial Defender

    Andrew Ginter, ISP, CIPS, CISSP, Chief Security Officer, Industrial Defender, Inc.
    02/26/2010
  • Plant Modeling: A First Step to Early Verification of Control Systems

    Today's control system engineers face competing design demands: increase embedded system performance and functionality, without sacrificing quality or breaking the budget. It is difficult to meet these challenges using traditional design and verification approaches.

    Without simulation it is impossible to verify a control design until late in the development process when hardware prototypes become available. This is not an insurmountable problem for simpler designs with predictable system behavior, because there are fewer sources of error in simpler control algorithms--and those errors can often be resolved by tuning the controller on the hardware prototype.

    Today's multidomain designs combine mechanical, electrical, hydraulic, control, and embedded software components. For these systems, it is no longer practical to delay verification until late in the development process. As system complexity grows, the potential for errors and suboptimal designs increases. These problems are easiest to address when they are identified early in the development process. When design problems are discovered late, they are often expensive to correct and require time-consuming hardware fixes. In some cases the hardware simply cannot be changed late in the development process, resulting in a product that fails to meet its original specifications.

    Traditional verification methods are also inadequate for testing all corner cases in a design. For some control applications, it is impractical or unsafe to test the full operating envelope of the system on hardware.

    Arkadiy Turevskiy, Technical Marketing Manager, The MathWorks
    03/02/2010
  • Compliance Testing and Certification

    Moore Industries believes it is of vital importance to have third-party SIS evaluation for plant safety provided by a company with global coverage and reputation. Earlier designs for process control and safety systems typically used "good engineering practices and experience" as their guidelines. As safety awareness grew, new standards began to evolve. International standards such as IEC 61508/61511 and U.S.-born standards like ANSI/ISA84 require the use of more sophisticated guidelines for implementing safety. Unfortunately for manufacturers, compliance with the IEC 61508 standards requires extensive documentation. In addition, more complex products require a greater depth of analysis. Software-based products such as those from Moore Industries are complex, with inherently programmable and flexible features, unlike previous-generation single-function analog circuits.

    Some companies are actively attempting to bypass this vital third-party certification by proclaiming self-certification to IEC 61508. This is not in the best interest of end users or the safety industry in general. Self-certification is analogous to proclaiming compliance with a hazardous area approval (such as Intrinsically Safe) without third-party testing.

    Moore Industries has been working for many years with customers who require products for safety systems, including those compliant with worldwide safety standards such as ANSI/ISA 84 and IEC 61508/61511. To assist customers in determining if their instruments are appropriate for specific safety systems, Moore Industries has been providing Failure Modes, Effects and Diagnostic Analysis (FMEDA) reports for key products, and has been involved in the evolution of the IEC 61508 standard. As this standard has become more widely recognized and adopted by worldwide customers it was clear that end users were looking for products which had been designed to IEC 61508 from their initial concept. Customers are demanding not only compliance to the standards but verification from an independent third party agency such as TUVRheinland.

    Moore Industries
    03/03/2010
  • Video Analytics and Security

    Using video data to improve both safety and ROI.

    Most companies are gathering trillions of bytes of data, day after day, at no small cost, and then doing very little with it. Worse still, the data often is not serving its primary function very cost-effectively.

    The "culprit," so to speak, is video surveillance data, the information captured by the video cameras that are used throughout most modern facilities.

    But the situation is changing rapidly, thanks to an application called Video Analytics. This white paper looks at the new software technology, and how it can be used to leverage video data for better security and business performance.

    Schneider Electric
    03/05/2010
  • Making Permanent Savings Through Active Energy Efficiency

    This white paper argues strongly that efforts to meet the greenhouse gas emissions targets set within the Kyoto Protocol will fail unless Active Energy Efficiency becomes compulsory.

    Active Energy Efficiency is defined as effecting permanent change through measurement, monitoring and control of energy usage. Passive energy efficiency is regarded as the installation of countermeasures against thermal losses, the use of low consumption equipment and so forth.

    It is vital, but insufficient, to make use of energy saving equipment and devices such as low energy lighting. Without proper control, these measures often merely militate against energy losses rather than make a real reduction in energy consumed and in the way it is used.

    Everything that consumes power - from direct electricity consumption through lighting, heating and most significantly electric motors, but also in HVAC control, boiler control and so forth - must be addressed actively if sustained gains are to be made. This includes changing the culture and mindsets of groups of individuals, resulting in behavioral shifts at work and at home, but clearly, this need is reduced by greater use of technical controls.

    Schneider Electric
    03/05/2010