We can tune PID controllers, but what about tuning the operator?
The purpose of tuning loops is to reduce errors and thus provide more efficient operation that returns quickly to steady-state efficiency after upsets, errors or changes in load. State-of-the-art manufacturers in process and discrete industries have invested in advanced control software, manufacturing execution software and modeling software to "tune" everything from control loops to supply chains, thus driving higher quality and productivity.
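The loop-tuning idea above can be sketched in code. Below is a minimal discrete PID controller driving a simple first-order process toward its setpoint; the gains, time constant, and setpoint are hypothetical illustration values, not a recommendation for any real loop:

```python
# Minimal discrete PID controller sketch (illustrative only; the gains
# kp, ki, kd are hypothetical and would normally be found by tuning).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate error over time
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple first-order process (time constant tau) toward setpoint 50.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
pv, tau = 0.0, 1.0
for _ in range(500):                              # 50 s of simulated time
    out = pid.update(50.0, pv)
    pv += (out - pv) * pid.dt / tau               # Euler step of the process

# pv settles near the setpoint; the integral term removes steady-state offset.
```

Tuning, in this picture, is the search for gains that return `pv` to setpoint quickly after an upset without excessive overshoot; the paper's point is that the same idea applies to the human in the loop.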
The "forgotten loop" has been the operator, who is typically trained to "average" parameters to run adequately under most steady-state conditions. "Advanced tuning" of the operator could yield even better outputs, with higher quality, fewer errors and a better response to fluctuating operating conditions. This paper explores the issue of improving operator actions, and a method for doing so.
Over the past decade we've spent, as an industry, billions of dollars and millions of man-hours automating our factories and plants. The solutions have included adding sensors, networks and software that can measure, analyze and either act or recommend action to help production get to "Six Sigma" efficiency. However, few, if any, plants are totally automated. Despite a continuing effort to remove personnel costs and drive repeatability through automation, all plants and factories have human operators. These important human assets are responsible for monitoring the control systems, either to act on system recommendations, or override automated actions if circumstances warrant.
Most of the time, operators let the system do what it was designed and programmed to do. Sometimes, operators make errors of commission, with causes ranging from misinterpretation of data to poor training, or errors of omission, attributable to lapses in attention or a slow response. An operator's job has often been described as hours of boredom interrupted by moments of sheer panic. What the operator does during panic situations often depends on how well he or she has been trained, or "tuned."
This paper gives an overview of some basic criteria for choosing lining material for the water / wastewater industry and furthermore provides a short description of the properties, strengths and weaknesses of EPDM, NBR, PUR and Ebonite, i.e. the four types of lining material most commonly used in the water / wastewater industry.
Basic criteria for choosing lining material
Due to the measuring principle of the flowmeter, a non-conductive lining material is imperative, but other requirements vary according to the specific features of the intended application.
Understanding the accuracy of a given flowmeter is important, but it can also be misleading, because different specifications are used to describe how accurately a flowmeter actually measures. This paper discusses the different specifications and interprets their impact.
Why deal with accuracy?
The reasons for dealing with flowmeter accuracy specifications are many. One important reason is economic: the more accurately a flowmeter measures, the more money you save, because very little of the medium goes unaccounted for or is measured incorrectly.
For example, if the medium is expensive, such as oil, it is important to know exactly how much is consumed; this ensures it is being consumed as efficiently as possible. Another reason is dosing, where a given amount of a medium is added. This must be done with a high level of precision, so accuracy is essential to dosing correctly. This is critical in certain industries, such as pharmaceuticals and chemicals.
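Two of the specifications the paper contrasts are accuracy stated as a percentage of rate (of the actual reading) versus a percentage of full scale. The difference matters most at low flow, as this small numeric sketch shows; the meter range and flow rate are hypothetical values chosen for illustration:

```python
# Compare two common accuracy specifications at low flow.
# "% of rate" scales with the measured value; "% of full scale" is a
# fixed error regardless of how little is flowing. (Values hypothetical.)
full_scale = 100.0    # m3/h, assumed meter range
actual_flow = 10.0    # m3/h, i.e. running at 10 % of range

error_rate_spec = 0.01 * actual_flow    # 1 % of rate       -> 0.1 m3/h
error_fs_spec   = 0.01 * full_scale     # 1 % of full scale -> 1.0 m3/h

# Expressed relative to what is actually flowing:
rel_rate = error_rate_spec / actual_flow * 100    # 1 % of the true flow
rel_fs   = error_fs_spec / actual_flow * 100      # 10 % of the true flow
```

At 10 % of range, the "1 % of full scale" meter may be off by a tenth of everything it measures, which is exactly the kind of hidden cost the economic argument above is about.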
Protection from noise and ground loops through ISO-Channel architecture.
Precision measurement systems are often limited in that all inputs are connected to a single ground. Typically, multiplexer input configurations are set up this way, since all signal inputs are connected to the same return. Even differential input configurations use the same ground reference. The result is that accuracy and flexibility for accurate measurements can be severely compromised when noise or common mode voltage is present.
Crosstalk from one input signal can easily be reflected onto another input. The design trend toward an A/D converter per channel helps with this problem, but it is not sufficient in many cases.
To minimize noise and ground loops, some newer systems offer isolation between the input signal ground reference and the computer ground. This effectively separates the computer ground from the measurement portion of the system. But still, there is no isolation between input sensor channels, which is a common source of error and frustration for user applications. Why?
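The size of the error that isolation is meant to eliminate can be estimated from the common-mode rejection ratio (CMRR) of a non-isolated front end: a common-mode voltage is attenuated by a factor of 10^(CMRR/20). The voltages and CMRR figure below are illustrative assumptions, not the specification of any particular product:

```python
# Estimate the input-referred error caused by a common-mode voltage
# (e.g. a ground-potential difference) for a front end with finite CMRR.
def cm_error_volts(v_common_mode, cmrr_db):
    """Common-mode voltage appears at the input divided by 10^(CMRR/20)."""
    return v_common_mode / (10 ** (cmrr_db / 20))

# A 2 V ground loop seen by an amplifier with 100 dB CMRR (assumed figures):
err = cm_error_volts(2.0, 100.0)   # 20 microvolts of input-referred error
```

Twenty microvolts can be a full thermocouple degree or more of apparent signal; galvanic isolation per channel sidesteps the problem by breaking the common-mode path entirely rather than merely rejecting it.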
AMS2750D Temperature Uniformity Surveys using TEMPpoint.
Industrial process furnaces and ovens require uniform temperature and heating; this is critical to repeatable product performance from batch to batch. These furnaces require periodic inspection for temperature uniformity.
Electronic and Mechanical Calibration Services of Millbury, Massachusetts, characterizes temperature uniformity in industrial furnaces and ovens for its customers. This is accomplished by measuring temperature in several locations throughout the furnace and monitoring temperature with thermocouples over time according to AMS2750D specifications.
The customer previously used chart recorders, which require constant monitoring while a survey is running. Surveys can run anywhere from 35 minutes to several hours, depending on the industry-specified requirements. With the TEMPpoint solution, the operator can set up a survey and let it run unattended, freeing them to work on other tasks. The shipping TEMPpoint application required very little modification using Measure Foundry and now fulfills the customer's requirements.
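The unattended check described above amounts to logging several thermocouples over the survey period and verifying that every reading stays within a tolerance band around the setpoint. The sketch below shows that logic in miniature; the function name, data layout, and ±3.0 degree tolerance are hypothetical (AMS2750D defines the actual class-dependent limits):

```python
# Sketch of an unattended uniformity check: given readings logged from
# several thermocouples, flag any reading outside the tolerance band
# around the setpoint. Tolerance is hypothetical; AMS2750D sets the
# real, furnace-class-dependent limits.
def survey_failures(readings_by_tc, setpoint, tolerance):
    """readings_by_tc: {tc_name: [temp, temp, ...]} logged over the survey."""
    failures = {}
    for name, readings in readings_by_tc.items():
        out_of_band = [t for t in readings if abs(t - setpoint) > tolerance]
        if out_of_band:
            failures[name] = out_of_band
    return failures          # empty dict means the survey passed

data = {"TC1": [899.2, 900.1, 900.4],
        "TC2": [898.0, 903.6, 900.2]}
result = survey_failures(data, setpoint=900.0, tolerance=3.0)
# TC2's 903.6 reading lies outside the +/-3.0 band, so it is reported.
```

Because the pass/fail criterion is evaluated over the whole logged record, no one needs to watch a chart while the survey runs, which is the efficiency gain the customer saw.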
Everyone is familiar with the concept of temperature in an everyday sense because our bodies feel and are sensitive to any perceptible change. But for more exacting needs as found in many scientific, industrial, and commercial uses, the temperature of a process must be measured and controlled definitively. Even changes of a fraction of a degree Celsius can be wasteful or even catastrophic in many situations.
For example, some biotech processes require elevated temperatures for reactions to occur and added reagents require exactly the right temperature for proper catalytic action. New alloys of metal and composites, such as those on the new Boeing 787 Dreamliner, are formed with high temperature methods at exacting degree points to create the necessary properties of strength, endurance, and reliability. Certain medical supplies and pharmaceuticals must be stored at exactly the desired temperature for transport and inventory to protect against deterioration and ensure effectiveness.
These new applications have driven the hunt for more exacting temperature measurement and control solutions that are easy to implement and use by both novice users and experienced engineers alike. This is a challenging task. However, new equipment and standards, such as LXI (LAN Extensions for Instrumentation) offer a methodology to perform these exacting measurements in test and control applications.
Many LXI devices are available on the market today. But, what do you need to know to select the best temperature measurement solution for your test and control application? This paper describes the common pitfalls of precision temperature measurement and what you need to consider before selecting a temperature measurement solution.
Today we have clear guidelines on how the Safety Instrumented Systems (SIS) and basic Process Control Systems (BPCS) should be separated from a controls and network perspective. But what does this mean to the HMI and the control room design?
Where do Fire & Gas Systems fit into the big picture and what about new Security and Environmental monitoring tasks?
What does the Instrument Engineer need to know about operators and how systems communicate with them?
The evolution of the control room continues as Large Screen Displays provide a big-picture view of multiple systems. Do rules and guidelines exist for this aspect of independent protection layers? What are today's best practices for bringing these islands of technology together?
This paper will review the topic and provide advice on a subject on which the books remain silent. Today's practices are haphazard and left to individuals without a systematic design or guidance.
Over the past 20 years the Safety System and the Automation System have evolved separately. They use similar technologies, but the operator interface needs to be a single system. Unfortunately, due to the nature of the designs, this is not the case.
The automation system has been evolving since the introduction of the DCS, and many Human Factors mistakes have been made along the way. As we move toward new standards such as ISA SP 101, a more formal approach to HMI design is being taken.
The formerly widespread black backgrounds, which cause glare in the control room and are largely responsible for control room lights being turned down to very low levels, or in some cases off, are being replaced with grey backgrounds and a new grayscale graphic standard. This standard trades bright colors for a plainer grayscale scheme that uses color only to attract the operator's attention.
With strong compliance schemes that limit color usage to just a handful of colors and reserve certain colors for important information such as alarm status, the automation system is being standardized. It is also starting to take advantage of new technology available to control room designers, such as large-screen displays.