Manufacturing and production processes have had to be controlled and managed in real time from inception because the processes themselves change in real time. This has been a natural premise of industrial systems from the very beginning.
A major shift in the business of manufacturing has occurred over the past decade, driving the dynamics of production and manufacturing into the real-time domain. Business variables such as energy prices, feedstock prices and even product prices have rapidly transitioned from highly transactional time frames into real time frames. For example, a decade ago it was not unusual for an industrial plant to establish a contract with its energy supplier that essentially fixed the price over an extended period, often six months or even a year. Today, in most parts of the world, long-term fixed-price energy contracts are no longer offered, and the price of energy can change multiple times in a day. The implications of this transition are clear: industrial business functions must operate in real time to be effective and efficient, and industrial companies that do not move to real-time business operations will be at a severe disadvantage in their marketplace.
Peter G. Martin, Invensys
Diode rectifiers with large DC bus capacitors, used in the front ends of Variable Frequency Drives (VFDs), draw discontinuous current from the power system, resulting in current distortion and hence voltage distortion. Typically, the power system can handle current distortion without showing signs of voltage distortion. However, when the majority of the load on a distribution feeder is made up of VFDs, current distortion becomes an important issue. Multi-pulse techniques to reduce input harmonics are popular because they do not interfere with the existing power system, avoiding both the higher conducted EMI seen when active techniques are used and the possible resonance that can arise when capacitor-based filters are employed.
In this paper, a new 18-pulse topology is proposed in which two six-pulse rectifiers are powered via a phase-shifting isolation transformer, while the third six-pulse rectifier is fed directly from the AC source via a matching impedance. This idea relies on a harmonic current cancellation strategy rather than the flux cancellation method and results in lower overall harmonics. The arrangement is also smaller in size and weight, and lower in cost, compared to a full isolation transformer. Experimental results are given to validate the concept.
Mahesh Swamy, Tsuneo J. Kume and Noriyuki Takada, Yaskawa Electric America
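As background for the multi-pulse approach described above: an ideal p-pulse rectifier draws only the characteristic harmonic orders h = kp ± 1, which is why an 18-pulse front end pushes the lowest-order input harmonics up to the 17th and 19th. A minimal sketch of that relationship (illustrative code, not from the paper):

```python
def characteristic_harmonics(pulse_number, max_order=50):
    """Characteristic harmonic orders h = k*p +/- 1 for an ideal
    p-pulse rectifier (balanced supply, ideal commutation assumed)."""
    orders = set()
    k = 1
    while k * pulse_number - 1 <= max_order:
        orders.add(k * pulse_number - 1)
        if k * pulse_number + 1 <= max_order:
            orders.add(k * pulse_number + 1)
        k += 1
    return sorted(orders)

# A 6-pulse rectifier injects the 5th and 7th harmonics; an 18-pulse
# front end cancels everything below the 17th.
for p in (6, 12, 18):
    print(p, characteristic_harmonics(p))
```

In practice the cancelled harmonics are only attenuated, not eliminated, since supply unbalance and component tolerances leave residual low-order currents.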
This paper presents a simple velocity control algorithm with output modification that has dynamic performance equivalent to a PI controller. The controller features a single control setting and can be easily configured in most distributed control systems (DCSs) and programmable logic controllers (PLCs). This paper describes the controller structure and behavior, as well as how to calculate the gain setting and determine the control period. To test the controller on real processes, the algorithm was applied to level and temperature control loops in a laboratory pilot plant setting.
A control algorithm presented by W. Steven Woodward describes a velocity temperature controller that modifies the output based on the previous output value when the process variable (PV) crosses the set point (SP). This modification is the algebraic mean of the current calculated output and the output value at the previous zero-error crossing. The term coined for this algorithm is "Take-Back-Half" (TBH). This algorithm has gained some acceptance as an embedded application controller. In this paper we demonstrate how this controller has applicability to the process control community. In section 2, we describe how this simple controller functions and how to program the algorithm. Section 3 discusses the controller system design and how to determine the gain setting and closed-loop period. In section 4 we present the results of the pilot-scale controllers' performance. In section 5 we set forth the conclusions.
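The mechanism described above can be sketched as follows (an illustrative implementation, not the authors' code; variable names are assumed): the controller integrates the error like a velocity-form algorithm, and whenever the error changes sign it "takes back half," resetting the output to the mean of the current output and the output stored at the previous zero-error crossing.

```python
def make_tbh_controller(gain):
    """Take-Back-Half (TBH) velocity controller sketch."""
    state = {"out": 0.0, "out_at_cross": 0.0, "prev_err": 0.0}

    def update(sp, pv, dt):
        err = sp - pv
        state["out"] += gain * err * dt      # integrating (velocity) action
        if err * state["prev_err"] < 0:      # sign change: PV crossed SP
            # take back half: mean of current output and the output
            # at the previous zero-error crossing
            state["out"] = 0.5 * (state["out"] + state["out_at_cross"])
            state["out_at_cross"] = state["out"]
        state["prev_err"] = err
        return state["out"]

    return update
```

The single tuning parameter (the gain) is what makes the algorithm attractive for DCS and PLC function blocks; the take-back step damps the oscillation that a pure integrating controller would otherwise sustain.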
This technical white paper will discuss Yokogawa's CENTUM VP DCS (Distributed Control System) product, hereafter referred to as "CENTUM VP", and the extent of its compliance with Part 11 of Title 21 of the Code of Federal Regulations (21 CFR Part 11), the Electronic Records / Electronic Signatures Rule.
CENTUM VP Batch Management is the optional batch control function for CENTUM VP, which provides recipe management and process management functionality based upon the ISA-88 Batch Control System standard. This white paper addresses the use of CENTUM VP and the Batch Management function.
A detailed analysis of Part 11 was performed; the results, listed in the Detailed Part 11 Compliance section (section 5) of this document, support the compliance of the CENTUM VP system with Part 11.
CENTUM VP is a comprehensive software package containing configurable functions that support Part 11 compliance (audit trails, electronic signatures and electronic records). The system's Part 11 compliance attributes are central to its marketing strategy of supplying FDA-regulated industries with state-of-the-art automation capabilities.
User training and education as well as the development and utilization of policies and procedures are key components of Part 11 compliance which must be established by the user.
In today's manufacturing environment, there is an urgency to increase operating efficiencies, and to do it quickly. One area of improvement that can produce immediate results is reducing energy consumption: it is good for the environment and good for the bottom line. "Energy management" has therefore become a common best practice, but there is more to it than meets the eye. Typically it implies rigorously modeling all or a major portion of the plant, coupled with the use of real-time optimization technology. While this approach has been used successfully, there are other simpler, faster options for reducing energy consumption in a manufacturing plant. Learn what these options are.
Paul Kesseler, Manager, Advanced Process Control Practice, Global Consulting Group, Invensys Operations Management
Selecting the right MCC equipment leads to improved plant safety, helping protect people and capital investments.
Measures to increase equipment and personnel safety in manufacturing are reflected in new approaches and technologies designed to help minimize the risk of workplace dangers. One rapidly growing area of focus is reducing the potentially serious hazards associated with arc-flash events. This white paper examines the causes of arc flash, discusses the standards guiding arc-flash safety and details the role arc-resistant motor control centers (MCCs) play in helping contain arc energy. It also highlights the key features of an effective arc-resistant MCC design.
Managing safety hazards and reducing risks are top priorities for manufacturers across all sectors of industry. With a multitude of potential dangers and new ones continuously emerging, companies must be diligent in their ongoing efforts while considering new approaches and technologies to improve plant safety. One rapidly growing area of focus is implementing techniques and practices designed to reduce hazards and minimize risk for workers who must enter an area with an electrical arc-flash potential.
AMS2750D Temperature Uniformity Surveys using TEMPpoint.
Industrial process furnaces and ovens require uniform temperature and heating; this is critical to repeatable product performance from batch to batch. These furnaces require periodic inspection for temperature uniformity.
Electronic and Mechanical Calibration Services of Millbury, Massachusetts, characterizes temperature uniformity in industrial furnaces and ovens for its customers. This is accomplished by measuring temperature in several locations throughout the furnace and monitoring temperature with thermocouples over time according to AMS2750D specifications.
The customer previously used chart recorders, which require constant monitoring while the survey is running. Surveys can run anywhere from 35 minutes to several hours depending on the industry-specified requirements. With the TEMPpoint solution, the operator can set it up and let it run unattended, freeing them to work more efficiently on other tasks. The shipping TEMPpoint application required very little modification using Measure Foundry and now fulfills the customer's requirements.
Whitelisting is described by its advocates as "the next great thing" that will displace anti-virus technologies as the host intrusion prevention technology of choice. Anti-virus has a checkered history in operations networks and control systems: many people have horror stories of how they installed anti-virus and so impaired their test system that they simply couldn't trust deploying it in production.
While anti-virus systems detect "bad" files that match signatures of known malware, whitelisting technologies identify "good" executables on a host and refuse to execute unauthorized or modified executables, presumably because such executables may contain malware. This is a least privilege approach of denying everything that is not specifically approved.
In this paper the Industrial Defender team performs an independent analysis of a variety of whitelisting solutions for their applicability to control systems. The paper closes with some recommendations related to this technology and areas for further research.
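To make the deny-by-default idea concrete, here is a minimal hash-based allow-list check (a conceptual sketch only: real whitelisting products hook the operating system's execution path rather than being called explicitly, and the function names here are illustrative):

```python
import hashlib

def sha256_of(path):
    """Hash an executable's bytes; the allow list stores these digests."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path, allow_list):
    """Least-privilege check: deny anything not explicitly approved.
    A modified binary hashes differently, so it is denied as well."""
    return sha256_of(path) in allow_list
```

Note the contrast with anti-virus: the allow list never needs signature updates for new malware, but every legitimate patch or upgrade changes the hashes and must be re-approved, which is one of the operational trade-offs examined in the paper.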
Enterprises with industrial operations typically utilize at least two types of computer networks: an Information Technology (IT) network, which supports enterprise information system functions like finance, HR, order entry, planning, email and document creation; and an Operational Technology (OT) network, which controls operations in real time. This second type of network supports real-time or control system products, generally referred to as Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), Energy Management Systems (EMS) or Manufacturing Execution Systems (MES), depending on the industry.
There has been much discussion and debate around the convergence of Information Technology (IT) and Operational Technology (OT). In an effort to provide better visibility and information flow between revenue-generating OT assets and enterprise applications, these systems have often been interconnected, in many cases without first properly securing the control systems from cyber attack. If the IT and OT networks are interconnected yet not properly secured, a breach of one network can easily traverse to the other, leaving the entire computing infrastructure at risk.
At first glance, interconnected IT and OT networks appear to share similar technologies, so a common approach to cyber-security might seem indicated. However, deeper inspection reveals many important differences between IT and OT networks. The unique characteristics of OT systems and networks prevent many traditional IT enterprise security products from operating safely without impairing operations; when introduced, such products can cause significant disruption and downtime to these real-time, revenue-generating assets.
This paper is intended to educate IT professionals on the unique requirements of operational technology and on what is required to properly secure these networks from cyber attack, so that organizations can ensure the security, reliability and safety of information and revenue-generating assets.
Continuous level measurement is about one thing: answering the question "how much stuff do I have?" There are many applications where you need to know how much material is in a bin, silo or other vessel type. Usually the desired engineering unit is expressed in terms of volume or weight. "Measuring" volume or weight directly is not always the most practical approach; sometimes it isn't even viable. Take those silos you have: how do you weigh the ingredients if the silos weren't installed with load systems? That is not an easy or inexpensive question to answer. So what do we do? This is where continuous level measurement sensors and systems come into play, offering a viable and cost-effective approach.
The purpose of this white paper is to discuss and inform about the application considerations when you need to measure the level of material continuously or simply determine on a continuous basis how much stuff you have in your vessels.
Is your company's electrical energy usage important to you? Whether still feeling the results of the recession or looking forward to competing as the global marketplace moves ahead, businesses are looking for ways to cut costs and increase revenues.
Trends in energy show utility companies raising rates and introducing more tiered rate structures that penalize high-energy consumers. And with all the talk about carbon footprints and cap and trade, energy becomes an important place to look for both savings and revenues.
So perhaps you've been formally tasked with improving energy efficiency for your company. Or maybe you've heard about the "Smart Grid" and are wondering how it will (or won't) impact your business. Perhaps you want to understand your corporate carbon footprint before regulatory pressures increase. Maybe you're a business owner or financial officer who needs to cut fixed costs. All of these and more are good reasons for finding out more about how you use electrical energy.
And you're not alone. A March 2009 article in the New York Times noted an increasing trend among large corporations to hire a Chief Sustainability Officer (CSO). SAP, DuPont, and Flowserve are just a few of the companies mentioned that already have CSOs. These C-level officers are usually responsible for saving energy, reducing carbon footprints, and developing "greener" products and processes.
While CSOs in large corporations may have a staff of engineers and a chunk of the marketing or production budget to help them find energy solutions, small and medium-sized industrial and commercial businesses usually take on this challenge as an additional job for their already overloaded technical or facilities staff.
This white paper takes a look at electrical power in the United States today, investigates the nature of the Smart Grid, and suggests ways that small and medium-sized companies can gather energy data and control electrical energy costs today, without waiting for future technological developments.
Everyone is familiar with the concept of temperature in an everyday sense because our bodies feel and are sensitive to any perceptible change. But for more exacting needs as found in many scientific, industrial, and commercial uses, the temperature of a process must be measured and controlled definitively. Even changes of a fraction of a degree Celsius can be wasteful or even catastrophic in many situations.
For example, some biotech processes require elevated temperatures for reactions to occur and added reagents require exactly the right temperature for proper catalytic action. New alloys of metal and composites, such as those on the new Boeing 787 Dreamliner, are formed with high temperature methods at exacting degree points to create the necessary properties of strength, endurance, and reliability. Certain medical supplies and pharmaceuticals must be stored at exactly the desired temperature for transport and inventory to protect against deterioration and ensure effectiveness.
These new applications have driven the hunt for more exacting temperature measurement and control solutions that are easy for novice users and experienced engineers alike to implement and use. This is a challenging task. However, new equipment and standards, such as LXI (LAN Extensions for Instrumentation), offer a methodology to perform these exacting measurements in test and control applications.
Many LXI devices are available on the market today. But, what do you need to know to select the best temperature measurement solution for your test and control application? This paper describes the common pitfalls of precision temperature measurement and what you need to consider before selecting a temperature measurement solution.
Moore Industries believes it is vitally important to have third-party SIS evaluation for plant safety provided by a company with global coverage and reputation. Earlier designs for process control and safety systems typically used "good engineering practices and experience" as their guidelines. As safety awareness grew, new standards began to emerge. International standards such as IEC 61508/61511 and U.S.-born standards like ANSI/ISA-84 require the use of more sophisticated guidelines for implementing safety. Unfortunately for manufacturers, compliance with IEC 61508 standards requires enormous documentation. In addition, more complex products require a greater depth of analysis. Software-based products such as those from Moore Industries, with their inherent programmability and flexibility, are far more complex than previous-generation single-function analog circuits.
Some companies are actively attempting to bypass vital third-party certification by proclaiming self-certification to IEC 61508. This is not in the best interest of end users or the safety industry in general. Self-certification is analogous to someone proclaiming compliance with a hazardous-area approval (such as intrinsic safety) without third-party testing.
Moore Industries has been working for many years with customers who require products for safety systems, including those compliant with worldwide safety standards such as ANSI/ISA-84 and IEC 61508/61511. To assist customers in determining whether their instruments are appropriate for specific safety systems, Moore Industries has been providing Failure Modes, Effects and Diagnostic Analysis (FMEDA) reports for key products, and has been involved in the evolution of the IEC 61508 standard. As this standard has become more widely recognized and adopted by customers worldwide, it became clear that end users were looking for products designed to IEC 61508 from their initial concept. Customers are demanding not only compliance with the standards but verification from an independent third-party agency such as TÜV Rheinland.
Ensuring your PAC-based control system is an integrated, robust and flexible information producer helps improve business performance, lower costs and uncover unique opportunities for competitiveness.
All companies seek ways to make their businesses grow for the long term. Ask any manufacturer today what they need in an increasingly challenging economy; the answer is likely to include cutting costs, improving yield, increasing functionality and becoming more competitive in the global marketplace.
Manufacturing convergence helps companies meet these business drivers (globalization, innovation, productivity and sustainability) by more closely aligning manufacturing technologies and production system operations with the rest of the enterprise. This convergence is enabled throughout the manufacturing environment by the technologies of convergence: control, power, information and communication.
An Introduction to Data Loggers
"I just think the only way we are really going to get to the point we need to get to is to start collecting the real data."
This comment, made in 2009 by New York Public Service Commission chairman Garry Brown, conveys a growing sentiment about the need for solid, objective data on building energy performance.
When it comes to determining actual building performance, it all comes down to data. Data takes the guesswork out of energy management, and drives decisions as to what energy conservation measures need to be taken in a facility.
Portable data loggers are ideal tools for collecting building performance data. These affordable, compact devices can help establish energy performance baselines, and reveal a building's performance under real-world, rather than modeled, circumstances.
They offer fine-tuned visual performance feedback, measuring changes in temperature and energy use when people enter and exit a building, turn on and off lights, or run heating and cooling systems. They can also be used to help ensure that indoor air quality and comfort are maintained in a building.
This paper gives an overview of some basic criteria for choosing lining material for the water/wastewater industry and provides a short description of the properties, strengths and weaknesses of EPDM, NBR, PUR and Ebonite, the four types of lining material most commonly used in the water/wastewater industry.
Basic criteria for choosing lining material
Because of the flowmeter's measuring principle, a non-conductive lining material is imperative, but other requirements vary according to the specific features of the intended application.
Process measurements are instantaneous, but analyzer responses never are. From the tap to the analyzer, there is always a delay. Unfortunately, this time delay is often underestimated, unaccounted for, or misunderstood. Time delay in sample systems is the most common cause of inappropriate results from process analyzers.
In many cases, it is invisible to operators and technicians, who are focused on the necessity of making the sample suitable for the analyzer. It is not unusual for operators to assume that the analytical measurement is instantaneous. In fact, sample systems often fail to achieve the industry standard of a one-minute response.
As a general rule, it's always best to minimize time delay, even for long cycle times, but delays extending beyond the industry standard are not necessarily a problem. The process engineer determines acceptable delay times based on process dynamics.
Delays become an issue when they exceed a system designer's expectations. A poor estimate or wrong assumption about time delay will necessarily result in inferior process control.
This article is intended to enhance understanding of the causes of time delay and to provide the tools required to calculate or approximate a delay within a reasonable margin of error. We will also provide some recommendations for reducing time delay. The potential for delay exists in the following sections of an analytical instrumentation (AI) system: process line, tap and probe, field station, transport line, sample conditioning system, stream switching system, and analyzer.
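For a transport line, the dominant delay is simply the line's internal volume divided by the volumetric flow rate. A minimal sketch of that calculation (an illustration of the general plug-flow estimate, not a formula from this article; function names and example values are assumed):

```python
import math

def transport_delay_s(tube_id_mm, length_m, flow_lpm):
    """Approximate plug-flow time delay in a transport line:
    delay = internal volume / volumetric flow rate.
    For gases, use the flow rate at line pressure and temperature,
    not at standard conditions."""
    radius_m = (tube_id_mm / 1000.0) / 2.0
    volume_l = math.pi * radius_m ** 2 * length_m * 1000.0  # m^3 -> litres
    return volume_l / flow_lpm * 60.0                        # minutes -> seconds

# Example: 50 m of 4 mm ID tubing at 1 L/min holds about 0.63 L,
# giving a delay of roughly 38 s, already most of the one-minute budget.
print(round(transport_delay_s(4.0, 50.0, 1.0), 1))
```

The same volume-over-flow reasoning applies to every other section listed above (probe, field station, conditioning system), which is why small-bore tubing and adequate bypass flow are the usual levers for reducing delay.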
In many analytical instrumentation systems, the analyzer does not provide an absolute measurement. Rather, it provides a relative response based on settings established during calibration, which is a critical process subject to significant error. To calibrate an analyzer, a calibration fluid of known contents and quantities is passed through the analyzer, producing measurements of component concentration. If these measurements are not consistent with the known quantities in the calibration fluid, the analyzer is adjusted accordingly. Later, when process samples are analyzed, the accuracy of the analyzer's reading will depend on the accuracy of the calibration process. It is therefore imperative that we understand how error or contamination can be introduced through calibration; when calibration can and cannot address a perceived performance issue with the analyzer; how atmospheric pressure or temperature fluctuations can undo the work of calibration; and when and when not to calibrate.
The objective of an analytical instrumentation (AI) system is to provide a timely analytical result that is representative of the fluid in the process line at the time the sample was taken. If the AI system alters the sample so the analytical result is changed from what it would have been, then the sample is no longer representative and the outcome is no longer meaningful or useful. Assuming the sample is properly taken at the tap, it may still become unrepresentative under any of the following conditions:
- If deadlegs or dead spaces are introduced at inappropriate locations in the AI system, resulting in a "static leak," a bleeding or leaking of the old sample into the new sample;
- If the sample is altered through contamination, permeation, or adsorption;
- If the balance of chemicals is upset due to a partial change in phase; or
- If the sample undergoes a chemical reaction.
This article will review the major issues leading to an unrepresentative sample and provide recommendations on how to avoid a compromised sample. It will discuss deadlegs and dead spaces; component design and placement; adsorption and permeation; internal and external leaks; cross contamination in stream selection; and phase preservation.
Variable Frequency Drives (VFDs) with a diode rectifier front end are typically equipped with a resistor-contactor arrangement to limit the inrush current into the DC bus capacitors, thereby providing a means of soft charging them. Because of the mechanical nature of the magnetic contactor typically used in VFDs, fatigue is a concern. In addition, during a brown-out condition the contactor typically remains closed, and when the voltage recovers, the ensuing transient is often large enough to adversely affect surrounding components in the VFD. Many researchers and application engineers have considered this issue, and many are actively seeking cost-effective, non-mechanical solutions.
In this paper, a new topology to soft charge the DC bus capacitor is proposed. Other techniques that have been evaluated are also introduced, and their relative advantages and disadvantages are discussed. Experimental tests showing the feasibility of the proposed idea are also provided.
Mahesh Swamy, Tsuneo J. Kume and Noriyuki Takada, Yaskawa Electric America
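The resistor soft-charge scheme discussed above can be illustrated with a simple RC model (all component values here are assumed examples, not from the paper): the pre-charge resistor caps the initial current at V/R, the bus voltage follows a first-order exponential, and the bypass contactor typically closes once the bus is nearly charged, around five time constants.

```python
import math

def peak_inrush_a(v_dc, r_ohm):
    """Worst-case initial charging current into a fully discharged bus,
    limited by the pre-charge resistor."""
    return v_dc / r_ohm

def precharge_voltage(v_dc, r_ohm, c_farad, t_s):
    """DC bus capacitor voltage during resistor soft charge:
    v(t) = V * (1 - exp(-t / (R*C)))."""
    return v_dc * (1.0 - math.exp(-t_s / (r_ohm * c_farad)))

# Assumed example: 650 V bus, 20 ohm pre-charge resistor, 4700 uF bus
tau = 20.0 * 4700e-6                     # time constant, about 94 ms
print(peak_inrush_a(650.0, 20.0))        # current capped at 32.5 A
print(precharge_voltage(650.0, 20.0, 4700e-6, 5 * tau))  # >99% of 650 V
```

Without the resistor, the inrush is limited only by source and wiring impedance, which is exactly the transient concern during brown-out recovery that motivates the non-mechanical alternatives evaluated in the paper.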