Meeting the next great disruptive challenge of the 21st century.
Since the Industrial Revolution, our society has been driven by an ever-increasing pace of change in business and technology. Every decade or two we have faced a new and disruptive event that challenges business and creates opportunities: the locomotive, the electric light, the automobile, the airplane, the television and the computer, to name a few.
But the greatest disruptive event of the next 20 years may come not from a single invention, but from the world around us: climate change.
How your business responds to the climate challenge can either differentiate you from the competition and launch new and successful products, or make you the focus of consumer backlash and eroding margins.
This paper will explore the environment as a disruptive force in business, examine the consequences of inaction, and propose the benefits of a proactive environmental policy. It will describe increasing levels of investment that a small company, an enterprise or an industry can make to address the challenge and develop a business case. The paper ends with a concrete roadmap to lead you from today's "business as usual" to a long-term sustainable approach to growing a Green corporation.
After reading this paper, business leaders in every industry will have an understanding of how the environment will impact their business, how to make changes to mitigate the negative impacts and how to explore business opportunities in this new and exciting sustainable world.
Moore Industries believes it is vitally important to have third-party SIS evaluation for plant safety provided by a company with global coverage and reputation. Earlier designs for process control and safety systems typically used "good engineering practices and experience" as their guidelines. As safety awareness grew, new standards began to emerge. International standards such as IEC 61508/61511 and U.S.-born standards like ANSI/ISA84 require the use of more sophisticated guidelines for implementing safety. Unfortunately for manufacturers, compliance with IEC 61508 requires extensive documentation. In addition, more complex products require a greater depth of analysis. Software-based products such as those from Moore Industries are complex, with inherently programmable and flexible features, unlike previous-generation single-function analog circuits.
Some companies are actively attempting to bypass vital third-party certification by proclaiming self-certification to IEC 61508. This is not in the best interest of end users or the safety industry in general. Self-certification is analogous to proclaiming compliance with a hazardous-area approval (such as Intrinsic Safety) without third-party testing.
Moore Industries has worked for many years with customers who require products for safety systems, including those compliant with worldwide safety standards such as ANSI/ISA84 and IEC 61508/61511. To help customers determine whether its instruments are appropriate for specific safety systems, Moore Industries has been providing Failure Modes, Effects and Diagnostic Analysis (FMEDA) reports for key products and has been involved in the evolution of the IEC 61508 standard. As this standard has become more widely recognized and adopted by customers worldwide, it became clear that end users were looking for products designed to IEC 61508 from their initial concept. Customers are demanding not only compliance with the standards but also verification from an independent third-party agency such as TÜV Rheinland.
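The FMEDA reports mentioned above feed the quantitative metrics that IEC 61508 is built around. As an illustration only, a minimal sketch of how FMEDA failure-rate categories yield the Safe Failure Fraction and a simplified 1oo1 PFDavg; the failure rates below are invented placeholders, not data for any Moore Industries product:

```python
# Illustrative sketch: FMEDA failure-rate categories -> IEC 61508 metrics.
# All numbers below are made up for demonstration purposes.

def safe_failure_fraction(l_sd, l_su, l_dd, l_du):
    """SFF: the fraction of failures that are safe or detected dangerous."""
    total = l_sd + l_su + l_dd + l_du
    return (l_sd + l_su + l_dd) / total

def pfd_avg_1oo1(l_du, proof_test_interval_h):
    """Average probability of failure on demand for a 1oo1 architecture,
    using the simplified low-demand approximation PFDavg ~ lambda_DU * TI / 2."""
    return l_du * proof_test_interval_h / 2.0

# Example FMEDA-style failure rates, in failures per hour:
l_sd, l_su, l_dd, l_du = 2e-7, 1e-7, 5e-7, 5e-8

sff = safe_failure_fraction(l_sd, l_su, l_dd, l_du)
pfd = pfd_avg_1oo1(l_du, proof_test_interval_h=8760)  # annual proof test

print(f"SFF = {sff:.1%}")      # ~94% for these made-up rates
print(f"PFDavg = {pfd:.2e}")
```

The point of third-party certification is precisely that an independent agency, rather than the manufacturer, verifies the failure-rate data and analysis behind numbers like these.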
Today's control system engineers face competing design demands: increase embedded system performance and functionality, without sacrificing quality or breaking the budget. It is difficult to meet these challenges using traditional design and verification approaches.
Without simulation it is impossible to verify a control design until late in the development process, when hardware prototypes become available. This is not an insurmountable problem for simpler designs with predictable system behavior, because there are fewer sources of error in simpler control algorithms, and those errors can often be resolved by tuning the controller on the hardware prototype.
Today's multidomain designs combine mechanical, electrical, hydraulic, control, and embedded software components. For these systems, it is no longer practical to delay verification until late in the development process. As system complexity grows, the potential for errors and suboptimal designs increases. These problems are easiest to address when they are identified early in the development process. When design problems are discovered late, they are often expensive to correct and require time-consuming hardware fixes. In some cases the hardware simply cannot be changed late in the development process, resulting in a product that fails to meet its original specifications.
Traditional verification methods are also inadequate for testing all corner cases in a design. For some control applications, it is impractical or unsafe to test the full operating envelope of the system on hardware.
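To make the simulation argument concrete, here is a minimal sketch of verifying a controller entirely in software, before any hardware prototype exists. The plant model, gains, and criteria are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: verify a PI controller against a first-order plant
# model in simulation. All parameters are assumptions for demonstration.
# Plant: dy/dt = (-y + K*u) / tau, discretized with explicit Euler steps.

def simulate_pi(kp, ki, setpoint=1.0, K=2.0, tau=0.5, dt=0.001, t_end=5.0):
    y, integral = 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (-y + K * u) / tau     # Euler update of the plant state
        ys.append(y)
    return ys

ys = simulate_pi(kp=1.0, ki=2.0)
print(f"final value = {ys[-1]:.3f}, peak = {max(ys):.3f}")
```

Running sweeps like this over gain values and operating conditions is how corner cases that would be impractical or unsafe to exercise on hardware can still be checked.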
Arkadiy Turevskiy, Technical Marketing Manager, The MathWorks
Real-time performance monitoring to identify poorly performing loops has become an integral part of preventive maintenance. Rising energy costs and increasing demand for improved product quality are among the driving forces. Automatic process control solutions that incorporate real-time monitoring and performance analysis are fulfilling this market need. While many software solutions display performance metrics, it is important to understand the purpose and limitations of the various performance assessment techniques, since each metric conveys very specific information about the nature of the process.
This paper reviews performance measures from simple statistics to complicated model-based performance criteria. By understanding the underlying concepts of the various techniques, readers will gain an understanding of the proper use of performance criteria. Basic algorithms for computing performance measures are presented using example data sets. An evaluation of techniques with tips and suggestions provides readers with guidance for interpreting the results.
Over the past two decades, process control performance monitoring software has become an important tool in the control engineer's toolbox. Still, the number of performance tests and statistics that can be calculated for any given control loop can be overwhelming. The problem with controller performance monitoring is not the lack of techniques and methods. Rather, the problem is the lack of guidance as to how to turn statistics into meaningful and actionable information that can be applied to improve performance.
The performance analysis techniques discussed in this paper are separated into three sections. The first section details methods for identifying process characteristics using batches of existing data. The second section outlines methods used for real-time or dynamic analysis of streaming process data. These are vital techniques for the timely identification and interpretation of changing process behavior and deteriorating loop performance. The third section outlines techniques that aid in the identification of interacting control loops.
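As a taste of the batch-analysis methods in the first section, a minimal sketch of simple statistics plus a Harris-style variance ratio computed on a batch of controller-error samples; the data and benchmark variance here are invented for illustration:

```python
# Illustrative sketch: basic loop-performance statistics on a batch of
# PV-minus-setpoint error samples. Data and benchmark are made up.
import statistics

def loop_stats(errors, benchmark_var=None):
    """Return simple performance measures for a batch of controller errors.
    benchmark_var: an assumed minimum achievable variance (e.g. from a
    minimum-variance analysis); if given, a Harris-style index is included."""
    stats = {
        "mean_error": statistics.fmean(errors),           # bias / offset
        "std_dev": statistics.pstdev(errors),             # variability
        "mae": statistics.fmean(abs(e) for e in errors),  # average magnitude
    }
    if benchmark_var is not None:
        # Harris-style index: 1.0 means at the benchmark; larger is worse.
        stats["perf_index"] = statistics.pvariance(errors) / benchmark_var
    return stats

errors = [0.2, -0.1, 0.4, 0.0, -0.3, 0.1, 0.2, -0.2]
print(loop_stats(errors, benchmark_var=0.02))
```

Each of these numbers answers a different question (is the loop biased, how variable is it, how far from ideal is it), which is exactly why the paper stresses matching the metric to the question being asked.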
Robert C. Rice, PhD, Control Station, Inc.; Rachelle R. Jyringi, Department of Chemical Engineering, University of Connecticut; and Douglas J. Cooper, PhD, Control Station, Inc.
Applying virtualization technology to open industrial control systems reduces lifecycle costs and improves manageability. Virtualization helps reduce hardware and operating system (OS) changes, improve computer platform resource utilization and makes the system easier to maintain. Read this white paper to learn more.
This white paper discusses the "Smart Redundancy" capability of GE's revolutionary Quad PAC solution, which includes a patent-pending algorithm that continually calculates relative system availability in real time and delivers predictive analysis to maintain maximum system availability.
Although multivariable control is now a well-established technology, new applications are still being found on which to apply it. In this paper, details will be presented on how Honeywell's Profit Controller was found to be particularly applicable to the offshore production process.
This paper discusses how portable data logging technology can be used to measure, record, and document the performance of geothermal heat pumps, and provides specific case study examples of how the technology is being applied in geothermal system monitoring applications.
In today's pharmaceutical and biotechnology manufacturing environments, compliance has taken on new meaning. It once implied a system of warnings that required attention. Today, the Food and Drug Administration (FDA) is demanding a new focus on compliance. Recent headlines reveal continuing industry problems and new efforts by the FDA to reduce them. And when you look beneath the regulation jargon, there are new opportunities to improve manufacturing efficiencies as well as compliance in ways that benefit the bottom line rather than cut into it. This paper explores the recent regulatory and industry changes that place new demands on manufacturers today. Then, it looks at newer technology solutions that exist to help navigate the current and future needs of life sciences manufacturing.
It's now time to upgrade to a new HART Communicator. Your old handheld HART Communicator is obsolete and receives limited support. You shop around and find that a new handheld HART Communicator costs between $3,000 and $7,000. A Google search reveals a PC-based alternative. Will the PC alternative perform as required? What should you look for?
The PC-based HART Communicator has been around for many years, but until recently it could not replace the handheld HART Communicator. The main reason is that it could not communicate at the DD level with all the devices in the DD library. Recent developments have eliminated that problem, and now is a good time to review the capabilities of a PC-based HART Communicator.
The Effect on the 10/100 Industrial Ethernet Switch Performance.
The Anixter Infrastructure Solutions Lab wanted to determine what effect the new TIA-1005 industrial cabling infrastructure standard would have on the data throughput performance of real Ethernet data packets running between SmartBits test cards and various manufacturers' 10/100 Ethernet switches in a real-world simulation. The test included five different IP20-rated switches and three different enterprise rack-mounted switches using various cabling channels made from both Category 5e and Category 6 cabling components and connector pairs that are allowable under the standard. The premise also asserts that the effect of cabling channel interference will vary from port to port and switch to switch because of variable transmitter and receiver functionality.
Reducing the carbon footprint brought on by plant inefficiencies, with the goals of cutting plant costs, achieving energy efficiency and security, and abating greenhouse gases (GHGs), is a challenge plants face today. Download the paper now for energy efficiency solutions, or visit the TheOptimizedPlant.com Knowledge Center for more white papers and case studies on reducing costs and increasing efficiency.
The Need for Wireless Monitoring: An Overview
There is a real, ongoing need for monitoring of valve positions (actuated or manual) in the process line. A malfunctioning valve can endanger human health and safety, affect yields, and generate environmental risks. In some industries, regulation requires constant recording of valve position. Currently, such monitoring is done through wired switch boxes. Each such device requires data transmission and power cabling. Not only are these cables costly to manufacture and install, they are also one of the most frequent sources of failures in the process line, because they are very often exposed to harsh environmental conditions. In fact, it is right here, at the field device level, where the majority of problems with wires really exist.
Israel Radomsky, CEO and Founder, Eltav Wireless Monitoring Ltd., Israel
A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous devices that use sensors to monitor physical or environmental conditions. These autonomous devices, or nodes, combine with routers and a gateway to create a typical WSN system. The distributed measurement nodes communicate wirelessly to a central gateway, which provides a connection to the wired world where you can collect, process, analyze and present your measurement data.
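The node-to-gateway pattern described above can be sketched in a few lines. In this toy model, threads stand in for measurement nodes and a queue stands in for the wireless link; all names and values are illustrative, not from any particular WSN product:

```python
# Illustrative sketch: distributed sensor nodes transmitting readings to a
# central gateway. A queue stands in for the radio link; values are made up.
import queue
import random
import threading

link = queue.Queue()  # stands in for the wireless link to the gateway

def sensor_node(node_id, samples):
    """A measurement node: sample a condition and transmit it."""
    for _ in range(samples):
        reading = round(random.uniform(20.0, 25.0), 2)  # e.g. temperature, C
        link.put((node_id, reading))

def gateway(expected):
    """The gateway: collect readings and hand them to the wired side."""
    return [link.get() for _ in range(expected)]

nodes = [threading.Thread(target=sensor_node, args=(i, 3)) for i in range(4)]
for t in nodes:
    t.start()
data = gateway(expected=12)
for t in nodes:
    t.join()
print(f"gateway received {len(data)} readings "
      f"from {len({n for n, _ in data})} nodes")
```

In a real WSN the link would of course be a radio protocol with routing, acknowledgment, and power management, but the collect-at-the-gateway data flow is the same.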
Automation systems today have become remarkable warehouses of knowledge and information. Beyond just system configuration, many years of effort are inevitably invested in these systems not only by control engineers, but by operations, process, maintenance, business and management personnel as well. In fact, over the life of an automation system the total intellectual investment will come to exceed the initial hardware and software cost many times over.
This paper will discuss some of the factors contributing to the impending process industry automation knowledge crisis, present real-life industry examples and provide a proven solution to mitigate the problems.
Industrial application developers have had two main options for interacting with production processes via programmable logic controllers (PLCs): they can buy a preprogrammed, monolithic, shrink-wrapped human machine interface (HMI), complete and ready to go, or they can build their own custom solutions.
Shrink-wrapped HMI software packages are appealing because many complex tasks are hidden from you. Purchase the development software from an authorized distributor, load it onto your development PC, and then configure, debug and test. Then, just deploy the necessary runtime applications, data servers and configuration files onto your target PC or PCs. What could be easier?
But cookie-cutter HMI software solutions might not necessarily be the best or most practical approach for your specific industrial applications.
For one thing, while shrink-wrapped HMI software packages enable connections to other vendors' devices, software, and systems via OPC or other standards, such connectivity is seldom adequate for high security or real-time control. And no matter how advanced the integration technology the package uses, you will end up lagging behind the technology curve. For example, if you had bought a package using the Distributed Component Object Model (DCOM) and wanted to benefit from advances in security and robustness that Microsoft had made since you bought the package, you would have to buy a new package. Moreover, the monolithic nature of shrink-wrapped offerings often makes it difficult to embed third-party capabilities directly into your solution, further limiting your options.
Then there's training. Because the development environment and behavior of each HMI vendor's software vary, you'll need to acquire specialized skills to accomplish similar tasks. Training courses, material costs and schedules also vary by HMI publisher and are often offered only through exclusive distributor channels. You could consider hiring outside help, but because of the specialized training and experience required, the talent pool can be relatively shallow and therefore proportionately expensive.
And for many, the cost of multiple deployments is an even bigger issue. Before you can actually deploy your solution to PCs, portable devices, or Web servers, you typically have to pay for additional runtime software licenses. If you have more than a couple of users, this can amount to a considerable expense, often making this approach cost-prohibitive, especially if you are paying for more functionality than you actually need.
Finally, there are the intangibles. As well designed and flexible as these shrink-wrapped solutions might be, they almost always force compromises that would not be necessary if the solution were custom built for your specific applications. Whether that is a matter of function or just pride, it can be a significant factor in determining your satisfaction with the resulting interface.
Ethernet for industrial communications is growing rapidly in factory automation, process control and SCADA systems. The ODVA EtherNet/IP network standard is gaining popularity as a preferred industrial protocol. Plant engineers are recognizing the significant advantages that Ethernet-enabled devices provide such as ease of connectivity, high performance and cost savings. While EtherNet/IP has many advantages, cable installation is often expensive, and communications to remote sites or moving platforms may not be reliable or cost-effective.
Wireless Ethernet technologies have emerged that can now reliably reduce network costs while improving plant production. However, applying these technologies is not a simple matter as industrial Ethernet systems vary greatly in terms of bandwidth requirements, response times and data transmission characteristics. This paper will explore applying wireless technologies to EtherNet/IP based networks for industrial automation systems.
Statement for the Record, July 21, 2009 Hearing before the Subcommittee on Emerging Threats, Cybersecurity, Science and Technology.
I appreciate the opportunity to provide the following statement for the record. I have spent more than thirty-five years working in the commercial power industry designing, developing, implementing, and analyzing industrial instrumentation and control systems. I hold two patents on industrial control systems, and am a Fellow of the International Society of Automation. I have performed cyber security vulnerability assessments of power plants, substations, electric utility control centers, and water systems. I am a member of many groups working to improve the reliability and availability of critical infrastructures and their control systems.
On October 17, 2007, I testified to this Subcommittee on "Control Systems Cyber Security: The Need for Appropriate Regulations to Assure the Cyber Security of the Electric Grid."
On March 19, 2009, I testified to the Senate Committee on Commerce, Science, and Transportation on "Control Systems Cyber Security: The Current Status of Cyber Security of Critical Infrastructures."
I will provide an update on cyber security of the electric system including adequacy of the NERC CIPs and my views on Smart Grid cyber security. I will also provide my recommendations for DOE, DHS, and Congressional action to help secure the electric grid from cyber incidents.
Joe Weiss, PE, CISM. Applied Control Solutions, LLC
This whitepaper provides the history of the Six Sigma symbol and explanations of the Six Sigma concept, implementation, calculation and more. Download this paper now.
Product variation and defects undercut customer loyalty as well as company profits. Six Sigma is a rigorous, disciplined, data-driven methodology that was developed to enhance product quality and company profitability by improving manufacturing and business processes.
Six Sigma uses statistical analysis to quantitatively measure how a process is performing. That process can involve manufacturing, business practices, products, or service. To be defined as Six Sigma means that the process does not produce more than 3.4 defects per million opportunities (DPMO), which translates to 99.9997% efficiency.
A Six Sigma defect is considered anything that can cause customer dissatisfaction, such as being outside of customer specifications. A Six Sigma opportunity is the total number of chances for a defect to occur.
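The DPMO arithmetic defined above is simple to put in code. A minimal sketch, with invented inspection numbers for illustration:

```python
# Illustrative sketch: the standard DPMO calculation, using made-up
# inspection counts (17 defects across 5,000 units, 10 opportunities each).

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def yield_pct(dpmo_value):
    """Share of opportunities that are free of defects, as a percentage."""
    return 100.0 * (1 - dpmo_value / 1_000_000)

d = dpmo(defects=17, units=5000, opportunities_per_unit=10)
print(f"DPMO = {d:.1f}, yield = {yield_pct(d):.4f}%")
# The Six Sigma target is DPMO <= 3.4, i.e. a yield of at least 99.99966%.
```

Here 17 defects over 50,000 total opportunities gives 340 DPMO, well short of the 3.4 DPMO that defines Six Sigma performance.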
Six Sigma Concept
The Six Sigma concept was developed by Motorola in 1986 with the stated goal of improving manufacturing processes and reducing product defects and variation. The underlying goal was to achieve near-perfect quality, with 99.9997% of variable values within specifications.