We're about to acquire a significant new cybervulnerability. The world's energy utilities are starting to install hundreds of millions of 'smart meters' which contain a remote off switch. Its main purpose is to ensure that customers who default on their payments can be switched remotely to a prepay tariff; secondary purposes include supporting interruptible tariffs and implementing rolling power cuts at times of supply shortage. The off switch creates information security problems of a kind, and on a scale, that the energy companies have not had to face before. From the viewpoint of a cyber attacker - whether a hostile government agency, a terrorist organization or even a militant environmental group - the ideal attack on a target country is to interrupt its citizens' electricity supply. This is the cyber equivalent of a nuclear strike; when electricity stops, then pretty soon everything else does too. Until now, the only plausible ways to do that involved attacks on critical generation, transmission and distribution assets, which are increasingly well defended. Smart meters change the game. The combination of commands that will cause meters to interrupt the supply, of applets and software upgrades that run in the meters, and of cryptographic keys that are used to authenticate these commands and software changes, creates a new strategic vulnerability, which we discuss in this paper.
A Comprehensive Plant Crew Training Solution Improving Process Reliability and Safety
One of the key challenges that capital-intensive industries will face over the next five years is replacing the gray-haired workforce with the computer-savvy/gaming generation. High-fidelity operator training simulators that represent the production process, control system and the control room interface have proved to be very effective for control room operations training. However, for the remaining 50% of the plant start-up procedures that are executed in the field, no fully interactive training environment has been available - until now.
Industries like oil and gas, refining and power companies need to institutionalize their workforce knowledge in more efficient and effective ways. Leveraging Virtual Reality (VR) models to improve time-to-competency in critical areas like safety, environmental protection systems, knowledge, performance training, and reliability provides a vehicle to rapidly train the new workforce in ways that align with their interests and skills.
With continuing advances in hardware and software techniques, VR is accessible today as the best aid to multimedia training, process design, maintenance, safety and other activities that are currently based on conventional 2-Dimensional (2-D) equipment views.
Real-time rendering of equipment views puts demands on processor time, even as the use of high-fidelity simulators becomes more and more of a standard in process understanding and training. In many past commercial VR projects, the results were either unrealistically slow or oversimplified, to the detriment of the solution's effectiveness. As the technology has continued to develop, these issues have been eliminated, giving way to a new process simulation era based on commercially standard IT hardware.
IVRP (Immersive Virtual Reality Plant) now provides a large range of effective multimedia aids that are easily and economically accessible to support design, training, maintenance or safety in the process industry by linking the power of dynamic simulation - DYNSYM - to VR applications and tools.
Invensys has filed patents for the solution outlined in this paper.
Invensys, Maurizio Rovaglio, Tobias Scheele and Norbert Jung
Most companies are gathering trillions of bytes of data, day after day, at no small cost, and then doing very little with it. Worse still, the data often is not serving its primary function very cost-effectively.
The "culprit," so to speak, is video surveillance data, the information captured by the video cameras that are used throughout most modern facilities.
But the situation is changing rapidly, thanks to an application called Video Analytics. This white paper looks at the new software technology, and how it can be used to leverage video data for better security and business performance.
This application note describes how to use the Tofino Industrial Security Solution to prevent the spread of the Stuxnet worm in both Siemens and non-Siemens network environments.
What is Stuxnet?
Stuxnet is a computer worm designed to target one or more industrial systems that use Siemens PLCs. The objective of this malware appears to be to destroy specific industrial processes.
Stuxnet will infect Windows-based computers on any control or SCADA system, regardless of whether or not it is a Siemens system. The worm only attempts to make modifications to controllers that are model S7-300 or S7-400 PLCs. However, it is aggressive on all networks and can negatively affect any control system. Infected computers may also be used as a launch point for future attacks.
How Stuxnet Spreads
Stuxnet is one of the most complex and carefully engineered worms ever seen. It takes advantage of at least four previously unknown vulnerabilities, has multiple propagation processes and shows considerable sophistication in its exploitation of Siemens control systems.
A key challenge in preventing Stuxnet infections is the large variety of techniques it uses for infecting other computers. It has three primary pathways for spreading to new victims:
- via infected removable USB drives;
- via Local Area Network communications;
- via infected Siemens project files.
Within these pathways, it takes advantage of seven independent mechanisms to spread to other computers.
Stuxnet also has a P2P (peer-to-peer) networking system that automatically updates all installations of the Stuxnet worm in the wild, even if they cannot connect back to the Internet. Finally, it has an Internet-based command and control mechanism that is currently disabled, but could be reactivated in the future.
When it comes to accurately measuring the flow of liquid or gas, your flowmeter is only as accurate as the equipment it is calibrated on. And in the age of ISO 9001, ISO/IEC 17025, ANSI Z540 and other strict quality standards, this fact is becoming increasingly important.
Test and measurement applications depend on repeatable flow measurements, which provide performance criteria of the instrument being tested.
These devices often play a critical role on aircraft, placing greater demand on accurate flow test measurement for fuel consumption or hydraulic actuator controls.
Industrial operations live and die by the repeatability of process conditions. It is not enough for an individual flow-metering instrument to perform in a consistent manner, day in and day out; measurements must also be replicated. Multiple devices running on the same process - in different physical locations - must perform the same under identical conditions. This is only achieved through repeatable calibration equipment that is traceable to government metrology laboratories such as NIST.
For industrial operations, inaccurate flowmeter calibrations can have a serious impact on plant performance, ultimately resulting in poor yields or compromised quality. Therefore, periodic flowmeter calibration must be part of the user's quality process.
Mubeen Almoustafa, Calibration Application Engineer, Flow Dynamics, Inc.
The date of January 1, 2005 sits vividly in the minds of manufacturers within the industrial control panel field, because that is the day the National Fire Protection Association's (NFPA) National Electrical Code (NEC) 2005 Article 409 officially went into effect. The code required that the short-circuit current rating be clearly marked on industrial control panels so that they could be inspected and approved. The markings made it easier to verify proper over-current protection against hazards such as fires and shocks on components or equipment, whether for initial installation or relocation. It was the beginning of an era when things would become a little more complicated, but for all the right reasons of ensuring more safety within the industrial world.
The main vision of the NFPA is to reduce or limit the burden of fire and other hazards on the quality of life by providing and advocating scientifically based consensus codes and standards, research, training and education. These codes and standards were established to minimize the possibility and effects of fire and other risks. Because of misinterpretations, inconsistencies and advancements in technology over the years, the NFPA has had to update its codes regularly to keep them consistent with existing standards.
This paper will therefore focus on the changes that occurred due to Article 409, the impact it had, who was affected by the code and how to comply with it. Similar precautions had been enforced in the past, but they were too vague, so people found ways to get around them.
The biggest change within the article was the new requirements adopted for industrial machinery electrical panels, industrial control panels, some HVAC equipment, meter disconnect switches and various motor controllers. For the purpose of this paper, we will concentrate on industrial control panels, which are specified as assemblies rated for 600V or less and intended for general use. The article states that the above products must feature a safe design and be clearly marked with specific information concerning Short Circuit Current Rating (SCCR) to aid in the design, building, installation and inspection of the control panels. This way, users can both reference and apply all the needed requirements for new products and installations as well as for modifying existing ones.
Registration, Evaluation, Authorization and Restriction of Chemical Substances
It is certainly no secret that the past decade has placed a renewed focus on the environment and how all members of the world community, including business organizations, affect it. Concerns about protecting the world in which we live have been the impetus behind such worldwide movements as recycling and renewable energy. From a manufacturing standpoint, RoHS (Restriction of Hazardous Substances) has impacted businesses, as has REACH, a more recent set of regulations that is becoming more significant to North American based manufacturing operations that are part of a supply chain directly or indirectly supplying products into the European Union.
As with any new regulatory requirements, the initial exposure to the documentation can create a degree of uncertainty among those who will be asked to comply. From this perspective, REACH is no different from any of its predecessors. In an attempt to offer some understanding of the REACH regulations and some clarification of the requirements it places on manufacturers, C&M Corporation gathered Michael Karg, Director of Product Development, along with Randy Elliott, Regulatory Compliance Engineer, and Ariann Griffin, Regulatory Compliance Technician, to discuss some of the particulars of REACH and respond to some of the questions C&M has been discussing with members of its client base.
What is the purpose of REACH?
Mike Levesque, Randy Elliott, Ariann Griffin and Michael Karg, C&M Corporation
NFPA-79 is the electrical standard that has been developed by the National Fire Protection Association (NFPA) and is "intended to minimize the potential hazard of electrical shock and electrical fire hazards of industrial metalworking machine tools, woodworking machinery, plastics machinery and mass produced equipment, not portable by hand."
The National Fire Protection Association is also responsible for the National Electrical Code (NEC, NFPA-70).
The scope of NFPA-79 is summarized as follows: "The standard shall apply to the electrical/electronic equipment, apparatus, or systems of industrial machines operating from a nominal voltage of 600 volts or less, and commencing at the point of connection of the supply to the electrical equipment to the machine."
One of the focuses of the latest edition is to improve product safety by ensuring that appropriate types of wire and cable are used in the application with regard to current carrying capacity, temperature rating, or flammability.
As such, the guidelines for NFPA-79 compliant products are more stringent than those of past editions.
The NFPA-79 provisions make specific reference to only two types of cable.
We can tune PID controllers, but what about tuning the operator?
The purpose of tuning loops is to reduce errors and thus provide more efficient operation that returns quickly to steady-state efficiency after upsets, errors or changes in load. State-of-the-art manufacturers in process and discrete industries have invested in advanced control software, manufacturing execution software and modeling software to "tune" everything from control loops to supply chains, thus driving higher quality and productivity.
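The loop-tuning idea above can be sketched in a few lines of code: a discrete PID controller regulating a simple first-order process and recovering from a mid-run load upset. This is a minimal illustrative sketch; the process model, time constant and gain values below are assumptions chosen for the example, not values from any particular plant.

```python
# Minimal PID loop sketch: a first-order process y' = (-y + u + d)/tau
# under discrete PID control, hit by a load disturbance halfway through.
# All parameters (tau, gains, disturbance size) are illustrative assumptions.

def simulate(kp, ki, kd, steps=600, dt=0.1):
    """Return the process-variable trajectory under PID control."""
    tau = 5.0                                 # assumed process time constant
    sp = 1.0                                  # setpoint
    y, integral, prev_err = 0.0, 0.0, 0.0
    history = []
    for k in range(steps):
        d = -0.5 if k > steps // 2 else 0.0   # load upset at mid-run
        err = sp - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u + d) / tau          # Euler step of process dynamics
        history.append(y)
    return history

out = simulate(kp=4.0, ki=1.0, kd=0.5)
print(round(out[-1], 3))
```

With reasonable gains, the trajectory dips when the upset hits and then returns to the setpoint, which is exactly the "reduce errors and return quickly to steady state" behavior that tuning aims for; detuned gains would show a slower or more oscillatory recovery.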
The "forgotten loop" has been the operator, who is typically trained to "average" parameters to run adequately under most steady-state conditions. "Advanced tuning" of the operator could yield even better outputs, with higher quality, fewer errors and a wider response to fluctuating operating conditions. This paper explores the issue of improving operator actions, and a method for doing so.
Over the past decade we've spent, as an industry, billions of dollars and millions of man-hours automating our factories and plants. The solutions have included adding sensors, networks and software that can measure, analyze and either act or recommend action to help production get to "Six Sigma" efficiency. However, few, if any, plants are totally automated. Despite a continuing effort to remove personnel costs and drive repeatability through automation, all plants and factories have human operators. These important human assets are responsible for monitoring the control systems, either to act on system recommendations, or override automated actions if circumstances warrant.
Most of the time, operators let the system do what it was designed and programmed to do. Sometimes, operators make errors of commission, with causes ranging from misinterpretation of data to poor training, or errors of omission, attributable to lack of attention or slow response. An operator's job has often been described as hours of boredom interrupted by moments of sheer panic. What the operator does during panic situations often depends on how well he or she has been trained, or "tuned."
Industry professionals have been trying to achieve safe, smart, responsible, sustainable manufacturing for at least the past 20 years, but why have they failed?
There are serious challenges to overcome in order to achieve smart manufacturing, including economic instability, a changing workforce, the need for greater-than-incremental increases in productivity, pressure to minimize environmental impacts and an increased focus on safety and accident risk.
Manufacturing ought to be safe, because working safely is more profitable and more economical. Manufacturing ought to be smart. The data that is being continuously generated by smart machines and transmitters must be translated into actionable information. Manufacturing ought to be responsible. Manufacturing ought to be sustainable. Energy and waste reduction savings go straight to the bottom line.
So what is smart manufacturing, and how do we get there? Download this presentation and find out how Walt Boyes defines smart manufacturing and what suggestions he gives to get there.
1. Support - A Competitive Weapon?
2. Sustaining Performance - a Dream or a Goal?
3. The End Result - Less than Optimal Performance
4. Sustain and Improve Operational Performance
5. Outside the Box - An Opportunity for Synergy
6. Support is More than "Just Insurance"
7. Proactively Preventing Problems is Better than Just Fixing Them
8. Support Programs - a Cost or a Benefit?
9. The Business Case for Support
10. "Side-Effects" Consideration
11. Invensys Experience
Bringing a production facility online is the result of a huge investment. It typically takes years of planning, design and construction before you finally move into operating the plant. Whether your operation has been running for 20 years or is about to start up, you face the ongoing challenge of achieving the highest returns possible on that investment. Once you have the initial bugs worked out and achieve the goals of targeted productivity, efficiency, quality and performance, how do you sustain high performance, or even improve it?
The goal of this paper is to present a perspective on Support Services as a tactical approach to not only sustain current Operations performance levels, but to continually improve them - and to be able to measure the ROI of an ongoing Support program.
Many industrial businesses and manufacturing operations were designed, implemented and operated around a set of basic assumptions that have served the industry well over the last century. For example, although it was expected that the values of process variables, such as flow, level, temperature and pressure, would naturally fluctuate in real time, business variables, such as production value, energy cost, and material cost were assumed to be fairly stable over long periods of time. It was also typically assumed that the production operations could effectively work independently from the business operations. Production operations would focus on making the products while business operations would focus on reporting results. This, in turn, led to a bottom-up business information flow perspective. Business information was used only for reporting results and only the required data from the operation had to be provided to the business reporting system. Often no business information flowed to the operations.
The traditional focus of industrial operations resulting from these assumptions has been on operational objectives, such as throughput and consumption of resources, as compared to business objectives. Typically, plants were designed to maximize production output, which proved to leave them without the agility necessary to meet market demands during economic downturns.
Finally, the labor mindset of the industry resulting from the workforce dynamics of the early industrial revolution is, for the most part, still very much part of the standard operational philosophy utilized in today's industry. A huge separation continues to exist between the professional and management staffs and the operations and maintenance staffs that comprise today's labor force. This separation was necessary during the formative period of the industrial revolution when the available labor force was unskilled and almost completely uneducated. Although today's "labor force" is fairly well educated and highly skilled in comparison, the professional and management teams still tend to work under the traditional assumptions. For example, the operator interfaces of most industrial automation systems have been designed around a philosophy called operations by exception. Essentially this means that operators are to do nothing that impacts the plant unless an exception condition, an alarm or event occurs that requires human intervention. Once the event is addressed, operators can go back to doing nothing. This philosophy was developed to protect the plant from the uneducated and unskilled operators.
For the most part, these traditional industrial assumptions have served the industry quite well up to this point. However, there are current changes underway that are beginning to show that these traditional assumptions will not be effective going forward.
Today we have clear guidelines on how the Safety Instrumented Systems (SIS) and basic Process Control Systems (BPCS) should be separated from a controls and network perspective. But what does this mean to the HMI and the control room design?
Where do Fire & Gas Systems fit into the big picture and what about new Security and Environmental monitoring tasks?
What does the Instrument Engineer need to know about operators and how systems communicate with them?
The evolution of the control room continues as Large Screen Displays provide a big picture view of multiple systems. Do rules and guidelines exist for this aspect of independent protection layers? What are today's best practices for bringing these islands of technology together?
This paper will review the topic and provide advice on a subject on which the books remain silent. Today's practices are haphazard and left to individuals without a systematic design or guidance.
Over the past 20 years the Safety System and the Automation system have been evolving separately. They use similar technologies, but the operator interface needs to be just one system. Unfortunately, due to the nature of the designs, this is not the case.
The automation system has been evolving since the introduction of the DCS and many Human Factor mistakes have been made. As we move towards new standards such as ISA SP 101 a more formal approach to HMI design is being taken.
The formerly widespread use of black backgrounds, which cause glare issues in the control room and are largely responsible for control room lights being turned down to very low levels, or in some cases off, is giving way to grey backgrounds and a new grayscale graphic standard that replaces bright colors with a plainer grayscale scheme, using color only to attract the operators' attention.
With strong compliance schemes that restrict color usage to just a handful of colors, reserving some colors for important information such as alarm status, the automation system is being standardized and is starting to take advantage of new technology available to control room designers, such as large screen displays.
Today, for a variety of reasons, tremendous pressures are building that will require plant managers to update their aging automation systems during the next decade. Defining the need for and exploring alternative approaches to this modernization of manufacturing systems is the subject of this report.
Managers in today's process manufacturing plants must react to factors ranging from mass customization and growing demand for change orders in the middle of production runs to management expectations mandating ever-faster execution of production orders.
Such constant pressures are driving many manufacturers to reevaluate the role of their automation strategies while improving the overall effectiveness of their enterprises. They're finding that automation is playing an increasingly important role in the effectiveness and profitability of their entire enterprise, impacting everything from cost of operations to customer satisfaction.
Fortunately, many are also discovering that they can make significant improvements throughout their value chain - without being forced to abandon their entire existing automation investment.
Two of the most popular architectures for improving regulatory performance and increasing profitability are 1) cascade control and 2) feed forward with feedback trim. Both architectures trade off additional complexity in the form of instrumentation and engineering time for a controller better able to reject the impact of disturbances on the measured process variable. These architectures neither benefit nor detract from set point tracking performance. This paper compares and contrasts the two architectures and links the benefits of improved disturbance rejection with reducing energy costs in addition to improved product quality and reduced equipment wear. A comparative example is presented using data from a jacketed reactor process.
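As a rough illustration of the disturbance-rejection benefit described above, the sketch below compares feedback-only PI control with feed forward plus feedback trim on a simple first-order process whose load disturbance is measurable. This is a hedged sketch under assumed process and tuning values, not the paper's jacketed reactor example, and it omits the cascade case for brevity.

```python
# Sketch: feedback-only PI vs. feed forward with feedback trim.
# Process: y' = (-y + u + d)/tau, where d is a measurable load disturbance.
# The model, gains and disturbance size are illustrative assumptions.

def run(use_feedforward, steps=400, dt=0.1):
    """Return the peak deviation from setpoint after a mid-run load step."""
    tau, sp = 3.0, 1.0
    kp, ki = 2.0, 0.8                         # assumed PI trim tuning
    y, integral = 0.0, 0.0
    worst = 0.0
    for k in range(steps):
        d = 0.6 if k > steps // 2 else 0.0    # measured load disturbance
        err = sp - y
        integral += err * dt
        u = kp * err + ki * integral          # feedback (trim) contribution
        if use_feedforward:
            u -= d                            # cancel the measured disturbance
        y += dt * (-y + u + d) / tau          # Euler step of process dynamics
        if k > steps // 2:
            worst = max(worst, abs(sp - y))   # peak deviation after the upset
    return worst

fb_only = run(False)
ff_trim = run(True)
print(ff_trim < fb_only)  # feed forward should cut the peak deviation
```

Feedback alone only reacts after the disturbance has already moved the measured process variable, so a transient deviation is unavoidable; the feedforward term acts on the measured disturbance directly and leaves the PI loop only residual trimming to do, which is the trade of extra instrumentation and engineering for better disturbance rejection that the paper describes.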
The cost per barrel of crude oil has risen dramatically, increasing the burden on process facilities for both quality and profitable production. Adjusted for inflation, the cost of oil averaged $19.61 from 1945 through 2003. October 2004 saw the per-barrel cost of oil rise to $55.67, a 70% increase over a 10-month timeframe that negatively impacted the profitability of companies across the process industries. According to the U.S. Department of Energy, 43% of all energy consumed by the average pulp and paper mill is production related. This percentage is small when compared to other industry segments such as chemicals (74%), glass (89%), and aluminum (93%). In all cases, the higher cost of energy suggests that all process companies need to examine ways of curbing energy consumption and unnecessary increases to their cost of goods sold. Improving disturbance rejection through cascade control or feed forward with feedback trim provides one way of achieving those objectives.
Improved disturbance rejection is linked to increased product quality and decreased equipment wear. These are important benefits, indeed. Consider the market value of high quality white paper produced by an average mill. On-spec production is sold at a premium of approximately $2,000 per ton whereas "seconds" are sold on the aftermarket at a discounted rate. Of the 6%-8% that fails to meet spec, only 2% is classified as "broke" and able to be re-pulped. Next, consider the investment in production facilities. With initial costs of $400-$500 million and annual maintenance budgets approaching 10%, mills must operate 24 x 7 in order to recoup the investment. Effective disturbance rejection provides a valuable means of achieving a return on those investments through increased quality and decreased equipment wear. Additionally, it offers significant value in terms of reduced energy consumption and lower cost of goods sold.
Robert C. Rice, PhD, & Douglas J. Cooper, PhD, Control Station, Inc.
Distributed Control Systems (DCS) have been successfully utilized to help control manufacturing and production processes since the late 1970s. The primary function of these systems has been the automatic feedback control of the various process loops across the plant and the human interface through which plant operators guide production from control rooms. Although these systems have proven very successful at improving the efficiency of industrial operations compared with earlier control technologies, the state of the art has not advanced significantly since their inception. Most plants still operate much as they did 40 years ago.
Considerable research and development has been invested in expanding the functionality of DCSs in the areas of advanced control and advanced manufacturing execution software. Numerous industrial plants have started to employ advanced controls in critical or high-value process operations, with some venturing into advanced application software packages, each typically designed to address a specific issue or challenge within the industrial operation. This level of operation, sitting between the automation and business levels, is often referred to as manufacturing execution software (MES), and the software at this level was typically developed by entrepreneurial software companies.
Although some industrial operations implemented advanced control and advanced MES software, the vast majority of processes are still controlled by simple automatic feedback control. The efficiency and effectiveness of most plants is a function of the installed feedback control systems. As a result, many industrial managers have expressed concerns that, in spite of the huge investments made in automation systems and software, plants do not appear to be operating better than they had been 30 years ago. In some cases, the plants actually appear to be operating less efficiently, possibly due to the reduced and inexperienced work forces and aging equipment.
Invensys, Peter G. Martin, PhD, Invensys Operations Management
2. IOM Real-Time Energy Management
3. Real-Time Energy Management as Part of an Enterprise approach
Over the last several years energy costs have more than doubled! In the process manufacturing industries, with energy costs often comprising as much as 80% of the overall variable cost of operating a plant, this has created a crisis. Many manufacturers have responded to this crisis with programs aimed at reducing the overall energy consumption of an operation or looking to alternate, lower cost fuels. Although these initiatives may provide a good starting point in the battle to reduce energy costs, they are not adequate to meet the needs of today's real time business environment.
Historically, the price of energy could often be dealt with as a constant over a prolonged time period. Large energy users could develop contracts with energy suppliers for 6 months or even a year that would effectively set the price of energy over that time period. Today long-term energy contracts are the exception. In most parts of the world the price of energy changes in real time.
It is essential that industrial companies manage their business in the time frame at which the business variables change; otherwise the business is completely out of control. When it comes to managing industrial energy, that time frame is real time, and real-time energy management is required.
When a business expands an existing facility, adds a new location, incorporates an influx of new users, or upgrades an existing infrastructure - it's vital to ensure network readiness and validate infrastructure changes to optimize network performance, minimize user downtime and reduce problems after implementation. This white paper describes a methodology to manage network changes that meets the need for speed of implementation without sacrificing accuracy.
Changes in business place demands on the network - and the network professionals who administer it - to expand and accommodate different users, additional users, remote locations and more. Situations driving this increased need to manage and validate infrastructure changes include:
- Mergers and acquisitions: The network established for 50 users must now accommodate 500.
- Business growth into a new wing or facilities: The current network must handle the increased load of new users, applications and infrastructure.
- New technologies: As part of a corporate-wide upgrade, a new technology must be validated for all users before implementation.
- Upgrading the network: When installing new infrastructure devices, the configuration must be validated as correct.
Regardless of what drives the change, one commonality is the need for rapid and accurate completion of the project. Too often, however, changes are reacted to rather than managed proactively, leading to future problems. In part, this is due to the need for fast deployment: All of these changes must happen as quickly as possible, so shortcuts are taken and steps skipped in the process. Accuracy suffers as a result. And ironically, both the network and IT staffs are slowed down because expanding or upgrading networks without upfront due diligence leads to time-consuming problems and troubleshooting later.