The Stuxnet worm is a sophisticated piece of computer malware designed to sabotage industrial processes controlled by Siemens SIMATIC WinCC, S7 and PCS 7 control systems. The worm used both known and previously unknown vulnerabilities to spread, and was powerful enough to evade state-of-the-practice security technologies and procedures.
Since the discovery of the Stuxnet worm in July 2010, there has been extensive analysis by Symantec, ESET, Langner and others of the worm's internal workings and the various vulnerabilities it exploits. From the antivirus point of view, this makes perfect sense: understanding how the worm was designed helps antivirus vendors make better malware detection software.
What has not been discussed in any depth is how the worm might have migrated from the outside world to a supposedly isolated and secure industrial control system (ICS). To the owners and operators of industrial control systems, this matters. Other worms will follow in Stuxnet's footsteps and understanding the routes that a directed worm takes as it targets an ICS is critical if these vulnerable pathways are to be closed. Only by understanding the full array of threats and pathways into a SCADA or control network can critical processes be made truly secure.
It is easy to imagine a trivial scenario and a corresponding trivial solution:

Scenario: Joe finds a USB flash drive in the parking lot and brings it into the control room, where he plugs it into the PLC programming station.

Solution: Ban all USB flash drives in the control room.
While this may be a possibility, it is far more likely that Stuxnet travelled a circuitous path to its final victim. Certainly, the designers of the worm expected it to - they designed at least seven different propagation techniques for Stuxnet to use. Thus, a more realistic analysis of penetration and infection pathways is needed.
This White Paper is intended to address this gap by analyzing a range of potential "infection pathways" in a typical ICS system. Some of these are obvious, but others less so. By shedding light on the multitude of infection pathways, we hope that the designers and operators of industrial facilities can take the appropriate steps to make control systems much more secure from all threats.
Flameproof enclosure (Ex d) and intrinsic safety (Ex i) are very common equipment protection methods in process automation. One reason to use Ex d has been the amount of energy that could not be provided via Ex i. This disadvantage has disappeared with the introduction of intrinsically safe, dynamic arc-prevention methods such as DART or Power-i. This white paper shows that when using intrinsic safety, installation, maintenance and inspection costs are reduced.
This paper addresses decision makers and professionals responsible for automation systems in hazardous areas. A good understanding of the principles of explosion protection is required.
This report describes a framework for a proposed path forward for Smart Manufacturing in a number of priority areas. The report reflects the views of a national cross-section of industry leaders involved in planning the future of the process industries, vendors supplying technology solutions for manufacturing operations, and academic researchers engaged in a range of associated systems research. The report is based on information generated during the workshop on Implementing 21st Century Smart Manufacturing held in Washington, D.C. in September 2010, and from subsequent discussions among members of the Smart Manufacturing Leadership Coalition. A complete list of participants who contributed their valuable ideas at the workshop is shown on the facing page.
21st Century Smart Manufacturing applies information and manufacturing intelligence to integrate the voice, demands and intelligence of the 'customer' throughout the entire manufacturing supply chain. This enables a coordinated and performance-oriented manufacturing enterprise that quickly responds to the customer and minimizes energy and material usage while maximizing environmental sustainability, health and safety, and economic competitiveness. Innovations that allow diverse devices, machines, and equipment to communicate seamlessly are opening the door for much wider use of system simulation and optimization software in the operation and control of advanced manufacturing systems. Today, smart tools and systems that both generate and use greater amounts of data and information are being used to innovate, plan, design, build, operate, maintain, and manage industrial facilities and systems in dynamic ways that significantly increase efficiency, reduce waste, and improve competitiveness.
While industry is making progress in developing and using smart manufacturing, the infrastructure and capabilities needed to deliver the full potential of this knowledge-based manufacturing environment have yet to be developed. Challenges include incorporating and integrating customer intelligence and demand dynamics and the needs for greater affordability, operator usability, protection of proprietary data, systems interoperability, and cyber security.
To identify and prioritize the actions needed to overcome some of these challenges in smart manufacturing, a workshop on Implementing 21st Century Smart Manufacturing was held in Washington, D.C., on September 14-15, 2010.
This paper summarizes Sigurd Skogestad's struggles in the plantwide control field.
A chemical plant may have thousands of measurements and control loops. By the term plantwide control it is not meant the tuning and behavior of each of these loops, but rather the control philosophy of the overall plant with emphasis on the structural decisions. In practice, the control system is usually divided into several layers, separated by time scale.
My interest in the field of plantwide control dates back to 1983, when I started my PhD work at Caltech. As an application, I worked on distillation column control, which is an excellent example of a plantwide control problem. I was inspired by Greg Shinskey's book on distillation control, which came out with a second edition in 1984 (Shinskey, 1984). In particular, I liked his systematic procedure, which involved computing the steady-state relative gain array (RGA) for 12 different control structures ("configurations"): the DV-configuration, LV-configuration, ratio configuration, and so on. However, when I looked at the procedure in more detail, I discovered that its theoretical basis was weak. First, it did not actually include all structures, and it even eliminated the DB-configuration as "impossible" even though it is workable in practice (Luyben, 1989). Second, controllability theory tells us that the steady-state RGA by itself is actually not useful, except that one should avoid pairing on negative gains. Third, the procedure focused on dual composition control, while in practice one often uses only single-end control, for example because it may be economically optimal to use maximum heating to maximize the recovery of the valuable product.
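The steady-state RGA that Shinskey's procedure relies on is computed directly from a plant's steady-state gain matrix. A minimal sketch in Python (the 2x2 gain matrix below is an illustrative distillation-type example, not taken from any specific column in the text):

```python
import numpy as np

def rga(G):
    """Relative gain array: Lambda = G (element-wise *) inverse(G) transposed."""
    G = np.asarray(G, dtype=float)
    return G * np.linalg.inv(G).T

# Illustrative steady-state gain matrix for a two-point composition
# control problem (numbers chosen for demonstration only).
G = np.array([[0.878, -0.864],
              [1.082, -1.096]])

Lam = rga(G)
# Rows and columns of the RGA each sum to 1; per the rule cited in the
# text, pairings on negative relative gains should be avoided.
```

For this ill-conditioned example the diagonal relative gains are large (around 35), which is exactly the kind of information the steady-state RGA conveys, and also why, as noted above, it cannot by itself justify a control structure.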
Sigurd Skogestad, Norwegian University of Science and Technology (NTNU)
The most popular rendition of Murphy's Law is, "What can go wrong will, and at the worst possible time." In today's automation world, we are building ever more complicated automation and management systems, designed to eke the last bit of quality and production performance out of our processes. We are creating fertile ground for future Murphy's Law crops. Minimizing these risks from all perspectives - Automation Vendor, System Integrator, and End User - is essential to create solutions that degrade gracefully and minimize downtime.
Why am I writing about this now? Two words - hard drive. As Baz Luhrmann once said in a commencement speech turned into the "Sun Tan Song," "The real troubles in your life are apt to be things that never crossed your worried mind; the kind that blindside you at 4pm on some idle Tuesday." OK, in my case it was 3:15 on a Monday, and I had to do a hard boot. That was it: "Drive not recognized." New drive in hand, some software upgrades at the same time, backups that were out of date, and after a day of loading, copying, and recovering, I was 90% whole again (not what the plant manager would want to hear, right?). A few shortcuts to get back online quickly: antivirus software can wait, documentation of the new system setup can wait, and I won't forget to update that temporary license I got from my software vendor to get back up and running. By now you're likely shaking your head, saying, "Yup, I've been there." Oh, but it gets better. The next morning I woke up to a message saying there was a problem with the operating system, and the system couldn't boot. Recover with a boot disk, scan the drive - and there are bad clusters. The Bathtub Curve still exists.
Over the years, we've had sensor upgrades for process measurement, improved controllers for automation, HMI/SCADA installations for operator visibility, historians for data archiving and operational analytics, and now we have enterprise integration for improved material management, corporate agility, regulatory compliance and a host of other features. The manager makes the decision, and the engineer is saddled with a lot of work and stress. That work often rolls downhill, creating opportunity for System Integrators. We are all in an age of technology abundance, which leaves many ways to skin the cat. Have you researched all the alternatives? Are you up to speed with the latest tools for solving complicated enterprise integration problems?
What does integration mean to you? It may mean sending data to your corporate database, enabling tools from Oracle, SAP and other enterprise vendors to report and analyze on it. The first step is usually to create an enterprise dashboard of KPIs (Key Performance Indicators). That should keep management happy for a while. But then the focus will shift to even tighter integration. You'll want to close the loop on equipment, enabling data to flow from the enterprise back down to the equipment. How you accomplish all this is very much driven by your automation perspective. Will you control this from the enterprise, or will you coordinate it from the plant floor?
This presentation discusses:
- Highlights of the Greenhouse Gas Mandatory Reporting Rule (GHG MRR)
  - Overview of Subpart A: General Provisions
  - Overview of Subpart C: Stationary Combustion Units
  - Several industry-specific requirements
- Green facilities of the future?
  - Energy and carbon management impact
- Benefits from GHG compliance
  1. Enhance loss control (business processes and procedures)
  2. Enhance key equipment performance
  3. Improve energy efficiency/emission recovery
- Gulf Coast Chemical Plant Case Study
Patrick Truesdale, Emerson Process Management - ISA
Critical infrastructure sites and facilities are becoming increasingly dependent on interconnected physical and cyber-based real-time distributed control systems (RTDCSs). A mounting cybersecurity threat results from the nature of these ubiquitous and sometimes unrestrained communications interconnections.
In today's highly automated machines, fieldbus valve manifolds are replacing conventional hardwired solutions. They more easily perform vital functions by integrating communication interfaces to pneumatic valve manifolds with input/output (I/O) capabilities. This allows programmable logic controllers (PLCs) to more efficiently turn valves on and off and to channel I/O data from sensors, lights, relays, individual valves, or other I/O devices via various industrial networks. The resulting integrated control packages can also be optimized to allow diagnostic benefits not previously available.
Fieldbus valve manifolds from manufacturers such as Festo, SMC, and Numatics find wide utility in packaging, automotive/tire, and material handling applications, as well as in the pharmaceutical, chemical, water, and wastewater industries. They are specified for purchase by controls engineers at original equipment manufacturers (OEMs) who design and develop industrial automation solutions - as well as by end users in relevant industries.
This paper presents controls engineers, specifiers, and buyers with new insights into five crucial factors they must consider before selecting pneumatic fieldbus valve manifolds - commissioning, distribution, modularity, diagnostics, and recovery - while also outlining some shortcomings of conventional approaches. Finally, it highlights new designs that offer substantial improvements in the application, performance, and maintenance of these valve manifolds from the end users and OEMs' points of view.
While a molded cable assembly can offer significant advantages over a similar product of mechanical construction, the art of insert molding remains somewhat of a mystery to cable assembly consumers. While consumers are attracted by the potential for a more aesthetically pleasing product that can be sealed from the environment and rendered 'tamper-proof', the complexity of the insert molding manufacturing process is often overlooked.
Many cable assembly engineers who are consumers - but not producers - of molded assemblies are familiar to some degree with conventional molding. In this environment, the goal is the maximization of process speed, which translates directly to bottom-line financial performance. Manufacturing lot sizes are often characterized by long runs, where the same part is produced continuously over a considerable amount of time. The molding machines are usually horizontal in construction, use a closed-cavity approach with auto-ejection of the finished parts, and operate at much higher injection pressures and speeds than an insert molding process. Additionally, the often uniform nature of the parts relative to wall thickness, balanced runner systems, and sufficient draft on the molded parts being produced serve to support consistent quality in the face of maximum manufacturing speed. The ability to optimize tool cooling, standardize mounting, and implement automated processes are also major differentiators between the conventional horizontal molding and vertical insert molding approaches. The result, all things being equal, is a much higher production rate for finished parts in a conventional molding process.
What then are the challenges of the insert molding process used to manufacture cable assemblies, and, more importantly, how are they met by the manufacturer? At a high level there are four major areas of consideration when discussing the intricacies of insert molding. These include the operator, tooling, equipment, and the process itself. Let's examine each of these in more detail.
Operator: As with any non-automated process, it is the operator who is often the most important component of the success or failure of a manufacturing lot. This is especially true in cable assembly molding. In addition to knowing the basics of machine operation, the operator has several variables to properly monitor and control if he or she is to produce parts that meet the established design and quality guidelines. In light of some of the equipment and component variability discussed earlier, some of these operator-focused considerations include...
Mike Levesque, Shawn Young & Brock Richard, C&M Corporation
Significant changes have taken place regarding Surge Protective Devices and UL 1449. With the changes have come different product marking requirements to identify those testing and product changes. Manufacturers of SPD equipment have long been testing to UL 1449, but only recently have such significant changes taken place regarding an entire product category's testing and performance.
An updated UL 1449 standard, titled UL Standard for Safety for Surge Protective Devices, UL 1449 Third Edition, was released and dated September 29, 2006. As a result, all manufacturers were required to retest their SPD products to ensure compliance before 9/29/2009.
The easiest way to distinguish a new SPD product from an older product that may still be in inventory is the new gold UL holographic label on the product. The new label must bear the designation SPD rather than TVSS, which was used during the latter part of UL 1449 2nd Edition.
Frequently, our customers ask for a "one size fits all" Surge Protective Device (SPD), eliminating the need to stock several different part numbers to meet their customers' needs. Some manufacturers claim to have a one-size-fits-all SPD; however, there is absolutely no benefit to the end user. Why? The one-size-fits-all approach could in most cases actually cause damage to the equipment it should be protecting.
Specifiers and users of Surge Protective Devices (SPDs) are adjusting to new terminology and requirements. UL revised their 1449 Safety Standard for Surge Protective Devices to increase safety. The National Electrical Code (NEC) incorporated specific language to require the use of these safer products. This tip sheet will explain some of the changes affecting specifiers and users.
In this global business environment, it is common for manufacturers in North America to ship equipment to Europe. North America and Europe each have their own standards for Surge Protective Devices (SPDs), which makes understanding the differences in electrical system terminology very important. In North America, all SPD products are associated with UL 1449 3rd Edition, whereas in Europe, IEC 61643-1 provides the standard. Recently, UL 1449 3rd Edition adopted new terminology and testing criteria to be more congruent with IEC 61643-1. However, system voltages, and how they are defined, differ between the two standards.
Selecting the appropriate Surge Protective Device (SPD) can seem like a daunting task with all of the different types on the market today. The surge rating, or kA rating, of an SPD is one of the most misunderstood ratings. Customers commonly ask for an SPD to protect their 200A panel, and there is a tendency to think that the larger the panel, the larger the kA device rating needs to be for protection. As we will explore in this paper, this is a common misunderstanding.
When a surge enters a panel, it does not care or know the size of the panel. So how do you know if you should use a 50kA, 100kA or 200kA SPD? Realistically, the largest surge that can enter a building's wiring is 10kA, as explained in the IEEE C62.41 standard. So why would you ever need a SPD rated for 200kA? Simply stated - for longevity.
So one may think: if 200kA is good, then 600kA must be three times better, right? Not necessarily. At some point, a higher rating yields diminishing returns, only adding extra cost and no substantial benefit. Since most SPDs on the market use a metal oxide varistor (MOV) as the main limiting device, we can explore how and why higher kA ratings are achieved. If an MOV is rated for 10kA and sees a 10kA surge, it would use 100% of its capacity. This can be viewed somewhat like a gas tank: the surge will degrade the MOV a little bit (it is no longer 100% full). Now if the SPD has two 10kA MOVs in parallel, it would be rated for 20kA. Theoretically, the MOVs will evenly split the 10kA surge, so each would take 5kA. In this case, each MOV has used only 50% of its capacity, which degrades the MOV much less (leaving more left in the tank for future surges).
Does this translate into surge "stopping power"? No. Just because an SPD has 2 or 20 MOVs in parallel does not mean it will limit the 10kA surge any better than a single SPD (of the same rating). The main objective of having MOVs in parallel is to increase the longevity of the SPD. Again, keep in mind that at some point you are only adding cost by incorporating more MOVs while receiving little benefit.
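The gas-tank analogy above reduces to simple arithmetic. A minimal sketch (the linear, evenly-split capacity model is the text's own simplification; real MOV degradation is more complex):

```python
def per_mov_stress(surge_ka, mov_rating_ka=10.0, n_parallel=1):
    """Fraction of one MOV's rated surge capacity consumed by a single surge,
    assuming the surge divides evenly across identical parallel MOVs."""
    share_ka = surge_ka / n_parallel   # each MOV's portion of the surge
    return share_ka / mov_rating_ka    # fraction of that MOV's rating used

# One 10kA MOV taking the worst-case 10kA surge uses 100% of its capacity,
# while two 10kA MOVs in parallel (a 20kA SPD) each take 5kA, or 50%.
single = per_mov_stress(10.0, n_parallel=1)  # 1.0
paired = per_mov_stress(10.0, n_parallel=2)  # 0.5
```

Note that lowering the per-MOV stress extends the SPD's life; it does not change the let-through voltage, which is the point made above about "stopping power."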
As mentioned before, panel size does not really play a role in the selection of a kA rating. The location of the panel within the facility is much more important. IEEE C62.41.2 defines the types of expected surges within a facility as:
Category C: Service Entrance, more severe environment: 10kV, 10kA surge
Category B: Downstream, greater than 30' from category C, less severe environment: 6kV, 3kA surge
Category A: Further downstream, greater than 60' from category C, least severe environment: 6kV, 0.5kA surge
How do you know what kA rating to use? The IEEE categories provide a good base for selecting kA ratings. There are many "right" sizes for each category but there needs to be a balance between redundancy and added cost. Qualified judgment should always be used when selecting the appropriate kA rating for an SPD.
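The location categories above lend themselves to a small lookup table. A sketch of how one might encode them (the surge values come from the list above; the names and structure are illustrative, not from the standard):

```python
# IEEE C62.41.2 location categories and their expected worst-case surges,
# as listed above. Structure and identifiers here are illustrative only.
IEEE_CATEGORIES = {
    "C": {"location": "Service entrance",                  "surge_kv": 10.0, "surge_ka": 10.0},
    "B": {"location": "Downstream, >30' from Category C",  "surge_kv": 6.0,  "surge_ka": 3.0},
    "A": {"location": "Further downstream, >60' from C",   "surge_kv": 6.0,  "surge_ka": 0.5},
}

def expected_surge_ka(category):
    """Worst-case surge current (kA) expected at a given location category."""
    return IEEE_CATEGORIES[category.upper()]["surge_ka"]
```

An SPD's kA rating is then chosen as a multiple of the expected surge for longevity, tempered by the qualified judgment the text calls for, rather than by the panel's ampacity.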
The Surge-Trap is a branded surge protective device (SPD) that utilizes Mersen's patented thermally protected metal oxide varistor (TPMOV) technology. This technology eliminates the need for fuses to be installed in series with the Surge-Trap SPD, which saves money and panel space. Surge-Trap SPDs are typically installed in industrial control panels to protect sensitive electrical equipment from harmful voltage transients. Nearly 80% of all transients are caused by equipment or power disturbances within a facility.
What Types of Ratings Do SPDs Have?
Do SPDs have a current rating? This is a trick question! They do not have a continuous current rating; however, they do have other important current-based ratings. They are required to have a short circuit current rating (SCCR), which is the maximum rms current, at a specified voltage, that the SPD can withstand.
The nominal discharge current (In) is new to UL 1449 Third Edition (effective 9/29/09). This is the peak value of the current (20kA maximum) through the SPD (8/20μs waveform) where the SPD remains functional after 15 surges.
There are two main voltage ratings for an SPD, the first is maximum continuous operating voltage (MCOV) which is the maximum rms voltage that may be applied to the SPD per each connected mode.
Voltage protection rating (VPR) is determined as the nearest higher value (from a list of preferred values) to the measured limiting voltage determined during the transient-voltage surge suppression test, using the combination wave generator at a setting of 6kV, 3kA.
A short bit of history helps to explain why the development of cloud instrumentation is so significant.
The first instruments, let us call them traditional instruments, were of standalone or box format. Users connect sensors directly to the front panel of the box instrument, which contains the measurement circuitry and displays the results - initially on analog meters and later on digital displays.
In many cases, test engineers wanted to have instruments communicate with each other, for instance in a stimulus/response experiment, when a signal generator instructs a digitizer when to start taking samples. This was initially done with serial links, but in the 1970s the Hewlett Packard Interface Bus, which evolved into today's IEEE-488 interface, became extremely popular for connecting instruments.
The next major breakthrough in measurement technology came with the availability of desktop computers, which made it more cost effective to run test programs, control instruments as well as collect data and allow test engineers to process and display data. Plug-in IEEE-488 boards allowed minicomputers and later PCs to perform these tasks.
Today such interface cards are often not needed thanks to instruments that communicate with PCs directly over USB or the Ethernet, and most recently even over wireless Ethernet schemes.
Marius Ghercioiu, President of Tag4M at Cores Electronic LLC
When adding, modifying or upgrading a system, many critical infrastructures conduct a Factory Acceptance Test (FAT). A FAT includes a customized testing procedure for systems and is executed before the final installation at the critical facility. Because it is difficult to predict the correct operation of the safety instrumented system or consequences due to failures in some parts of the safety instrumented system, a FAT provides a valuable check of these safety issues. Similarly, since cyber security can also impact safety of critical systems if a system is compromised, it naturally makes sense to integrate cyber security with the FAT, a concept that brings extreme value and savings to an implementation process.
An Integrated Factory Acceptance Test (IFAT) is a testing activity that brings together selected components of major control system vendors and Industrial Control System (ICS) plant personnel in a single space for validation and testing of a subset of the control system network and security application environment in an ICS environment. Conducting an IFAT provides important advantages and benefits including: time savings, cost savings, improved ability to meet compliance requirements, and increased comfort level with integrated security solutions.
With the current trend of more intelligent ICSs and increased regulatory compliance, the best practice for achieving ICS and IT integration is conducting an IFAT. A common problem in the industry is the unanticipated work associated with implementing security controls, which can result in production issues. Performing an IFAT avoids costly redesign and troubleshooting during outage operations, saving time and money and leading to an enhanced, sound security solution.
Jerome Farquharson, Critical Infrastructure and Compliance Practice Manager, and Alexandra Wiesehan, Cyber Security Analyst, Burns & McDonnell
Is Moving Your SCADA System to the Cloud Right For Your Company?
Cloud computing is a hot topic. As people become increasingly reliant on accessing important information through the Internet, the idea of storing or displaying vital real-time data in the cloud has become more commonplace. With tech giants like Apple, Microsoft, and Google pushing forward the cloud computing concept, it seems to be more than just a passing trend.
Recently the focus of cloud computing has started to shift from consumer-based applications to enterprise management systems. With the promise of less overhead, lower prices, quick installation, and easy scalability, cloud computing appears to be a very attractive option for many companies.
Common questions surround this new technology: What is the "cloud"? What kind of information should be stored there? What are the benefits and risks involved? Is moving toward cloud computing right for your company?
Cloud computing is not a "fix-all" solution. It has strengths and weaknesses, and understanding them is key to making a decision about whether it's right for your company. We'll explore the major benefits and risks involved, and give you a set of factors to consider when choosing what information to put on the cloud.