The most popular rendition of Murphy's Law is, "Whatever can go wrong will, and at the worst possible time." In today's automation world, we are building ever more complicated automation and management systems, designed to eke the last bit of quality and production performance out of our processes. In doing so, we are creating fertile ground for future crops of Murphy's Law failures. Minimizing these risks from all perspectives (Automation Vendor, System Integrator, and End User) is essential to creating solutions that degrade gracefully and minimize downtime.
Why am I writing about this now? Two words: hard drive. As Baz Luhrmann once said in a commencement speech turned into "The Sunscreen Song," "The real troubles in your life are apt to be things that never crossed your worried mind; the kind that blindside you at 4 p.m. on some idle Tuesday." OK, in my case it was 3:15 on a Monday, and I had to do a hard boot. That was it: "Drive not recognized." New drive in hand, some software upgrades at the same time, backups that were out of date, and a day of loading, copying, and recovering later, I was 90% whole again (not what the plant manager would want to hear, right?). A few shortcuts to get back online quickly: antivirus software can wait, documentation of the new system setup can wait, and I won't forget to update that temporary license I got from my software vendor to get back up and running. By now you're likely shaking your head, saying, "Yup, I've been there." Oh, but it gets better. The next morning I woke up to a message saying there was a problem with the operating system, and the system couldn't boot. Recover with a boot disk, scan the drive, and there are bad clusters. The bathtub curve still exists.
While a molded cable assembly can offer significant advantages over a similar product of mechanical construction, the art of insert molding remains something of a mystery to cable assembly consumers. While buyers are attracted by the potential for a more aesthetically pleasing product that can be sealed from the environment and rendered 'tamper proof', the complexity of the insert molding manufacturing process is often overlooked.
Many cable assembly engineers who are consumers - but not producers - of molded assemblies are familiar to some degree with conventional molding. In this environment, the goal is the maximization of process speed, which translates directly to bottom-line financial performance. Manufacturing lot sizes are often characterized by long runs, where the same part is produced continuously over a considerable amount of time. The molding machines are usually horizontal in construction, use a closed-cavity approach with auto-ejection of the finished parts, and operate at much higher injection pressures and speeds than an insert molding process. Additionally, the often uniform nature of the parts relative to wall thickness, balanced runner systems, and sufficient draft on the molded parts being produced serve to support consistent quality in the face of maximum manufacturing speed. The ability to optimize tool cooling, standardize mounting, and implement automated processes is also a major differentiator between the conventional horizontal molding and vertical insert molding approaches. The result, all things being equal, is a much higher production rate for finished parts in a conventional molding process.
What then are the challenges of the insert molding process used to manufacture cable assemblies, and, more importantly, how are they met by the manufacturer? At a high level there are four major areas of consideration when discussing the intricacies of insert molding. These include the operator, tooling, equipment, and the process itself. Let's examine each of these in more detail.
Operator: As with any non-automated process, it is the operator who is often the most important component in the success or failure of a manufacturing lot. This is especially true in cable assembly molding. In addition to knowing the basics of machine operation, the operator has several variables to properly monitor and control if he or she is to produce parts that meet the established design and quality guidelines. In light of some of the equipment and component variability discussed earlier, some of these operator-focused considerations include...
Mike Levesque, Shawn Young & Brock Richard, C&M Corporation
A number of previously unknown security vulnerabilities in the ICONICS GENESIS32 and GENESIS64 products have been publicly disclosed. The release of these vulnerabilities included proof-of-concept (PoC) exploit code.
While we are currently unaware of any malware or cyber attacks taking advantage of these security issues, there is a risk that criminals or political groups may attempt to exploit them for either financial or ideological gain.
The products affected, namely GENESIS32 and GENESIS64, are OPC Web-based human-machine interface (HMI) / Supervisory Control and Data Acquisition (SCADA) systems. They are widely used in critical control applications including oil and gas pipelines, military building management systems, airport terminal systems, and power generation plants.
Of concern to the SCADA and industrial control systems (ICS) community is the fact that, though these vulnerabilities may initially appear to be trivial, a more experienced attacker could exploit them to gain initial system access and then inject additional payloads and/or potentially malicious code. At a minimum, all these vulnerabilities can be used to forcefully crash system servers, causing a denial-of-service condition. What makes these vulnerabilities difficult to detect and prevent is that they expose the core communication application within the GENESIS platform used to manage and transmit messages between various clients and services.
This White Paper summarizes the current known facts about these vulnerabilities. It also provides guidance regarding a number of possible mitigations and compensating controls that operators of SCADA and ICS systems can take to protect critical operations.
The continuing drive to improve productivity will encourage more automation networking. The driving factors behind this expected growth include lean working, increased traceability legislation, product lifecycle management (PLM), and improvements in manufacturing cycle times. This requires connecting the factory floor to the corporate offices, where enterprise resource planning (ERP) systems make information available back into the supply chain, as well as forward to customers. Simply put, everyone wants to see what's happening. As a result, networks and the information they handle are becoming as important as the industrial control functions they manage.
This white paper describes the open CC-Link IE Field network, an Industrial Ethernet technology, which operates at 1 Gigabit/sec. This data rate is 10 times faster than other Industrial Ethernet technologies in order to provide highly responsive control system communications, while at the same time allowing connection to field devices (RFID readers, vision systems, etc.) that have TCP/IP Ethernet ports communicating at slower 10Mb or 100Mb data rates.
This guide discusses how best to optimize combustion efficiency in any application using combustion plant.
Combustion optimization in some form or other has become an absolute necessity for all combustion processes. Optimization improves efficiency, reduces environmental impact, reduces maintenance requirements and increases the time between maintenance shutdowns. There are many types of application where combustion optimization will be a key requirement. These include:
- Process heaters - the driver here is to increase throughput of feedstock, not necessarily fuel efficiency
- Waste incinerators - waste throughput is the main driver but environmental impact also has to be considered
- Steam raising, for power generation or other processes, pulp and paper, food and beverage etc, where fuel efficiency is the main driver
In any of the above examples, poor control of the combustion process may ultimately lead to damage to the plant, with problems such as soot formation, hotspots and/or corrosion in the flue ducts, to name but a few. In each case, the incidence of such problems, especially if left unresolved, will result in increased maintenance expenditure and a reduction in the life cycle of the plant.
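As a concrete illustration of the kind of calculation a combustion optimization scheme relies on, the sketch below estimates excess air from a dry flue-gas oxygen reading using the common rule-of-thumb relation (assuming air is roughly 20.9% O2 by volume). The function name and the sample reading are illustrative, not taken from the guide.

```python
def excess_air_pct(o2_dry_pct: float) -> float:
    """Approximate excess air (%) from the dry flue-gas O2 reading (%),
    using the rule-of-thumb relation: excess air = O2 / (20.9 - O2) * 100.
    Assumes ambient air contains ~20.9% O2 by volume."""
    if not 0.0 <= o2_dry_pct < 20.9:
        raise ValueError("O2 reading must be between 0 and 20.9 percent")
    return 100.0 * o2_dry_pct / (20.9 - o2_dry_pct)

# A flue-gas analyzer reading of 3% O2 corresponds to roughly 17% excess air,
# a typical operating point for a well-tuned natural-gas burner.
print(round(excess_air_pct(3.0), 1))
```

Running the combustion leaner (less excess air) raises efficiency but risks incomplete combustion and soot formation; running richer wastes heat up the stack, which is why continuous O2 trim control sits at the heart of most optimization packages.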
The Stuxnet worm is a sophisticated piece of computer malware designed to sabotage industrial processes controlled by Siemens SIMATIC WinCC, S7 and PCS 7 control systems. The worm used both known and previously unknown vulnerabilities to spread, and was powerful enough to evade state-of-the-practice security technologies and procedures.
Since the discovery of the Stuxnet worm in July 2010, there has been extensive analysis by Symantec, ESET, Langner and others of the worm's internal workings and the various vulnerabilities it exploits. From the antivirus point of view, this makes perfect sense. Understanding how the worm was designed helps antivirus product vendors make better malware detection software.
What has not been discussed in any depth is how the worm might have migrated from the outside world to a supposedly isolated and secure industrial control system (ICS). To the owners and operators of industrial control systems, this matters. Other worms will follow in Stuxnet's footsteps and understanding the routes that a directed worm takes as it targets an ICS is critical if these vulnerable pathways are to be closed. Only by understanding the full array of threats and pathways into a SCADA or control network can critical processes be made truly secure.
It is easy to imagine a trivial scenario and a corresponding trivial solution:
Scenario: Joe finds a USB flash drive in the parking lot and brings it into the control room, where he plugs it into the PLC programming station.
Solution: Ban all USB flash drives in the control room.
While this may be a possibility, it is far more likely that Stuxnet travelled a circuitous path to its final victim. Certainly, the designers of the worm expected it to - they designed at least seven different propagation techniques for Stuxnet to use. Thus, a more realistic analysis of penetration and infection pathways is needed.
This White Paper is intended to address this gap by analyzing a range of potential "infection pathways" in a typical ICS system. Some of these are obvious, but others less so. By shedding light on the multitude of infection pathways, we hope that the designers and operators of industrial facilities can take the appropriate steps to make control systems much more secure from all threats.
The global trends and challenges driving the need for industry to improve energy efficiency are well known. The growing population and economic development in many countries throughout the world have caused energy and transportation fuel consumption to increase.
This paper summarizes Sigurd Skogestad's struggles in the plantwide control field.
A chemical plant may have thousands of measurements and control loops. By the term plantwide control it is not meant the tuning and behavior of each of these loops, but rather the control philosophy of the overall plant with emphasis on the structural decisions. In practice, the control system is usually divided into several layers, separated by time scale.
My interest in this field of plantwide control dates back to 1983, when I started my PhD work at Caltech. As an application, I worked on distillation column control, which is an excellent example of a plantwide control problem. I was inspired by Greg Shinskey's book on distillation control, which came out with a second edition in 1984 (Shinskey, 1984). In particular, I liked his systematic procedure, which involved computing the steady-state relative gain array (RGA) for 12 different control structures ("configurations"): the DV-configuration, LV-configuration, ratio configuration, and so on. However, when I looked at the procedure in more detail, I discovered that its theoretical basis was weak. First, it did not actually include all structures, and it even eliminated the DB-configuration as "impossible" even though it is workable in practice (Luyben, 1989). Second, controllability theory tells us that the steady-state RGA by itself is actually not useful, except that one should avoid pairing on negative gains. Third, the procedure focused on dual composition control, whereas in practice one often uses only single-end control, for example because it may be economically optimal to use maximum heating to maximize the recovery of the valuable product.
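The steady-state RGA screening step described above is a short computation: the RGA is the element-wise product of the gain matrix with its inverse transpose. A minimal sketch follows, using a well-known 2x2 LV-configuration gain matrix for a binary distillation column as an illustrative example (the numbers are from the standard "column A" textbook model, not from Shinskey's 12 configurations).

```python
import numpy as np

def rga(G: np.ndarray) -> np.ndarray:
    """Relative Gain Array: element-wise (Hadamard) product of the
    steady-state gain matrix G with the transpose of its inverse."""
    return G * np.linalg.inv(G).T

# Illustrative steady-state gains for an LV-configuration
# (distillate and bottoms compositions vs. reflux L and boilup V)
G = np.array([[0.878, -0.864],
              [1.082, -1.096]])

Lambda = rga(G)
print(Lambda)
# The diagonal elements are large and positive (~35), signalling strong
# two-way interaction: the pairing is workable but the plant is
# ill-conditioned, so dual composition control will be difficult.
```

Note the properties the screening relies on: each row and column of the RGA sums to 1, and pairings on negative relative gains should be avoided, which is essentially the only robust conclusion the steady-state RGA supports on its own.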
Sigurd Skogestad, Norwegian University of Science and Technology (NTNU)
There is an upside for forward-thinking manufacturers in the EPA's blueprint for the way state and local regulatory agencies use the Clean Air Act permitting process to regulate greenhouse gas emissions in the United States.
The U.S. Environmental Protection Agency's blueprint for the way state and local regulatory agencies use the Clean Air Act permitting process to regulate greenhouse gas emissions in the United States is defined in its November 17 document, PSD and Title V Permitting Guidance for Greenhouse Gases.
The greenhouse gases that will be regulated include carbon dioxide, methane, nitrous oxide, sulfur hexafluoride and a number of refrigerants.
The Agency believes that these compounds are responsible for changing the planet's climate and is thus taking steps to reduce emissions of the gases throughout the nation. In taking this action, EPA is breaking new ground, by not only defining a broad new class of air pollutants, but by changing the way that the Agency regulates emissions of those pollutants.
Traditionally, EPA has set definitive, measurable goals when seeking to reduce air pollutant emissions, both in terms of how much of a compound a facility is allowed to emit and in terms of the maximum amount of the pollutant that can be in the air we breathe. The Agency will not take the same approach when it comes to greenhouse gases. Instead, it will ask facilities to reduce emissions to the greatest extent that is possible and economically feasible.
And, yes, there is upside for forward-thinking manufacturers.
A short bit of history helps explain why the development of cloud instrumentation is so significant.
The earliest instruments, let us call them traditional instruments, are of standalone or box format. Users connect sensors directly to the box instrument's front panel, which contains the measurement circuitry and displays the results, initially on analog meters and later on digital displays.
In many cases, test engineers wanted to have instruments communicate with each other, for instance in a stimulus/response experiment, when a signal generator instructs a digitizer when to start taking samples. This was initially done with serial links, but in the 1970s the Hewlett Packard Interface Bus, which evolved into today's IEEE-488 interface, became extremely popular for connecting instruments.
The next major breakthrough in measurement technology came with the availability of desktop computers, which made it more cost effective to run test programs, control instruments, and collect data, allowing test engineers to process and display the results. Plug-in IEEE-488 boards allowed minicomputers, and later PCs, to perform these tasks.
Today such interface cards are often not needed thanks to instruments that communicate with PCs directly over USB or the Ethernet, and most recently even over wireless Ethernet schemes.
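To make the direct PC-to-instrument link concrete, here is a minimal sketch of querying a LAN-connected instrument with a raw SCPI command over a TCP socket. The port number 5025 (a common raw-SCPI port), the IP address, and the instrument's support for the standard `*IDN?` identification query are assumptions about a particular device, not details from this paper.

```python
import socket

def scpi_query(host: str, command: str, port: int = 5025,
               timeout: float = 5.0) -> str:
    """Send one SCPI command to a LAN instrument over a raw TCP socket
    and return its newline-terminated reply as a stripped string."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall((command + "\n").encode("ascii"))
        return s.recv(4096).decode("ascii").strip()

# Hypothetical usage against an instrument at 192.168.1.50:
# reply = scpi_query("192.168.1.50", "*IDN?")
# *IDN? conventionally returns "vendor,model,serial,firmware"
```

In practice most engineers would reach for a VISA library rather than raw sockets, but the sketch shows why no plug-in interface card is needed: the instrument is simply another node on the network.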
Marius Ghercioiu, President of Tag4M at Cores Electronic LLC
Critical infrastructure sites and facilities are becoming increasingly dependent on interconnected physical and cyber-based real-time distributed control systems (RTDCSs). A mounting cybersecurity threat results from the nature of these ubiquitous and sometimes unrestrained communications interconnections.