This presentation discusses:
- Highlights of the Greenhouse Gas Mandatory Reporting Rule (GHG MRR)
  - Overview of Subpart A: General Provisions
  - Overview of Subpart C: Stationary Combustion Units
  - Several industry-specific requirements
- Green facilities of the future?
  - Energy and carbon management impact
Benefits from GHG compliance:
1. Enhance loss control (business processes and procedures)
2. Enhance key equipment performance
3. Improve energy efficiency/emission recovery
Gulf Coast Chemical Plant Case Study
Patrick Truesdale, Emerson Process Management - ISA
Critical infrastructure sites and facilities are becoming increasingly dependent on interconnected physical and cyber-based real-time distributed control systems (RTDCSs). These ubiquitous, and sometimes unrestrained, communications interconnections create a mounting cybersecurity threat.
Is Moving Your SCADA System to the Cloud Right For Your Company?
Cloud computing is a hot topic. As people become increasingly reliant on accessing important information through the Internet, the idea of storing or displaying vital real-time data in the cloud has become more commonplace. With tech giants like Apple, Microsoft, and Google pushing forward the cloud computing concept, it seems to be more than just a passing trend.
Recently the focus of cloud computing has started to shift from consumer-based applications to enterprise management systems. With the promise of less overhead, lower prices, quick installation, and easy scalability, cloud computing appears to be a very attractive option for many companies.
Common questions surround this new technology: What is the "cloud"? What kind of information should be stored there? What are the benefits and risks involved? Is moving toward cloud computing right for your company?
Cloud computing is not a "fix-all" solution. It has strengths and weaknesses, and understanding them is key to making a decision about whether it's right for your company. We'll explore the major benefits and risks involved, and give you a set of factors to consider when choosing what information to put on the cloud.
A brief look at history helps explain why the development of cloud instrumentation is so significant.
The earliest instruments, which we can call traditional instruments, were standalone or box-format devices. Users connect sensors directly to the front panel of the box instrument, which contains the measurement circuitry and displays the results, initially on analog meters and later on digital displays.
In many cases, test engineers wanted to have instruments communicate with each other, for instance in a stimulus/response experiment, when a signal generator instructs a digitizer when to start taking samples. This was initially done with serial links, but in the 1970s the Hewlett Packard Interface Bus, which evolved into today's IEEE-488 interface, became extremely popular for connecting instruments.
The next major breakthrough in measurement technology came with the availability of desktop computers, which made it more cost-effective to run test programs, control instruments, and collect data, and allowed test engineers to process and display that data. Plug-in IEEE-488 boards allowed minicomputers, and later PCs, to perform these tasks.
Today such interface cards are often not needed, thanks to instruments that communicate with PCs directly over USB or Ethernet, and most recently even over wireless Ethernet.
Marius Ghercioiu, President of Tag4M at Cores Electronic LLC
The most commonly measured variable in industry is temperature. Every temperature measurement is different, which makes the temperature calibration process slow and expensive. While standards define the accuracy to which manufacturers must comply, they do not guarantee that this accuracy will last; the user must therefore verify that the accuracy is maintained over time. If temperature is a significant measured variable from the point of view of the process, it is necessary to calibrate both the instrument and the temperature sensor.
Download this white paper to learn how to calibrate temperature instruments and why this is so important.
Significant changes have taken place regarding Surge Protection Devices (SPDs) and UL 1449. With these changes have come different product marking requirements to identify the new testing and product changes. Manufacturers of SPD equipment have long been testing to UL 1449, but only recently have such significant changes taken place regarding an entire product category's testing and performance.
An updated standard, UL Standard for Safety for Surge Protective Devices, UL 1449 Third Edition, was released and dated September 29, 2006. As a result, all manufacturers were required to retest their SPD products to ensure compliance before September 29, 2009.
The easiest way to distinguish a new SPD product from an older product that may still be in inventory is the new gold UL holographic label on the product. The new label must read SPD rather than TVSS, the designation used during the latter part of the UL 1449 Second Edition.
A number of previously unknown security vulnerabilities in the ICONICS GENESIS32 and GENESIS64 products have been publicly disclosed. The release of these vulnerabilities included proof-of-concept (PoC) exploit code.
While we are currently unaware of any malware or cyber attacks taking advantage of these security issues, there is a risk that criminals or political groups may attempt to exploit them for either financial or ideological gain.
The affected products, GENESIS32 and GENESIS64, are OPC Web-based human-machine interface (HMI) / supervisory control and data acquisition (SCADA) systems. They are widely used in critical control applications including oil and gas pipelines, military building management systems, airport terminal systems, and power generation plants.
Of concern to the SCADA and industrial control systems (ICS) community is the fact that, though these vulnerabilities may initially appear to be trivial, a more experienced attacker could exploit them to gain initial system access and then inject additional payloads and/or potentially malicious code. At a minimum, all these vulnerabilities can be used to forcefully crash system servers, causing a denial-of-service condition. What makes these vulnerabilities difficult to detect and prevent is that they expose the core communication application within the GENESIS platform used to manage and transmit messages between various clients and services.
This White Paper summarizes the current known facts about these vulnerabilities. It also provides guidance regarding a number of possible mitigations and compensating controls that operators of SCADA and ICS systems can take to protect critical operations.
This paper summarizes Sigurd Skogestad's struggles in the plantwide control field.
A chemical plant may have thousands of measurements and control loops. By the term plantwide control it is not meant the tuning and behavior of each of these loops, but rather the control philosophy of the overall plant with emphasis on the structural decisions. In practice, the control system is usually divided into several layers, separated by time scale.
My interest in this field of plantwide control dates back to 1983, when I started my PhD work at Caltech. As an application, I worked on distillation column control, which is an excellent example of a plantwide control problem. I was inspired by Greg Shinskey's book on distillation control, which came out with a second edition in 1984 (Shinskey, 1984). In particular, I liked his systematic procedure, which involved computing the steady-state relative gain array (RGA) for 12 different control structures ("configurations"): the DV-configuration, LV-configuration, ratio configuration, and so on. However, when I looked at the procedure in more detail, I discovered that its theoretical basis was weak. First, it did not actually include all structures, and it even eliminated the DB-configuration as "impossible" even though it is workable in practice (Luyben, 1989). Second, controllability theory tells us that the steady-state RGA by itself is actually not useful, except that one should avoid pairing on negative gains. Third, the procedure focused on dual composition control, while in practice one uses only single-end control, for example because it may be economically optimal to use maximum heating to maximize the recovery of the valuable product.
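The steady-state RGA mentioned above is easy to compute for a square plant gain matrix G: it is the element-wise product of G with the transpose of its inverse. A minimal sketch follows; the 2x2 gain matrix is an illustrative ill-conditioned distillation-type example, not taken from this paper.

```python
import numpy as np

def rga(G):
    """Relative gain array: Lambda = G (element-wise *) (G^-1)^T."""
    return G * np.linalg.inv(G).T

# Illustrative steady-state gain matrix for a 2x2 distillation-type
# plant (hypothetical values, chosen to be strongly interactive).
G = np.array([[0.878, -0.864],
              [1.082, -1.096]])

Lam = rga(G)
print(Lam)
# Each row and column of the RGA sums to 1. Large diagonal elements
# signal strong interaction; pairings on negative relative gains
# should be avoided, as noted in the text.
```

For this matrix the diagonal relative gains are around 35, a classic sign of an ill-conditioned plant where steady-state RGA screening alone gives limited guidance.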
Sigurd Skogestad, Norwegian University of Science and Technology (NTNU)
This guide discusses how best to optimize combustion efficiency in any application using combustion plant.
Combustion optimization in some form or other has become an absolute necessity for all combustion processes. Optimization improves efficiency, reduces environmental impact, reduces maintenance requirements and increases the time between maintenance shutdowns. There are many types of application where combustion optimization will be a key requirement. These include:
- Process heaters - the driver here is to increase throughput of feedstock, not necessarily fuel efficiency
- Waste incinerators - waste throughput is the main driver but environmental impact also has to be considered
- Steam raising, for power generation or other processes (pulp and paper, food and beverage, etc.), where fuel efficiency is the main driver
In any of the above examples, poor control of the combustion process may ultimately lead to damage to the plant, with problems such as soot formation, hotspots and/or corrosion in the flue ducts, to name but a few. In each case, the incidence of such problems, especially if left unresolved, will result in increased maintenance expenditure and a reduction in the life cycle of the plant.
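A small worked example of the arithmetic behind combustion optimization: excess air is commonly estimated from the measured dry flue-gas O2. The formula below is a standard industry rule of thumb, not taken from this guide, and assumes complete combustion.

```python
def excess_air_percent(o2_dry_pct):
    """Approximate excess air (%) from a dry flue-gas O2 reading (%).

    Uses the common rule of thumb EA = O2 / (20.9 - O2) * 100,
    where 20.9 is the O2 content of ambient air in percent.
    Valid only for complete combustion (no CO breakthrough).
    """
    return o2_dry_pct / (20.9 - o2_dry_pct) * 100.0

# A 4% O2 reading corresponds to roughly 24% excess air.
print(round(excess_air_percent(4.0), 1))  # prints 23.7
```

Driving O2 too low risks incomplete combustion and soot formation, while running it too high wastes fuel heating excess air; combustion optimization is largely about holding this balance.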
The continuing drive to improve productivity will encourage more automation networking. The driving factors behind this expected growth include lean working, increased traceability legislation, product lifecycle management (PLM), and improvements in manufacturing cycle times. This requires connecting the factory floor to the corporate offices, where enterprise resource planning (ERP) systems make information available backwards into the supply chain as well as forward to customers. Simply put, everyone wants to see what's happening. As a result, networks and the information they handle are becoming as important as the industrial control functions they manage.
This white paper describes the open CC-Link IE Field network, an Industrial Ethernet technology that operates at 1 Gigabit/s. This data rate is 10 times faster than other Industrial Ethernet technologies, providing highly responsive control-system communications while still allowing connection to field devices (RFID readers, vision systems, etc.) whose TCP/IP Ethernet ports communicate at slower 10 Mb or 100 Mb data rates.