This guide discusses how to optimize combustion efficiency in any application that uses combustion plant.
Combustion optimization, in some form, has become a necessity for virtually all combustion processes. Optimization improves efficiency, reduces environmental impact, reduces maintenance requirements and increases the time between maintenance shutdowns. There are many types of application where combustion optimization is a key requirement. These include:
- Process heaters - the driver here is to increase throughput of feedstock, not necessarily fuel efficiency
- Waste incinerators - waste throughput is the main driver but environmental impact also has to be considered
- Steam raising - for power generation or for process industries such as pulp and paper or food and beverage, where fuel efficiency is the main driver
In any of the above examples, poor control of the combustion process may ultimately lead to damage to the plant, with problems such as soot formation, hotspots and/or corrosion in the flue ducts, to name but a few. In each case, the incidence of such problems, especially if left unresolved, will result in increased maintenance expenditure and a shortened service life for the plant.
Equipment designers frequently must incorporate miniature solenoid valves into their pneumatic designs. These valves are important components of medical devices and instrumentation as well as environmental, analytical, and similar product applications. However, all too often, designers find themselves frustrated. They face compromise after compromise. Pressure for increasingly miniaturized devices complicates every step of the design and valve selection process. And missteps can wreak havoc. How do designers balance the needs for reliability, extended service life, and standards compliance against often-contradictory performance requirements such as light weight, high flow, and optimum power use?
This report consolidates the expert views of designers and manufacturers with wide experience applying miniature solenoid valves for myriad uses across multiple industries. It presents a true insider's guide to which requirements are critical for common applications. It also highlights new valve technologies that may lessen or eliminate those troubling compromises.
The most frequently measured variable in industry is temperature. Every temperature measurement is different, which makes the temperature calibration process slow and expensive. While standards define the accuracy to which manufacturers must comply, they do not guarantee that this accuracy is permanent. The user must therefore verify that the accuracy holds over time. If temperature is a significant measured variable from the point of view of the process, it is necessary to calibrate both the instrument and the temperature sensor.
Download this white paper to learn how to calibrate temperature instruments and why this is so important.
Paper is part of our everyday lives - whether in the workplace or at home. Global consumption of paper has grown 400% in the last 40 years. As manufacturing companies, our consumption of paper is far higher than it needs to be, especially given that there are technologies, software and electronic devices readily available today that render the use of paper in the workplace unnecessary. Calibrating instruments is an enormous task that generates vast amounts of paperwork. Far too many automation companies still use paper-based calibration systems, which means they are missing out on the benefits of moving toward a paperless calibration system.
Download this white paper to learn more about the benefits of moving towards a paperless calibration system.
There is an upside for forward-thinking manufacturers in the EPA's blueprint for the way state and local regulatory agencies use the Clean Air Act permitting process to regulate greenhouse gas emissions in the United States.
The U.S. Environmental Protection Agency's blueprint is defined in its November 17 document: PSD and Title V Permitting Guidance for Greenhouse Gases.
The greenhouse gases that will be regulated include carbon dioxide, methane, nitrous oxide, sulfur hexafluoride and a number of refrigerants.
The Agency believes that these compounds are responsible for changing the planet's climate and is thus taking steps to reduce emissions of the gases throughout the nation. In taking this action, EPA is breaking new ground, by not only defining a broad new class of air pollutants, but by changing the way that the Agency regulates emissions of those pollutants.
Traditionally, EPA has set definitive, measurable goals when seeking to reduce air pollutant emissions, both in terms of how much of a compound a facility is allowed to emit and in terms of the maximum amount of the pollutant that can be in the air we breathe. The Agency will not take the same approach with greenhouse gases. Instead, it will ask facilities to reduce emissions to the greatest extent that is possible and economically feasible.
And, yes, there is upside for forward-thinking manufacturers.
The continuing drive to improve productivity will encourage more automation networking. The driving factors behind this expected growth include lean working, increased traceability legislation, product lifecycle management (PLM), and improvements in manufacturing cycle times. This requires connecting the factory floor to the corporate offices, where enterprise resource planning (ERP) systems make information available backward into the supply chain as well as forward to customers. Simply put, everyone wants to see what's happening. As a result, networks and the information they handle are becoming as important as the industrial control functions they manage.
This white paper describes the open CC-Link IE Field network, an Industrial Ethernet technology that operates at 1 Gigabit/s. This data rate, ten times faster than other Industrial Ethernet technologies, provides highly responsive control system communications while still allowing connection to field devices (RFID readers, vision systems, etc.) whose TCP/IP Ethernet ports communicate at slower 10 Mb or 100 Mb data rates.
A number of previously unknown security vulnerabilities in the ICONICS GENESIS32 and GENESIS64 products have been publicly disclosed. The release of these vulnerabilities included proof-of-concept (PoC) exploit code.
While we are currently unaware of any malware or cyber attacks taking advantage of these security issues, there is a risk that criminals or political groups may attempt to exploit them for either financial or ideological gain.
The products affected, namely GENESIS32 and GENESIS64, are OPC Web-based human-machine interface (HMI) / Supervisory Control and Data Acquisition (SCADA) systems. They are widely used in critical control applications including oil and gas pipelines, military building management systems, airport terminal systems, and power generation plants.
Of concern to the SCADA and industrial control systems (ICS) community is the fact that, though these vulnerabilities may initially appear to be trivial, a more experienced attacker could exploit them to gain initial system access and then inject additional payloads and/or potentially malicious code. At a minimum, all these vulnerabilities can be used to forcefully crash system servers, causing a denial-of-service condition. What makes these vulnerabilities difficult to detect and prevent is that they expose the core communication application within the GENESIS platform used to manage and transmit messages between various clients and services.
This White Paper summarizes the current known facts about these vulnerabilities. It also provides guidance regarding a number of possible mitigations and compensating controls that operators of SCADA and ICS systems can take to protect critical operations.
Preventing unplanned shutdowns, reducing downtime, and lowering maintenance costs have been shown to provide significant financial benefits. One way to achieve these results is to make certain that all installed assets are used to their full potential.
FDT Technology can be easily used in existing or new plants and can bring significant operational and financial benefits throughout the plant life cycle.
This paper provides an overview of FDT Technology and suggests text to use as part of your proposal or ordering specifications to make sure you are putting your assets to work.
Article 120.1 of NFPA 70E establishes the procedure for creating an electrically safe work condition. Since it was written, the day-to-day practice of electrical safety has changed, going beyond the precise language of Article 120.1(1-6). This is due to the increased use of permanent electrical safety devices (PESDs) in lockout/tagout procedures. The relatively new concept of PESDs improves workers' ability to safely isolate electrical energy beyond what was originally conceived when Article 120 was written. PESDs go beyond this high standard, yet they still adhere to the core principles found in Article 120.1. With PESDs incorporated into safety procedures and installed correctly in electrical enclosures, workers can turn the once-risky task of verifying voltage into a far less precarious undertaking that never exposes them to voltage. Since every electrical incident has one required ingredient - voltage - electrical safety is radically improved by eliminating exposure to voltage while still validating zero energy from outside the panel.
The global trends and challenges driving the need for industry to improve energy efficiency are well known. The growing population and economic development in many countries throughout the world have caused energy and transportation fuel consumption to increase.
Is Moving Your SCADA System to the Cloud Right For Your Company?
Cloud computing is a hot topic. As people become increasingly reliant on accessing important information through the Internet, the idea of storing or displaying vital real-time data in the cloud has become more commonplace. With tech giants like Apple, Microsoft, and Google pushing forward the cloud computing concept, it seems to be more than just a passing trend.
Recently the focus of cloud computing has started to shift from consumer-based applications to enterprise management systems. With the promise of less overhead, lower prices, quick installation, and easy scalability, cloud computing appears to be a very attractive option for many companies.
Common questions surround this new technology: What is the "cloud"? What kind of information should be stored there? What are the benefits and risks involved? Is moving toward cloud computing right for your company?
Cloud computing is not a "fix-all" solution. It has strengths and weaknesses, and understanding them is key to making a decision about whether it's right for your company. We'll explore the major benefits and risks involved, and give you a set of factors to consider when choosing what information to put on the cloud.
When adding, modifying or upgrading a system, many critical infrastructure operators conduct a Factory Acceptance Test (FAT). A FAT is a customized testing procedure for a system, executed before final installation at the critical facility. Because it is difficult to predict the correct operation of a safety instrumented system, or the consequences of failures in some of its parts, a FAT provides a valuable check of these safety issues. Similarly, since cyber security can also affect the safety of critical systems if a system is compromised, it naturally makes sense to integrate cyber security with the FAT - a concept that brings substantial value and savings to an implementation process.
An Integrated Factory Acceptance Test (IFAT) is a testing activity that brings together selected components of major control system vendors and Industrial Control System (ICS) plant personnel in a single space for validation and testing of a subset of the control system network and security application environment in an ICS environment. Conducting an IFAT provides important advantages and benefits including: time savings, cost savings, improved ability to meet compliance requirements, and increased comfort level with integrated security solutions.
With the current trend toward more intelligent ICSs and increased regulatory compliance, the best practice for achieving ICS and IT integration is to conduct an IFAT. A common problem in the industry is unanticipated work associated with implementing security controls, which can result in production issues. Performing an IFAT avoids costly redesign and troubleshooting during outage operations, saving time and money and leading to an enhanced, sound security solution.
Jerome Farquharson, Critical Infrastructure and Compliance Practice Manager, and Alexandra Wiesehan, Cyber Security Analyst, Burns & McDonnell
A short bit of history helps explain why the development of cloud instrumentation is so significant.
The first instruments - let us call them traditional instruments - were standalone, box-format devices. Users connect sensors directly to the instrument's front panel; the box contains the measurement circuitry and displays the results, initially on analog meters and later on digital displays.
In many cases, test engineers wanted to have instruments communicate with each other, for instance in a stimulus/response experiment, when a signal generator instructs a digitizer when to start taking samples. This was initially done with serial links, but in the 1970s the Hewlett Packard Interface Bus, which evolved into today's IEEE-488 interface, became extremely popular for connecting instruments.
The next major breakthrough in measurement technology came with the availability of desktop computers, which made it more cost effective to run test programs, control instruments, collect data, and let test engineers process and display that data. Plug-in IEEE-488 boards allowed minicomputers, and later PCs, to perform these tasks.
Today such interface cards are often not needed, thanks to instruments that communicate with PCs directly over USB or Ethernet, and most recently even over wireless Ethernet schemes.
Marius Ghercioiu, President of Tag4M at Cores Electronic LLC
Significant changes have taken place regarding Surge Protection Devices and UL 1449. With the changes have come new product marking requirements to identify those testing and product changes. Manufacturers of SPD equipment have long tested to UL 1449, but only recently have such significant changes affected an entire product category's testing and performance.
An updated UL 1449 standard, titled UL Standard for Safety for Surge Protective Devices, UL 1449 Third Edition, was released and dated September 29, 2006. As a result, all manufacturers were required to retest their SPD products to ensure compliance before 9/29/2009.
The easiest way to distinguish a new SPD product from an older product that may still be in inventory is the new gold UL holographic label on the product. The new label must read SPD, not TVSS, which was used during the latter part of UL 1449 2nd Edition.
Frequently, our customers ask for a "one size fits all" Surge Protective Device (SPD) to eliminate the need to stock several different part numbers to meet their customers' needs. Some manufacturers claim to offer a one-size-fits-all SPD; however, this offers no real benefit to the end user. Why? In most cases, the one-size-fits-all approach could actually damage the equipment it is supposed to protect.
Specifiers and users of Surge Protective Devices (SPDs) are adjusting to new terminology and requirements. UL revised their 1449 Safety Standard for Surge Protective Devices to increase safety. The National Electrical Code (NEC) incorporated specific language to require the use of these safer products. This tip sheet will explain some of the changes affecting specifiers and users.
In this global business environment, it is common for manufacturers in North America to ship equipment to Europe. North America and Europe each have their own standards for Surge Protective Devices (SPDs), which makes understanding the differences in electrical system terminology very important. In North America, all SPD products are associated with UL 1449 3rd Edition, whereas in Europe, IEC 61643-1 provides the standard. Recently, UL 1449 3rd Edition adopted new terminology and testing criteria to be more congruent with IEC 61643-1. However, system voltages, and how they are defined, differ between the two standards.
Selecting the appropriate Surge Protective Device (SPD) can seem like a daunting task with all of the different types on the market today. The surge rating, or kA rating, of an SPD is one of the most misunderstood ratings. Customers commonly ask for an SPD to protect their 200A panel, and there is a tendency to think that the larger the panel, the larger the kA device rating needs to be for protection. As we will explore in this paper, this is a common misunderstanding.
When a surge enters a panel, it does not care about or know the size of the panel. So how do you know whether you should use a 50kA, 100kA or 200kA SPD? Realistically, the largest surge that can enter a building's wiring is 10kA, as explained in the IEEE C62.41 standard. So why would you ever need an SPD rated for 200kA? Simply stated - for longevity.
So one may think: if 200kA is good, then 600kA must be three times better, right? Not necessarily. At some point, a higher rating yields diminishing returns, only adding extra cost and no substantial benefit. Since most SPDs on the market use a metal oxide varistor (MOV) as the main limiting device, we can explore how and why higher kA ratings are achieved. If an MOV rated for 10kA sees a 10kA surge, it uses 100% of its capacity. This can be viewed somewhat like a gas tank: the surge degrades the MOV a little bit (it is no longer 100% full). Now, if the SPD has two 10kA MOVs in parallel, it is rated for 20kA. Theoretically, the MOVs evenly split the 10kA surge, so each takes 5kA. In this case, each MOV has used only 50% of its capacity, which degrades it much less (leaving more in the tank for future surges).
Does this translate into surge "stopping power?" No. Just because an SPD has 2 or 20 MOVs in parallel does not mean it will limit the 10kA surge any better than a single SPD of the same rating. The main objective of putting MOVs in parallel is to increase the longevity of the SPD. Again, keep in mind that at some point you are only adding cost by incorporating more MOVs while receiving little benefit.
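The gas-tank arithmetic above can be sketched in a few lines of Python. This is purely an illustration of the ideal even-split assumption described in the text, not a vendor sizing tool; real MOVs never share current perfectly.

```python
# Illustrative sketch: how a surge divides across parallel MOVs and how
# much of each MOV's single-shot rating one surge consumes.
# Assumes an ideal, perfectly even current split (a simplification).

def mov_stress(surge_ka, mov_rating_ka, num_parallel):
    """Return (per-MOV current in kA, fraction of each MOV's rating used)."""
    per_mov_ka = surge_ka / num_parallel        # ideal even split
    used_fraction = per_mov_ka / mov_rating_ka  # share of rating consumed
    return per_mov_ka, used_fraction

# A single 10kA MOV absorbing a 10kA surge uses 100% of its capacity.
print(mov_stress(10, 10, 1))   # (10.0, 1.0)

# Two 10kA MOVs in parallel (a 20kA SPD) each see 5kA: 50% of capacity.
print(mov_stress(10, 10, 2))   # (5.0, 0.5)
```

The second call shows why paralleling MOVs buys longevity rather than extra "stopping power": the same 10kA surge is limited the same way, but each MOV is stressed half as much.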
As mentioned before, panel size does not really play a role in the selection of a kA rating. The location of the panel within the facility is much more important. IEEE C62.41.2 defines the types of expected surges within a facility as:
- Category C: Service entrance, more severe environment: 10kV, 10kA surge
- Category B: Downstream, greater than 30' from Category C, less severe environment: 6kV, 3kA surge
- Category A: Further downstream, greater than 60' from Category C, least severe environment: 6kV, 0.5kA surge
How do you know what kA rating to use? The IEEE categories provide a good base for selecting kA ratings. There are many "right" sizes for each category but there needs to be a balance between redundancy and added cost. Qualified judgment should always be used when selecting the appropriate kA rating for an SPD.
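The category figures quoted above can be captured as a small lookup table. The kV/kA values are the ones cited from IEEE C62.41.2 in this paper; encoding them this way is simply an illustration of the location-based selection logic, not a substitute for qualified judgment.

```python
# Expected surge environment by IEEE C62.41.2 location category,
# using the figures quoted in the text above.
EXPECTED_SURGE = {
    "C": {"location": "service entrance",        "kv": 10, "ka": 10.0},
    "B": {"location": ">30 ft downstream of C",  "kv": 6,  "ka": 3.0},
    "A": {"location": ">60 ft downstream of C",  "kv": 6,  "ka": 0.5},
}

def expected_surge(category):
    """Return the expected surge (kV, kA) for an IEEE C62.41.2 category."""
    entry = EXPECTED_SURGE[category.upper()]
    return entry["kv"], entry["ka"]

print(expected_surge("c"))  # (10, 10.0)
```

Note that even at the service entrance the expected surge is only 10kA; the much larger kA ratings on commercial SPDs exist for longevity, as discussed earlier, not because larger surges are expected.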