The Need for Wireless Monitoring: An Overview
There is a real, ongoing need to monitor valve positions (actuated or manual) in the process
line. A malfunctioning valve can endanger human health and safety, affect yields, and
create environmental risks. In some industries, regulation requires constant recording of valve
position. Currently, such monitoring is done through wired switch boxes. Each such device
requires data transmission and power cabling. Not only are these cables costly to manufacture and
install, they are also one of the most frequent sources of failure in the process line, because
they are so often exposed to harsh environmental conditions. It is here, at the
field device level, that the majority of wiring problems occur.
Israel Radomsky, CEO and Founder, Eltav Wireless Monitoring Ltd., Israel
Westar Energy Inc. turned to WiredCity's IT Monitor as its primary means of preventing network and system problems inside its power plant network. The decision was spurred on by Westar Energy's familiarity and satisfaction with WiredCity's parent company, OSIsoft Inc., whose Real-time Performance Management (RtPM) Platform has had a successful track record in Westar Energy's power generation plants. In addition to improving the quality of network service and ensuring that power traders maintain critical Internet connectivity, Westar Energy discovered new uses for IT Monitor, including tracking down a virus and tuning an Oracle database, which saved a potentially costly upgrade.
Properly functioning steam traps open to release condensate and close automatically when steam is present. Failed traps waste fuel, reduce efficiency, increase production costs, and compromise the overall integrity of the steam and condensate systems. Traps should be tested on a regular basis; neglect can be quite costly.
Bruce Gorelick, Enercheck Systems and Alan Bandes, UE Systems, Inc.
As industrial Ethernet networks grow in number and importance, keeping the right traffic on and off the network becomes essential
The use of Ethernet for industrial automation has grown dramatically. One of the main benefits of moving from legacy fieldbus to Ethernet is the ability to connect the front office to the manufacturing system. This is possible because Ethernet is not a proprietary communication protocol. The non-proprietary nature of Ethernet allows engineers to mix and match equipment from different vendors and get competitive bids. This combination of better office-factory communication and open standards has helped industrial Ethernet gain widespread acceptance in recent years.
But with these benefits come potential problems. As networks and the services they provide evolve and servers or user machines are replaced and upgraded, the likelihood of passing unwanted, often obsolete protocols within the network increases.
Potentially more challenging is the existence of unknown protocols that may degrade the performance of the network. Unknown protocols are often caused by well-intended but uninformed employees who attach unauthorized devices, such as wireless access points, to the network. They can also be caused by traffic such as streaming audio from employees listening to Internet radio stations while working.
Portal technology is invigorating today's corporate environments. The business world began to take portal technology seriously when the price to acquire start-up portal sites, such as MySpace and Flickr, exceeded all anticipated market values. Today, portals are big business. Corporations ranging from SAP to Microsoft are investing millions of dollars in portal technology. New technology frameworks and architectures have changed the direction of portal solutions from recreational portals to the enterprise. Networking technology enables users to access portal-based web sites from anywhere, through any device that can connect to the Internet. The purpose of this paper is to help you determine how your company can benefit from a portal environment, and from the OSIsoft suite of visualization components. For the first time, you can combine data stored in PI with enterprise systems and other data sources into easily accessible information, visible to individuals, teams, sites, and the enterprise.
On September 11, 2001, America was attacked by terrorists and the United States quickly acknowledged
vulnerabilities at our airports, borders, food supply and water supply systems. Soon after, the government
required vulnerability assessments (VAs) for all municipalities with large cities required to go first. In 2002,
Madison Water Utility (MWU) in Madison, Wisconsin, underwent its VA and saw a need for video cameras at
many locations, including 32 remote sites.
Two obstacles stood in the way of Madison meeting this need:
- Technology -- whose cameras, network, and communication system? How can video work with our SCADA system?
- Money -- who will pay to protect Madison's water supply?
Al Larson P.E., Principal Engineer Madison Water Utility, Madison, WI
With the economy slowing down on a global basis, managers are reluctant to spend money or move forward on prior plans. Indeed, many companies are postponing or cancelling projects, and many have begun cost-cutting measures that almost always mean job eliminations. Reducing costs is a responsible management action; the goal is to protect the return on investors' equity. Video can provide a solution to many of the issues that remain, even when people are laid off or factories are closed.
The Kodak Park, located in Rochester, N.Y., is over 100 years old. The site has 1300 acres, two utility power plants, two company-owned water and waste water treatment plants, 150 buildings and 11,000 employees. The Kodak Park utility power plants have enormous generation output and demand requirements including 2,000,000 pounds per hour steam load and a 125 MW electric load.
The site also has 600 electric distribution meters, 600 additional non-electric distribution meters and many generation site meters. The utilities systems were operated and monitored by a group of disparate building automation systems and distributed control systems.
With such a vast energy and management system, Kodak shares many of the same concerns as regional utility companies: conservation, optimization of resources, and consolidation of data from various legacy systems. Any new technology solution added to this mix had to be compatible with our well-defined information architecture requirements.
This white paper discusses how appliance transaction modules enable the sharing of data for tracking and tracing applications.
Automated tracking and tracing of all aspects of a product, from its initial ingredients or components through manufacturing and into the supply chain, is not only a requirement in industries such as food and pharmaceuticals; it has also become a viable strategy for all businesses. From automotive and metals to appliances and consumer goods, companies rely on tracking and tracing to lower material, production, inventory, labor and scrap costs while improving customer satisfaction.
By being able to see, analyze, manage and store selected data in real time, companies can make swift changes to optimize selected areas of their production capabilities. They can also document their processes from incoming raw materials, through production, and into the supply chain.
IPLOM, a privately held company, manufactures environmentally compatible fuel products. As a small player in a competitive market, IPLOM needed to manage and optimize production in a real-time environment. IPLOM also needed to demonstrate the consistency of the products in real-time in an easily accessible Web site to its customers.
IPLOM first selected OSIsoft Sigmafine to provide mass balance yields. After one year, the company purchased the PI System and is now planning an RtWebParts implementation.
Industrial application developers have had two main options for interacting with production processes via programmable logic controllers (PLCs): they can buy a preprogrammed, monolithic, shrink-wrapped human machine interface (HMI), complete and ready to go, or they can customize their own solutions.
Shrink-wrapped HMI software packages are appealing because many complex tasks are hidden from you. Purchase the development software from an authorized distributor, load it into your development PC and then configure, debug and test. Then, just deploy the necessary runtime applications, data servers and configuration files on to your target PC or PCs. What could be easier?
But cookie-cutter HMI software solutions might not necessarily be the best or most practical approach for your specific industrial applications.
For one thing, while the shrink-wrapped HMI software packages enable connections to other vendors' devices, software, and systems via OPC or other standards, such connectivity is seldom adequate for high security or real-time control. And no matter how advanced the integration technology the package uses, you will end up lagging behind the technology curve. For example, if you had bought a package using the distributed common object model (DCOM) and wanted to benefit from advances in security and robustness that Microsoft had made since you bought the package, you would have to buy a new package. Moreover, the monolithic nature of the shrink-wrapped offerings often makes it difficult to embed third-party capabilities directly into your solution, thus limiting your options further.
Then there's training. Because the development environment and behavior of each HMI vendor's software varies, you'll need to acquire specialized skills to accomplish similar tasks. Training courses, material costs and schedules also vary by HMI publisher and many times are offered only through exclusive distributor channels. You could consider hiring outside help, but because of the specialized training and experience, the talent pool can be relatively shallow and therefore proportionately expensive.
And for many, the cost of multiple deployments is an even bigger issue. Before you can actually deploy your solution to PCs, portable devices, or Web servers, you typically have to pay for additional runtime software licenses. If you have more than a couple of users, this can amount to a considerable expense, often making this approach cost-prohibitive, especially if you are paying for more functionality than you actually need.
Finally, there are the intangibles. As well designed and flexible as these shrink-wrapped solutions might be, they almost always force compromises that would not be necessary if the solution were custom built for your specific applications. Whether that is a matter of function or just pride, it can be significant in determining your satisfaction with the resulting interface.
The Effect on 10/100 Industrial Ethernet Switch Performance
The Anixter Infrastructure Solutions Lab wanted to determine what effect the new TIA-1005 industrial cabling infrastructure standard would have on the data throughput performance of real Ethernet data packets running between SmartBits test cards and various manufacturers' 10/100 Ethernet switches in a real-world simulation. The test included five different IP20-rated switches and three different enterprise rack-mounted switches, using various cabling channels made from both Category 5e and Category 6 cabling components and connector pairs that are allowable under the standard. The test premise was that the effect of cabling channel interference would also vary from port to port and switch to switch because of variations in transmitter and receiver performance.
As with most electric transmission and distribution (T&D) companies, growth brings challenges common to large utilities: diverse data sources in different locations with restricted access; numerous manual data retrieval processes, with limited ability for people outside the control center to see immediately what problems have occurred; and the demands of operating in an interactive environment.
In 2003, one T&D utility implemented OSIsoft's PI System (PI) across all transmission and distribution operations. This decision dramatically changed the way the utility was able to access power system data and conduct business. Now operators, engineers, analysts, managers and executives are able to monitor real-time power system data using easy-to-configure displays with the ability to trend and analyze in real time or historical mode. PI gave the utility the ability to monitor transmission line status from the Emergency Operations Center when crises arise. Systems are now integrated, and data is provided to operations, management, planning, forecasting and regulatory compliance groups.
The plant operator has an extremely valuable and important responsibility: he is the force and energy managing a capital enterprise easily worth hundreds of millions of dollars, producing or affecting a daily revenue stream of millions. We ask him to be ever mindful of what the plant might be doing. We ask him to find every little problem before it grows into a big one. We ask him to shoulder the burden of everything that goes wrong during his watch, with no recognition when nothing does, and precious little (if not actual blame) when something goes wrong and he handles it well. Within his area of responsibility and authority he must be able to view every control loop, most sensors, most pieces of equipment, and much of the supporting utilities, and then adjust as appropriate.
The failure to maintain situational awareness has been present in almost every disaster that was not the result of complete, spontaneous surprise. Start with the assumption that no one wants an accident, that no one would choose disaster over success. But accidents and disasters happen. We now know to a high degree of certainty that they happen because those in charge of ensuring that they do not happen aren't aware that they are happening. They fail to know the situation. They are unaware of what is really going on, what is likely to happen, or what isn't happening that they think is. As explained in my book Alarm Management for Process Control, the solution is facilitated by effective operator interface design. Let's follow the path of interface design that can lead to better situation awareness.
Safety is a big concern when it comes to managing power distribution systems. High voltage and high current switchgear boxes serve an important role in establishing points of control within the power distribution system. The high voltages and current flowing through switchgear boxes greatly increase the probability of electric arcing and arc blasts. An arc blast is characterized by intense amounts of heat, pressure, shrapnel and molten copper. Great strides have been made in building arc resistant switchgears, nevertheless accidents have happened and the state of technology is far from completely eliminating them. One solution to this problem is to monitor the temperature of switchgear boxes to elicit early warning signs of imminent failure. Conventional methods of monitoring switchgear temperature are expensive and not entirely effective. Surface Acoustic Wave (SAW) technology can provide a passively powered (battery-less), wireless temperature measurement solution that is suited for switchgears. This application note explores how a SAW based solution can provide a reliable, safe and cost-effective means of monitoring switchgear temperature.
Thirty years ago, specifying an enclosure involved three steps: ordering the appropriately sized gray box, installing sensitive electronic equipment and hoping the enclosure would withstand its surroundings.
This white paper describes how SNMP is applied to asset management and transportation of "shadow data," information on equipment maintenance and security within the SCADA system. Since SNMP has emerged as a very efficient vehicle for transportation of this information, it is feasible for addition to existing systems. The white paper includes descriptions of smart function blocks, which significantly reduce programming efforts when used with Semaphore's T-BOX RTU and Kingfisher RTU product lines.
This white paper provides the history of the Six Sigma symbol and explains the Six Sigma concept, implementation, calculation and more.
Product variation and defects undercut customer loyalty as well as company profits. Six Sigma is a rigorous, disciplined, data-driven methodology that was developed to enhance product quality and company profitability by improving manufacturing and business processes.
Six Sigma uses statistical analysis to quantitatively measure how a process is performing. That process can involve manufacturing, business practices, products, or services. For a process to be defined as Six Sigma, it must produce no more than 3.4 defects per million opportunities (DPMO), which translates to 99.9997% efficiency.
A Six Sigma defect is considered anything that can cause customer dissatisfaction, such as being outside of customer specifications. A Six Sigma opportunity is the total number of chances for a defect to occur.
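The DPMO arithmetic above is simple enough to sketch directly. The following Python snippet (function names and the sample numbers are illustrative, not taken from the paper) computes DPMO from observed defect counts and converts it back to a percentage yield:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities for an observed sample."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def process_yield(dpmo_value):
    """Percentage of opportunities that are defect-free."""
    return (1 - dpmo_value / 1_000_000) * 100

# Hypothetical sample: 17 defects found in 1,000 units,
# each unit offering 5 opportunities for a defect.
print(dpmo(17, 1000, 5))       # 3400.0 DPMO

# The Six Sigma threshold quoted above:
print(process_yield(3.4))      # ~99.99966, rounded to 99.9997% in the text
```

Note that yield here counts defect-free opportunities, not defect-free units; a unit with several opportunities can contribute more than one defect.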
Six Sigma Concept
The Six Sigma concept was developed by Motorola in 1986 with the stated goal of improving manufacturing processes and reducing product defects and variation. The underlying goal was near-perfect quality, with 99.9997% of variable values within specifications.