The reliability of electronic circuit boards and electronic devices in drilling unit consoles is investigated. A new analytical method confirms that the dynamic behaviour of an electronic device changes when it is installed in a drilling unit console. This agrees with experimental results showing that a circuit board in a driller console fails at dynamic loads considerably lower than those required to damage the board when the board alone is under vibration test.
The majority of failures in electronic equipment, instruments, sensors and sensitive devices are mechanical in nature; typical failures are caused by fracture, cracking and creep, and it is often quite difficult to determine the root cause of a malfunction. Dynamic and vibration loads account for a very large percentage of failures in sensitive equipment installed in dynamic environments. Vibration failures are often difficult to trace: resonance and coupling effects are usually involved, and sometimes there appears to be no connection at all between the failed parts and the vibration loading.
The dynamic behaviour of electronic and electrical devices has received considerable attention because of its technical importance. Analyses of electric/electronic devices and circuit boards under dynamic loading have been presented previously, but only a limited number of contributions address the interaction between a complex system (in this case, a drilling unit) and these devices. A new analytical formulation is presented for studying the modified dynamic behaviour of sensitive devices within a complex system (the drilling unit). When a circuit board or electronic device is installed in a driller console, complicated interactions arise between its dynamic characteristics and the dynamic behaviour of the drilling unit, producing different dynamic responses and consequently many more possibilities for resonance. Electronic devices and circuit boards are vulnerable to dynamic loading. For example, during a drilling unit revamp project, vibration tests were conducted on an electronic circuit board over a broad frequency range and at high vibration amplitudes; the tests were repeated and no damage was observed. During the drilling unit's site performance test, however, the circuit board failed because of mechanical damage, even though the dynamic loading amplitude was considerably lower than the amplitudes used in the vibration tests. Extensive investigation showed that the failure was due solely to the modified dynamic behaviour of the system.
Automation systems today have become remarkable warehouses of knowledge and information. Beyond just system configuration, many years of effort are inevitably invested in these systems not only by control engineers, but by operations, process, maintenance, business and management personnel as well. In fact, over the life of an automation system the total intellectual investment will come to exceed the initial hardware and software cost many times over.
This paper will discuss some of the factors contributing to the impending process industry automation knowledge crisis, present real-life industry examples and provide a proven solution to mitigate the problems.
Industrial application developers have had two main options for interacting with production processes via programmable logic controllers (PLCs): they can buy a preprogrammed, monolithic, shrink-wrapped human machine interface (HMI), complete and ready to go, or they can customize their own solutions.
Shrink-wrapped HMI software packages are appealing because many complex tasks are hidden from you. Purchase the development software from an authorized distributor, load it into your development PC and then configure, debug and test. Then, just deploy the necessary runtime applications, data servers and configuration files on to your target PC or PCs. What could be easier?
But cookie-cutter HMI software solutions might not necessarily be the best or most practical approach for your specific industrial applications.
For one thing, while the shrink-wrapped HMI software packages enable connections to other vendors' devices, software, and systems via OPC or other standards, such connectivity is seldom adequate for high security or real-time control. And no matter how advanced the integration technology the package uses, you will end up lagging behind the technology curve. For example, if you had bought a package using the distributed common object model (DCOM) and wanted to benefit from advances in security and robustness that Microsoft had made since you bought the package, you would have to buy a new package. Moreover, the monolithic nature of the shrink-wrapped offerings often makes it difficult to embed third-party capabilities directly into your solution, thus limiting your options further.
Then there's training. Because the development environment and behavior of each HMI vendor's software varies, you'll need to acquire specialized skills to accomplish similar tasks. Training courses, material costs and schedules also vary by HMI publisher and many times are offered only through exclusive distributor channels. You could consider hiring outside help, but because of the specialized training and experience, the talent pool can be relatively shallow and therefore proportionately expensive.
And for many, the cost of multiple deployments is an even bigger issue. Before you can actually deploy your solution to PCs, portable devices or Web servers, you typically have to pay for additional runtime software licenses. If you have more than a couple of users, this can amount to a considerable expense, often making this approach cost-prohibitive, especially if you are paying for more functionality than you actually need.
Finally, there are the intangibles. As well designed and flexible as these shrink-wrapped solutions might be, they almost always force compromises that would not be necessary if the solution were custom built for your specific applications. Whether that is a matter of function or just pride, it can be a significant factor in determining your satisfaction with the resulting interface.
Controller tuning can be accomplished quickly and accurately using proven techniques. While many engineers and technicians resort to "tune by feel," most will admit that this approach yields inconsistent results. While some might claim that controller tuning is "part art, part science," use of these best practices can ensure that it is 98% science.
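One example of such a proven technique (not necessarily the specific method this paper advocates) is the IMC-based lambda tuning rule, which computes PI controller settings from a first-order-plus-dead-time process model. A minimal sketch, assuming the model parameters (process gain, time constant and dead time) have already been identified from a step test:

```python
def lambda_pi_tuning(kp, tau, theta, lam=None):
    """IMC/lambda PI tuning from a first-order-plus-dead-time model.

    kp: process gain, tau: process time constant, theta: dead time,
    lam: desired closed-loop time constant (3*theta is a common default).
    Returns (controller_gain, integral_time).
    """
    if lam is None:
        lam = 3.0 * theta
    kc = tau / (kp * (lam + theta))  # controller gain
    ti = tau                         # integral time equals process time constant
    return kc, ti

# Hypothetical process: gain 2.0, time constant 10 s, dead time 1 s.
kc, ti = lambda_pi_tuning(kp=2.0, tau=10.0, theta=1.0)
# kc = 1.25, ti = 10.0 s
```

Because lambda is chosen explicitly, the same rule gives consistently reproducible settings from one engineer to the next, which is the point the paper makes about replacing "tune by feel."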
In today's world, automation is used prominently in every major industry. While different industries often use different specialized devices, control systems and applications, they all share a common, rapidly growing challenge: how to share data amongst all these components and the rest of the enterprise. OPC is the solution - it solves the problem of communication between devices, controllers and applications. It is a standardized approach to data connectivity that does not get caught up in the usual custom-driver-based connectivity problems.
Read this guide to learn the ABCs of OPC and how it can solve your data connectivity issues!
Ethernet for industrial communications is growing rapidly in factory automation, process control and SCADA systems. The ODVA EtherNet/IP network standard is gaining popularity as a preferred industrial protocol. Plant engineers are recognizing the significant advantages that Ethernet-enabled devices provide such as ease of connectivity, high performance and cost savings. While EtherNet/IP has many advantages, cable installation is often expensive, and communications to remote sites or moving platforms may not be reliable or cost-effective.
Wireless Ethernet technologies have emerged that can now reliably reduce network costs while improving plant production. However, applying these technologies is not a simple matter as industrial Ethernet systems vary greatly in terms of bandwidth requirements, response times and data transmission characteristics. This paper will explore applying wireless technologies to EtherNet/IP based networks for industrial automation systems.
Waste and rework in batch manufacturing cost serious money and impact time to market. Standards-based recipe management can improve quality, reduce cost and improve profit and time to market.
Read this white paper and improve your bottom line.
Process Analytics and Intelligence (sometimes called Manufacturing Intelligence) has transformed the way companies produce goods, understand their manufacturing processes, and ensure a quality product in ways we could not have foreseen ten years ago.
Real-time Analytics have replaced the legacy concept of running reports. Reports that represent a static picture of a process at a fixed point in time are great tools for compliance audits and long-term warranty analysis. However, they may not accurately represent the "as-is" state of a process. Reports showing large amounts of data can be difficult to interpret, and there are often limitations in how the report data can be drilled down into and viewed.
With today's large volumes of data, there's a wealth of information that can be gained about the process. But how can this data be captured, managed and retrieved in a way that presents the information in an up-to-the-minute, easy-to-understand format? Real-time Analytics provides the techniques and solutions that address this problem. Instead of users having to interpret the data, it's presented in a graphical form enabling them to easily drill down to explore the data in real-time.
This white paper discusses how Process Analytics is implemented and utilized. Ways of managing and distributing Process Analytics to the organization are presented.
Many manufacturing plants are driven to lower product costs and increase quality and flexibility while maintaining focus on compliance and safety. While automation plays an essential role in meeting business needs, some solutions can add complexity and costs, thereby reducing the return on investment. The ideal solution provides the agility required without sacrificing reliability and lifecycle costs.
Download this resource to learn how Experion LS can be the ideal solution for your plant.
Diverse Redundancy is being used in SIS technology to achieve higher safety integrity. Diverse redundancy refers to the use of two or more different systems, which are built using different components, algorithms, electronics, design methodology etc. to perform the same task. One benefit of diverse redundancy is the increased capabilities to reduce common mode and systematic failures such as those caused by design flaws.
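A minimal sketch of the idea: two channels detect the same hazardous condition using deliberately different algorithms, and a 1oo2 (one-out-of-two) vote demands the safe state if either channel trips. The algorithms, thresholds and values below are hypothetical, chosen only to illustrate diversity:

```python
def channel_a(pressure_bar, limit=10.0):
    # Channel A: simple instantaneous threshold comparison.
    return pressure_bar > limit

def channel_b(pressure_samples, limit=10.0):
    # Channel B: diverse algorithm - median of recent samples, then compare.
    # A different computation path reduces the chance that both channels
    # share the same systematic (e.g. software design) flaw.
    ordered = sorted(pressure_samples)
    median = ordered[len(ordered) // 2]
    return median > limit

def trip_1oo2(a_trip, b_trip):
    # 1oo2 voting: either channel alone can demand the safe state.
    return a_trip or b_trip
```

In a real SIS the two channels would also run on different hardware and electronics; the voting logic itself is what gives the architecture its tolerance to a single channel's failure.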
Today's manufacturing companies face many pressures, including rising costs for raw materials, labor and distribution, not to mention customer demands for higher quality at lower prices. This paper explores the use of today's recipe software solutions as part of a control strategy to transform a company's recipe area from a place of contentious schedule adherence and "roll-of-the-dice" quality to one that reduces waste while increasing quality, delivering consistent schedule adherence in an environment of continuous improvement.
Download and learn how recipe solutions can improve a company's bottom line.
There are potential challenges and pitfalls in supporting advanced solutions across the areas of process optimization, blending, alarm management, asset management, manufacturing execution systems, and more. This paper discusses best practices and lifecycle support methodologies that address many of the support and maintenance challenges involved in sustaining the benefits of advanced solutions.
Download to learn more.
Thirty years ago, specifying an enclosure involved three steps: ordering the appropriately sized gray box, installing sensitive electronic equipment and hoping the enclosure would withstand its surroundings.
Electromagnetic flowmeters, also known as mag meters, are popular and proven devices for flow measurement of electrically conductive process fluids and for volumetric filling machine applications. Of prime importance to a mag meter's accuracy and long-term performance is the condition of the metering section of the flow sensor.
Unlike in most processes, mag meters in filling machine applications are frequently subject to widely varying conditions during normal operation. This makes them viable candidates for evaluating long-term performance in an accelerated-use environment. Therefore PTB, a German research and approvals agency, in association with KROHNE, undertook an extensive project to study the long-term measurement stability of mag meters in filling machine applications.
Faraday's law is the basis of a mag meter's measuring principle. The design generally features an electrically isolating liner on the inner wall of the measuring tube. Liner materials such as PTFE or polypropylene are used, or, for hygienic reasons, PFA (perfluoroalkoxy); pressure-bearing ceramic tubes are also used. PFA is known to absorb moisture, and it can flow under pressure and temperature, changing its structure and shape and, in turn, the inner diameter of the measuring tube. Changes in the inner diameter of the measuring tube lead to measurement errors. This can cause problems, especially when extreme precision or repeatability is at stake. These effects only appear after the devices have been in use for longer periods and after the frequent cleaning processes using liquid or steam that are common in the food industry.
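To see how a changed bore translates into measurement error, consider a simplified model from Faraday's law: the electrode voltage is proportional to the flux density, the mean velocity and the diameter (U = k·B·v·D), so a transmitter calibrated for a nominal bore misreads by the ratio of nominal to actual diameter when the liner deforms. A sketch with hypothetical numbers:

```python
def indicated_flow(true_flow, d_nominal, d_actual):
    """Simplified mag meter model based on Faraday's law, U = k*B*v*D.

    With v = 4*Q/(pi*D^2), the electrode voltage is U = 4*k*B*Q/(pi*D).
    The transmitter converts U back to flow assuming the calibrated
    bore d_nominal, so if the liner has deformed to d_actual the
    reported flow scales by d_nominal / d_actual.
    """
    return true_flow * d_nominal / d_actual

# Hypothetical 50 mm meter whose PFA liner has swollen inward by
# 0.5 mm (a 1% bore reduction): the meter over-reads by about 1%.
q_indicated = indicated_flow(true_flow=100.0, d_nominal=0.050, d_actual=0.0495)
```

Even a sub-millimetre change in bore therefore produces an error of the same order as the repeatability demanded in filling applications, which is why liner stability matters so much.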
The effect is particularly significant when it comes to mag meters used on filling machines for filling PET bottles ("Filling mag meter"). In this case, an extremely high degree of repeatability is required and the quality of the filling process is directly visible in each individual bottle.
That is why, in a joint research cooperation with the Physikalisch-Technische Bundesanstalt (PTB), Germany's National Metrology Institute, KROHNE Messtechnik tested the measurement stability of filling mag meters. Filling mag meters with PFA liners and filling mag meters with ceramic measuring tubes were both tested. The PTB was interested in this test because, for more than 20 years, mag meters with ceramic measuring tubes have been the norm in PTB's standard measuring systems as well as in many other calibration test stations. Thanks to this test, the PTB was able to gain additional knowledge about the behavior of these devices under difficult conditions.
Dipl.-Ing. F. Hofmann and Dipl.-Ing. B. Schumacher, KROHNE Messtechnik GmbH & Co. KG, Duisburg, Germany
This white paper discusses important differences between the commercial-grade components used in enterprise Ethernet systems and the industrial-grade cabling, connectivity and active devices that are essential to the optimal, long-term performance of robust industrial Ethernet networks.
In this white paper, Ethernet Direct describes how PoE technology enables end devices such as wireless access points, IP phones, IP cameras, IP access control terminals, RFID readers and other IP-based appliances to draw power over a Cat-3, Cat-5/5e or Cat-6 LAN cable, with no separate power connection needed.
Statement for the Record, July 21, 2009 Hearing before the Subcommittee on Emerging Threats, Cybersecurity, Science and Technology.
I appreciate the opportunity to provide the following statement for the record. I have spent more than thirty-five years working in the commercial power industry designing, developing, implementing, and analyzing industrial instrumentation and control systems. I hold two patents on industrial control systems, and am a Fellow of the International Society of Automation. I have performed cyber security vulnerability assessments of power plants, substations, electric utility control centers, and water systems. I am a member of many groups working to improve the reliability and availability of critical infrastructures and their control systems.
On October 17, 2007, I testified to this Subcommittee on "Control Systems Cyber Security: The Need for Appropriate Regulations to Assure the Cyber Security of the Electric Grid."
On March 19, 2009, I testified to the Senate Committee on Commerce, Science, and Transportation on "Control Systems Cyber Security: The Current Status of Cyber Security of Critical Infrastructures."
I will provide an update on cyber security of the electric system including adequacy of the NERC CIPs and my views on Smart Grid cyber security. I will also provide my recommendations for DOE, DHS, and Congressional action to help secure the electric grid from cyber incidents.
Joe Weiss, PE, CISM, Applied Control Solutions, LLC
Despite increased dependence on ever more powerful process control and safety systems, the human aspect remains an integral part of any plant's operation. ABB believes that the safety system of the future is no longer an "add-on" that is designed and supplied separately from the rest of the plant or process, but an integral part of it.
This whitepaper provides the history of the Six Sigma Symbol and explanations on the Six Sigma concept, the Six Sigma implementation, the Six Sigma calculation and more. Download this paper now.
Product variation and defects undercut customer loyalty as well as company profits. Six Sigma is a rigorous, disciplined, data-driven methodology that was developed to enhance product quality and company profitability by improving manufacturing and business processes.
Six Sigma uses statistical analysis to quantitatively measure how a process is performing. That process can involve manufacturing, business practices, products, or service. To be defined as Six Sigma means that the process does not produce more than 3.4 defects per million opportunities (DPMO), which translates to 99.9997% efficiency.
A Six Sigma defect is considered anything that can cause customer dissatisfaction, such as being outside of customer specifications. A Six Sigma opportunity is the total number of chances for a defect to occur.
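The DPMO figure and the corresponding yield can be computed directly from defect counts. A minimal sketch, using hypothetical line data:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def process_yield(dpmo_value):
    """Percentage of opportunities that are defect-free."""
    return 100.0 * (1 - dpmo_value / 1_000_000)

# Hypothetical filling line: 2 defects found across 1,000 bottles,
# each bottle checked for 5 possible defect types (opportunities).
d = dpmo(defects=2, units=1000, opportunities_per_unit=5)
# d = 400 DPMO, i.e. a 99.96% yield - far short of the Six Sigma
# target of 3.4 DPMO (approximately 99.9997%).
```

Counting opportunities per unit, not just units, is what makes DPMO comparable across processes of very different complexity.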
Six Sigma Concept
The Six Sigma concept was developed by Motorola in 1986 with the stated goal of improving manufacturing processes and reducing product defects and variation. The underlying goal was to achieve near-perfect quality, with 99.9997% of variable values within specifications.