Process Automation Programming Language: One for All

The Case for Using Natural Language

By William M. Hawkins

Automation continues to expand into manufacturing processes, driven by the need for more production with reduced costs. As it expands, it grows in complexity as the technology available for process equipment and control becomes more complex and cost-effective. As complexity grows, the process becomes more hidden from the operators, who are supposed to deal with unforeseen problems as they arise.

The problems associated with opaque automation are already with us. The most common way to obscure a process is with a flood of alarms, so good work is being done in the alarm management field. Early fly-by-wire aircraft flew into terrain when the pilots were unable to correct what the automation was doing because they didn't understand what was going on. The investigators blamed the crashes on opaque automation.

A continuous process doesn't have much automation aside from its alarms and interlocks. The emphasis is on making the measurements of the state of the process available to operators, while those measurements are held to setpoints by controllers. This is changing as ways to apply procedural control are being developed. It's not that continuous processes don't have procedures—the standard operating procedures manual for a process is full of them. Only recently has the procedural control developed for batch processes been considered for use with continuous processes.

Automating process procedures can easily lead to opaque automation when the procedure can go in different directions, depending on alarms and changes in the process. An operator who has no idea what will happen next has a limited ability to keep the process out of trouble. The process equivalent of flight into terrain often leads to a fire and possible explosions in a chemical process. Discrete processes suffer loss of production when part of a machine breaks and the debris falls into a gearbox.

The principal reason why procedures become opaque is that they're designed by clever (not to say overly inventive) engineers and programmers, who are focused on getting the job done without considering the need for a clear human interface. The result is inscrutable logic encoded into procedures that are not readable by ordinary mortals, assuming that most people don't know a database from first base.

Change is required in the way that procedures are designed, but people don't want to change if what they have works. They have to be shown that what they think works really isn't working, and that can get expensive. Some towns don't install a traffic light until someone is killed at the intersection.

One reason that procedures are encoded is efficiency of computation. Years ago, programmers resorted to assembly language programs to save expensive memory and speed up slow applications. Today's computers have several orders of magnitude more memory and speed. Tomorrow's computers could make today's look ridiculous, especially if quantum weirdness can be tamed. Efficiency of computation is not an issue with automation-scale applications.

Another factor that complicates procedures is the number of translations that must occur between the user's requirements and the functioning machine. An engineer must understand what the user wants, which requires an engineer who has worked as a user. A programmer must understand what the engineer wants, but it is even harder to find programmers who understand anything beyond getting code to work in a computer. The problem must be reduced to mathematics and branching tests.

The U.S. FDA requires a paper-laden trail through the V-model (Figure 1) to assure that software will do what the user required. Users aren't always good at defining exactly what they want ("I'll know it when I see it"), which leads to multiple iterations of the V-model until the result looks like what the user wants. However, that may change when the user tries to use it, and discovers flaws in the human interface.

The situation today is not unlike the early days of telegraphy when a user drafted a message and took it to a telegraph office. A telegrapher, who knew how to get the message to its destination, translated it to Morse code. The receiving telegrapher decoded it, and gave it to a messenger who took it to the recipient. Then the telephone was invented, and most of that structure went away. The sender could deliver the message to the receiver directly. Well, "directly" if you don't count wiring, switching, trunk lines and central offices that made it possible. Obsolete Pony Express riders could say "I told you so" to the telegraphers.

Natural Language Should Be the Norm

First, it's necessary to stop thinking of controlled process equipment as computer peripherals. Process equipment is designed and purchased to provide process functions, such as mixing, distilling, heating, machining, assembly and packaging. The equipment is inert until it's controlled by a human, or automated by a computer or other machine.

Comments

Bill Hawkins is on the right track. Long ago, I was assigned to design and instrument (automate) a batch manufacturing process. The most difficult part of that assignment was to write the instruction manual (procedure) for the plant operators, who were an integral part of the operation in this semi-works plant. They had to know exactly what to do (dump a bag of X into the reactor), when to do it (after a time, or after a temperature was reached), how to record what they did when they did anything, etc. At the time, there were no digital controllers, only pneumatic instruments and process variable controllers, but the operators needed to know when to switch them to AUTO so they didn't wind up. The detail was exacting, difficult and time-consuming. Then we had to create the procedures for each of the expected failure conditions to achieve a HOLD state, and a shutdown procedure. All of this had to be in natural language because there were no electronic controllers.

Today, we cannot create procedures in a way that is any different from the "old days," except that the execution is now assigned to a program in a PLC or to a control loop in a DCS. We still need to develop the procedures in our own language for all of the state transitions that are possible during normal, equipment-failure or emergency-shutdown conditions. Fortunately, a "language" exists for this, called Sequential Function Charts, and it's one of the IEC 61131-3 standard programming languages for programmable logic controllers. Our natural-language words can be keyed onto blocks, then the blocks linked in a natural sequence that allows progression by time or event. Later, our words can be encoded into some language that the PLC or DCS can recognize.

Bill was one of the original authors of ISA88 and is a recognized expert on batch automation. What Bill didn't say is that ALL processes are batch, even the ones we think of as continuous. The "batch" part of a continuous process is during: a) startup, b) shutdown, c) rate change, d) raw material/feedstock change, or e) any unexpected deviation from normal operation, such as a pump failure or an overload trip. That is when we need procedures to take immediate action to preserve safety and equipment. Bill's point is right on the mark: we also must make sure the process operator is totally informed of the events and actions taken, and given the option to perhaps do something else.
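Editor's note: to make the comment's Sequential Function Chart idea concrete, here is a minimal sketch in ordinary Python, not IEC 61131-3 syntax, of a sequencer whose steps keep their natural-language wording and advance on a time or process event, with a HOLD on a fault. The tag name TIC-101, the readings and every helper function are illustrative assumptions, not anything from the article or the comment.

```python
# Minimal sketch: procedure steps carry their operator-facing wording and an
# executable transition condition (time elapsed or process event). Illustrative
# only; a real implementation would live in a PLC/DCS, not a Python script.

import time
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    text: str                     # natural-language instruction, shown as written
    done: Callable[[], bool]      # transition condition: time elapsed or event


def after_seconds(seconds: float) -> Callable[[], bool]:
    """Transition that fires once `seconds` have passed after the first check."""
    deadline = None

    def check() -> bool:
        nonlocal deadline
        if deadline is None:
            deadline = time.monotonic() + seconds
        return time.monotonic() >= deadline

    return check


def reactor_temperature() -> float:
    """Stand-in for a measurement that would come from the PLC/DCS."""
    return 85.0  # pretend the reactor is already hot


def operator_confirms(prompt: str) -> bool:
    """Stand-in for an operator acknowledgement on the HMI."""
    print(f"OPERATOR PROMPT: {prompt}")
    return True


def process_fault() -> bool:
    """Stand-in for an interlock or equipment-failure signal."""
    return False


# The procedure reads the way the operating manual is written.
charge_and_heat: List[Step] = [
    Step("Dump one bag of X into the reactor",
         done=lambda: operator_confirms("Confirm bag of X charged")),
    Step("Mix for 10 seconds",  # seconds rather than minutes so the demo runs quickly
         done=after_seconds(10)),
    Step("Heat until the reactor reaches 80 degC, then put TIC-101 in AUTO",
         done=lambda: reactor_temperature() >= 80.0),
]


def run(procedure: List[Step]) -> None:
    """Step through the procedure, telling the operator what happens next."""
    for i, step in enumerate(procedure):
        print(f"STEP {i + 1}: {step.text}")
        while not step.done():
            if process_fault():
                print("HOLD: equipment fault; follow the hold procedure")
                return
            time.sleep(1)
    print("Procedure complete")


if __name__ == "__main__":
    run(charge_and_heat)
```

An SFC editor would draw these steps as linked blocks rather than a Python list, but the point survives the translation: the words the operator reads and the condition the machine tests live side by side, so the running procedure stays legible to ordinary mortals.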
