**Read Part 1 of this article, which explains the mathematics behind natural language processing.**

Control is the procedure of deciding and implementing action; it doesn't have to mean regulatory feedback algorithms. Here are some rules we might use as control actions when deciding what to wear:

- If it's cold and breezy, then wear long pants, sweater and jacket.
- If it's nice and calm, then wear short-sleeved shirt and lightweight pants.
- If it's cold and windy, then wear long pants, sweater, coat, hat and gloves.

Very generally, the rule structure is:

*IF (antecedent situation) THEN (control action)*

And more specifically:

*IF (temperature is category AND wind is category) THEN (wear N items)*

Since there are three linguistic categories for each of two process variables as shown in Tables I and II, there would be 3² = 9 rules. Note that the rules are stated for the perfect or ultimate linguistic categories, but that most actual conditions will result in some validity to two adjacent temperature and two adjacent wind categories, making four of the nine rules simultaneously and partially valid.

Table I summarizes the nine rules. Here, one rule is of the form:

*IF (outdoor temperature is in "Cold" category and wind speed is in "Breezy" category) THEN (appropriate action is: wear seven items of clothes.)*

This sort of logic can be translated into natural language processing control (NLPC). For a heat exchanger this might be specifically:

*IF (temperature actuating error is positive-low, and rate of change is negative-large) THEN (incrementally adjust the controller output by +0.5%)*

The IF part of such rules is termed the antecedent, and the THEN part is the consequent.

Returning the discussion to the weather and clothing decision from Part 1 of this article (March '21, p. 32), starting with a temperature of 32 °C (89 °F) and a wind speed of 7 km/hr, the associated figures and equations yield temperature membership values of 0.0, 0.6 and 0.4 in the categories of Cold, Nice and Hot, respectively. These values also indicate membership in the three columns of Tables I and II.

**Table I. The nine rules: items of clothing to wear for each wind (row) and temperature (column) category.**

| Wind Speed | Cold | Nice | Hot |
| --- | --- | --- | --- |
| Windy | 9 items | 6 items | 4 items |
| Breezy | 7 items | 5 items | 3 items |
| Calm | 6 items | 5 items | 3 items |

**Table II. Rule truths for the example conditions: each cell's truth is the product of its row (wind) and column (temperature) memberships.**

| Wind Speed | Cold (μ = 0.0) | Nice (μ = 0.6) | Hot (μ = 0.4) |
| --- | --- | --- | --- |
| Windy (μ = 0.1) | 9 items, truth = 0.1 × 0.0 = 0 | 6 items, truth = 0.1 × 0.6 = 0.06 | 4 items, truth = 0.1 × 0.4 = 0.04 |
| Breezy (μ = 0.9) | 7 items, truth = 0.9 × 0.0 = 0 | 5 items, truth = 0.9 × 0.6 = 0.54 | 3 items, truth = 0.9 × 0.4 = 0.36 |
| Calm (μ = 0.0) | 6 items, truth = 0.0 × 0.0 = 0 | 5 items, truth = 0.0 × 0.6 = 0 | 3 items, truth = 0.0 × 0.4 = 0 |

Similarly, figures and associated equations from Part 1 of this article show that the wind speed of 7 km/hr has a 0.0 belongingness to the Calm category (and the lower row of Table II). It also has a 0.9 belongingness to the Breezy category and the middle row, and a 0.1 belongingness to the Windy category and its upper row.
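Part 1 gives the actual membership functions; as a minimal sketch, a linear (triangular) membership function can be coded in a few lines. The break points below are purely illustrative assumptions, chosen so that a 7 km/hr wind reproduces the article's 0.0 Calm, 0.9 Breezy, 0.1 Windy split — they are not the figures from Part 1.

```python
def triangular_membership(x, left, peak, right):
    """Degree of membership (0 to 1) in a triangular linguistic category.

    Rises linearly from 0 at `left` to 1 at `peak`, then falls back
    to 0 at `right`; it is 0 outside the [left, right] interval.
    """
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical break points (km/hr) for the wind-speed categories.
# Adjacent categories share break points, so memberships sum to 1.
wind = 7.0
mu_calm   = triangular_membership(wind, -5.0, 0.0, 5.0)    # 0.0
mu_breezy = triangular_membership(wind, 0.0, 5.0, 25.0)    # 0.9
mu_windy  = triangular_membership(wind, 5.0, 25.0, 45.0)   # 0.1
```

Because adjacent triangles share break points, at most two adjacent categories are nonzero at any condition, and their memberships sum to unity.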

There are two common methods to choose the truth of each cell in the Table II matrix. For the first, consider belongingness as a probability or likelihood; then the probability that the condition is in one particular cell is the probability that it's in the row of the cell multiplied by the probability that it's in the column. (Although useful, the likelihood analogy is weak.) With this viewpoint, the products of the row-column belongingness values represent the truth that a process-variable set is in a particular cell. These cell values are also indicated in Table II.

Conveniently, the sum of all of the truth values is unity because: 1) the membership functions are linear, 2) adjacent membership functions share break points, and 3) the row-column membership product is used for the rule truth. (There are alternate approaches. I prefer this simple one.)

The truth of a rule is the weight or importance given to that rule, and the blended control action is the truth-weighted sum of all rules, as described in the general equation, and for this particular example:

Action = Σ_{all rules} T_{k} A_{k}

Action = 0 • 9 + 0.06 • 6 + 0.04 • 4 + 0 • 7 + 0.54 • 5 + 0.36 • 3 + 0 • 6 + 0 • 5 + 0 • 3 = 4.3 Items

Here, T_{k} represents the truth of the kth rule, and A_{k} represents the action to be taken if the kth rule were perfectly true.

The calculated action, "wear 4.3 items," is the control action. If this were a controller output command to a valve, "open 4.3% more," the decimal part would be acceptable. However, because the answer in this clothing example can only be an integer, one might round the result, as is done in digital processing with finite bit lengths for variables. The equation converts the qualitative rules and the qualitative characterization of the process variables into a definitive, implementable value.
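The whole calculation can be sketched in a few lines of Python, using the memberships and Table I actions from the example (the rule truths are the row-column membership products):

```python
# Memberships from Part 1 for 32 degC and a 7 km/hr wind.
temp_mu = {"Cold": 0.0, "Nice": 0.6, "Hot": 0.4}
wind_mu = {"Windy": 0.1, "Breezy": 0.9, "Calm": 0.0}

# Table I: items of clothing for each (wind, temperature) rule.
items = {
    ("Windy",  "Cold"): 9, ("Windy",  "Nice"): 6, ("Windy",  "Hot"): 4,
    ("Breezy", "Cold"): 7, ("Breezy", "Nice"): 5, ("Breezy", "Hot"): 3,
    ("Calm",   "Cold"): 6, ("Calm",   "Nice"): 5, ("Calm",   "Hot"): 3,
}

# Action = sum over all rules of T_k * A_k, with T_k = mu_row * mu_column.
action = sum(wind_mu[w] * temp_mu[t] * a for (w, t), a in items.items())
print(round(action, 2))   # 4.3
print(round(action))      # 4 -- the integer "wear 4 items" decision
```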

Another common convention to assess the truth of a rule is to use the minimum of the row-column membership values as the truth for the cell. In general then, the truths don't sum to unity, so an additional weighting uses the individual truths normalized by the sum of all truths. In my experience, the several approaches are equivalent.
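As a sketch of this minimum-with-normalization convention, reusing the example's memberships and Table I actions, the result lands close to the product method's 4.3 items:

```python
# Rule truth as the minimum of the row and column memberships,
# normalized by the sum of all truths (here 1.2, not unity).
temp_mu = {"Cold": 0.0, "Nice": 0.6, "Hot": 0.4}
wind_mu = {"Windy": 0.1, "Breezy": 0.9, "Calm": 0.0}
items = {
    ("Windy",  "Cold"): 9, ("Windy",  "Nice"): 6, ("Windy",  "Hot"): 4,
    ("Breezy", "Cold"): 7, ("Breezy", "Nice"): 5, ("Breezy", "Hot"): 3,
    ("Calm",   "Cold"): 6, ("Calm",   "Nice"): 5, ("Calm",   "Hot"): 3,
}

truths = {(w, t): min(wind_mu[w], temp_mu[t]) for (w, t) in items}
total = sum(truths.values())  # 1.2 -- not unity, hence the normalization
action = sum(truths[rule] * items[rule] for rule in items) / total
print(round(action, 2))  # 4.33, versus 4.3 from the product method
```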

The extension to analysis and control of higher-dimension situations is straightforward. For example, relative humidity could be included as a third consideration in choosing what to wear. This places three process variables in the antecedent, making it of dimension three, and the rule matrix becomes a rectangular volume. If sun intensity and time duration are also considered in the decision process, the antecedent would have five dimensions. The extension to higher-order antecedents is easily programmed, but it's not amenable to visualization.
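Because the blending rule is identical at any dimension, a generic sketch shows the extension: adding humidity (or sun intensity, or duration) is just another membership dictionary and another index in each rule key. The function below is checked against the two-variable clothing example.

```python
from itertools import product

def blended_action(memberships, rule_actions):
    """Truth-weighted action for any number of process variables.

    memberships: one dict per process variable, mapping category -> mu.
    rule_actions: dict mapping a tuple of category names (one per
    variable, in the same order) -> that rule's action. A rule's truth
    is the product of the memberships of its categories.
    """
    action = 0.0
    for combo in product(*(m.keys() for m in memberships)):
        truth = 1.0
        for m, category in zip(memberships, combo):
            truth *= m[category]
        action += truth * rule_actions[combo]
    return action

# Two-variable check against the clothing example; a third variable
# would simply be a third dict and a third name in each rule key.
wind_mu = {"Windy": 0.1, "Breezy": 0.9, "Calm": 0.0}
temp_mu = {"Cold": 0.0, "Nice": 0.6, "Hot": 0.4}
items = {
    ("Windy",  "Cold"): 9, ("Windy",  "Nice"): 6, ("Windy",  "Hot"): 4,
    ("Breezy", "Cold"): 7, ("Breezy", "Nice"): 5, ("Breezy", "Hot"): 3,
    ("Calm",   "Cold"): 6, ("Calm",   "Nice"): 5, ("Calm",   "Hot"): 3,
}
action = blended_action([wind_mu, temp_mu], items)  # 4.3 items
```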

NLPC uses human rules for automating decisions, without advanced mathematics (such as calculus or Laplace transforms), and it permits nonlinear action. If you have applications in which IF-THEN conditionals automate routine engineering or operator decisions or actions, then you're almost implementing NLPC. Better than the normal use of IF-THEN conditionals, NLPC permits smooth transitions between categories.

NLPC provides a framework for standardizing how heuristic rules are implemented. Operators and managers can understand the linguistic logic, and they can state the rules in natural language. The recorded body of rules provides additional benefits in training and in developing process understanding.

## Simple to implement, document

Normally, process experience among operators and engineers is sufficient to develop an NLPC application without performing either experimental process-response testing or controller-tuning explorations. NLPC can integrate user-defined action for feedforward and constraint avoidance.

However, NLPC needs break points defined: perhaps three for the first linguistic category, then one more for each additional category of each process variable, plus one control action for each rule. If NLPC is replacing proportional-integral (PI) control, there are commonly two process variables: actuating error and rate of change of error. If each has five linguistic categories (zero, plus two "plus" and two "minus" categories), then there are 25 rules and about 14 break points, summing to about 40 user-required values. By contrast, gain-scheduled PI control over four regions needs eight tuning values and three break points, a total of 11 values.

The simplicity of gain-scheduled PI control and the widespread familiarity with PI algorithms might override the benefits of using NLPC to replace PI for feedback control. Most NLPC vendors offer PI substitutes and software features that make it simple to set up improved control. But in my opinion, replacing PI in feedback control isn't where NLPC has its largest advantages.

NLPC should be considered as an automation solution wherever engineers or operators observe, perceive and take routine supervisory corrective action. Consider NLPC for automating process management action, rather than for replacing feedback control. If you're either personally implementing or automating the implementation of heuristic rules, you have a potential application for NLPC.

Identify where your people are routinely observing something and taking corrective action, then consider automating that action with NLPC. For example, do they observe time-to-breakthrough on a carbon-bed adsorber to adjust the adsorb-to-steaming cycle period? Do they observe outlet composition on parallel reactors to adjust feed rate between reactors? Do they observe zero-crossing behavior of a controller to increase or decrease gain? Using the NLPC structure will standardize the many heuristic supervisory activities throughout your enterprise.

As with any automation approach, the control rules and the categorization of process variables in NLPC reflect the knowledge of the creator, not necessarily the best logic, and they may even incorporate folklore. That it works does not mean it represents either a valid or best underlying understanding. Use an application to see if results affirm your intuitive understanding. Be willing to improve.

Further, like any control strategy or controller tuning, NLPC reflects the process understanding at one time, which may need to be revised when the process equipment is changed or used under significantly different conditions.

Finally, don’t call it fuzzy logic! Dr. Zadeh’s insight on logic and its mathematical formulation was transformational, simple and effective. He gets much respect! But his choice of that term does not evoke confidence or security in the minds of process managers. If you want to implement it within industry, call it natural language processing or whatever makes it acceptable in your community.


**About the author: R. Russell Rhinehart**