Checklist for Batch Optimization

Sept. 15, 2012

Batch processing is a critical part of many high-value-added biological and chemical processes. Batch processes are capable of higher conversions than continuous processes because there is no discharge flow until the batch is deemed complete. Continuous processes tend to have more analyzers and to be more mature, well defined, and well behaved. Batch processes are often used to get a product to market faster, since unknowns can be mitigated by holding the batch longer. The resulting lack of process knowledge, however, often leads to larger-than-necessary cycle times. There are opportunities to increase yield or capacity through the use of analyzers, data analytics, and inferential measurements.

Most batch variability can be traced to variability in the composition of what is added to the batch. For pharmaceuticals, the principal source of variability is the cells, which are in effect miniature, incredibly complex bioreactors, and the magical mixture of nutrients fed to them. For fermentation processes, the variability is in the composition and processing of the grains (e.g., corn fermentability) and in the recycle streams (e.g., backset for ethanol production). Analyzers on the feeds provide an opportunity for an immediate adjustment in feed rate to provide more consistent batch times to the desired endpoint. For example, a simple feed rate controller based on a feed analyzer can make an immediate cutback in feed flow when an increase in feed yield is predicted, reducing raw material use.
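As a minimal sketch of such a feed rate correction (the function name, the fermentability variable, and the assumed inverse proportionality between analyzer-predicted yield and required feed are illustrative, not from a specific application):

```python
def corrected_feed_rate(base_feed_rate, measured_fermentability,
                        nominal_fermentability):
    """Cut feed rate back when the analyzer predicts higher feed yield.

    Illustrative assumption: yield per unit feed scales with measured
    fermentability, so the same batch endpoint is reached with
    proportionally less feed when fermentability runs above nominal.
    """
    if measured_fermentability <= 0:
        raise ValueError("fermentability must be positive")
    return base_feed_rate * nominal_fermentability / measured_fermentability
```

A 10% higher-than-nominal fermentability reading would cut a 100 kg/h feed back to about 91 kg/h, saving raw material for the same endpoint.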

Most batch processes are held about 10% longer than necessary unless there is product degradation. Another 10% of capacity can be gained by reducing the sources of variability and providing more optimal operating conditions. The total opportunity of a 20% batch capacity increase is about 10 times larger than what is possible in mature continuous processes.

Data analytics can show the degree of deviation of a batch from normal, where the normal batch is often an average of representative batches. A bad batch can be terminated or corrected. A drill-down into the contributions to the principal component analysis (PCA) can help identify the sources of variability that should be addressed, if not in time for the current batch then at least for future batches. A worm plot, an XY or three-dimensional plot of principal component scores, can provide recognition of where batches have been and where they are going in terms of quality. The tail of the worm is the first batch and the head of the worm is the last batch in the series of batches analyzed. If the worm is uncoiling, batch operation may be headed for problems and preemptive action may be warranted.
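The principal component scores behind such a worm plot can be computed from per-batch summary data. The sketch below uses a plain SVD-based PCA; the variable names and the two-component default are illustrative assumptions, and a commercial data analytics package would add scaling and diagnostics:

```python
import numpy as np

def batch_scores(batch_features, n_components=2):
    """Project per-batch feature vectors onto principal components.

    batch_features: rows = batches in time order, columns = batch summary
    variables (e.g., peak temperature, time to endpoint). The returned
    score trajectory is the "worm": tail = first batch, head = last.
    """
    X = np.asarray(batch_features, dtype=float)
    Xc = X - X.mean(axis=0)                # center each variable
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T        # scores for the worm plot
```

Plotting the first two score columns against each other, with batches connected in time order, gives the worm; a worm drifting away from the origin flags batches deviating from normal.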

An increase in feed conversion capability often shows up as a shorter batch time to the desired endpoint. Either product inhibition for biological processes or reactant depletion for chemical processes results in no further conversion toward the end of the batch.

The key to batch optimization is the conversion rate. The conversion rate can be computed online from an inferential measurement of conversion (e.g., cooling rate for exothermic reactions, and oxygen uptake rate or carbon dioxide production rate for biological reactions), from the rate of change of product concentration (i.e., the slope of the batch profile) measured by online, at-line, or offline analyzers, or predicted by the projection to latent structures (PLS) offered by data analytics. For continuous measurements or at-line measurements with small cycle times, a deadtime block is used to provide a continuous train of values of the slope of the batch profile. The deadtime is chosen large enough to provide a good signal-to-noise ratio. A future process variable (PV) that is the batch endpoint can be predicted by multiplying the slope by the deadtime and adding this delta PV to the current PV, as explained in the June 29, 2012 Control Talk blog "Future PV Values are the Future". When the conversion rate approaches zero or the predicted concentration is at the desired endpoint, the batch is deemed complete. Changes in the time to reach this endpoint can be used as an opportunity to reduce batch cycle time for an increase in capacity or to reduce feed rate for an increase in yield. The changes in time to endpoint can also be used to correct analyzers of feed composition. This opportunity is discussed in the September 2012 Control article "Get the Most out of Your Batch."
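A minimal sketch of the deadtime block and future PV calculation might look like this (the class name, the sample-buffer implementation, and the test values are illustrative):

```python
from collections import deque

class SlopePredictor:
    """Deadtime block: slope = (PV now - PV one deadtime ago) / deadtime,
    and predicted future PV = PV + slope * deadtime."""

    def __init__(self, deadtime, sample_time):
        self.deadtime = deadtime
        # Buffer holds exactly one deadtime's worth of samples.
        self.buffer = deque(maxlen=max(1, round(deadtime / sample_time)))

    def update(self, pv):
        """Return (slope, predicted future PV), or (None, None) until the
        buffer has filled with one deadtime of history."""
        if len(self.buffer) < self.buffer.maxlen:
            self.buffer.append(pv)
            return None, None
        old = self.buffer[0]               # PV one deadtime ago
        self.buffer.append(pv)             # evicts the oldest sample
        slope = (pv - old) / self.deadtime
        return slope, pv + slope * self.deadtime
```

A larger deadtime gives a bigger PV change relative to noise and resolution, improving the signal-to-noise ratio of the computed slope at the cost of slower recognition of a change in conversion rate.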

Data analytics can flag batches as being nonrepresentative. These deviant batches should not be used for adjusting feed conditions, particularly if the principal component analysis shows the major source of variability is an equipment or automation system problem.

When cooling system or vent system capability is the limitation to increasing feed rate to reduce batch cycle time, valve position controllers (VPC) can be used. The VPC PV is the vent or coolant valve position, the VPC setpoint is the maximum throttle position with sufficient sensitivity, and the VPC output adjusts the feed rate setpoint. Slide 14 of the ISA Automation Week 2011 paper and presentation "Biological and Chemical Reactor Control Opportunities" discusses an example of the use of a VPC to maximize feed to a chemical reactor. The diagram applies to fed-batch processes by simply disregarding the level controller.
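A VPC of this kind is typically a slow, integral-only controller. The sketch below shows one hypothetical execution step; the gain, maximum throttle position, and names are illustrative assumptions, and a real VPC would add output limits and directional move suppression:

```python
def vpc_step(feed_setpoint, valve_position,
             max_position=80.0, gain=0.05, dt=1.0):
    """One execution of a slow, integral-only valve position controller.

    Raises the feed rate setpoint while the coolant/vent valve is below
    its maximum throttle position (sensitivity still good), and cuts the
    feed back when the valve is pushed past that position.
    """
    error = max_position - valve_position      # % of valve travel
    return feed_setpoint + gain * error * dt   # integral-only action
```

With the valve at 70% and an 80% maximum, the feed setpoint creeps up; at 90% it is walked back down until the valve settles near its maximum usable position.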

The slope of the batch profile can also be optimized directly to increase capacity or yield, as noted in the July 2008 Control article "Unlocking the Secrets of Batch Profiles."

Coriolis flow meters on the reactant feeds can provide an incredibly accurate mass flow and density measurement. The density can be used to diagnose and correct the reactant concentration in the feed stream. The reaction stoichiometric equation can be enforced by extremely accurate reactant ratio flow control afforded by the Coriolis meters. For ethanol processes, the density is used for slurry solids concentration control.  
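A sketch of stoichiometric ratio control with a density-based concentration correction follows. The linear density-to-concentration assumption and all names are illustrative; a real application would use a calibrated density-concentration relationship for the specific feed:

```python
def reactant_b_setpoint(mass_flow_a, stoich_ratio_b_per_a,
                        density_b, nominal_density_b):
    """Set reactant B mass flow from Coriolis-measured reactant A flow.

    The Coriolis density reading on the B feed corrects for concentration
    variation: under the illustrative assumption that concentration scales
    with density over a narrow range, a dilute feed (density below
    nominal) needs proportionally more mass flow for the same moles of B.
    """
    concentration_correction = nominal_density_b / density_b
    return mass_flow_a * stoich_ratio_b_per_a * concentration_correction
```

At nominal density the controller enforces the pure stoichiometric ratio; a 5% drop in density raises the B flow setpoint by about 5% to keep the reaction stoichiometry satisfied.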

Feedforward control should be considered when feed rates are changed as a result of optimization. For fed-batch reactors, changes in the lead reactant feed rate can be used as a flow feedforward to help preemptively correct coolant and vent flow rates. The ISA Automation Week 2012 paper "Improving the Efficiency of Process Control Optimization for Batch Chemical Systems" shows how to improve the control of the top temperature of a stripping column on the overheads of a reactor by the use of a feedforward of bottom temperature to preemptively change the reflux flow setpoint. This paper also shows how tight shutoff valves doing the double duty of isolation and throttling severely limited the ability to optimize. Control valves with backlash and stiction introduce limit cycles that cannot be eliminated by tuning. The limit cycles and poorer resolution of these valves will prevent getting close to the optimum. When leakage must be eliminated, two types of valves should be installed in series. A tight shutoff rotary valve with a piston actuator and solenoid valve should serve as the isolation valve. A sliding stem valve with a diaphragm actuator, digital positioner, and low-friction packing should be used for throttling, as noted in the Sept. 1, 2012 Control Talk blog "Checklist for Batch Temperature Control".
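A flow feedforward for coolant demand can be sketched as a steady-state heat balance on the lead reactant feed. The heat balance form, the parameter values, and all names below are illustrative assumptions, not taken from the cited paper:

```python
def coolant_flow_setpoint(feed_rate, heat_of_reaction,
                          cp_coolant, delta_t_coolant, feedback_trim=0.0):
    """Flow feedforward: coolant flow computed from the lead reactant
    feed rate by a steady-state heat balance, plus feedback trim from
    the temperature controller.

    feed_rate        kg/s of lead reactant
    heat_of_reaction kJ per kg of feed converted
    cp_coolant       kJ/(kg*K)
    delta_t_coolant  design coolant temperature rise, K
    """
    feedforward = feed_rate * heat_of_reaction / (cp_coolant * delta_t_coolant)
    return feedforward + feedback_trim
```

When the optimization raises the feed rate, the coolant flow setpoint moves at the same time rather than waiting for the temperature controller to see the upset.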

The setpoint for temperature control can be optimized based on first-principle calculations. In the ISA Automation Week 2012 paper "Batch Reactor Temperature Control Improvement", the batch temperature setpoint was optimized based on a solubility calculation. The batch temperature primary controller manipulated the setpoint of a secondary controller for the recirculation line heat exchanger outlet temperature. The reactor temperature had to be above the solubility temperature to prevent precipitation and plugging of the exchanger. However, high reactor temperatures triggered side reactions, reducing yield. The reactor temperature was therefore controlled to be just above an online computed solubility temperature. An intelligent low output limit for the reactor controller was computed based on the difference between the reactor and heat exchanger outlet temperature setpoints and the solubility temperature. The low limit helped keep the control valve open at the beginning of the batch when the heat release was greatest. The optimization achieved $1 million a year in financial benefits plus increased product quality.
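A sketch of such an intelligent low output limit follows, assuming (purely as an illustration, not the cited paper's formula) that the limit keeps the heat exchanger outlet temperature setpoint above the online computed solubility temperature plus a margin:

```python
def exchanger_low_limit(reactor_temp_setpoint, solubility_temp, margin=1.0):
    """Low output limit for a reactor temperature controller whose output
    is the heat exchanger outlet temperature setpoint.

    Keeps the exchanger outlet setpoint from being driven below the
    solubility temperature (plus an illustrative safety margin), so
    maximum cooling early in the batch cannot precipitate product and
    plug the exchanger. Capped at the reactor setpoint so the limit
    never exceeds the requested temperature.
    """
    return min(reactor_temp_setpoint, solubility_temp + margin)
```

As the online solubility calculation updates through the batch, the limit moves with it, allowing the coldest exchanger outlet that is still safely above the precipitation point.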

Here is the checklist.

1. Is the drill-down feature of principal component analysis (PCA) in data analytics used to identify the sources of variability?

2. Is data analytics used to alert operations to potential bad batches?

3. Is data analytics used to screen out batches for optimization of feeds?

4. Is data analytics used to predict endpoints via projection to latent structures (PLS)?

5. Are Coriolis meters used on reactant feeds to diagnose and control raw material variability?

6. Is reactant mass flow ratio control used to enforce the stoichiometric ratio?

7. Are near infrared (NIR) analyzers used on feeds to identify contaminants or to predict conversion?

8. Can cooling rate be used as an inferential measurement of conversion rate?

9. Can an online, at-line, or offline analyzer be used to measure conversion?

10. Is a deadtime block needed for online or fast at-line analyzers to increase the signal-to-noise ratio of the computed conversion rate (e.g., slope of the batch profile)?

11. Can the time to negligible conversion rate be used to optimize batch cycle time or feed rate?

12. Can the slope of the batch profile be optimized by fed-batch control of feed rate?

13. Can feedforward be used to preemptively deal with changes introduced by optimization?

14. Can the temperature setpoint be optimized based on first-principle calculations?

About the Author

Greg McMillan | Columnist

Greg K. McMillan captures the wisdom of talented leaders in process control and adds his perspective based on more than 50 years of experience, along with cartoons by Ted Williams and Top 10 lists.
