
Cost versus pressure drop graph for a particular case of a heat exchanger

Example of a function in which stochastic algorithms can be applied to find the global optimum

Classification of stochastic search algorithms
3.1 Simulated Annealing
Simulated annealing (SA) is a metaheuristic technique based on an analogy with metal annealing. To describe this phenomenon, first consider a solid with a crystalline structure that is heated until it melts, and the molten metal is then cooled until it solidifies again. If the temperature decreases rapidly, irregularities appear in the crystalline structure of the cooled solid, and its energy level is much greater than that of a perfectly crystalline structure. If the material is cooled slowly, the energy level will be minimal. The state of the system at any temperature level is described by the coordinate vector q. At a given temperature, while the system remains in equilibrium, the state changes randomly, but transitions to states with a lower energy level are more likely at low temperatures.
- 0. Select a set of initial values for the search vector x, an initial temperature T, a lower limit for the temperature Tlow, and a limit for the maximum number of iterations L.
- 1. Make k = 0.
- 2. Make k = k + 1.
- 3. Randomly select values for the unknown vector x′.
- 4. If f(x′) − f(x) ≤ 0, make x = x′.
- 5. If f(x′) − f(x) > 0, make x = x′ with a probability of exp(−(f(x′) − f(x))/T).
- 6. If k < L, return to step 2; otherwise continue with step 7.
- 7. Reduce the temperature with T = cT, where 0 < c < 1.
- 8. If T > Tlow, return to step 1; otherwise terminate the search procedure.

General structure of the simulated annealing algorithm
SA relies on a random strategy to diversify the search. The basic SA algorithm uses the Metropolis criterion to accept a move. In this sense, moves that decrease the objective function are always accepted, whereas moves that increase the objective function are accepted only with a probability of exp(−(f(x′) − f(x))/T). When T approaches zero, the probability of accepting a move that increases the objective function approaches zero. Thus, when the temperature is high, many moves that increase the objective function are accepted; in this way the method avoids being trapped prematurely in a local solution.
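The acceptance rule described above can be sketched in a few lines of code. This is a minimal sketch for minimization; the cooling factor, step size, and iteration limits are illustrative assumptions, not values prescribed by the text.

```python
import math
import random

def simulated_annealing(f, x0, T=10.0, T_low=1e-3, c=0.9, L=200, step=0.5, seed=0):
    """Minimize f with the basic SA scheme: perturb randomly, accept
    by the Metropolis criterion, then cool the temperature."""
    rng = random.Random(seed)
    x = list(x0)
    best = list(x)
    while T > T_low:
        for _ in range(L):
            # Randomly perturb the current point to propose a candidate x'
            xp = [xi + rng.uniform(-step, step) for xi in x]
            delta = f(xp) - f(x)
            # Metropolis criterion: always accept improvements; accept
            # worse moves with probability exp(-delta / T)
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                x = xp
            if f(x) < f(best):
                best = list(x)
        T *= c  # cooling schedule, 0 < c < 1
    return best

# Illustrative run on a simple convex function with minimum at (1, -2)
sol = simulated_annealing(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2, [4.0, 4.0])
```

At high T nearly every proposal is accepted, so the search wanders freely; as T falls, the loop degenerates into a greedy descent, mirroring the annealing analogy.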
3.2 Genetic Algorithms
- 0. Make t = 0.
- 1. Initialize P(t).
- 2. Evaluate P(t).
- 3. Combine P(t) to produce C(t).
- 4. Evaluate C(t).
- 5. Select P(t + 1) from P(t) and C(t).
- 6. Do t ← t + 1.
- 7. Finish if one of the convergence criteria is met; otherwise return to step 3.

General structure of genetic algorithms
Usually, the initial population is selected randomly. Recombination typically involves the fusion (crossover) and mutation operations to produce the offspring. In fact, there are only two types of operations in genetic algorithms: (1) genetic operations (fusion and mutation) and (2) the evolution (selection) operation.
Genetic operators simulate the hereditary genetic process to create new offspring in each generation. The evolutionary operation imitates the process of Darwinian evolution to create populations from generation to generation.
Fusion is the main genetic operator. It operates on two chromosomes at a time and generates the offspring by combining the characteristics of both chromosomes. The simplest way to carry out the fusion operation is to randomly select a cut point and generate a descendant by combining the segment of one parent to the right of the cut point with the segment of the other parent to the left of the cut point.
This method works correctly with the representation of the genes through bit strings, for example, with binary variables. The general behavior of genetic algorithms depends to a great extent on the effectiveness of the fusion operation used.
The fusion ratio (designated pc) is defined as the fraction of the number of descendants produced in each generation relative to the total population size (usually referred to as pop_size). This fraction controls the expected number pc × pop_size of chromosomes that undergo the fusion operation. A high fusion ratio allows the exploration of a larger portion of the solution space and reduces the chance of settling into a false optimum. But if this ratio is too high, it can result in a waste of computation time exploring non-promising regions of the solution space.
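The cut-point operation just described can be sketched for bit-string chromosomes as follows (the function name is our own):

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Combine two bit-string chromosomes of equal length at a random cut
    point, producing two complementary descendants."""
    cut = rng.randrange(1, len(parent1))  # cut point strictly inside the string
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

# Illustrative call on two 9-bit genes
c1, c2 = one_point_crossover("000110011", "011001010")
```

Each descendant inherits a prefix from one parent and a suffix from the other, so every bit position matches one of the two parents.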
Mutation is a fundamental operation, which produces spontaneous random changes in several chromosomes. A simple way to perform the mutation operation is to alter one or more genes. In genetic algorithms, the mutation operation plays a crucial role in (a) replenishing the lost genes of the population during the selection process so that they can be combined in a new context or (b) providing genes that were not considered in the initial population.
The mutation rate (designated by pm) is defined as the percentage of individuals in the population generated by mutation. The mutation rate controls the rate at which new genes are introduced into the population for trial. If it is very low, many genes that could be useful will never be tested. But if it is very high, there will be many random perturbations, the descendants will begin to lose their resemblance to their parents, and the algorithm will lose the ability to learn from the history of the search process.
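A simple bit-flip mutation, applied independently to each gene with probability pm, can be sketched as follows (the function name is our own):

```python
import random

def mutate(chromosome, pm, rng=random):
    """Flip each bit of a bit-string chromosome independently with
    mutation probability pm."""
    return "".join(
        ("1" if b == "0" else "0") if rng.random() < pm else b
        for b in chromosome
    )

# Illustrative call: a low pm leaves most genes unchanged
mutated = mutate("010101010", pm=0.1)
```

With pm = 0 the chromosome is returned unchanged, and with pm = 1 every bit is flipped, which shows why intermediate values are needed to balance exploration against resemblance to the parents.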
The convergence criteria for genetic algorithms are as follows: (a) the maximum number of generations is exceeded, (b) the maximum computation time is reached, (c) the optimal solution is located, or (d) there are no improvements over successive generations or with respect to computation time.
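Steps 0–7 above can be sketched as a single skeleton. The OneMax fitness (counting 1-bits), the truncation selection used in step 5, and all parameter values below are illustrative assumptions, not choices made by the text.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, pc=0.8, pm=0.02,
                      max_gen=50, seed=0):
    """Skeleton following steps 0-7: initialize and evaluate P(t),
    recombine it into C(t), then select P(t+1) from P(t) and C(t)."""
    rng = random.Random(seed)
    def rand_chrom():
        return "".join(rng.choice("01") for _ in range(n_bits))
    population = [rand_chrom() for _ in range(pop_size)]      # steps 0-2
    for t in range(max_gen):
        offspring = []
        while len(offspring) < int(pc * pop_size):            # step 3
            p1, p2 = rng.sample(population, 2)
            cut = rng.randrange(1, n_bits)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = "".join(                                  # bit-flip mutation
                ("1" if b == "0" else "0") if rng.random() < pm else b
                for b in child)
            offspring.append(child)
        # step 5: keep the pop_size fittest among parents and offspring
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop_size]          # steps 4-6
    return max(population, key=fitness)                       # step 7

# Illustrative fitness (not from the text): number of 1-bits ("OneMax")
best = genetic_algorithm(lambda ch: ch.count("1"))
```

Truncation selection is only one of several possible evolution operations; roulette-wheel selection, discussed later in this chapter, is another common choice.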
3.2.1 Example of Codification
![$$ {\displaystyle \begin{array}{l}\max \kern0.5em f\left({x}_1,{x}_2\right)=-{x}_1^2-{x}_2^2+3\left[\cos \left(\pi {x}_1\right)+\cos \left(\pi {x}_2\right)\right]\\ {}\mathrm{subject}\ \mathrm{to}\\ {}-5\le {x}_1\le 5\\ {}-5\le {x}_2\le 5\end{array}} $$](/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Chapter_TeX_Equa.png)
Figure 3.2 shows a three-dimensional graph of the behavior of the objective function with respect to the optimization variables; note the large number of local solutions associated with the problem.




| Position, m1 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 |
|---|---|---|---|---|---|---|---|---|---|
| Value, 2^(m1−1) | 2^8 | 2^7 | 2^6 | 2^5 | 2^4 | 2^3 | 2^2 | 2^1 | 2^0 |
| Binary, ym1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
In this case, the decimal value of subchain 1 is equal to 170, and the variable x1 is equal to −1.673. Note that if all binary variables are equal to one, we reproduce the upper limit (in this example 5). On the other hand, if all binary variables are equal to zero, we obtain the lower limit (in this example −5).
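The decoding rule implied by the table and the variable limits can be sketched as a short helper (the function name is our own):

```python
def decode(bits, lower, upper):
    """Map a bit-string gene onto the interval [lower, upper]: the decimal
    value of the subchain scales linearly between the two limits."""
    decimal = int(bits, 2)
    return lower + decimal * (upper - lower) / (2 ** len(bits) - 1)

# The gene from the table: decimal 170 maps to about -1.673 on [-5, 5]
x1 = decode("010101010", -5, 5)
```

With nine bits, the string of all ones decodes to the upper limit 5 and the string of all zeros to the lower limit −5, exactly as noted above.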
Thus, combining these two genes to represent x1 and x2, we form an 18-position chromosome representing a solution of the objective function:

![$$ {\displaystyle \begin{array}{l}{v}_1=\left[000110011\kern1em 011001010\right]\\ {}{v}_2=\left[001101101\kern1em 011010110\right]\\ {}{v}_3=\left[010110011\kern1em 010001010\right]\\ {}{v}_4=\left[110110110\kern1em 110111010\right]\\ {}{v}_5=\left[011101000\kern1em 010010011\right]\end{array}} $$](/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Chapter_TeX_Equf.png)
![$$ {\displaystyle \begin{array}{l}{v}_1=\left[2.984,-1.751\right]\\ {}{v}_2=\left[2.123,-0.812\right]\\ {}{v}_3=\left[2.984,-1.751\right]\\ {}{v}_4=\left[-0.714,-1.340\right]\\ {}{v}_5=\left[-4.099,2.866\right]\end{array}} $$](/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Chapter_TeX_Equg.png)

To produce the new generation, a number of elite individuals are selected to carry over unchanged into the next generation (this operation is known as elite count). In our example, only one elite individual will prevail in the next generation, namely the chromosome with the best fitness (in our example, this chromosome corresponds to v2).





Subsequently, two random numbers are generated in the interval [0,1] to select, from the cumulative probability, the parents of the first individual generated by fusion. Thus, if the first random number is 0.301, then the first parent will be v2, since 0.301 falls between q1 and q2. In the same way, if the second random number is 0.453, the other parent will be v3. Now, by applying the crossover operation, a cut point is generated in a random fashion, and we merge the two chromosomes to obtain:

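The parent-selection step just described can be sketched as a roulette wheel over the cumulative probabilities q1, …, qn. The fitness values below are hypothetical, chosen only so that the random numbers 0.301 and 0.453 from the text select v2 and v3; they assume fitness has already been scaled to positive values.

```python
import random
from itertools import accumulate

def roulette_select(fitnesses, rand=random.random):
    """Pick a parent index by roulette wheel: draw r in [0,1) and return
    the first index whose cumulative probability q_i reaches r."""
    total = sum(fitnesses)
    cumulative = list(accumulate(f / total for f in fitnesses))  # q_1..q_n
    r = rand()
    for i, q in enumerate(cumulative):
        if r <= q:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off

# Hypothetical fitness values giving q1 = 0.15 and q2 = 0.40, so that
# r = 0.301 falls between q1 and q2 and selects the second chromosome
fit = [0.15, 0.25, 0.20, 0.22, 0.18]
first_parent = roulette_select(fit, rand=lambda: 0.301)   # index 1 -> v2
second_parent = roulette_select(fit, rand=lambda: 0.453)  # index 2 -> v3
```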
The mutation operation involves making a random change in a gene on a chromosome. Thus, if in our example we generate a random number between 0 and 1 equal to 0.785, the selected chromosome to undergo a random change will be v4, and the element to be changed is also selected randomly, as shown below:

In this way, all descendants of the current generation are generated, and these in turn will be the parents of the next generation. The process is repeated, and after 102 generations we obtain the solution of the problem, in which the objective function is 6.00 with values of the optimization variables x1 = 0 and x2 = 0 (the point representing the global optimal solution of the problem).
3.2.2 Constraint Handling











There are several additional forms for penalty terms that can be studied in specialized sources; see, for example, Gen and Cheng (1997).
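One common penalty formulation, consistent with the penalty coefficients r_i listed in the nomenclature, is to add to the objective a term that grows with the constraint violation. The quadratic form sketched below is one standard choice, not necessarily the exact form shown in the figures above; the function names are our own.

```python
def penalized_objective(f, g_list, r=1000.0):
    """Return a penalized version of f for inequality constraints
    g_i(x) <= 0, using a quadratic penalty with coefficient r."""
    def fp(x):
        # Only positive (violated) constraint values contribute
        violation = sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + r * violation
    return fp

# Example: minimize x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0
fp = penalized_objective(lambda x: x * x, [lambda x: 1 - x])
```

Feasible points are evaluated by the original objective, while infeasible points are driven back toward the feasible region; the penalized function can then be handed directly to SA or GA.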
3.3 GA Toolbox of MATLAB®
First, make sure you have installed the toolbox for genetic algorithms. This toolbox contains the functions necessary to solve optimization problems through GA.
The first thing we need to do is create an M file that accepts the vector of independent optimization variables and returns a scalar representing the function value to be optimized.
- 1.
From the File menu, select New.
- 2.
Select M File to open the file editor.
- 3.
In the editor, write the function to be optimized; for the example presented in this chapter, we would have the following:

- 4.
Save the f.m file in the MATLAB® working directory.
There are two ways to use GA in MATLAB®: one is through the command line, and the other is through the GA tool.

- @f is the function that evaluates the fitness of the chromosomes.
- nvariables is the number of independent variables in the optimization function.
- options is an array that contains the options for the GA. If this argument is not included, the defaults are used.
- x is the vector of optimization variables.
- fval is the final value of the optimization function.

Options for GA in MATLAB®
| Option | Parameter | Values | Description |
|---|---|---|---|
| Graphics options | 'PlotFcns' | @gaplotbestf | Graphs the best individual through successive generations |
| Population options | 'PopulationType' | 'doubleVector' | Continuous variables |
| | | 'bitstring' | Binary variables |
| | 'PopulationSize' | number | Population size |
| | 'InitialPopulation' | [ ] | Array for the initial population |
| | 'SelectionFcn' | @selectionstochunif | Uniform stochastic selection |
| | | @selectionuniform | Uniform selection |
| | | @selectionroulette | Roulette wheel selection |
| Reproduction | 'EliteCount' | number | Number of best individuals remaining identical in the next generation |
| | 'CrossoverFraction' | fractional number | Fraction of individuals reproducing by crossover |
| Stop criteria | 'Generations' | number | Maximum number of generations |
| | 'TimeLimit' | number | GA maximum computation time |
| | 'FitnessLimit' | number | Limit value for the objective function |
| | 'StallGenLimit' | number | Maximum permissible number of successive generations without improving the best chromosome |
| | 'StallTimeLimit' | number | Maximum time allowed without improvements in the best chromosome |
| Display in screen | 'Display' | 'off' | Shows nothing |
| | | 'iter' | Displays the value of the objective function at each GA generation |
| | | 'final' | Shows the best individual in the final generation |
Additionally, MATLAB® has an application called the GA tool, which can be used to solve optimization problems with GA directly. To use the GA tool, it is first necessary to create an M file containing the objective function to be minimized. Afterward, open the GA tool working screen by entering the command gatool on the MATLAB® command line.

GA toolbox of MATLAB®
In the same way, in the toolbox we can include minimum and maximum limits for the optimization variables in the lower and upper bound fields in the form of arrays (i.e., for two variables, [number1 number2]). Linear equality and inequality constraints can be introduced in matrix form through fields A and b for inequalities and Aeq and beq for equalities. Finally, nonlinear constraints are introduced through an M file.
3.4 EMOO Tool in MS Excel

Flowchart of a simple genetic algorithm
However, almost all these applications have been done using programs/platforms that are not readily used in the industry. On the other hand, engineers are familiar with MS Excel® and use it in both research and industrial practice (Sharma et al. 2012). Hence, an Excel-based multi-objective optimization (EMOO) program may represent a good alternative to develop stochastic optimization algorithms.
Sharma and Rangaiah (2013) developed an improved multi-objective differential evolution algorithm for optimizing chemical processes, which works with a termination criterion based on the non-dominated solutions obtained as the search progresses. This hybrid multi-objective optimization method is known as the improved multi-objective differential evolution (I-MODE) algorithm.

Flowchart of I-MODE algorithm (Sharma and Rangaiah 2013)
For more details about how the I-MODE works and to obtain the optimization routine, you can consult the paper or contact the authors (Sharma and Rangaiah 2013).
For the optimization of a particular process, it is necessary to specify the values for the parameters associated with the I-MODE algorithm, which are the following: population size (NP), number of generations (GenMax), taboo list size (TLS), taboo radius (TR), crossover fraction (Cr), and mutation fraction (F). In addition, it is necessary to select the decision variables and introduce the values for their lower and upper bounds expressed in homogeneous units. Each decision variable must be declared as continuous or discrete, and the initial value for each is automatically the midpoint between its minimum and maximum possible values. The user interface of the optimization algorithm accepts inequality constraints, although the optimization approach can also be run without any inequality constraints.
3.4.1 Main Program Interface

Main program interface of I-MODE algorithm
As can be seen, the I-MODE algorithm has four fundamental parts in its main program interface, which are objective functions, design variables, inequality constraints, and algorithm parameters. Each of these sections will then be analyzed.
Objective Functions:

Input objective function of I-MODE algorithm
The first box, which corresponds to Name, proposes a name for the function. In the second box, you are prompted to specify a cell value, that is, a cell where the objective function is already specified. The last box corresponds to Goal, where you specify whether you want to maximize or minimize the objective function; by default, minimization will appear. After specifying the information of the new objective function in each of the boxes, click on the Add button, and the information specified in the boxes will appear in the corresponding cell. If you want to cancel the introduction of a new function, click on the Done button; this will erase all information previously entered in the boxes and exit the Input Objective Function window.
Design Variables:

Input decision variables of I-MODE algorithm
As can be seen in Fig. 3.11, the Input Decision Variables window of the I-MODE algorithm contains four boxes where you must specify the name of the decision variable, its maximum and minimum values, and the type of variable, as appropriate. After entering the requested information, click on the Add button; to delete the information in the boxes instead, click on the Done button. The information specified in the boxes will appear in the corresponding cell.
Inequality Constraints:

Input constraints of I-MODE algorithm
The Input Constraints window contains four boxes in which you must enter the name that will be assigned to the constraint, the cell containing the constraint, the type of comparison, and the limit of the constraint. The information contained in the boxes structures the constraint, as shown by the example in the figure.
Algorithm Parameters :
This is the part of the main program interface where it is necessary to specify the parameters associated with the used I-MODE algorithm, which are the following: population size (NP), number of generations (GenMax), taboo list size (TLS), taboo radius (TR), crossover fraction (Cr), and mutation fraction (F). All these values are entered directly into the corresponding cell that indicates its name without needing to click on any button to add them. The numerical values of these parameters depend on the nature of the problem to be solved.
3.4.2 Objectives and Constraints

Objectives and constraints of I-MODE algorithm
The value must be entered in its respective cell. It is important to emphasize that the number of decision variables, objective functions, and inequality constraints is not limited to the number shown; more can be added by inserting cells in the usual way in MS Excel®.
3.5 Stochastic Optimization Exercises
- 1.
Propose a chromosome to represent a string of bits for the variables x1 and x2. x1 is defined in the interval [−10 10], while x2 is defined in the interval [−20 20]. Accuracy of five decimal places is required.
- 2.
Propose a chromosome to model through a string of bits a set of variables given by x1, x2, and x3 with an accuracy of six decimal places. The variables are defined in the following intervals:
- x1 defined in [0 1]
- x2 defined in [0.1 0.9]
- x3 defined in [0 0.8]
- 3.
For the example presented in the Section of GA, perform the calculations for the next ten generations, and see the behavior of the objective function as well as the evolution of the chromosomes. Plot the value of the objective function with respect to the generation number.
- 4.
Consider the following problem:


Behavior of the function of problem 4
As can be seen in Fig. 3.14, this problem presents a large number of local solutions in which the deterministic algorithms could fail to locate the global optimal solution.
Solve this problem using MATLAB® GA under the following conditions:
- (a)
For a population boundary of the optimization variables between [−20 20], with an elite count of 2 individuals, a fusion fraction of 0.8 and a limit for the infinite count time, as well as a maximum number of 50 generations.
First, solve the problem with a population size of five individuals. Then, solve the problem using a population size of 10 individuals, and finally solve the problem using a population size of 100 individuals.
Do you get the optimal solution in each of the population sizes? Explain the results.
- (b)
Now, change the maximum number of generations to 100. Then, solve the problem for 5, 10, and 20 individuals. In which cases do you get the optimal solution? Explain the results.
- (c)
For a population of 30 individuals and a maximum number of 100 generations, solve the problem with a value for the crossover fraction of 0.9, 0.5, and 0.2; what are the results? Explain the results obtained.
- (d)
Based on the results obtained in the previous sections, what would be the best values for GA parameters to solve this example?
- 5.
Consider the following problem originally proposed by Hock and Schittkowski (1981):

where
c1 = −6.089, c2 = −17.164, c3 = −34.054, c4 = −5.914, c5 = −24.721, c6 = −14.986, c7 = −24.100, c8 = −10.708, c9 = −26.662, c10 = −22.179.
- 6.
Solve the following problem using the MATLAB® GA toolbox:

- 7.
Solve the problem defined below using GA:

- 8.
The most commonly used type of heat exchanger is the shell-and-tube heat exchangers because they are robust units able to work in a wide range of pressure, flow, and temperature. Ponce-Ortega et al. (2009) developed an optimization approach based on genetic algorithms for the optimal design of this type of heat exchangers. This algorithm can be found in the following link:
Use this algorithm for solving the problems reported by Ponce-Ortega et al. (2009).
- 9.
Process cogeneration is an effective strategy for exploiting the positive aspects of combined heat and power in the process industry. Bamufleh et al. (2013) developed an optimization framework for designing process cogeneration systems with economic, environmental, and social aspects. This algorithm can be found in the following link:
Use this code to solve the problems reported by Bamufleh et al. (2013).
3.6 Nomenclature
- a_j: Lower bound for the interval of x_j
- b_j: Upper bound for the interval of x_j
- C: Population of descendants, set of solutions for the search vector
- Cr: Crossover fraction
- d_j: Number of positions after the decimal point needed to represent the continuous variable x_j
- EMOO: Excel-based multi-objective optimization
- f: Objective function
- G: Generation
- GA: Genetic algorithms
- g_i: Set of inequality constraints, where i = 1, 2, …, m1
- h_i: Set of equality constraints, where i = m1 + 1, …, m
- i: Individuals
- I-MODE: Improved multi-objective differential evolution
- k: Number of iterations
- L: Limit for the maximum number of iterations
- m_j: Number of positions required by the bit subchain to represent the continuous variable
- MODE-TL: Multi-objective differential evolution with taboo list
- MOO: Multi-objective optimization
- NP: Population size
- P: Population of parents, set of solutions for the search vector
- pc: Fusion (crossover) ratio
- q: Coordinate vector
- r_i: Variable penalty coefficient for constraint i
- SA: Simulated annealing
- t: Generation number (GA iteration)
- T: Temperature
- TL: Taboo list
- T_low: Lower limit for the temperature
- TLS: Taboo list size
- TR: Taboo radius
- x: Optimization variable vector
- x′: Unknown vector
- x_j: Continuous variable
- α: Penalty constant
- β: Penalty constant
- ρ: Penalty constant