© Springer International Publishing AG, part of Springer Nature 2019
José María Ponce-Ortega and Luis Germán Hernández-Pérez, Optimization of Process Flowsheets through Metaheuristic Techniques, https://doi.org/10.1007/978-3-319-91722-1_3

3. Metaheuristic Optimization Programs

José María Ponce-Ortega1  and Luis Germán Hernández-Pérez1
(1)
Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, Mexico
 
There are optimization problems of industrial interest that involve functions with a large number of local solutions, which makes it very difficult to determine the optimal solution using deterministic optimization techniques. For example, consider the case shown in Fig. 3.1, in which the cost function for the design of a heat exchanger is plotted against the pressure drop. In this case, we can see that there are two local solutions, so a local search procedure could become trapped in the solution that does not present the minimum cost, since that solution satisfies the stopping criteria of these optimization algorithms.
Fig. 3.1

Cost versus pressure drop graph for a particular case of a heat exchanger

To solve these problems, stochastic search algorithms based on natural phenomena have been proposed, such as simulated annealing (Kirkpatrick et al. 1983) and genetic algorithms (Goldberg 1989). These algorithms can search problems with a large number of local solutions, even ones as complex as that shown in Fig. 3.2.
Fig. 3.2

Example of a function in which stochastic algorithms can be applied to find the global optimum

In this chapter, we present the general structure of simulated annealing (SA) and genetic algorithms (GA) as stochastic algorithms (other stochastic search algorithms can be seen in Fig. 3.3). These algorithms are applied directly to unconstrained problems of the form Min f(x), with bounds on the variables, a ≤ x ≤ b.
Fig. 3.3

Classification of stochastic search algorithms

3.1 Simulated Annealing

Simulated annealing (SA) is a metaheuristic technique based on an analogy with metal annealing. To describe this phenomenon, first consider a solid with a crystalline structure that is heated until it melts, and then the molten metal is cooled until it solidifies again. If the temperature decreases rapidly, irregularities appear in the crystalline structure of the cooled solid, and the energy level of the solid is much greater than that of a perfectly crystalline structure. If the material is cooled slowly, the energy level will be minimal. The state of the system at any temperature level is described by the coordinate vector q. At a given temperature, while the system remains in equilibrium, the state changes randomly, but the transition to states with a lower energy level is more likely at low temperatures.

To apply these ideas to a general optimization problem, we designate the system state q as the vector of optimization variables, denoted as x. The energy level corresponds to the objective function, f(x). The basic steps of the SA optimization algorithm are shown below (see Fig. 3.4):
  • 0. Select a set of initial values for the search vector x, an initial temperature T, a lower limit for the temperature Tlow, and a limit for the maximum number of iterations L.

  • 1. Make k = 0.

  • 2. Make k = k + 1.

  • 3. Randomly select values for the unknown vector x′.

  • 4. If f(x′) − f(x) ≤ 0, make x = x′.

  • 5. If f(x′) − f(x) > 0, make x = x′ with a probability of exp(−(f(x′) − f(x))/T).

  • 6. If k < L, return to step 2; otherwise continue with step 7.

  • 7. Reduce the temperature T = cT, where 0 < c < 1.

  • 8. If T > Tlow, return to step 1; otherwise terminate the search procedure.

Fig. 3.4

General structure of the simulated annealing algorithm

SA relies on a random strategy to diversify the search. The basic SA algorithm uses the Metropolis criterion to accept a move. In this sense, moves in which the objective function decreases are always accepted, whereas moves that increase the objective function are accepted with a probability of exp(−(f(x′) − f(x))/T). As T approaches zero, the probability of accepting a move in which the objective function increases tends to zero. Thus, when the temperature is high, many moves in which the objective function increases are accepted; in this way the method avoids being trapped prematurely in a local solution.
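The steps above can be sketched in a few lines of Python (a minimal illustration, not the authors' implementation; the cooling parameters T, Tlow, c, and L are arbitrary choices, and the test function anticipates the example of Sect. 3.2.1, written for minimization as z = −f):

```python
import math
import random

def simulated_annealing(f, lower, upper, T=100.0, T_low=1e-3, c=0.9, L=200, seed=0):
    """Minimal SA following steps 0-8: random trial points, Metropolis
    acceptance, and geometric cooling T = c*T until T <= T_low."""
    rng = random.Random(seed)
    x = [rng.uniform(a, b) for a, b in zip(lower, upper)]
    best = x[:]
    while T > T_low:                         # step 8: stop when T <= T_low
        for _ in range(L):                   # steps 1-6: L moves at fixed T
            x_new = [rng.uniform(a, b) for a, b in zip(lower, upper)]  # step 3
            delta = f(x_new) - f(x)
            # steps 4-5 (Metropolis): accept improvements always, and
            # worsening moves with probability exp(-delta / T)
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                x = x_new
            if f(x) < f(best):
                best = x[:]
        T *= c                               # step 7: reduce the temperature
    return best

# Multimodal objective of Sect. 3.2.1, written for minimization (z = -f):
def z(x):
    return x[0]**2 + x[1]**2 - 3 * (math.cos(math.pi * x[0]) + math.cos(math.pi * x[1]))

x_best = simulated_annealing(z, [-5.0, -5.0], [5.0, 5.0])
```

Because worsening moves are sometimes accepted at high temperature, the search can escape the many local minima of z and approach the global minimum at the origin.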

3.2 Genetic Algorithms

Genetic algorithms (GA) are stochastic search techniques based on the mechanisms of natural selection and genetics. GAs are particularly useful in non-convex problems or problems that include discontinuous functions. GAs differ from conventional optimization techniques because instead of a single initial solution (values for the optimization variable vector x), we have a set of solutions for the search vector, called the population. Each individual in the population is called a chromosome, which represents a solution to the problem. The chromosomes evolve through successive iterations, which are called generations. During each generation, the chromosomes are evaluated using some measure of their fitness. To create the next generation, new chromosomes, called descendants, are formed either by combining two chromosomes of the current generation using the fusion (crossover) operation or by modifying a chromosome at random using the mutation operation. The new generation is formed by selecting parents and descendants according to their fitness values and rejecting the others so as to keep the population size constant. The fitter chromosomes have a higher probability of being selected. After several generations, the algorithm converges to the best chromosome, which probably represents the optimal solution of the problem or a solution close to the optimal one. To explain this in detail, let us designate P(t) and C(t) as the parents and descendants of the current generation t. The general structure of genetic algorithms (see Fig. 3.5) is described as follows:
  • 0. Make t = 0.

  • 1. Initialize P(t).

  • 2. Evaluate P(t).

  • 3. Combine P(t) to produce C(t).

  • 4. Evaluate C(t).

  • 5. Select P(t + 1) from P(t) and C(t).

  • 6. Do t ← t + 1.

  • 7. Finish if one of the convergence criteria is met; otherwise return to step 3.
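Steps 0-7 can be sketched as follows (a hypothetical minimal GA in Python, shown on a simple convex test function; truncation of the combined parent and offspring populations is one simple way to realize step 5):

```python
import random

def genetic_algorithm(f, lower, upper, pop_size=30, max_gen=60, pc=0.8, seed=1):
    """Minimal GA following steps 0-7: offspring C(t) are produced from P(t)
    by one-point crossover (probability pc) or mutation, and P(t+1) is
    selected from P(t) and C(t) by keeping the fittest individuals."""
    rng = random.Random(seed)
    n = len(lower)
    def random_individual():
        return [rng.uniform(a, b) for a, b in zip(lower, upper)]
    population = [random_individual() for _ in range(pop_size)]  # step 1
    for t in range(max_gen):                                     # steps 2-7
        offspring = []
        while len(offspring) < pop_size:                         # step 3
            if rng.random() < pc and n > 1:                      # crossover
                p1, p2 = rng.sample(population, 2)
                cut = rng.randrange(1, n)
                offspring.append(p1[:cut] + p2[cut:])
            else:                                                # mutation
                child = rng.choice(population)[:]
                j = rng.randrange(n)
                child[j] = rng.uniform(lower[j], upper[j])
                offspring.append(child)
        # step 5: select P(t+1) from parents and offspring by fitness
        population = sorted(population + offspring, key=f)[:pop_size]
    return population[0]

# Hypothetical convex test problem: minimize x1^2 + x2^2 over [-5, 5]^2
best = genetic_algorithm(lambda x: x[0]**2 + x[1]**2, [-5.0, -5.0], [5.0, 5.0])
```

Since the fittest individuals of P(t) ∪ C(t) always survive, the best solution found never worsens from one generation to the next.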

Fig. 3.5

General structure of genetic algorithms

Usually, the initial population is randomly selected. Recombination typically involves the fusion and mutation operations to produce the offspring. In fact, there are only two types of operations in genetic algorithms: (1) genetic operations (fusion and mutation) and (2) the evolution (selection) operation.

Genetic operators simulate the hereditary genetic process to create new offspring in each generation. The evolutionary operation imitates the process of Darwinian evolution to create populations from generation to generation.

Fusion is the main genetic operator. It operates on two chromosomes at a time and generates the offspring by combining the characteristics of both chromosomes. The simplest way to carry out the fusion operation is to randomly select a cut point and generate a descendant by combining the segment of one parent to the right of the cut point with the segment of the other parent to the left of the cut point.

This method works correctly with the representation of the genes through bit strings, for example, with binary variables. The general behavior of genetic algorithms depends to a great extent on the effectiveness of the fusion operation used.

The fusion ratio (designated pc) is defined as the fraction of the number of descendants produced in each generation relative to the total population size (usually referred to as pop_size). This fraction controls the expected number, pc × pop_size, of chromosomes that undergo the fusion operation. A high fusion ratio allows the exploration of a larger portion of the solution space and reduces the possibility of settling in a false optimum; but if this ratio is too high, it can waste computation time exploring non-promising regions of the solution space.

Mutation is a fundamental operation, which produces spontaneous random changes in several chromosomes. A simple way to perform the mutation operation is to alter one or more genes. In genetic algorithms, the mutation operation plays a crucial role in (a) replenishing the lost genes of the population during the selection process so that they can be combined in a new context or (b) providing genes that were not considered in the initial population.

The mutation rate (designated by pm) is defined as the percentage of individuals in the population generated by mutation. The mutation rate controls the rate at which new genes are introduced into the population for testing. If it is too low, many genes that could be useful will never be tested. But if it is too high, there will be many random perturbations, the descendants will begin to lose their resemblance to their parents, and the algorithm will lose the ability to learn from the history of the search process.

The convergence criteria for genetic algorithms are as follows: (a) the maximum number of generations is exceeded, (b) the maximum computation time is reached, (c) the optimal solution is located, or (d) there are no improvements over successive generations or with respect to computation time.

3.2.1 Example of Codification

Consider the following problem:
$$ {\displaystyle \begin{array}{l}\max \kern0.5em f\left({x}_1,{x}_2\right)=-{x}_1^2-{x}_2^2+3\left[\cos \left(\pi {x}_1\right)+\cos \left(\pi {x}_2\right)\right]\\ {}\mathrm{subject}\ \mathrm{to}\\ {}-5\le {x}_1\le 5\\ {}-5\le {x}_2\le 5\end{array}} $$

Figure 3.2 shows a three-dimensional graph of the behavior of the objective function with respect to the optimization variables; note the large number of local solutions associated with the problem.

To solve this problem, we first need to find a way to represent continuous optimization variables through binary strings. One strategy to perform this task is to use a binary encoding of the continuous variables. In this sense, we assume that the bounds of the continuous variable xj are defined in the interval [aj, bj] and require a precision of dj decimal places; the mapping of that number can be represented by the following expression:
$$ {x}_j={a}_j+\mathrm{decimal}\left({\mathrm{subchain}}_j\right)\times \frac{b_j-{a}_j}{2^{m_j}-1} $$
where mj represents the number of positions required by the subchain bit string to be able to represent the continuous variable xj. To determine the value of mj, we use the following expression:
$$ {2}^{m_j-1}<\left({b}_j-{a}_j\right)\times {10}^{d_j-1}\le {2}^{m_j}-1 $$
where dj represents the number of positions after the decimal point needed to represent the continuous variable xj. Thus, for our example, if we require three decimal places, we have the following for the variable x1:
$$ \left(5-0\right){10}^2=500 $$
then m1 is equal to 9 (since 2^8 = 256 < 500 ≤ 2^9 − 1 = 511). For the variable x2, m2 is also equal to 9, since it is defined in the same interval. Therefore, the total number of bits needed to represent a chromosome in the example analyzed is equal to m = m1 + m2 = 18.
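The bit-length computation can be checked with a short Python sketch (the function name bits_needed is hypothetical; note that it reproduces the chapter's arithmetic, which evaluates (5 − 0) × 10² = 500):

```python
def bits_needed(a, b, d):
    """Smallest m such that (b - a) * 10**(d-1) <= 2**m - 1,
    following the chapter's precision formula."""
    m = 1
    while (b - a) * 10 ** (d - 1) > 2 ** m - 1:
        m += 1
    return m

# Reproduces the chapter's computation: 500 values require 9 bits,
# since 2**8 = 256 < 500 <= 2**9 - 1 = 511.
m1 = bits_needed(0, 5, 3)
```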
The decimal function (subchainj) consists of the following summation:
$$ \mathrm{Decimal}\left({\mathrm{subchain}}_j\right)=\sum \limits_{i=1}^{m_j}{y}_i{2}^{i-1} $$
Thus, for the continuous variable of the present example x1, we have:

Position, m1:      9     8     7     6     5     4     3     2     1
Value, 2^(m1−1):   2^8   2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
Binary, ym1:       0     1     0     1     0     1     0     1     0

In this case, decimal(subchain1) is equal to 170, and the variable x1 is equal to −1.673. Note that if all bits are equal to one, we reproduce the upper limit (in this example 5). On the other hand, if all bits are equal to zero, we obtain the lower limit (in this example −5).
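The decoding formula can be verified with a short Python sketch (the function name decode is hypothetical; the bit string is read with the most significant bit first, as in the table above):

```python
def decode(subchain, a, b):
    """Maps a bit string (most significant bit first) to a real value in
    [a, b] using x = a + decimal(subchain) * (b - a) / (2**m - 1)."""
    m = len(subchain)
    decimal = int(subchain, 2)          # MSB-first binary string to integer
    return a + decimal * (b - a) / (2 ** m - 1)

x1 = decode("010101010", -5.0, 5.0)     # decimal(subchain1) = 170, x1 ≈ -1.673
```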

Thus, combining these two genes to represent x1 and x2, we form an 18-position chromosome representing a solution of the objective function:

After explaining how to encode a binary variable through a binary string, we will randomly propose an initial population, considering a population size of five chromosomes as shown below:
$$ {\displaystyle \begin{array}{l}{v}_1=\left[000110011\kern1em 011001010\right]\\ {}{v}_2=\left[001101101\kern1em 011010110\right]\\ {}{v}_3=\left[010110011\kern1em 010001010\right]\\ {}{v}_4=\left[110110110\kern1em 110111010\right]\\ {}{v}_5=\left[011101000\kern1em 010010011\right]\end{array}} $$
Whose corresponding decimal values are as follows:
$$ {\displaystyle \begin{array}{l}{v}_1=\left[2.984,-1.751\right]\\ {}{v}_2=\left[2.123,-0.812\right]\\ {}{v}_3=\left[3.023,-1.830\right]\\ {}{v}_4=\left[-0.714,-1.340\right]\\ {}{v}_5=\left[-4.099,2.866\right]\end{array}} $$
Subsequently, we proceed with the evaluation of the fitness of each individual of the population through the objective function (note that a constant of 50 is added to f so that all fitness values are positive for the roulette wheel selection used below):
$$ {\displaystyle \begin{array}{l}\mathrm{eval}\left({v}_1\right)=f\left(2.984,-1.751\right)+50=37.160\\ {}\mathrm{eval}\left({v}_2\right)=f\left(2.123,-0.812\right)+50=45.117\\ {}\mathrm{eval}\left({v}_3\right)=f\left(3.023,-1.830\right)+50=37.099\\ {}\mathrm{eval}\left({v}_4\right)=f\left(-0.714,-1.340\right)+50=44.381\\ {}\mathrm{eval}\left({v}_5\right)=f\left(-4.099,2.866\right)+50=25.084\end{array}} $$

To produce the new generation, a number of elite individuals are selected to pass unchanged into the next generation (their number is known as the elite count). In our example, only one elite individual will prevail in the next generation, and it will be the one with the best fitness (in our example, this chromosome corresponds to v2).

The other individuals of the next population will be generated through crossover and mutation operations. In our example, the percentage of individuals generated by fusion will be 80%, and therefore the remaining 20% will be generated by mutation (in our case, we will have three descendants by fusion and one by mutation). To perform this task, we must first select the parents of the next generation. For this operation, the basic genetic algorithm uses the procedure known as the roulette wheel, which is explained below. First, we calculate the total fitness of the current population:
$$ F=\sum \limits_{k=1}^{\mathrm{pop}\_\mathrm{size}}\mathrm{eval}\left({v}_k\right) $$
In our case, this value is F = 188.841. Then, we calculate the probability of selecting each individual as follows:
$$ {p}_k=\frac{\mathrm{eval}\left({v}_k\right)}{F} $$
In this case we have:
$$ {\displaystyle \begin{array}{l}{p}_1=0.197\\ {}{p}_2=0.239\\ {}{p}_3=0.196\\ {}{p}_4=0.235\\ {}{p}_5=0.133\end{array}} $$
Also, we need to determine the cumulative probability as follows:
$$ {q}_k=\sum \limits_{j=1}^k{p}_j $$
In this example, we have the following:
$$ {\displaystyle \begin{array}{l}{q}_1=0.197\\ {}{q}_2=0.436\\ {}{q}_3=0.632\\ {}{q}_4=0.867\\ {}{q}_5=1\end{array}} $$
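The roulette wheel computations can be reproduced with a short Python sketch (illustrative only; the function name spin is hypothetical):

```python
import bisect

evals = [37.160, 45.117, 37.099, 44.381, 25.084]   # eval(v1) ... eval(v5)
F = sum(evals)                                     # total fitness, F ≈ 188.841
p = [e / F for e in evals]                         # selection probabilities p_k
q = [sum(p[:k + 1]) for k in range(len(p))]        # cumulative probabilities q_k

def spin(r):
    """Roulette wheel: return the 1-based index k of the first q_k >= r."""
    return bisect.bisect_left(q, r) + 1

parent_a = spin(0.301)    # 0.301 falls between q1 and q2, selecting v2
parent_b = spin(0.453)    # 0.453 falls between q2 and q3, selecting v3
```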

Subsequently, two random numbers are generated in the interval [0,1] to select, from the cumulative probabilities, the parents of the first individual generated by fusion. Thus, if the first random number is 0.301, the first parent will be v2, since 0.301 falls between q1 and q2. In the same way, if the second random number is 0.453, the other parent will be v3. Now, by applying the crossover operation, a cut point is generated at random, and we merge the two chromosomes to obtain:

(Figure: the two selected parents, v2 and v3, cut at the random point and recombined into a descendant)

The mutation operation involves making a random change in one gene of a chromosome. Thus, if in our example we generate a random number between 0 and 1 equal to 0.785, the chromosome selected to undergo a random change will be v4 (since 0.785 falls between q3 and q4), and the element to be changed is also selected at random, as shown below:

(Figure: chromosome v4 with one randomly selected bit changed by mutation)

In this way, all the descendants of the current generation are generated, and these in turn will be the parents of the next generation. The process is repeated, and after 102 generations we obtain the solution of the problem, in which the fitness value is 56.00, with optimization variables x1 = 0 and x2 = 0 (the point representing the global optimal solution of the problem).
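The crossover and mutation operations of this example can be sketched in Python (illustrative only; the cut point and the mutated bit are chosen with a seeded random generator rather than the particular values used in the figures):

```python
import random

def one_point_crossover(p1, p2, rng):
    """Cut both parents at the same random point and splice the left
    segment of one parent to the right segment of the other."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(chromosome, rng):
    """Flip one randomly chosen bit of the chromosome."""
    j = rng.randrange(len(chromosome))
    flipped = "1" if chromosome[j] == "0" else "0"
    return chromosome[:j] + flipped + chromosome[j + 1:]

rng = random.Random(0)
v2 = "001101101011010110"          # 18-bit chromosomes of the example
v3 = "010110011010001010"
v4 = "110110110110111010"
child = one_point_crossover(v2, v3, rng)   # descendant by fusion
mutant = mutate(v4, rng)                   # descendant by mutation
```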

3.2.2 Management of Constraints

The general procedure of genetic algorithms works correctly for unconstrained minimization or maximization problems. However, constraints additional to the objective function must be treated in a special way. The most common strategy for constraint management in GA is to include a penalty function in the objective function, which incorporates the violation of the constraints; in this way, the constrained problem is transformed into an unconstrained one by penalizing infeasible solutions. In general, there are two ways of representing the objective function including the penalty factor. One is to add the penalty term to the objective function:
$$ \mathrm{eval}\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)+p\left(\mathbf{x}\right) $$
For a minimization problem, we have the following:
$$ p\left(\mathbf{x}\right)=\left\{\begin{array}{l}0\kern0.5em \mathrm{if}\;\mathbf{x}\;\mathrm{is}\kern0.17em \mathrm{feasible}\\ {}>0\kern0.5em \mathrm{if}\;\mathbf{x}\;\mathrm{is}\kern0.17em \mathrm{infeasible}\end{array}\right. $$
The second strategy is to multiply the value of the objective function by the penalty term:
$$ \mathrm{eval}\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)p\left(\mathbf{x}\right) $$
In this second case, we have for a minimization problem:
$$ p\left(\mathbf{x}\right)=\left\{\begin{array}{l}1\kern0.5em \mathrm{if}\;\mathbf{x}\;\mathrm{is}\kern0.17em \mathrm{feasible}\\ {}>1\kern0.5em \mathrm{if}\;\mathbf{x}\;\mathrm{is}\kern0.17em \mathrm{infeasible}\end{array}\right. $$
There are several ways to determine the penalty term. For example, suppose that you want to solve the following problem:
$$ {\displaystyle \begin{array}{l}\min \kern0.62em f(x)\\ {}\mathrm{subject}\kern0.62em \mathrm{to}\\ {}{g}_i(x)\ge 0,\kern1.6em i=1,2,\dots, m\end{array}} $$
If we take the additive form for the function to be optimized without constraints, the penalty term would be equal to:
$$ p\left(\mathbf{x}\right)=\left\{\begin{array}{l}0,\kern0.62em \mathrm{if}\;\mathbf{x}\;\mathrm{is}\ \mathrm{feasible}\\ {}\sum \limits_{i=1}^m{r}_i{g}_i^2\left(\mathbf{x}\right),\kern0.62em \mathrm{if}\;\mathbf{x}\;\mathrm{is}\ \mathrm{infeasible}\end{array}\right. $$
where ri is a variable penalty coefficient for constraint i.
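The additive penalty can be sketched as follows (a hypothetical one-dimensional example; the constraint g(x) = x − 1 ≥ 0, the coefficient value, and the function name penalized are illustrative):

```python
def penalized(f, gs, rs):
    """Builds eval(x) = f(x) + p(x), where p(x) = sum_i r_i * g_i(x)**2 over
    the violated constraints g_i(x) >= 0 (and p = 0 when x is feasible)."""
    def eval_x(x):
        penalty = sum(r * g(x) ** 2 for g, r in zip(gs, rs) if g(x) < 0)
        return f(x) + penalty
    return eval_x

# Hypothetical example: minimize x^2 subject to g(x) = x - 1 >= 0
eval_x = penalized(lambda x: x ** 2, [lambda x: x - 1.0], [1000.0])
```

The GA then minimizes eval(x) as an unconstrained function: feasible points are evaluated as f(x), while infeasible points are pushed back toward the feasible region by the quadratic penalty.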
Another way to handle a constrained optimization problem is shown below. Consider the following optimization problem:
$$ {\displaystyle \begin{array}{l}\min \kern0.62em f(x)\\ {}\mathrm{subject}\kern0.62em \mathrm{to}\\ {}{g}_i(x)\ge 0,\kern1.6em i=1,2,\dots, {m}_1\\ {}{h}_i(x)=0,\kern1.6em i={m}_1+1,\dots, m\end{array}} $$
The evaluation of the objective function as an unconstrained problem will be given by the following expression:
$$ \mathrm{eval}\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)+p\left(t,\mathbf{x}\right) $$
where
$$ p\left(t,\mathbf{x}\right)={\rho}_t^{\alpha}\sum \limits_{i=1}^m{d}_i^{\beta}\left(\mathbf{x}\right) $$
here t is the current GA generation. The constraint violation di is given as follows:
$$ {d}_i\left(\mathbf{x}\right)=\left\{\begin{array}{l}0,\kern0.62em \mathrm{if}\ \mathrm{constraint}\ i\ \mathrm{is}\ \mathrm{satisfied}\\ {}\mid {g}_i\left(\mathbf{x}\right)\mid, \kern0.62em \mathrm{if}\ 1\le i\le {m}_1\ \mathrm{and}\ {g}_i\ \mathrm{is}\ \mathrm{violated}\\ {}\mid {h}_i\left(\mathbf{x}\right)\mid, \kern0.62em \mathrm{if}\ {m}_1+1\le i\le m\ \mathrm{and}\ {h}_i\ \mathrm{is}\ \mathrm{violated}\end{array}\right. $$
$$ {\rho}_t=C\times t $$
where C is a constant.

There are several additional forms for penalty terms that can be studied in specialized sources; see, for example, Gen and Cheng (1997).

3.3 GA Toolbox of MATLAB®

First, make sure you have installed the toolbox for genetic algorithms. This toolbox contains the functions necessary to solve optimization problems through GA.

The first thing we need to do is create an M file that accepts the vector of independent optimization variables and returns a scalar representing the value of the function to be optimized.

To write an M file in MATLAB®, follow these steps:
  1. From the File menu, select New.

  2. Select M File to open the file editor.

  3. In the editor, write the function to be optimized. For the example presented in this chapter (written for minimization, i.e., z = −f), we would have the following:

     function z = f(x)
     z = x(1)^2 + x(2)^2 - 3*(cos(pi*x(1)) + cos(pi*x(2)));

     Here, z represents the objective function to be minimized, and x is the vector of optimization variables.

  4. Save the f.m file in the working directory of MATLAB®.

There are two ways to use the GA in MATLAB®, one is through the command line and another is through the GA tool.

Firstly, we will explain how to solve the problem using the command line. The syntax of the MATLAB® GA routine from the command line is shown below:
[x, fval] = ga(@f, nvariables, options)
where
  • @f is the function that evaluates the capabilities of the chromosomes.

  • nvariables is the number of independent variables in the optimization function.

  • options is a structure that contains the options for the GA. If this argument is not included, the default values are used.

  • x is the vector of optimization variables.

  • fval is the final value of the optimization function.

To change the default settings in GA in MATLAB®, you must include the following instruction before calling the GA:
options = gaoptimset('Parameter1', value1, 'Parameter2', value2, ...)
gaoptimset is a subroutine that modifies the GA conditions; Parameter1 is the parameter to be modified, and value1 is the value assigned to it. The same applies to Parameter2 and value2, and so on for all the parameters to be modified.
Some of the most important parameters that can be modified in the MATLAB® GA are shown in Table 3.1:
Table 3.1

Options for GA in MATLAB®

Option              Parameter             Values                 Description
Graphics options    'PlotFcns'            @gaplotbestf           Plots the best individual through successive generations
Population options  'PopulationType'      'doubleVector'         Continuous variables
                                          'bitstring'            Binary variables
                    'PopulationSize'      number                 Population size
                    'InitialPopulation'   [ ]                    Array for the initial population
                    'SelectionFcn'        @selectionstochunif    Stochastic uniform selection
                                          @selectionuniform      Uniform selection
                                          @selectionroulette     Roulette wheel selection
Reproduction        'EliteCount'          number                 Number of best individuals remaining identical in the next generation
                    'CrossoverFraction'   fractional number      Fraction of individuals reproducing by crossover
Stop criteria       'Generations'         number                 Maximum number of generations
                    'TimeLimit'           number                 GA maximum computation time
                    'FitnessLimit'        number                 Limit value for the objective function
                    'StallGenLimit'       number                 Maximum number of successive generations allowed without improvement of the best chromosome
                    'StallTimeLimit'      number                 Maximum time allowed without improvement of the best chromosome
Display in screen   'Display'             'off'                  Shows nothing
                                          'iter'                 Displays the value of the objective function at each GA generation
                                          'final'                Shows the best individual in the final generation

Additionally, MATLAB® has an application called the GA tool, which can be used to solve optimization problems with GA directly. To use the GA tool, it is first necessary to create an M file containing the objective function to be minimized. Afterward, open the GA tool working screen by entering the command gatool in the MATLAB® command window.

This will open a screen (see Fig. 3.6) where you first enter the function to be optimized (@f) in the fitness function field, as well as the number of optimization variables. Subsequently, we can modify the default settings of the MATLAB® GA through the parameters on the right side of the screen. Once all the conditions under which you want to run the GA for the specific problem are set, just press the Start button; the algorithm is executed, and at the end it reports the requested results.
Fig. 3.6

GA toolbox of MATLAB®

In the same way, in the toolbox we can include minimum and maximum limits for the optimization variables in the lower and upper bound fields in the form of arrays (e.g., for two variables, [number1 number2]). Linear equality and inequality constraints can be introduced in matrix form through fields A and b for inequalities and Aeq and beq for equalities. Finally, nonlinear constraints are introduced through an M file.

3.4 EMOO Tool in MS Excel

Genetic algorithms (the flowchart is presented in Fig. 3.7) have proved to be a versatile and effective approach for solving optimization problems, but there are many situations in which the simple genetic algorithm does not perform particularly well, and various methods of hybridization have been proposed (Gen and Cheng 1997).
Fig. 3.7

Flowchart of the simple genetic algorithm

However, almost all these applications have been done using programs/platforms that are not readily used in the industry. On the other hand, engineers are familiar with MS Excel® and use it in both research and industrial practice (Sharma et al. 2012). Hence, an Excel-based multi-objective optimization (EMOO) program may represent a good alternative to develop stochastic optimization algorithms.

An improved multi-objective differential evolution algorithm for optimizing chemical processes was developed by Sharma and Rangaiah (2013); it works with a termination criterion based on the non-dominated solutions obtained as the search progresses. This hybrid multi-objective optimization method is named the improved multi-objective differential evolution (I-MODE).

In the I-MODE (whose flowsheet is presented in Fig. 3.8), a population of NP individuals is randomly initialized inside the bounds of decision variables. Then, values of the objectives and constraints are calculated for each individual of the initial population. The taboo list size (TLS) is half of the population size, and taboo list (TL) is randomly filled with 50% individuals of the initial population; initial individuals are also identified as target individuals (i). A trial individual is generated for each target individual by mutation and crossover on three randomly selected individuals from initial/current/parent population. The elements of the mutant vector compete with those of the target vector, with a probability Cr to generate a trial vector. Taboo check is implemented in the generation step of the trial vector of I-MODE, and the trial individual is generated repeatedly until it is away from each individual in the TL by a specified distance called taboo radius (Tr). The Euclidean distance between trial individual and each individual in TL is calculated in the normalized space of decision variables for accepting the trial individual. After that, objectives and constraints are calculated for the temporarily accepted trial individual. The trial individual is stored in the child population and added to TL. After generating the trial individuals for all the target individuals of the current population, non-dominated sorting of the combined current and child populations followed by crowding distance calculation, if required, is performed to select the individuals for the next generation (G). The best NP individuals are used as the population in the subsequent generation.
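The generation of a trial individual by differential evolution can be sketched in Python (a simplified illustration of the mutation and crossover step only; the taboo-list distance check and the non-dominated sorting of I-MODE are omitted, and the function name de_trial is hypothetical):

```python
import random

def de_trial(target, population, F, Cr, lower, upper, rng):
    """One DE step: three distinct individuals r1, r2, r3 are chosen at
    random, the mutant element r1[j] + F*(r2[j] - r3[j]) replaces the target
    element with probability Cr, and the trial vector is clipped to the
    decision-variable bounds."""
    r1, r2, r3 = rng.sample([ind for ind in population if ind is not target], 3)
    n = len(target)
    j_rand = rng.randrange(n)        # ensure at least one mutant element
    trial = []
    for j in range(n):
        if j == j_rand or rng.random() < Cr:
            v = r1[j] + F * (r2[j] - r3[j])     # mutant element
        else:
            v = target[j]                       # inherited from the target
        trial.append(min(max(v, lower[j]), upper[j]))
    return trial

rng = random.Random(0)
NP = 6                                # population size
population = [[rng.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(NP)]
trial = de_trial(population[0], population, F=0.8, Cr=0.9,
                 lower=[-5.0, -5.0], upper=[5.0, 5.0], rng=rng)
```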
Fig. 3.8

Flowchart of I-MODE algorithm (Sharma and Rangaiah 2013)

For more details about how the I-MODE works and to obtain the optimization routine, you can consult the paper or contact the authors (Sharma and Rangaiah 2013).

For the optimization of a particular process, it is necessary to specify the values of the parameters associated with the I-MODE algorithm, which are the following: population size (NP), number of generations (GenMax), taboo list size (TLS), taboo radius (Tr), crossover fraction (Cr), and mutation fraction (F). In addition, it is necessary to select the decision variables and introduce the values of their lower and upper bounds expressed in homogeneous units. All decision variables must be specified as continuous or discrete, and the initial value of each is automatically the midpoint between the minimum and maximum possible values. The user interface of the optimization algorithm accepts inequality constraints, although the approach can also be run without any inequality constraint.

3.4.1 Main Program Interface

In order to give the reader the necessary knowledge to adequately manage the I-MODE, general notions of the MS Excel® user interface of this stochastic algorithm are given below. For a detailed description of all the sheets that compose the I-MODE algorithm, it is recommended to review the user guide. Figure 3.9 shows the main program interface of the I-MODE.
Fig. 3.9

Main program interface of I-MODE algorithm

As can be seen, the main program interface of the I-MODE algorithm has four fundamental parts: objective functions, design variables, inequality constraints, and algorithm parameters. Each of these sections is analyzed below.

Objective Functions:

As expected, this part is designed for introducing the objective functions to be optimized, by simply clicking the Add Objective Functions button. Next, a small window called Input Objective Function (shown in Fig. 3.10) will appear, where you have to fill in three boxes.
Fig. 3.10

Input objective function of I-MODE algorithm

In the first box, Name, you propose a name for the function. In the second box, you are prompted to specify a cell, that is, the cell where the objective function is already defined. The last box corresponds to Goal, where you specify whether to maximize or minimize the objective function; by default, minimization is selected. After specifying the information for the new objective function in each of the boxes, click on the Add button, and the information in the boxes will appear in the corresponding cells. If you want to cancel the introduction of a new function, click on the Done button; this erases all the information previously entered in the boxes and exits the Input Objective Function window.

Design Variables:

In this part of the main program interface, you must enter the decision variables, that is, the variables that will be manipulated in order to find the optimal point. To begin, click the Add Decision Variables button, after which the window shown in Fig. 3.11 will appear.
/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Fig11_HTML.png
Fig. 3.11

Input decision variables of I-MODE algorithm

As can be seen in Fig. 3.11, the Input Decision Variables window of the I-MODE algorithm contains four boxes in which you must specify the name of the decision variable, its maximum and minimum values, and the type of variable, as appropriate. After entering the requested information, click on the Add button; the information in the boxes will then appear in the corresponding cells. To delete the information in the boxes instead, click on the Done button.

Inequality Constraints:

In the Inequality Constraints part of the main program interface of the I-MODE algorithm, the constraints of the optimization problem must be specified, if any. To proceed, click the Add Constraints button, after which the window shown in Fig. 3.12 will be displayed.
/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Fig12_HTML.png
Fig. 3.12

Input constraints of I-MODE algorithm

The Input Constraints window contains four boxes in which you must enter the name that will be assigned to the constraint, the cell containing the constraint, the type of comparison, and the limit of the constraint. The information contained in the boxes structures the constraint, as shown by the example in the figure.

Algorithm Parameters:

This is the part of the main program interface where it is necessary to specify the parameters associated with the I-MODE algorithm, which are the following: population size (NP), number of generations (GenMax), taboo list size (TLS), taboo radius (TR), crossover fraction (Cr), and mutation fraction (F). All these values are entered directly into the cell next to the corresponding parameter name, without needing to click any button to add them. The numerical values of these parameters depend on the nature of the problem to be solved.
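Although the I-MODE implementation itself is provided in MS Excel®, the roles of NP, Cr, and F can be illustrated with a minimal Python sketch of one generation of classic differential evolution (DE/rand/1/bin). This is a simplified illustration only, not the I-MODE code: it omits the taboo list and the multi-objective handling.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_step(pop, func, F=0.5, Cr=0.8, bounds=(-5.0, 5.0)):
    """One generation of classic DE/rand/1/bin for minimization.

    NP is the population size (rows of pop), F the mutation factor,
    and Cr the crossover fraction.
    """
    NP, dim = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # pick three distinct individuals, all different from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), *bounds)
        # binomial crossover: each coordinate is taken from the mutant
        # with probability Cr; one coordinate is forced from the mutant
        cross = rng.random(dim) < Cr
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        # greedy selection: keep the trial only if it is no worse
        if func(trial) <= func(pop[i]):
            new_pop[i] = trial
    return new_pop
```

Increasing F widens the mutation steps, while Cr controls how many coordinates of each trial vector come from the mutant; both strongly affect the balance between exploration and exploitation, which is why their numerical values depend on the problem to be solved.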

3.4.2 Objectives and Constraints

Many of the values entered in the main program interface of the I-MODE are references to another MS Excel® sheet, which mainly contains the objectives and constraints of the problem to be optimized. In the objectives and constraints sheet shown in Fig. 3.13, you can identify the decision variables, objectives, and constraints. It is convenient to write the numerical values of each item in this sheet and to reference them from the main program interface sheet, so that they can be easily modified in subsequent optimizations.
/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Fig13_HTML.png
Fig. 3.13

Objectives and constraints of I-MODE algorithm

Each value must be entered in its respective cell. It is important to emphasize that the number of decision variables, objective functions, and inequality constraints is not limited to the number proposed; more can be added by inserting cells in the usual way in MS Excel®.

3.5 Stochastic Optimization Exercises

  1.

    Propose a chromosome to represent a string of bits for the variables x1 and x2. x1 is defined in the interval [−10, 10], while x2 is defined in the interval [−20, 20]. Accuracy of five decimal places is required.
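For a binary encoding, the number of bits mj for a variable defined on [aj, bj] with dj decimal places must satisfy 2^mj ≥ (bj − aj)·10^dj + 1, so that every distinguishable value in the interval has its own bit pattern. A short Python sketch (the function name is illustrative) applies this to exercise 1:

```python
import math

def bits_needed(a, b, d):
    """Smallest m with 2**m >= (b - a) * 10**d + 1."""
    return math.ceil(math.log2((b - a) * 10**d + 1))

# Exercise 1: five decimal places of accuracy
m1 = bits_needed(-10, 10, 5)   # 21 bits for x1 in [-10, 10]
m2 = bits_needed(-20, 20, 5)   # 22 bits for x2 in [-20, 20]
length = m1 + m2               # chromosome of 43 bits in total
```

A substring s of mj bits then decodes as xj = aj + int(s, 2)·(bj − aj)/(2^mj − 1).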
  2.

    Propose a chromosome to model, through a string of bits, a set of variables given by x1, x2, and x3 with an accuracy of six decimal places. The variables are defined in the following intervals:

    • x1 defined in [0, 1]

    • x2 defined in [0.1, 0.9]

    • x3 defined in [0, 0.8]
  3.

    For the example presented in the section on GA, perform the calculations for the next ten generations, and observe the behavior of the objective function as well as the evolution of the chromosomes. Plot the value of the objective function with respect to the generation number.
  4.

    Consider the following problem:
$$ \min \kern0.5em f\left({x}_1,{x}_2\right)=\frac{-20\sin \left(0.1+\sqrt{{\left({x}_1-4\right)}^2+{\left({x}_2-4\right)}^2}\right)}{0.1+\sqrt{{\left({x}_1-4\right)}^2+{\left({x}_2-4\right)}^2}} $$
Figure 3.14 shows the behavior of the previous function:
/epubstore/O/J-M-Ortega/Optimization-Of-Process-Flowsheets-Through-Metaheuristic-Techniques/OEBPS/images/462978_1_En_3_Chapter/462978_1_En_3_Fig14_HTML.png
Fig. 3.14

Behavior of the function of problem 4

As can be seen in Fig. 3.14, this problem presents a large number of local solutions in which the deterministic algorithms could fail to locate the global optimal solution.
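Independently of the MATLAB® GA runs requested below, the multimodal surface can be probed with a quick Python sketch using SciPy's differential_evolution (a different metaheuristic than GA; the solver choice is our assumption, not the book's method). The function has its global minimum at (4, 4), where f = −20 sin(0.1)/0.1 ≈ −19.9667:

```python
import numpy as np
from scipy.optimize import differential_evolution

def f(v):
    x1, x2 = v
    r = np.sqrt((x1 - 4.0)**2 + (x2 - 4.0)**2)
    return -20.0 * np.sin(0.1 + r) / (0.1 + r)

# a generous population helps cover the many local basins
res = differential_evolution(f, bounds=[(-20, 20), (-20, 20)],
                             popsize=100, seed=1)
print(res.x, res.fun)   # should approach (4, 4) with f near -19.9667
```

With a small population the solver can stall in one of the concentric ring minima (the deepest of which is only f ≈ −2.57), which is exactly the effect the exercise asks you to observe with the GA population sizes.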

Solve this problem using MATLAB® GA under the following conditions:

  (a)

    Set bounds for the optimization variables of [−20, 20], an elite count of 2 individuals, a crossover fraction of 0.8, no limit on the computation time, and a maximum number of 50 generations.

    First, solve the problem with a population size of 5 individuals. Then, solve the problem using a population size of 10 individuals, and finally solve the problem using a population size of 100 individuals.

    Do you obtain the optimal solution with each of the population sizes? Explain the results.

  (b)

    Now, change the maximum number of generations to 100. Then, solve the problem for 5, 10, and 20 individuals. In which cases do you obtain the optimal solution? Explain the results.

  (c)

    For a population of 30 individuals and a maximum number of 100 generations, solve the problem with values for the crossover fraction of 0.9, 0.5, and 0.2. Compare and explain the results obtained.

  (d)

    Based on the results obtained in the previous parts, what would be the best values of the GA parameters to solve this example?

  5.

    Consider the following problem, originally proposed by Hock and Schittkowski (1981):

$$ {\displaystyle \begin{array}{l}\min f(x)=\sum \limits_{j=1}^{10}{x}_j\left({c}_j+\ln \frac{x_j}{x_1+{x}_2+{x}_3+{x}_4+{x}_5+{x}_6+{x}_7+{x}_8+{x}_9+{x}_{10}}\right)\\ {}\mathrm{subject}\ \mathrm{to}\\ {}{x}_1+2{x}_2+2{x}_3+{x}_6+{x}_{10}=2\\ {}{x}_4+2{x}_5+{x}_6+{x}_7=1\\ {}{x}_3+{x}_7+{x}_8+2{x}_9+{x}_{10}=1\\ {}{x}_i\ge 0.000001,\kern0.875em i=1,\dots, 10\end{array}} $$

where

c1 = −6.089, c2 = −17.164, c3 = −34.054, c4 = −5.914, c5 = −24.721, c6 = −14.986, c7 = −24.100, c8 = −10.708, c9 = −26.662, c10 = −22.179.

This problem includes a set of equality constraints that need to be incorporated through a penalty term in the objective function. Solve the problem using the MATLAB® GA. The best solution to the problem is −47.760765; adjust the GA parameters to achieve this solution. What is your computation time?
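The penalty idea can be sketched as follows: the three equality constraints are moved into the objective as a quadratic term weighted by a penalty coefficient (called rho here, in the spirit of the penalty constants of the nomenclature; the value 10³ is an illustrative assumption), so that an unconstrained stochastic solver can be applied:

```python
import numpy as np

c = np.array([-6.089, -17.164, -34.054, -5.914, -24.721,
              -14.986, -24.100, -10.708, -26.662, -22.179])

# the three equality constraints written as A @ x - b = 0
A = np.zeros((3, 10))
A[0, [0, 1, 2, 5, 9]] = [1, 2, 2, 1, 1]
A[1, [3, 4, 5, 6]]    = [1, 2, 1, 1]
A[2, [2, 6, 7, 8, 9]] = [1, 1, 1, 2, 1]
b = np.array([2.0, 1.0, 1.0])

def objective(x):
    x = np.maximum(x, 1e-6)          # enforce x_i >= 0.000001
    return float(np.sum(x * (c + np.log(x / np.sum(x)))))

def penalized(x, rho=1e3):
    # quadratic penalty for the equality constraint residuals
    residuals = A @ np.maximum(x, 1e-6) - b
    return objective(x) + rho * float(np.sum(residuals**2))
```

The penalized objective can then be handed to a GA or DE solver over the box 10⁻⁶ ≤ xi. Too small a rho leaves the best point infeasible, while too large a rho makes the landscape ill-conditioned, which is why the penalty coefficient ri is often increased gradually during the search.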
  6.

    Solve the following problem using the MATLAB® GA toolbox:

$$ {\displaystyle \begin{array}{l}\min \kern0.5em f(x)=\left\{\begin{array}{l}x\sin (x)-\cos (x),\kern1em 0\le x\le 10\\ {}x\ln (x),\kern1em 10\le x\le 20\\ {}{x}^2\sin (x),\kern1em 20\le x\le 30\end{array}\right.\\ {}0\le x\le 30\end{array}} $$
A special feature of this problem is that it includes a discontinuous function, which is very difficult to handle with deterministic optimization methods.
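A direct way to see why this problem is awkward for derivative-based methods is to code the function. The sketch below assigns the shared boundary points x = 10 and x = 20 to the later branch; this assignment is our assumption, since the intervals in the statement overlap:

```python
import math

def f(x):
    # boundary points x = 10 and x = 20 assigned to the later branch
    if 0 <= x < 10:
        return x * math.sin(x) - math.cos(x)
    elif 10 <= x < 20:
        return x * math.log(x)
    elif 20 <= x <= 30:
        return x**2 * math.sin(x)
    raise ValueError("x outside [0, 30]")
```

At x = 10 the value jumps from 10 sin(10) − cos(10) ≈ −4.60 to 10 ln(10) ≈ 23.03, so the gradient is undefined there; a GA, which needs only function evaluations, is unaffected by the jump.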
  7.

    Solve the problem defined below using the GA:

$$ {\displaystyle \begin{array}{l}\min \kern0.62em f\left(x,y\right)= xy+y\sin (x)-x\cos (y)\\ {}-5\le x\le 5\\ {}-5\le y\le 5\end{array}} $$
Determine the best GA parameters that allow finding the optimal solution in the shortest time.
  8.

    The most commonly used type of heat exchanger is the shell-and-tube heat exchanger, because these are robust units able to work over wide ranges of pressure, flow, and temperature. Ponce-Ortega et al. (2009) developed an optimization approach based on genetic algorithms for the optimal design of this type of heat exchanger. This algorithm can be found at the following link:

    http://extras.springer.com

    Use this algorithm to solve the problems reported by Ponce-Ortega et al. (2009).

  9.

    Process cogeneration is an effective strategy for exploiting the positive aspects of combined heat and power in the process industry. Bamufleh et al. (2013) developed an optimization framework for designing process cogeneration systems considering economic, environmental, and social aspects. This code can be found at the following link:

    http://extras.springer.com

    Use this code to solve the problems reported by Bamufleh et al. (2013).


3.6 Nomenclature

a j

Lower bound for the interval of xj

b j

Upper bound for the interval of xj

C

Population of descendants, set of solutions for the search vector

Cr

Crossover probability

d j

Number of positions after the decimal point needed to represent the continuous variable xj

EMOO

Excel-based multi-objective optimization

f

Objective function

G

Generation

GA

Genetic algorithms

g i

Set of inequality constraints, where i = 1, 2, …, m1

h i

Set of equality constraints, where i = m1 + 1, …, m

i

Individuals

I-MODE

Improved multi-objective differential evolution

k

Number of iterations

L

Limit for the maximum number of iterations

m j

Number of bits required in the substring to represent the continuous variable xj

MODE-TL

Multi-objective differential evolution taboo list

MOO

Multi-objective optimization

NP

Population size

P

Population of parents, set of solutions for the search vector

pc

Crossover (fusion) ratio

q

Coordinate vector

r i

Variable penalty coefficient for the constraint i

SA

Simulated annealing

t

Generation number (iteration of the GA)

T

Temperature

TL

Taboo list

T low

Lower limit for the temperature

TLS

Taboo list size

Tr

Taboo radius

x

Optimization variable vector

x′

Unknown vector

x j

Continuous variable

α

Penalty constant

β

Penalty constant

ρ

Penalty constant