© Springer International Publishing AG, part of Springer Nature 2019
Victor Manuel Hernández-Guzmán and Ramón Silva-Ortigoza, Automatic Control with Experiments, Advanced Textbooks in Control and Signal Processing
https://doi.org/10.1007/978-3-319-75804-6_3

3. Ordinary Linear Differential Equations

Victor Manuel Hernández-Guzmán1  and Ramón Silva-Ortigoza2
(1)
Universidad Autonoma de Queretaro, Facultad de Ingenieria, Querétaro, Querétaro, Mexico
(2)
Instituto Politécnico Nacional, CIDETEC, Mexico City, Mexico
 

The automatic control techniques employed in classical control require knowledge of the mathematical model of the physical system to be controlled. As has been shown in Chap. 2, these mathematical models are differential equations. The controller is designed as another differential equation that must be connected in closed-loop to the system to be controlled. This results in another differential equation representing the closed-loop control system. This differential equation is forced by the controller to possess the mathematical properties that ensure that its solution evolves as desired. This means that the controlled variable evolves as desired. Hence, it is important to know the properties of a differential equation determining how its solution behaves in time. Although several different approaches exist to solve differential equations, the use of the Laplace transform is the preferred method in classical control. This is the reason why the Laplace transform method is employed in this chapter, to study linear ordinary differential equations with constant coefficients.

Let f(t) be a function of time. The Laplace transform of f(t) is represented by F(s) and is computed as [2], pp. 185, [3], pp. 285:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} F(s)=\mathcal{L}\{f(t)\}=\int_{0}^{\infty}f(t)e^{-st}dt.{} \end{array} \end{aligned} $$
(3.1)
The fundamental property of the Laplace transform employed to solve linear ordinary differential equations with constant coefficients is the following [2], pp. 192, [3], pp. 293:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \mathcal{L}\Bigg\{\frac{d^{n}f(t)}{dt^{n}}\Bigg\}=\\ &\displaystyle &\displaystyle s^{n}F(s)-s^{n-1}f(0)-s^{n-2}f^{(1)}(0)-\cdots-sf^{(n-2)}(0)-f^{(n-1)}(0), {} \end{array} \end{aligned} $$
(3.2)
where the exponent in parentheses indicates the order of the derivative with respect to time. The following property is also useful [2], pp. 193, [3], pp. 293:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}\Bigg\{\int_0^tf(\tau)d\tau\Bigg\}=\frac{F(s)}{s}.{} \end{array} \end{aligned} $$
(3.3)
Other important properties of the Laplace transform are the final value theorem [1], pp. 25, [3], pp. 304:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}f(t)=\lim_{s\to 0}sF(s),{} \end{array} \end{aligned} $$
(3.4)
which is only valid if the indicated limits exist, and the initial value theorem [3], pp. 304:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} f(0^+)=\lim_{t\to0^+}f(t)=\lim_{s\to\infty}sF(s).{} \end{array} \end{aligned} $$
(3.5)

3.1 First-Order Differential Equation

Consider the following linear, ordinary, first-order, differential equation with constant coefficients:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot y+a\; y=k\; u, y(0)=y_0,{} \end{array} \end{aligned} $$
(3.6)
where k and a are real constants, y 0 is a real constant known as the initial condition or initial value of y(t) and  $$\dot y=\frac {dy}{dt}$$ . The solution of this differential equation is a function of time y(t), which satisfies (3.6) when replaced. With this aim, it is required that constants a, k, y 0, and the function of time u be known. Use of (3.2) in (3.6) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} sY(s)-y_0+a\; Y(s)=k\; U(s).{} \end{array} \end{aligned} $$
(3.7)
Solving for Y (s):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s+a} U(s)+\frac{1}{s+a}y_0.{} \end{array} \end{aligned} $$
(3.8)
As stated above, to continue, it is necessary to know u as a function of time. Assume that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=A, U(s)=\frac{A}{s},{} \end{array} \end{aligned} $$
(3.9)
where A is a constant. Hence, (3.8) can be written as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{kA}{s(s+a)} +\frac{1}{s+a}y_0.{} \end{array} \end{aligned} $$
(3.10)
According to the partial fraction expansion method [2], Ch. 4, [3], pp. 319, if a ≠ 0, then it is possible to write:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{kA}{s(s+a)}&\displaystyle =&\displaystyle \frac{B}{s+a}+\frac{C}{s},{} \end{array} \end{aligned} $$
(3.11)
where B and C are constants to be computed. The value of B can be computed, multiplying both sides of (3.11) by the factor (s + a) and then evaluating the resulting expressions at s = −a:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \left.\frac{kA}{s(s+a)}(s+a)\right|{}_{s=-a}&\displaystyle =&\displaystyle \left.\frac{B}{s+a}(s+a)\right|{}_{s=-a}+\left.\frac{C}{s}(s+a)\right|{}_{s=-a}, \end{array} \end{aligned} $$
i.e.,
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B&\displaystyle =&\displaystyle \left.\frac{kA}{s}\right|{}_{s=-a}=-\frac{kA}{a}.{} \end{array} \end{aligned} $$
(3.12)
The value of C is computed multiplying both sides of (3.11) by the factor s and then evaluating the resulting expressions at s = 0:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \left.\frac{kA}{s(s+a)}s\right|{}_{s=0}&\displaystyle =&\displaystyle \left.\frac{B}{s+a}s\right|{}_{s=0}+\left.\frac{C}{s}s\right|{}_{s=0}, \end{array} \end{aligned} $$
i.e.,
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C&\displaystyle =&\displaystyle \left.\frac{kA}{s+a}\right|{}_{s=0}=\frac{kA}{a}.{} \end{array} \end{aligned} $$
(3.13)
Replacing (3.11), (3.12), (3.13) in (3.10) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&\displaystyle =&\displaystyle -\frac{kA}{a}\frac{1}{s+a}+\frac{kA}{a}\frac{1}{s}+\frac{y_0}{s+a}.{} \end{array} \end{aligned} $$
(3.14)
From tabulated Laplace transform formulas [4], Ch. 32, it is known that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}\{\mathrm{e}^{\beta t}\}=\frac{1}{s-\beta},{} \end{array} \end{aligned} $$
(3.15)
where β is any real constant. Using (3.15) it is found that the inverse Laplace transform of (3.14) is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle -\frac{kA}{a}\mathrm{e}^{-at}+\frac{kA}{a}+y_0\mathrm{e}^{-at},{} \end{array} \end{aligned} $$
(3.16)
which is the solution of (3.6).

Example 3.1

To verify that (3.16) is the solution of (3.6) proceed as follows. First, compute the first time derivative of  $$y(t)=-\frac {kA}{a}\mathrm {e}^{-at}+\frac {kA}{a}+y_0\mathrm {e}^{-at}$$ , i.e.,
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot y(t)&\displaystyle =&\displaystyle kA \mathrm{e}^{-at}- a y_0\mathrm{e}^{-at}, \end{array} \end{aligned} $$
and replace  $$\dot y(t)$$ and y(t) in (3.6) to obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot y(t)+ay(t)=kA \mathrm{e}^{-at}- a y_0\mathrm{e}^{-at}+ a\left(-\frac{kA}{a}\mathrm{e}^{-at}+\frac{kA}{a}+y_0\mathrm{e}^{-at}\right)=kA=ku,\end{array} \end{aligned} $$
because u = A has been defined in (3.9). This means that the expression for y(t) given in (3.16) satisfies (3.6); thus, it has been verified that (3.16) is the solution of (3.6). This procedure must be followed to verify the solution of any differential equation.
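
The same verification can also be delegated to a computer. The following sketch, assuming that MATLAB's Symbolic Math Toolbox is available, solves (3.6) for a constant input u = A and compares the result with (3.16):
syms y(t) a k A y0
assume(a~=0)
sol=dsolve(diff(y,t)+a*y==k*A,y(0)==y0);          % solve (3.6) with u=A
y16=-k*A/a*exp(-a*t)+k*A/a+y0*exp(-a*t);          % the proposed solution (3.16)
simplify(sol-y16)                                  % returns 0, so both expressions coincide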
The solution given in (3.16) can be decomposed into two parts:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle y_n(t)+y_f(t),{} \end{array} \end{aligned} $$
(3.17)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y_n(t)&\displaystyle =&\displaystyle \left(-\frac{kA}{a}+y_0\right)\mathrm{e}^{-at}, \end{array} \end{aligned} $$
(3.18)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y_f(t)&\displaystyle =&\displaystyle \frac{kA}{a}, \end{array} \end{aligned} $$
(3.19)
where y n(t) and y f(t) are called the natural response and the forced response respectively. The forced response y f(t) receives this name because it is caused by the presence of the function of time u. On the other hand, the natural response y n(t) receives this name because it is produced by the structure (i.e., the nature) of the differential equation, i.e., by terms  $$\dot y+a y$$ on the left hand of (3.6). Let us explain this in more detail. Follow the solution procedure presented above, but assume for a moment that y 0 = 0. The partial fraction expansion presented in (3.11) and the subsequent application of the inverse Laplace transform show that y(t) is given as the sum of several functions of time that result from the fractions  $$\frac {1}{s}$$ and  $$\frac {1}{s+a}$$ . Notice that the presence of  $$\frac {1}{s}$$ in (3.11) is due to the fact that  $$U(s)=\frac {A}{s}$$ appears in (3.10), which is because u = A where A is a constant. Notice that the forced response y f(t) is a constant resulting from the term  $$\frac {kA}{a}\frac {1}{s}$$ in (3.14), i.e., it arises from fraction  $$\frac {1}{s}$$ . This corroborates that y f(t) only depends on the function of time u.

On the other hand, fraction  $$\frac {1}{s+a}$$ is due to terms  $$\dot y+a y$$ as  $$\mathcal {L}\{\dot y+a y\}=(s+a)Y(s)$$ . According to (3.15), the consequence of this fraction is the function  $$\mathrm {e}^{-at}$$ , which constitutes the natural response y n(t). This corroborates that y n(t) is only determined by the differential equation structure (i.e., its nature). One important consequence of this fact is that y n(t) is always given by the function  $$\mathrm {e}^{-at}$$ no matter what the value of u is. The reader can review the procedure presented above to solve this differential equation, thereby realizing that the initial condition y 0 always appears as a part of the natural response y n(t).

In control systems, y and u are known as the output and the input respectively. The polynomial resulting from the application of the Laplace transform in the differential equation, i.e., s + a, which arises from  $$\mathcal {L}\{\dot y+a y\}=(s+a)Y(s)$$ , is known as the characteristic polynomial . The solution in (3.16), or equivalently in (3.17), can evolve according to one of the following cases.
  1. 1.

    If a > 0, i.e., if the only root of the characteristic polynomial s = −a is real and negative, then  $$\lim _{t\to \infty }y_n(t)=0$$ and  $$\lim _{t\to \infty }y(t)=y_f(t)$$ .

     
  2. 2.

    If a < 0, i.e., if the only root of the characteristic polynomial s = −a is positive, then y n(t) and y(t) grow without limit as time increases.

     
  3. 3.

    The case when a = 0, i.e., when the only root of the characteristic polynomial is s = 0, cannot be studied from the results obtained so far; it is treated as a special case in the next section. However, to cover all the cases, the result obtained there is summarized here: when the only root of the characteristic polynomial is at s = −a = 0, the natural response is constant, y n(t) = y 0, ∀t ≥ 0, and the forced response is the integral of the input  $$y_f(t)=k\int _0^t u(t)dt$$ .

     

3.1.1 Graphical Study of the Solution

According to the previous section, if a > 0, then the natural response goes to zero over time:  $$\lim _{t\to \infty }y_n(t)=0$$ ; hence, when time is large enough the solution of the differential equation is equal to the forced response:  $$\lim _{t\to \infty }y(t)=y_f(t)=\frac {kA}{a}$$ . This means that the faster the natural response y n(t) tends toward zero (which occurs as a > 0 is larger), the faster the complete solution y(t) reaches the forced response y f(t). Hence, the natural response can be seen as a means of transport, allowing the complete solution y(t) to go from the initial value y(0) = y 0 to the forced response y f(t). Existence of such a means of transport is justified recalling that the solution of the differential equation in (3.6), i.e., y(t), is a continuous function of time. This means that the solution cannot go from y(t) = y 0 to y(t) = y f(t) ≠ y 0 in a zero time interval.

The graphical representation of the solution in (3.17), when y 0 = 0 and a > 0, i.e.,
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle \frac{kA}{a}\left(1-\mathrm{e}^{-at}\right),{} \end{array} \end{aligned} $$
(3.20)
is depicted in Fig. 3.1. An important parameter in first-order differential equations is time constant τ, which is defined as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \tau=\frac{1}{a}.{} \end{array} \end{aligned} $$
(3.21)
This is indicative of how fast the natural response vanishes, i.e., how fast y(t) approaches the forced response y f(t). Replacing (3.21) into (3.20), it is easy to verify that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(\tau)&\displaystyle =&\displaystyle 0{.}632\frac{kA}{a}. \end{array} \end{aligned} $$
(3.22)
Finally, it is important to stress that y(t) grows without limit if a < 0. If a = 0, then y(t) = kAt also grows without limit (see Sect. 3.2).
Fig. 3.1

Graphical representation of y(t) in (3.20)
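
The shape of Fig. 3.1 and the value y(τ) in (3.22) can be reproduced with the following MATLAB sketch, where the values a = 2, k = 1, A = 1 are only illustrative:
a=2; k=1; A=1;                    % illustrative values with a>0
tau=1/a;                          % time constant (3.21)
t=0:0.001:5*tau;
y=(k*A/a)*(1-exp(-a*t));          % the solution (3.20) with y0=0
plot(t,y,'k-'), hold on
plot(tau,0.632*k*A/a,'ko')        % y(tau)=0.632 kA/a, see (3.22)
xlabel('t'), ylabel('y(t)'), grid on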

3.1.2 Transfer Function

Consider a zero initial condition, i.e., y 0 = 0, then (3.8) can be written as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s+a} U(s), \end{array} \end{aligned} $$
or, equivalently:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Y(s)}{U(s)}=G(s)=\frac{k}{s+a}. {} \end{array} \end{aligned} $$
(3.23)
The function G(s) is known as the transfer function of the differential equation in (3.6). The polynomial in the denominator of G(s), i.e., s + a, is known as the characteristic polynomial and its roots are known as the poles of G(s). In this case, there is only one pole at s = −a. According to the discussion above, the following can be stated with respect to y(t), i.e., the output of the transfer function G(s):
  • If the pole located at s = −a is negative, i.e., if a > 0, then the natural response y n(t) vanishes and the complete solution y(t) approaches the forced response y f(t) as time increases. If u(t) = A is a constant, then  $$y_f(t)=\frac {kA}{a}$$ .

  • The farther to the left of the origin s = 0 the pole at s = −a is located, the faster y(t) approaches y f(t). This can be quantified using the time constant  $$\tau =\frac {1}{a}$$ : a large time constant implies a slow response, whereas a small time constant implies a fast response.

  • If k = a > 0, then it is said that the transfer function G(s) has a unitary gain in a steady state because, according to the final value theorem (3.4), if u(t) = A is a constant then:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}y(t)=\lim_{s\to 0}sY(s)=\lim_{s\to 0}s\frac{k}{s+a}\frac{A}{s}=\frac{k}{a}A=A.{} \end{array} \end{aligned} $$
    (3.24)
    In general, the steady-state gain of the transfer function in (3.23) is computed as  $$\frac {k}{a}$$ .
  • If the pole located at s = −a is positive, i.e., if a < 0, then the complete solution y(t) grows without limit. Notice that this implies that the pole is located on the right half of the s plane (see Fig. 3.2).
    Fig. 3.2

    Location of the poles of the transfer function in (3.23). It is usual to represent a pole on the s plane using a cross “×”

  • The case when the pole is located at s = −a = 0 cannot be studied from the above computations; it is analyzed as a special case in the next section. However, again to summarize all the possible cases, we present here the result obtained there: when the pole is located at s = −a = 0, the natural response neither vanishes nor increases without limit and the forced response is given as the integral of the input  $$y_f(t)=k\int _0^t u(t)dt$$ . The first two cases above are illustrated numerically in the sketch below.
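
A rough numerical illustration of the first two cases, assuming that the Control System Toolbox is available and using illustrative values for k and a, is the following:
k=4; a=2;                         % illustrative values; steady-state gain k/a=2
G=tf(k,[1 a]);                    % G(s)=k/(s+a) with its pole at s=-a<0, see (3.23)
t=0:0.01:3;
step(G,t), grid on                % response to a unit step input (A=1)
dcgain(G)                         % returns k/a, the steady-state gain in (3.24)
pole(G)                           % returns -a, the pole location depicted in Fig. 3.2
% step(tf(k,[1 -a]),t)            % pole at s=+a>0: the response grows without limit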

Example 3.2

Consider the tank containing water depicted in Fig. 3.3. Assume that the tank cross-section C is constant. Water enters the tank at a rate given by the input flow q i (m3/s). Water leaves the tank at a rate given by the output flow q o (m3/s) through a valve with hydraulic resistance R. The water level in the tank is represented by h. The mathematical model describing this system is obtained using the mass conservation law. As water is incompressible, this law can be stated in terms of mass or volume.
Fig. 3.3

A water level system

Let ΔV i and ΔV o be the water volumes entering and leaving the tank respectively during a time interval Δt. During this time interval, the water volume ΔV  increases inside the tank at a rate given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\varDelta V}{\varDelta t}=\frac{\varDelta V_i}{\varDelta t}-\frac{\varDelta V_o}{\varDelta t}.{} \end{array} \end{aligned} $$
(3.25)
On the other hand:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta V&\displaystyle =&\displaystyle C\varDelta h,\\ \varDelta V_i&\displaystyle =&\displaystyle q_i\varDelta t,\\ \varDelta V_o&\displaystyle =&\displaystyle q_o \varDelta t. \end{array} \end{aligned} $$
Replacing this in (3.25) and considering small increments of time such that Δt → dt and Δh → dh:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C\frac{dh}{d t}=q_i-q_o.{} \end{array} \end{aligned} $$
(3.26)
Assuming that the water flow is laminar through the output valve [1], pp. 155, then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} q_o=\frac{h}{R}.{} \end{array} \end{aligned} $$
(3.27)
To understand the reason for this expression, consider the following situations.
  1. a)

    Suppose that the opening of the output valve is kept without change, i.e., R remains constant, and the water level h increases. Everyday experience and the expression in (3.27) corroborate that the output flow q o increases under this situation.

     
  2. b)

    Suppose that the water level h is kept constant, because there is water entering the tank to exactly compensate for water leaving the tank. Then, slowly close the output valve, i.e., slowly increase R. Everyday experience and the expression in (3.27) corroborate that the output flow q o decreases in this situation.

     
Substituting (3.27) in (3.26), it is found that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C\frac{dh}{d t}=q_i-\frac{h}{R}. \end{array} \end{aligned} $$
Hence, the mathematical model of the water level system depicted in Fig. 3.3 is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C\frac{dh}{d t}+\frac{1}{R}h=q_i. \end{array} \end{aligned} $$
Dividing by C, the following linear, ordinary, first-order differential equation with constant coefficients is finally obtained:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{dh}{d t}+a h=k q_i,\\ a=\frac{1}{RC}, k=\frac{1}{C}.{} \end{array} \end{aligned} $$
(3.28)
The reason why this differential equation is the mathematical model of the water level system is because, solving it, the evolution over time of the water level h(t) can be known if the input flow q i(t), the tank section C, the hydraulic resistance R, and the initial water level are known.
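
As a rough sketch, the model in (3.28) can also be simulated numerically. The tank parameters below are invented for illustration, and ode45 is used instead of the Laplace transform:
R=2; C=0.5;                       % hypothetical hydraulic resistance and cross-section
a=1/(R*C); k=1/C;
qi=0.1;                           % constant input flow, illustrative value
h0=0;                             % tank initially empty
[t,h]=ode45(@(t,h) -a*h+k*qi,[0 10],h0);      % integrate dh/dt=-a*h+k*qi
plot(t,h), xlabel('t [s]'), ylabel('h(t) [m]'), grid on
% the level settles at k*qi/a=R*qi, in agreement with (3.30)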

Example 3.3

Consider the water level system depicted in Fig. 3.3. It was found in Example 3.2 that the mathematical model is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{dh}{d t}+a h=k q_i,\\ a=\frac{1}{RC}>0, k=\frac{1}{C}.{} \end{array} \end{aligned} $$
(3.29)
This differential equation can be written as in (3.6) if it is assumed that y = h, u = q i and y 0 = h 0. Hence, if q i = A, the solution h(t) is similar to that in (3.16), i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} h(t)&\displaystyle =&\displaystyle -\frac{kA}{a}\mathrm{e}^{-at}+\frac{kA}{a}+h_0\mathrm{e}^{-at}, h_0=h(0).{} \end{array} \end{aligned} $$
(3.30)
Let us analyze (3.30).
  1. 1.
    Assume that the tank is initially empty, i.e., h 0 = 0, and water enters the tank with a constant rate q i = A > 0. Under these conditions, the water level h(t) evolves as depicted in Fig. 3.1 by assuming that h(t) = y(t). Notice that  $$a=\frac {1}{RC}>0$$ . From this figure, the solution in (3.30), and using everyday experience, several cases can be studied.
    • If A is chosen to be larger, then the final water level  $$\lim _{t\to \infty }h(t)=\frac {kA}{a}=RA$$ is also larger.

    • If the output valve is slowly closed, i.e., R is slowly increased, then the final water level  $$\lim _{t\to \infty }h(t)=\frac {kA}{a}=RA$$ slowly increases.

    • The tank cross-section has no effect on the final water level, i.e., the final water level is the same in a thin tank and a thick tank.

    • If the product RC is larger, then a is smaller; hence, the system response is slower, i.e., more time is required for the level to reach the value  $$h(\tau )=0{.}632\frac {kA}{a}=0{.}632RA$$ and, as a consequence, also to reach the final water level  $$\lim _{t\to \infty }h(t)=\frac {kA}{a}=RA$$ . This is because the function  $$\mathrm {e}^{-at}$$ tends toward zero more slowly when  $$a=\frac {1}{RC}>0$$ is smaller. Notice that a larger value of RC can be obtained by increasing the tank cross-section C, or decreasing the output valve opening, i.e., increasing R.

     
  2. 2.
    Suppose that no water is entering the tank, i.e., q i(t) = A = 0, and the initial water level is not zero, i.e., h 0 > 0. From the solution in (3.30), as everyday experience can verify, the water level decreases (recall that  $$a=\frac {1}{RC}>0$$ ) until the tank is empty, i.e.,  $$\lim _{t\to \infty }h(t)=0$$ (see Fig. 3.4). This behavior is faster as RC is smaller, i.e., when a is larger, and slower as RC is larger, i.e., when a is smaller.
    Fig. 3.4

    Water level evolution when h 0 > 0 and q i(t) = A = 0 (R 1 C 1 < R 2 C 2)

     
  3. 3.

    The case when a < 0 is not possible in an open-loop water level system because C and R cannot be negative. However, a < 0 may occur in closed-loop systems as a consequence of the interconnection of diverse components.

     
  4. 4.

    The case when a = 0 is studied in the next section.

     

3.2 An Integrator

Consider the following differential equation:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot y&\displaystyle =&\displaystyle ku,{} \end{array} \end{aligned} $$
(3.31)
where k is a real constant different from zero. Notice that this equation is obtained as a particular case of (3.6) when a = 0. By direct integration:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \int_{y(0)}^{y(t)}dy&\displaystyle =&\displaystyle \int_0^t ku\;dt,\\ y(t)&\displaystyle =&\displaystyle k\int_0^t u(t)\;dt+y(0),{}\\ y(t)&\displaystyle =&\displaystyle y_n(t)+y_f(t),\\ y_n(t)&\displaystyle =&\displaystyle y(0), y_f(t)=k\int_0^tu(t)\;dt. \end{array} \end{aligned} $$
(3.32)
In this case, y n(t) remains constant in time. Notice that, if u = A is a constant, the forced response increases without limit: y f(t) = kAt. The solution y(t) = y(0) + kAt is depicted in Fig. 3.5 for a) y(0) ≠ 0 and u = A > 0 a constant, and b) y(0) ≠ 0 and u = A = 0. Physical systems represented by this class of equation are called integrators because, if y(0) = 0, the solution y(t) is the integral of the excitation u.
Fig. 3.5

Solution of differential equation in (3.31). (a) y 0 ≠ 0 and u = A > 0 is a constant. (b) y 0 ≠ 0 and u = A = 0
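
A brief MATLAB sketch of the two cases in Fig. 3.5, with illustrative values, is the following:
k=1; y0=0.5; A=1;                 % illustrative values
t=0:0.01:5;
ya=y0+k*A*t;                      % case (a): u=A>0, the solution ramps up without limit
yb=y0*ones(size(t));              % case (b): u=A=0, the solution stays at y(0)
plot(t,ya,'k-',t,yb,'k--')
xlabel('t'), ylabel('y(t)'), grid on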

Example 3.4

Consider the water level system depicted in Fig. 3.3. Suppose that the output valve is closed, i.e., R → ∞. Then, the mathematical model in (3.28) becomes:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{dh}{d t}=k q_i, \end{array} \end{aligned} $$
because  $$a=\frac {1}{RC}\to 0$$ when R → ∞. Then, the level evolution over time is described by (3.32), i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} h(t)=h(0)+k\int_0^tq_i(t)\;dt. \end{array} \end{aligned} $$
Hence, the water level system behaves in this case as an integrator. If q i = A > 0, then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} h(t)=h(0)+kAt.{} \end{array} \end{aligned} $$
(3.33)
Figure 3.5 can be employed, assuming h(t) = y(t) and q i(t) = u, to graphically depict the solution in (3.33) for the cases i) h(0) > 0 and q i = A > 0 is a constant, and ii) h(0) > 0 and q i = A = 0. Notice that the water level remains constant if no water enters the tank, i.e., when A = 0, and the level increases without a limit when A > 0 is constant. The reader can resort to everyday experience to verify that these two situations actually occur in a practical water level system.

Example 3.5

The differential equation in (3.31) can also be solved using partial fraction expansion, i.e., using the method employed in Sect. 3.1. From (3.2) and (3.31) it is found that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} sY(s)-y_0=k\; U(s). \end{array} \end{aligned} $$
Solving for Y (s) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s} U(s)+\frac{1}{s}y_0.{} \end{array} \end{aligned} $$
(3.34)
Assume that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=A, U(s)=\frac{A}{s}, \end{array} \end{aligned} $$
where A is a constant. Hence, (3.34) can now be written as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{kA}{s^2} +\frac{1}{s}y_0.{} \end{array} \end{aligned} $$
(3.35)
According to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, it must be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{kA}{s^2}&\displaystyle =&\displaystyle \frac{B}{s}+\frac{C}{s^2},{} \end{array} \end{aligned} $$
(3.36)
where B and C are two constants. C is computed multiplying both sides of (3.36) by s 2 and evaluating at s = 0, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \left. s^2\frac{kA}{s^2}\right|{}_{s=0}&\displaystyle =&\displaystyle \left.\frac{B}{s}s^2\right|{}_{s=0}+\left.\frac{C}{s^2}s^2\right|{}_{s=0}. \end{array} \end{aligned} $$
Hence, C = kA is found. B is computed multiplying both sides of (3.36) by s 2, differentiating once with respect to s and evaluating at s = 0:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \left.\frac{d}{ds}\left(s^2\frac{kA}{s^2}\right)\right|{}_{s=0}&\displaystyle =&\displaystyle \left.\frac{d}{ds}\left(\frac{B}{s}s^2\right)\right|{}_{s=0} +\left.\frac{d}{ds}\left(\frac{C}{s^2}s^2\right)\right|{}_{s=0}. \end{array} \end{aligned} $$
Hence, B = 0 is found. Thus, using these values and (3.36), the expression in (3.35) can be written as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{kA}{s^2} +\frac{1}{s}y_0.{} \end{array} \end{aligned} $$
(3.37)
From tabulated formulas, [4], Ch. 32, it is well known that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}\{t\}=\frac{1}{s^2}.{} \end{array} \end{aligned} $$
(3.38)
Using (3.38), it is found that the inverse Laplace transform of (3.37) is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle kAt+y_0,{} \end{array} \end{aligned} $$
(3.39)
which is the solution of (3.31). This solution can be decomposed into two parts:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle y_n(t)+y_f(t),{}\\ y_n(t)&\displaystyle =&\displaystyle y_0,\\ y_f(t)&\displaystyle =&\displaystyle kAt, \end{array} \end{aligned} $$
(3.40)
where y n(t) and y f(t) are the natural and the forced responses respectively. As explained in Sect. 3.1, the natural response is given by the Laplace transform of term  $$\dot y$$ on the left-hand side of (3.31). As a consequence of this, the term  $$\frac {1}{s}y_0$$ appears in (3.37), i.e., y n(t) = y 0 in (3.40). The forced response y f(t) is produced by u = A. However, in this case, the forced response also receives the effect of the Laplace transform of the term  $$\dot y$$ on the left-hand side of (3.31), as the combination of both of them results in  $$\frac {kA}{s^2}$$ and, hence, y f(t) = kAt.
The characteristic polynomial of the differential equation in (3.31) is s; hence, it only has one root at s = 0. Thus, the corresponding transfer function G(s), when y 0 = 0:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Y(s)}{U(s)}=G(s)=\frac{k}{s},{} \end{array} \end{aligned} $$
(3.41)
only has one pole at s = 0. Notice that the input  $$U(s)=\frac {A}{s}$$ also has one pole at s = 0. The combination of these repeated poles in (3.35) produces the forced response y f(t) = kAt, i.e., a first-degree polynomial of time, whereas the input u = A is a zero-degree polynomial of time. Notice that the forced response corresponding to the differential equation in (3.6) is  $$y_f(t)=\frac {kA}{a}$$ , i.e., a zero-degree polynomial of time, whereas the input is u = A, i.e., a zero-degree polynomial of time. From these examples we arrive at the following important conclusion:

“The forced response of a first-order differential equation with a constant input, is also a constant if its characteristic polynomial has no root at s = 0.”

Figure 3.2 depicts the location on the s plane of the pole of the transfer function in (3.41), i.e., the pole at s = 0. The reader must learn to relate the pole location on the s plane to the corresponding time response y(t). See the cases listed before Example 3.2.

Example 3.6

Consider the electric circuit depicted in Fig. 3.6. According to Example 2.9, the mathematical model of this circuit is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot v_0+\frac{1}{RC}v_0=\dot v_i. \end{array} \end{aligned} $$
Fig. 3.6

A series RC circuit

Suppose that v i(t) is a step signal with amplitude A, i.e.,  $$V_i(s)=\frac {A}{s}$$ . The solution v 0(t) can be found by proceeding as follows. Applying the Laplace transform to the previous expression:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_0(s)\left(s+\frac{1}{RC}\right)&\displaystyle =&\displaystyle sV_i(s)-v_i(0)+v_0(0),\\ V_0(s)&\displaystyle =&\displaystyle \frac{s}{s+\frac{1}{RC}}V_i(s)+\frac{v_0(0)-v_i(0)}{s+\frac{1}{RC}}. \end{array} \end{aligned} $$
Replacing  $$V_i(s)=\frac {A}{s}$$ and using the inverse Laplace transform:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_0(s)&\displaystyle =&\displaystyle \frac{s}{s+\frac{1}{RC}}\frac{A}{s}+\frac{v_0(0)-v_i(0)}{s+\frac{1}{RC}},\\ V_0(s)&\displaystyle =&\displaystyle \frac{A}{s+\frac{1}{RC}}+\frac{v_0(0)-v_i(0)}{s+\frac{1}{RC}},\\ v_0(t)&\displaystyle =&\displaystyle (A+v_0(0)-v_i(0))\mathrm{e}^{-\frac{1}{RC}t}. \end{array} \end{aligned} $$
Applying KVL at t = 0, we have:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_i(0)=v_0(0)+v_c(0), \end{array} \end{aligned} $$
where v c(0) represents the initial voltage at the capacitor. Then, we have:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t)&\displaystyle =&\displaystyle (A-v_c(0))\mathrm{e}^{-\frac{1}{RC}t}.{} \end{array} \end{aligned} $$
(3.42)
Figure 3.7 shows this time response when v i(t) is the square wave represented by the dashed line and the time constant is RC = 0.1[s]. At t = 0 we have v c(0) = 0 and A = 1. Thus:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t)&\displaystyle =&\displaystyle A\mathrm{e}^{-10t}.{} \end{array} \end{aligned} $$
(3.43)
This response is observed between t = 0 and t = 1[s]. Notice that v 0(0) = A = 1[V] is correctly predicted by (3.43). Also notice that t = 1[s] represents ten times the time constant, i.e., RC = 0.1[s]. Hence, v 0(t) is very close to zero at t = 1[s]. Applying the KVL at t = t 1 = 1[s]:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_i(t_1)&\displaystyle =&\displaystyle v_0(t_1)+v_c(t_1),\\ v_c(t_1)&\displaystyle =&\displaystyle A=1[V],{} \end{array} \end{aligned} $$
(3.44)
as v i(t 1) = A = 1[V] and v 0(t 1) = 0.
Fig. 3.7

Time response of the circuit in Fig. 3.6. Continuous: v 0(t). Dashed: v i(t)

On the other hand, from t = t 1 = 1[s] to t = 2[s] we have that A = −1. Note that (3.42) is still valid in this time interval if time is redefined as t − t 1 and (3.42) is rewritten as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t-t_1)&\displaystyle =&\displaystyle (A-v_c(t_1))\mathrm{e}^{-\frac{1}{RC}(t-t_1)}, \\ v_0(t-t_1)&\displaystyle =&\displaystyle -2\mathrm{e}^{-10(t-t_1)},{} \end{array} \end{aligned} $$
(3.45)
as v c(t 1) = 1[V] (see (3.44)) and A = −1[V] at t = t 1[s]. In Fig. 3.7, it is observed that v 0 = −2[V] at t = t 1, as correctly predicted by (3.45). Also notice that, again, the time interval length 1[s] represents ten times the time constant, i.e., RC = 0.1[s]. Hence, v 0(t − t 1) is very close to zero at t = 2[s]. The above procedure can be repeated to explain the results in Fig. 3.7 for any 0 ≤ t ≤ 10.
Figure 3.7 has been obtained using the MATLAB/Simulink diagram presented in Fig. 3.8. The signal generator block is programmed to have a square wave form, unit amplitude, and 0.5[Hz] as the frequency. The negative unitary gain block is included because the signal delivered by the signal generator block starts with a negative half-cycle. Once the simulation stops, the following MATLAB code is executed in an m-file to draw Fig. 3.7:
Fig. 3.8

MATLAB/Simulink diagram for the circuit in Fig. 3.6

nn=length(InputOutput(:,1));   % number of data points logged by the simulation
n=nn-1;                        % number of sampling intervals
Ts=10/n;                       % sampling period, assuming a 10 s simulation
t=0:Ts:10;                     % time vector matching the logged data
plot(t,InputOutput(:,1),'k--',t,InputOutput(:,2),'k-')   % dashed: v_i(t); continuous: v_0(t)
axis([-0.5 10 -2.5 2.5])       % axis limits used in Fig. 3.7
Notice that the transfer function of the circuit in Fig. 3.6 is:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{V_0(s)}{V_i(s)}&\displaystyle =&\displaystyle G(s)=\frac{s}{s+\frac{1}{RC}}, {} \end{array} \end{aligned} $$
(3.46)
when v 0(0) = 0 and v i(0) = 0, or equivalently, when v c(0) = 0. Although this is a first-order transfer function, the main difference with respect to the transfer function defined in (3.23) is that a first-order polynomial appears at the numerator of  $$G(s)=\frac {s}{s+\frac {1}{RC}}$$ . It is concluded from Figs. 3.1 and 3.7 that the polynomial at the numerator of a transfer function also affects the time response, even though stability is not determined by the numerator of a transfer function.
In this respect, we can use (3.46), the initial value theorem in (3.5), and  $$V_i(s)=\frac {A}{s}$$ to find:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(0^+)&\displaystyle =&\displaystyle \lim_{t\to0^+}v_0(t)=\lim_{s\to\infty}sV_0(s),\\ &\displaystyle =&\displaystyle \lim_{s\to\infty}s\frac{s}{s+\frac{1}{RC}}\frac{A}{s}=A. \end{array} \end{aligned} $$
The roots of the polynomial at the numerator of a transfer function are called the transfer function zeros . Thus, it is concluded that the discontinuity in the time response, when a step input is applied, is due to the fact that the transfer function has the same number of poles and zeros. In the next example, it is shown that a different location of a zero of a transfer function has different effects on the time response.
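
These conclusions can be reproduced numerically with the transfer function in (3.46). The following sketch assumes that the Control System Toolbox is available and uses the value RC = 0.1[s] employed in Fig. 3.7:
RC=0.1;                           % time constant used in Fig. 3.7
G=tf([1 0],[1 1/RC]);             % G(s)=s/(s+1/(RC)): one zero at s=0, one pole at s=-1/RC
t=0:0.001:1;
step(G,t), grid on                % jumps to v0(0+)=A=1 and then decays to zero
dcgain(G)                         % returns 0: the zero at s=0 forces a zero steady state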

Example 3.7

Consider the electric circuit shown in Fig. 3.9. This circuit is also known as a phase-lead network. The mathematical model of this circuit was obtained in (2.​49) and it is rewritten here for ease of reference:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_0(s)=\frac{s+b}{s+a}V_i(s)-\frac{v_c(0)}{s+a}, b=\frac{1}{R_1C}, a=\frac{1}{C}\frac{R_1+R_2}{R_1R_2}, \end{array} \end{aligned} $$
where v c(0) represents the initial voltage at the capacitor and a > b > 0. The solution v 0(t) is computed in the following. Substituting v i(t) = A, i.e.,  $$V_i(s)=\frac {A}{s}$$ :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_0(s)=\frac{(s+b)A}{s(s+a)}-\frac{v_c(0)}{s+a}. {} \end{array} \end{aligned} $$
(3.47)
As a ≠ 0, the partial fraction expansion results in:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{(s+b)A}{s(s+a)}=\frac{B}{s}+\frac{C}{s+a}. {} \end{array} \end{aligned} $$
(3.48)
Constant B is computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \left.s\frac{(s+b)A}{s(s+a)}\right|{}_{s=0}&\displaystyle =&\displaystyle \left.s\frac{B}{s}\right|{}_{s=0}+\left.s\frac{C}{s+a}\right|{}_{s=0}, \\ B&\displaystyle =&\displaystyle \frac{bA}{a}.{} \end{array} \end{aligned} $$
(3.49)
Constant C is computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \left.(s+a)\frac{(s+b)A}{s(s+a)}\right|{}_{s=-a}&\displaystyle =&\displaystyle \left.(s+a)\frac{B}{s}\right|{}_{s=-a}+\left.(s+a)\frac{C}{s+a}\right|{}_{s=-a}, \\ C&\displaystyle =&\displaystyle \frac{(b-a)A}{-a}. {} \end{array} \end{aligned} $$
(3.50)
Replacing (3.48), (3.49), (3.50), in (3.47), it is found that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_0(s)=\frac{bA}{a}\frac{1}{s}+\frac{(b-a)A}{-a}\frac{1}{s+a}-\frac{v_c(0)}{s+a}.\end{array} \end{aligned} $$
Hence, using the inverse Laplace transform:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t)=\frac{bA}{a}+\left(\frac{(b-a)A}{-a}-v_c(0)\right)\mathrm{e}^{-at}. {} \end{array} \end{aligned} $$
(3.51)
This time response is depicted in Fig. 3.10 when v i(t) is the square wave shown with a dashed line, b = 5, a = 10 and v c(0) = 0. At t = 0, A = 1[V]. Then, (3.51) becomes:
Fig. 3.9

Phase-lead network

Fig. 3.10

Time response of the circuit in Fig. 3.9. Continuous: v 0(t). Dashed: v i(t)

 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t)=\frac{1}{2}+\frac{1}{2}\mathrm{e}^{-10t}.\end{array} \end{aligned} $$
Thus, v 0(0) = 1[V] and  $$v_0(t_1)\approx \frac {1}{2}$$ [V], for t 1 = 1[s], are correctly predicted. Notice that a time interval of 1[s], between t = 0 and t = t 1, is ten times the time constant of  $$\frac {1}{a}=0{.}1$$ [s]; hence,  $$\frac {1}{2}\mathrm {e}^{-10t}\approx 0$$ when t = t 1 = 1[s]. On the other hand, applying the KVL to the circuit in Fig. 3.9 it is found that;
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_i(t_1)&\displaystyle =&\displaystyle v_c(t_1)+v_0(t_1),\\ v_i(t_1)&\displaystyle =&\displaystyle A=1, v_0(t_1)=\frac{R_2}{R_1+R_2}A,\\ v_c(t_1)&\displaystyle =&\displaystyle v_i(t_1)-v_0(t_1)=\frac{1}{2}=\frac{R_1}{R_1+R_2}. \end{array} \end{aligned} $$
At t = t 1 = 1[s], v i(t) changes to A = −1. If time is redefined as t − t 1, (3.51) is still valid when rewritten as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t-t_1)&\displaystyle =&\displaystyle \frac{bA}{a}+\left(\frac{(b-a)A}{-a}-v_c(t_1)\right)\mathrm{e}^{-a(t-t_1)}, \\ v_0(t-t_1)&\displaystyle =&\displaystyle \frac{-1}{2}+\left(\frac{-1}{2}-\frac{R_1}{R_1+R_2}\right)\mathrm{e}^{-10(t-t_1)}, \\ v_0(t-t_1)&\displaystyle =&\displaystyle \frac{-1}{2}-\mathrm{e}^{-10(t-t_1)},\end{array} \end{aligned} $$
since  $$\frac {R_1}{R_1+R_2}=\frac {1}{2}$$ . Thus, Fig. 3.10 corroborates that v 0(t 1) = −1.5[V], with t 1 = 1[s], and that  $$v_0\approx -\frac {1}{2}$$ [V] at t = 2[s] (because  $$-\mathrm {e}^{-10(t-t_1)}\approx 0$$ at t = 2[s]), as predicted.
Figure 3.10 has been obtained using the MATLAB/Simulink simulation diagram presented in Fig. 3.11. The signal generator block is programmed to have a square wave form, unit amplitude, and 0.5[Hz] as the frequency. The negative, unit gain block is included because the signal delivered by the signal generator block starts with a negative half-cycle. Once the simulation stops, the following MATLAB code is executed in an m-file to draw Fig. 3.10:
Fig. 3.11

MATLAB/Simulink simulation diagram for circuit in Fig. 3.9

nn=length(InputOutput(:,1));   % number of data points logged by the simulation
n=nn-1;                        % number of sampling intervals
Ts=5/n;                        % sampling period, assuming a 5 s simulation
t=0:Ts:5;                      % time vector matching the logged data
plot(t,InputOutput(:,1),'k--',t,InputOutput(:,2),'k-')   % dashed: v_i(t); continuous: v_0(t)
axis([-0.25 5 -2 2])           % axis limits used in Fig. 3.10
Notice that the transfer function of the circuit in Fig. 3.9, given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{V_0(s)}{V_i(s)}=G(s)=\frac{s+b}{s+a},{} \end{array} \end{aligned} $$
(3.52)
when v c(0) = 0, has a zero at s = −b < 0, whereas the transfer function of the circuit in Fig. 3.6 has a zero at s = 0. The main difference produced by this fact is that the time response of the circuit in Fig. 3.6 reaches a zero value in a steady state whereas the steady-state response of the circuit in Fig. 3.9 is given as  $$\frac {b}{a}A\neq 0$$ .
Finally, we can use (3.52), the initial value theorem in (3.5), and  $$V_i(s)=\frac {A}{s}$$ to find:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(0^+)&\displaystyle =&\displaystyle \lim_{t\to0^+}v_0(t)=\lim_{s\to\infty}sV_0(s),\\ &\displaystyle =&\displaystyle \lim_{s\to\infty}s\frac{s+b}{s+a}\frac{A}{s}=A. \end{array} \end{aligned} $$
Thus, it is concluded again that the discontinuity in time response when a step input is applied is due to the fact that the transfer function has the same number of poles and zeros.
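
The behavior just described can be reproduced with the transfer function in (3.52). The following sketch assumes that the Control System Toolbox is available and uses the values b = 5, a = 10 employed in Fig. 3.10:
b=5; a=10;                        % values used in Fig. 3.10
G=tf([1 b],[1 a]);                % G(s)=(s+b)/(s+a): zero at s=-b, pole at s=-a, a>b>0
t=0:0.001:1;
step(G,t), grid on                % jumps to v0(0+)=A=1 and settles at (b/a)A=0.5
dcgain(G)                         % returns b/a, the nonzero steady-state gain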

3.3 Second-Order Differential Equation

Consider the following linear ordinary second-order differential equation with constant coefficients:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot y+2\zeta \omega_n \dot y+\omega_n^2 y=k\omega_n^2 u, y(0)=y_0,\dot y(0)=\dot y_0,{} \end{array} \end{aligned} $$
(3.53)
where ω n > 0 and k are real nonzero constants. Assume that u = A is a real constant. Using (3.2) and U(s) = A∕s, the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle s^2 Y(s)-sy_0-\dot y_0+2\zeta\omega_n(sY(s)-y_0)+\omega_n^2 Y(s)=k\omega_n^2 U(s), \end{array} \end{aligned} $$
to obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&\displaystyle =&\displaystyle k\frac{\omega_n^2A}{(s^2+2\zeta\omega_ns+\omega_n^2)s}+\frac{y_0(s+2\zeta\omega_n)+ \dot y_0}{s^2+2\zeta\omega_n s+\omega_n^2}.{} \end{array} \end{aligned} $$
(3.54)
First, suppose that all of the initial conditions are zero  $$\dot y_0=0$$ , y 0 = 0. Then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&\displaystyle =&\displaystyle k\frac{A\;\omega_n^2}{(s^2+2\zeta\omega_n s+\omega_n^2)s}. \end{array} \end{aligned} $$
Also, suppose that 0 ≤ ζ < 1 is a real constant, then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2+2\zeta\omega_n s+\omega_n^2&\displaystyle =&\displaystyle (s-a)(s-\bar{a}),{}\\ a&\displaystyle =&\displaystyle \sigma+j\omega_d,\bar{a}=\sigma-j\omega_d, j=\sqrt{-1},\\ \omega_d&\displaystyle =&\displaystyle \omega_n\sqrt{1-\zeta^2}>0,\sigma=-\zeta\omega_n\leq 0. \end{array} \end{aligned} $$
(3.55)
Notice that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2+2\zeta\omega_n s+\omega_n^2&\displaystyle =&\displaystyle (s-[\sigma+j\omega_d])(s-[\sigma-j\omega_d])= (s-\sigma)^2+\omega_d^2. \end{array} \end{aligned} $$
According to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, in this case the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=k\frac{A\omega_n^2}{(s^2+2\zeta\omega_n s+\omega_n^2)s}&\displaystyle =&\displaystyle k\frac{A\omega_n^2}{[(s-\sigma)^2+\omega_d^2]s}=\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}+\frac{D}{s},\\ {} \end{array} \end{aligned} $$
(3.56)
where B, C, and D are constants to be computed. The inverse Laplace transform of (3.56) is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle kA\left[1-\frac{\mathrm{e}^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\;\sin\left(\omega_d t+\phi\right)\right],{}\\ \phi&\displaystyle =&\displaystyle \arctan\frac{\sqrt{1-\zeta^2}}{\zeta}. \end{array} \end{aligned} $$
(3.57)
This expression constitutes the solution of (3.53) when all of the initial conditions are zero, i.e., when  $$\dot y_0=0$$ , y 0 = 0. The detailed procedure to find (3.57) from (3.56) is presented in the following. Readers who are not interested in these technical details may continue from (3.62) onward.
Multiplying both sides of (3.56) by the factor  $$(s-\sigma )^2+\omega _d^2$$ and evaluating at s = a (see (3.55)) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k\left.\frac{A\omega^2_n}{[(s-\sigma)^2+\omega_d^2]s}[(s-\sigma)^2+\omega_d^2]\right|{}_{s=a}&\displaystyle =&\displaystyle \left.\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}[(s-\sigma)^2+\omega_d^2]\right|{}_{s=a}\\ &\displaystyle &\displaystyle +\left.\frac{D}{s}[(s-\sigma)^2+\omega_d^2]\right|{}_{s=a}, \end{array} \end{aligned} $$
to obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k\frac{A\omega_n^2}{\sigma+j\omega_d}&\displaystyle =&\displaystyle B(\sigma+j\omega_d)+C.\end{array} \end{aligned} $$
Multiplying the left-hand term by (σ − jω d)∕(σ − jω d):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} kA\omega_n^2\frac{(\sigma-j\omega_d)}{\sigma^2+\omega_d^2}&\displaystyle =&\displaystyle B\sigma+C+jB\omega_d.\end{array} \end{aligned} $$
Equating the imaginary parts:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B&\displaystyle =&\displaystyle -\frac{kA \omega_n^2}{\sigma^2+\omega_d^2}. {} \end{array} \end{aligned} $$
(3.58)
Equating the real parts:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B\sigma+C&\displaystyle =&\displaystyle \frac {kA\omega_n^2\sigma}{\sigma^2+\omega_d^2}.\end{array} \end{aligned} $$
Replacing (3.58):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C&\displaystyle =&\displaystyle \frac{2kA\omega_n^2\sigma}{\sigma^2+\omega_d^2}. {} \end{array} \end{aligned} $$
(3.59)
Using (3.58) and (3.59), the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Bs+C}{(s-\sigma)^2+\omega_d^2}&\displaystyle =&\displaystyle \frac{-\frac{kA \omega_n^2}{\sigma^2+\omega_d^2}s +\frac{2kA\omega_n^2\sigma}{\sigma^2+\omega_d^2}}{(s-\sigma)^2+\omega_d^2},\\ &\displaystyle =&\displaystyle -\frac{kA \omega_n^2}{(\sigma^2+\omega_d^2)}\frac{(s-\sigma)} {[(s-\sigma)^2+\omega_d^2]} +\frac{kA \omega_n^2\sigma}{(\sigma^2+\omega_d^2)}\frac{1} {[(s-\sigma)^2+\omega_d^2]}. \end{array} \end{aligned} $$
Using the tabulated Laplace transform pairs [4], Ch. 32:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{s-\sigma} {(s-\sigma)^2+\omega_d^2}\Bigg\}=\mathrm{e}^{\sigma t}\cos{}(\omega_d t),\\ \mathcal{L}^{-1}\Bigg\{\frac{\omega_d} {(s-\sigma)^2+\omega_d^2}\Bigg\}=\mathrm{e}^{\sigma t}\sin{}(\omega_d t), \end{array} \end{aligned} $$
the following is obtained:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}\Bigg\}&\displaystyle =&\displaystyle -\frac{kA\omega_n^2}{\sigma^2+\omega_d^2}\mathrm{e}^{\sigma t}\cos{}(\omega_d t)+\frac{kA\omega_n^2\sigma}{\omega_d(\sigma^2+\omega_d^2)}\mathrm{e}^{\sigma t}\sin{}(\omega_d t),\\ &\displaystyle =&\displaystyle \frac{kA\omega_n^2}{\sigma^2+\omega_d^2}\mathrm{e}^{\sigma t}[-\cos{}(\omega_d t)+ \frac{\sigma}{\omega_d}\sin{}(\omega_d t)],\\ =\frac{kA\omega_n^2}{\sigma^2+\omega_d^2}\mathrm{e}^{\sigma t}&\displaystyle &\displaystyle [\sin\beta\cos{}(\omega_d t)+\cos\beta \sin{}(\omega_d t)]\sqrt{\left(\frac{\sigma}{\omega_d}\right)^2+1}. \end{array} \end{aligned} $$
Notice that some relations in Fig. 3.12a have been employed in the last step. On the other hand, it is possible to continue writing:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \sin\beta\cos{}(\omega_d t)+\cos\beta \sin{}(\omega_d t)&\displaystyle =&\displaystyle \frac{1}{2}[\sin{}(\beta-\omega_d t)+\sin{}(\beta+\omega_d t)]+\\ &\displaystyle &\displaystyle +\frac{1}{2}[\sin{}(\omega_d t-\beta)+\sin{}(\omega_d t+\beta)],\\ &\displaystyle =&\displaystyle \sin{}(\omega_d t+\beta), \end{array} \end{aligned} $$
because  $$\sin {}(-x)=-\sin {}(x)$$ . Hence:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}\Bigg\} &\displaystyle =&\displaystyle kA\omega_n^2\frac{\sqrt{\left(\frac{\sigma}{\omega_d}\right)^2+1}}{\sigma^2+\omega_d^2}e^{\sigma t}\sin{}(\omega_d t+\beta).\end{array} \end{aligned} $$
On the other hand, from Fig. 3.12a:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \tan\beta&\displaystyle =&\displaystyle \frac{-1}{\sigma/\omega_d}=\frac{-1}{(-\zeta\omega_n)/(\omega_n\sqrt{1-\zeta^2})} =\frac{-1}{\frac{-\zeta}{\sqrt{1-\zeta^2}}},\\ \beta&\displaystyle =&\displaystyle \pi+\arctan\frac{\sqrt{1-\zeta^2}}{\zeta}, \end{array} \end{aligned} $$
where the signs of the adjacent and the opposite sides have been taken into account to conclude that β is an angle in the third quadrant (see Fig. 3.12b). This allows the following simplification [4], Ch. 5:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \sin{}(\omega_d t+\beta)&\displaystyle =&\displaystyle \sin\left(\omega_d t+\pi+\phi\right), \\ &\displaystyle =&\displaystyle -\sin\left(\omega_d t+\phi\right),\\ \phi&\displaystyle =&\displaystyle \arctan\frac{\sqrt{1-\zeta^2}}{\zeta}, \end{array} \end{aligned} $$
and thus:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}\Bigg\} &\displaystyle =&\displaystyle -kA\omega_n^2\frac{\sqrt{\left(\frac{\sigma}{\omega_d}\right)^2+1}}{\sigma^2+\omega_d^2}\mathrm{e}^{\sigma t}\sin\left(\omega_d t+\phi\right),\\ &\displaystyle =&\displaystyle -kA\omega_n^2\frac{\sqrt{\sigma^2+\omega_d^2}}{\omega_d (\sigma^2+\omega_d^2)}\mathrm{e}^{\sigma t}\sin\left(\omega_d t+\phi\right),\\ &\displaystyle =&\displaystyle -kA\omega_n^2\frac{1}{\omega_d \sqrt{\sigma^2+\omega_d^2}}\mathrm{e}^{\sigma t}\sin\left(\omega_d t+\phi\right),\\ \mathcal{L}^{-1}\Bigg\{\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}\Bigg\} &\displaystyle =&\displaystyle -kA\frac{1}{\sqrt{1-\zeta^2} }\mathrm{e}^{\sigma t}\sin\left(\omega_d t+\phi\right), {} \end{array} \end{aligned} $$
(3.60)
where σ = −ζω n and  $$\omega _d=\omega _n\sqrt {1-\zeta ^2}$$ have been employed. On the other hand, D in (3.56) is easily computed multiplying both sides of that expression by factor s and evaluating at s = 0:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} D&\displaystyle =&\displaystyle \left.\frac{kA\;\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2}\right|{}_{s=0}=kA.{} \end{array} \end{aligned} $$
(3.61)
Using (3.56), (3.60) and (3.61), the following solution is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle kA\left[1-\frac{\mathrm{e}^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\;\sin\left(\omega_d t+\phi\right)\right]. \end{array} \end{aligned} $$
If it is assumed now that the initial conditions are not zero, then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle kA\left[1-\frac{\mathrm{e}^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\;\sin\left(\omega_d t+\phi\right)\right]+p(t),{} \end{array} \end{aligned} $$
(3.62)
Fig. 3.12

Some important relations for the procedure in Sect. 3.3

where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p(t)=\mathcal{L}^{-1}\left\{\frac{y_0(s+2\zeta\omega_n)+ \dot y_0}{s^2+2\zeta\omega_n s+\omega_n^2}\right\}.{} \end{array} \end{aligned} $$
(3.63)
The inverse Laplace transform defining p(t) may be computed using a similar procedure to that presented above. Moreover, the reader should realize that p(t) is given by a function like that in (3.60), with coefficients depending on both y 0 and  $$\dot y_0$$ .
The solution given in (3.57) can be decomposed into two parts:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle y_n(t)+y_f(t),{} \end{array} \end{aligned} $$
(3.64)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y_n(t)&\displaystyle =&\displaystyle -kA\frac{\mathrm{e}^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\;\sin\left(\omega_d t+\phi\right),{} \end{array} \end{aligned} $$
(3.65)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y_f(t)&\displaystyle =&\displaystyle kA,{} \end{array} \end{aligned} $$
(3.66)
where y n(t) and y f(t) are the natural and the forced responses respectively. Notice that, according to (3.56) and (3.60), the natural response in (3.65) is due only to the characteristic polynomial  $$s^2+2\zeta \omega _n s+\omega _n^2=(s-\sigma )^2+\omega _d^2$$ , which results from applying the Laplace transform to terms  $$\ddot y+2\zeta \omega _n\dot y+\omega _n^2y$$ . Hence, no matter what the input u is, the natural response y n(t) is given as in (3.65). On the other hand, notice that the forced response in (3.66) is due to term D∕s in (3.56), which is introduced by the constant input u = A, as U(s) = A∕s. Also notice that, according to the paragraph after (3.63), when the initial conditions are different from zero they only affect the natural response in the sense that its coefficient depends on the initial conditions y 0 and  $$\dot y_0$$ , but the function of time is always given as  $$\mathrm {e}^{-\zeta \omega _n t}\;\sin \left (\omega _d t+\phi \right )$$ or its time derivative. Furthermore, p(t) belongs to the natural response when the initial conditions are different from zero.
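
The closed-form solution in (3.57) can be compared against a numerical step response of the differential equation. The following sketch assumes that the Control System Toolbox is available; the values ζ = 0.3, ω n = 4, k = 1, A = 1 are only illustrative:
z=0.3; wn=4; k=1; A=1;                           % illustrative values with 0<z<1
wd=wn*sqrt(1-z^2); phi=atan2(sqrt(1-z^2),z);     % damped frequency and phase in (3.57)
t=0:0.001:4;
yc=k*A*(1-exp(-z*wn*t)./sqrt(1-z^2).*sin(wd*t+phi));   % closed-form solution (3.57)
G=tf(k*wn^2,[1 2*z*wn wn^2]);                    % transfer function (3.72)
yn=step(A*G,t);                                  % numerical step response with amplitude A
plot(t,yc,'k-',t,yn,'k--'), grid on              % both curves coincide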
The reader may review the procedure presented above after (3.57) to corroborate that, in the case when − 1 < ζ < 0 with k > 0 and ω n > 0, the solution in (3.64) or in (3.57) becomes:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle y_n(t)+y_f(t),{}\\ y_n(t)&\displaystyle =&\displaystyle kA\frac{\mathrm{e}^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\;\sin\left(\omega_d t-\arctan\left(\frac{\sqrt{1-\zeta^2}}{abs(\zeta)}\right)\right), \\ y_f(t)&\displaystyle =&\displaystyle kA, \end{array} \end{aligned} $$
(3.67)
where abs(⋅) stands for the absolute value function.
Employing (3.64), when 0 ≤ ζ < 1, and (3.67), when − 1 < ζ < 0, it is concluded that the solution of the differential equation in (3.53) has one of the following behaviors:
  1. 1.

    If 0 < ζ < 1, i.e., if both roots of the characteristic polynomial located at s = −ζω n ± jω d have negative real parts − ζω n < 0, then  $$\lim _{t\to \infty }y_n(t)=0$$ and  $$\lim _{t\to \infty }y(t)=y_f(t)$$ .

     
  2. 2.

    If − 1 < ζ < 0, i.e., if both roots of the characteristic polynomial located at s = −ζω n ± jω d have positive real parts − ζω n > 0, then y n(t) and y(t) grow without limit, describing oscillations, as time increases.

     
  3. 3.
    If ζ = 0, i.e., if both roots of the characteristic polynomial located at s = −ζω n ± jω d have zero real parts − ζω n = 0, then from (3.64) the following is obtained:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle y_n(t)+y_f(t),{}\\ y_n(t)&\displaystyle =&\displaystyle -kA\cos{}(\omega_n t), y_f(t)=kA, \end{array} \end{aligned} $$
    (3.68)
    when the initial conditions are zero, i.e., y n(t) is an oscillatory function whose amplitude neither increases nor decreases. This means that although y(t) does not grow without a limit, it will not reach the forced response y f(t) in a steady state.
     
  4. 4.

    The behavior obtained when ζ is out of the range − 1 < ζ < 1 is studied in the subsequent sections as the cases when the roots of the characteristic polynomial are real and repeated or real and different.

     
Finally, notice that, according to the above discussion and (3.62), (3.63), the solution of a second-order differential equation may present sustained oscillations (ζ = 0) even if the input is zero (u = 0) if the initial conditions are different from zero. This explains the oscillations in the electronic circuits studied in Chap. 9 (see last paragraph in Sect. 9.​3.​1).
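
A minimal sketch of this situation, i.e., ζ = 0, u = 0, and a nonzero initial condition, can be obtained by writing (3.53) in state-space form and computing the zero-input response (Control System Toolbox assumed; ω n is an illustrative value):
wn=2*pi; z=0;                                    % undamped case, illustrative natural frequency
Amat=[0 1; -wn^2 -2*z*wn]; Bmat=[0; wn^2];       % states: y and dy/dt
sys=ss(Amat,Bmat,[1 0],0);
x0=[1; 0];                                       % y(0)=1, dy/dt(0)=0
t=0:0.001:3;
initial(sys,x0,t), grid on                       % oscillates at wn without growing or vanishing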

3.3.1 Graphical Study of the Solution

The graphical form of solution presented in (3.64) for the differential equation in (3.53) is studied in the following. Recall that it is assumed that 0 ≤ ζ < 1, ω n > 0, k > 0 and that the initial conditions are zero y 0 = 0,  $$\dot y_0=0$$ . Notice that the natural response tends toward zero as time increases if 0 < ζ < 1:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}y_n(t)&amp;\displaystyle =&amp;\displaystyle \lim_{t\to\infty}\left(-kA\frac{\mathrm{e}^{-\zeta\omega_n t}}{\sqrt{1-\zeta^2}}\;\sin\left(\omega_d t+\phi\right)\right)=0, \end{array} \end{aligned} $$
(3.69)
because ζω n > 0. Also recall that, in the case when the initial conditions are not zero, the natural response also tends toward zero as time increases. Hence, when time is large, the solution of the differential equation is equal to the forced response:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}y(t)&amp;\displaystyle =&amp;\displaystyle y_f(t)=kA. \end{array} \end{aligned} $$
(3.70)
This means that the faster the natural response y n(t) vanishes, i.e., the larger ζω n > 0 is, the faster the complete solution y(t) reaches the forced response y f(t). As was noted for first-order differential equations, the natural response allows the complete solution y(t) to go from the initial conditions (zero in this case) to the forced response y f(t).
Two important parameters of the response in (3.64) are the rise time t r and overshoot M p, which are shown in Fig. 3.13. Overshoot can be measured as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} M_p(\%)=\frac{y_p-y_f}{y_f}\times 100, \end{array} \end{aligned} $$
where y p is the peak (maximum) value of y(t) and y f = kA stands for the forced response. Recall that it is assumed that  $$y_0=\dot y_0=0$$ . Carefully analyzing (3.64), it can be shown that [1], pp. 232:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} t_r&amp;\displaystyle =&amp;\displaystyle \frac{1}{\omega_d}\left[\pi-\arctan\left(\frac{\sqrt{1-\zeta^2}}{\zeta}\right)\right],{}\\ M_p(\%)&amp;\displaystyle =&amp;\displaystyle 100 \times \mathrm{e}^{-\frac{\zeta}{\sqrt{1-\zeta^2}}\pi}.\end{array} \end{aligned} $$
(3.71)
Figures 3.14 and 3.15 show how the solution in (3.64) is affected when the parameters ζ and ω n change. Notice that overshoot M p is affected only by ζ, whereas rise time t r is mainly affected by ω n, although ζ also has a small effect on t r.
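The formulas in (3.71) are easy to check numerically. The sketch below is only an illustration under assumed values of ζ, ω n, k, and A (they are not taken from the text) and assumes NumPy/SciPy are available; it simulates the step response of (3.53) and compares the measured overshoot with the one predicted by (3.71).

# Sketch: predicted versus measured overshoot for the second-order system (3.53).
# zeta, wn, k and A are illustrative values.
import numpy as np
from scipy import signal

zeta, wn, k, A = 0.5, 3.0, 1.0, 1.0
G = signal.TransferFunction([k * wn**2], [1.0, 2.0*zeta*wn, wn**2])
t, y = signal.step(G, T=np.linspace(0.0, 10.0, 5000))
y = A * y                                            # response to u = A

Mp_predicted = 100.0 * np.exp(-zeta*np.pi/np.sqrt(1.0 - zeta**2))   # second row of (3.71)
Mp_measured = 100.0 * (y.max() - k*A) / (k*A)
print(Mp_predicted, Mp_measured)                     # both close to 16.3 % for zeta = 0.5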
Fig. 3.13 y(t) in (3.64)

Fig. 3.14 The solution in (3.64) when different values for ζ are used and ω n is kept constant

Fig. 3.15 The solution in (3.64) when different values of ω n are used: ω n2 = 2ω n1, ω n3 = 3ω n1. Parameter ζ is kept constant

Finally, it is important to point out that, in the case when u = A = 0 but some of the initial conditions  $$\dot y_0$$ or y 0 are different from zero, y(t) behaves as the time function in (3.65) and is graphically represented as the oscillatory part without the constant component, i.e., it oscillates around y = 0, in any of the Figs. 3.13, 3.14 or 3.15. This is because of function p(t) appearing in (3.62).

3.3.2 Transfer Function

If the initial conditions are zero y 0 = 0,  $$\dot y_0=0$$ , then (3.54) can be written as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&amp;\displaystyle =&amp;\displaystyle \frac{k\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2}U(s), \end{array} \end{aligned} $$
or:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Y(s)}{U(s)}=G(s)=\frac{k\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2},{} \end{array} \end{aligned} $$
(3.72)
where G(s) is the transfer function of the differential equation in (3.53). Notice that G(s), defined in (3.72), has a second-degree characteristic polynomial, i.e., G(s) has two poles. Hence, G(s) is a second-order transfer function. According to (3.55) these poles are located at s = a and  $$s=\bar a$$ , where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} a&amp;\displaystyle =&amp;\displaystyle \sigma+j\omega_d,\bar{a}=\sigma-j\omega_d. \end{array} \end{aligned} $$
It is important to stress that these poles are complex conjugate only under the condition − 1 < ζ < 1. The reader can verify that these poles are real if ζ does not satisfy this condition. Such a case is studied in the subsequent sections.
According to the study in the previous sections, the following can be stated for the output of the transfer function y(t):
  • If both poles have a negative real part σ = −ζω n < 0, i.e., ζ > 0, then the natural response y n(t) vanishes and the complete response y(t) reaches the forced response y f(t) as time increases. When the input is a constant u(t) = A, then the forced response is y f(t) = kA.

  • The natural response y n(t) vanishes faster as ζω n is larger. As is depicted in Fig. 3.16, the system response remains enveloped by two exponential functions whose vanishing rate is determined by ζω n.
    Fig. 3.16 The response of a second-order system is enveloped by two exponential functions when the input is constant and the initial conditions are zero

  • The rise time t r decreases if ω n increases. This is corroborated by observing that, according to (3.55), if ω n increases then ω d also increases and, according to (3.71), t r decreases.

  • Overshoot M p decreases if ζ increases.

  • According to Fig. 3.17: i) the rise time t r decreases as the complex conjugate poles are located farther to the left of s = 0 because ω n increases (see (3.71) and  $$\omega _d=\omega _n\sqrt {1-\zeta ^2}$$ ), ii) the parameter ζ increases and, hence, overshoot M p decreases, as the angle θ is larger because  $$\zeta =\sin {}(\theta )$$ , iii) y n(t) vanishes faster as the complex conjugate poles are located farther to the left of s = 0 because ζω n is larger.
    Fig. 3.17 One pole of G(s) in (3.72) when 0 < ζ < 1

  • If k = 1 then the transfer function G(s) is said to have unitary gain in a steady state because, when u = A is a constant, the final value theorem (3.4) can be used to find that:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}y(t)=\lim_{s\to 0}sY(s)=\lim_{s\to 0}s\frac{k\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2}\frac{A}{s}=\frac{k\omega_n^2}{\omega_n^2}A=A. \end{array} \end{aligned} $$
    The constant k represents the steady-state gain of the transfer function in (3.72).
  • If − 1 < ζ < 0, then the real part of the poles σ = −ζω n is positive; hence, the complete response y(t) grows without limit as time increases. Notice that this implies that the complex conjugate poles are located on the right half of the s plane.

  • The parameter ζ is known as the damping coefficient because it determines how oscillatory the response y(t) is (see (3.71), for overshoot, and Fig. 3.14).

  • The parameter ω d is known as the damped natural frequency because, according to the previous discussion (see (3.64)), ω d is the oscillation frequency in y(t) when the damping ζ is different from zero. Notice that the frequency ω d is the imaginary part of the poles of the transfer function in (3.72) (see (3.55)).

  • The parameter ω n is known as the undamped natural frequency because, according to (3.64) and (3.68), ω n is the oscillation frequency when damping is zero.
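The pole locations and the quantities σ = −ζω n, ω d, and θ mentioned in the list above can be verified with a few lines of code. The following minimal sketch uses illustrative values of ζ and ω n (not taken from the text) and assumes NumPy is available.

# Sketch: poles of G(s) in (3.72) and the quantities sigma, wd and theta.
# zeta and wn are illustrative values.
import numpy as np

zeta, wn = 0.5, 3.0
poles = np.roots([1.0, 2.0*zeta*wn, wn**2])      # roots of s^2 + 2*zeta*wn*s + wn^2
sigma = poles[0].real                            # should equal -zeta*wn
wd = abs(poles[0].imag)                          # should equal wn*sqrt(1 - zeta^2)
theta = np.degrees(np.arcsin(zeta))              # angle such that zeta = sin(theta)
print(sigma, -zeta*wn)
print(wd, wn*np.sqrt(1.0 - zeta**2))
print(theta)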

Example 3.8

Consider the mass-spring-damper system depicted in Fig. 3.18. According to Example 2.​2, Chap. 2, the mass position x is given by the following second-order differential equation:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} m\ddot x&amp;\displaystyle =&amp;\displaystyle -Kx-b\dot x+f,{} \end{array} \end{aligned} $$
(3.73)
where K is the spring stiffness constant, b is the viscous friction coefficient, m is the body mass, and f is an applied external force. The expression in (3.73) can be written as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot x+\frac{b}{m}\dot x+\frac{K}{m}x&amp;\displaystyle =&amp;\displaystyle \frac{1}{m}f, \end{array} \end{aligned} $$
which can be rewritten as the expression in (3.53):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot x+2\zeta \omega_n \dot x+\omega_n^2 x=k\omega_n^2 f,{}\\ 2\zeta\omega_n=\frac{b}{m}, \omega_n^2=\frac{K}{m}, k=\frac{1}{K}, \end{array} \end{aligned} $$
(3.74)
where x is the output y, whereas f is the input u. Hence, x(t) is given as y(t) in (3.64) if 0 ≤ ζ < 1 when the initial conditions are zero. Notice that all constants in (3.73) are positive: the mass m, the viscous friction coefficient b, and the spring stiffness coefficient K can only be positive. According to (3.74), the damping coefficient is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \zeta= \frac{b}{2\sqrt{mK}}.{} \end{array} \end{aligned} $$
(3.75)
If a constant force f = A is applied, the results in Figs. 3.13, 3.14, and 3.15 can be employed to conclude the following.
  • If there is no friction, i.e., if b = 0, then ζ = 0 and the mass will oscillate forever around the position  $$x_f=kA=\frac {1}{K}A$$ . In the case when the external force is zero f = 0 but some of the initial conditions  $$\dot x_0$$ or x 0 are different from zero, then the mass will oscillate again, though now around the position x = 0, as described by p(t) appearing in (3.62).
    Fig. 3.18 A mass-spring-damper system

  • If the friction coefficient b > 0 increases, then ζ > 0 also increases and the mass oscillations will disappear because the system will have more damping. It is shown in the subsequent sections that the poles are real and repeated when ζ = 1. In all of these cases, the mass will stop moving at  $$x=x_f=kA=\frac {1}{K}A$$ . In the case when the external force is zero f = 0 but some initial conditions,  $$\dot x_0$$ or x 0, are different from zero, the mass will stop at x = 0. This is again described by the function p(t) appearing in (3.62). Notice that, according to (3.75), the damping ζ also increases if the mass m or the spring stiffness constant K decreases. To understand the reason for this, consider the opposite case: larger values of both m and K produce larger forces ( $$m\ddot x$$ for the mass and Kx for the spring), which, hence, are less affected by the friction force ( $$b\dot x$$ ) and thus, the mass can remain in movement for a longer period of time.

  • As the rise time t r inversely depends on the undamped natural frequency  $$\omega _n=\sqrt {\frac {K}{m}}$$ , then a more rigid spring (with a larger K) or a smaller mass produces faster responses (t r smaller). On the other hand, advantage can be taken of the fact that the oscillation frequency is given by  $$\omega _d=\omega _n\sqrt {1-\zeta ^2}$$ with  $$\omega _n=\sqrt {\frac {K}{m}}$$ to adjust the rate of a mechanical clock based on a mass-spring-damper system: if the clock “lags,” it is necessary to increase its oscillation frequency, or rate, which is achieved by applying tension to the spring to increase K.

The case when − 1 < ζ < 0 is not possible in this example because all of the constants b, m, and K are positive. However, the situation when − 1 < ζ < 0 commonly occurs in feedback systems as a result of the interconnection of diverse components.
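A minimal numerical sketch of this example is given below; the values of m, b, K, and A are illustrative choices (not taken from the text) and NumPy/SciPy are assumed to be available.

# Sketch: mass-spring-damper of Example 3.8 driven by a constant force f = A.
# m, b, K and A are illustrative values.
import numpy as np
from scipy import signal

m, b, K, A = 1.0, 0.4, 4.0, 1.0
wn = np.sqrt(K/m)
zeta = b / (2.0*np.sqrt(m*K))                    # damping coefficient, see (3.75)
sys = signal.TransferFunction([1.0/m], [1.0, b/m, K/m])
t, x = signal.step(sys, T=np.linspace(0.0, 40.0, 4000))
x = A * x
print(zeta, wn)
print(x[-1], A/K)                                # the mass settles close to x_f = A/K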

3.4 Arbitrary-Order Differential Equations

In Exercise 3.6, it was shown that some differential equations can be written in terms of time derivatives of both the unknown function y(t) and the excitation function u(t). Moreover, the initial conditions of both variables can be combined to obtain the initial conditions in other circuit components. According to this reasoning, a linear ordinary differential equation with arbitrary order n and constant coefficients can always be written as:
 $$\displaystyle \begin{aligned} y^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_{1}\dot{y}+a_{0}y=b_{0}u+b_{1}\dot{u}+\cdots+ b_{m}u^{(m)},{} \end{aligned} $$
(3.76)
where n ≥ m. If n < m the differential equation has no physical meaning, i.e., there is no physical system in practice whose mathematical model can be represented by such a differential equation and, thus, it is not of any interest for engineers. Notice that all of the derivatives are related to time. Because of this feature, these differential equations are also called dynamical systems. The variable y is the unknown variable of the differential equation, whereas u is a function of time that is also called the excitation. The objective of solving a differential equation is to find a function y(t) that satisfies the equality defined by the differential equation. To find y(t), it is necessary to know u(t), the real constants a i, i = 0, …, n − 1, b j, j = 0, …, m, and the set of n + m constants y(0),  $$\dot {y}(0)$$ ,…,y (n−1)(0), u(0),  $$\dot {u}(0)$$ ,…,u (m−1)(0), which are known as the initial conditions of the problem, representing values that the unknown and the excitation, in addition to their time derivatives, have at t = 0.
Applying the Laplace transform (3.2) to (3.76), the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^{n}Y(s)+a_{n-1}s^{n-1}Y(s)+\cdots&amp;\displaystyle +&amp;\displaystyle a_{1}sY(s)+a_{0}Y(s)+P(s)\\ &amp;\displaystyle &amp;\displaystyle =b_{0}U(s)+b_{1}sU(s)+\cdots+ b_{m}s^{m}U(s), \end{array} \end{aligned} $$
where P(s) is a polynomial in s whose coefficients depend on the initial conditions y(0),  $$\dot {y}(0)$$ ,…,y (n−1)(0), u(0),  $$\dot {u}(0)$$ ,…,u (m−1)(0) and the coefficients of the differential equation. Hence, it is possible to write:
 $$\displaystyle \begin{aligned} Y(s)=\frac{b_{0}+b_{1}s+\cdots+ b_{m}s^{m}}{s^{n}+a_{n-1}s^{n-1}+\cdots+a_{1}s+a_{0}}\;U(s)-\frac{P(s)}{s^{n}+a_{n-1}s^{n-1}+\cdots+a_{1}s+a_{0}}. \end{aligned}$$
Suppose that u(t) = A is a constant, i.e., U(s) = A/s. Then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&amp;\displaystyle =&amp;\displaystyle \frac{B(s)}{N(s)}\frac{A}{s}-\frac{P(s)}{N(s)}, \\ N(s)&amp;\displaystyle =&amp;\displaystyle s^{n}+a_{n-1}s^{n-1}+\cdots+a_{1}s+a_{0},\\ B(s)&amp;\displaystyle =&amp;\displaystyle b_{0}+b_{1}s+\cdots+ b_{m}s^{m}. \end{array} \end{aligned} $$
Also suppose for the moment that all of the initial conditions are zero. This means that P(s) = 0 and it is possible to write:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&amp;\displaystyle =&amp;\displaystyle \frac{B(s)}{N(s)}\frac{A}{s}.\end{array} \end{aligned} $$
According to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, the solution in time y(t) depends on the roots of the characteristic polynomial N(s), i.e., on poles of the following transfer function :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Y(s)}{U(s)}&amp;\displaystyle =&amp;\displaystyle G(s)=\frac{B(s)}{N(s)} \end{array} \end{aligned} $$
(3.77)
and the factor  $$\frac {1}{s}$$ introduced by U(s) = A/s. The variable y(t) is known as the output and u(t) is known as the input of the transfer function G(s). The order of G(s) is defined as the degree of the characteristic polynomial N(s). As an nth-degree polynomial has n roots, an nth-order transfer function has n poles (see the paragraph after (3.23)). On the other hand, the roots of the polynomial B(s) are known as the zeros of the transfer function G(s). If B(s) has degree m then G(s) has m zeros. Recall the condition n ≥ m imposed at the beginning of this section. It is assumed that B(s) and N(s) do not have common roots.
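As a small illustration of these definitions, the poles and zeros of a transfer function can be computed directly from the coefficients of N(s) and B(s); the coefficients below are arbitrary illustrative choices and NumPy is assumed to be available.

# Sketch: poles and zeros of G(s) = B(s)/N(s) for illustrative coefficients.
import numpy as np

N = [1.0, 6.0, 11.0, 6.0]      # N(s) = (s + 1)(s + 2)(s + 3): third-order G(s)
B = [2.0, 5.0]                 # B(s) = 2s + 5: one zero, so n = 3 >= m = 1
poles = np.roots(N)            # the three poles of G(s): -1, -2, -3 (in some order)
zeros = np.roots(B)            # the single zero of G(s): -2.5
print(poles)
print(zeros)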

The solution for all of the possible cases of the roots of the polynomial N(s), i.e., for all of the possible cases of the poles of G(s), is studied in the next sections. This provides the necessary information to understand how a system of arbitrary order n responds.

3.4.1 Real and Different Roots

Suppose that N(s) has k ≤ n real and different roots, which are different from s = 0, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} N(s)&amp;\displaystyle =&amp;\displaystyle N_0(s)(s-p_1)(s-p_2)\cdots(s-p_{k}), \end{array} \end{aligned} $$
where p i ≠ 0, i = 1, …, k, with p j ≠ p i if i ≠ j, stand for the roots of N(s), whereas N 0(s) is a polynomial containing the remaining n − k roots of N(s), where none of them is located at s = 0. According to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, in this case the following must be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{B(s)}{N(s)}\frac{A}{s}&amp;\displaystyle =&amp;\displaystyle Q(s)+ \frac{c_1}{s-p_1}+\frac{c_2}{s-p_2}+\cdots +\frac{c_{k}}{s-p_{k}}+\frac{f}{s},{} \end{array} \end{aligned} $$
(3.78)
where c i, i = 1, …, k, and f are real constants to be computed, whereas Q(s) represents all fractions corresponding to the roots of N 0(s). Constant c i can be computed as follows. Multiply both sides of (3.78) by the factor (s − p i) and evaluate the resulting expression at s = p i to obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} c_i=\left.\frac{B(s)}{N(s)}\frac{A}{s}(s-p_i)\right|{}_{s=p_i}, i=1,\dots,k. \end{array} \end{aligned} $$
A similar procedure allows us to compute:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} f=\left.\frac{B(s)A}{N(s)}\right|{}_{s=0}. \end{array} \end{aligned} $$
Using the tabulated inverse Laplace transform, [4], Ch.32:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{c_i}{s-a}\Bigg\}=c_i\mathrm{e}^{at}, \end{array} \end{aligned} $$
the following is finally obtained:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)=q(t)+ c_1\;\mathrm{e}^{p_1 t}+c_2\;\mathrm{e}^{p_2 t}+\cdots +c_{k}\;\mathrm{e}^{p_{k} t}+ f, \end{array} \end{aligned} $$
where  $$q(t)=\mathcal {L}^{-1}\{Q(s)\}$$ . Notice that the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&amp;\displaystyle =&amp;\displaystyle y_n(t)+y_f(t),\\ y_n(t)&amp;\displaystyle =&amp;\displaystyle q(t)+c_1\;\mathrm{e}^{p_1 t}+c_2\;\mathrm{e}^{p_2 t}+\cdots +c_{k}\;\mathrm{e}^{p_{k} t},{}\\ y_f(t)&amp;\displaystyle =&amp;\displaystyle f, \end{array} \end{aligned} $$
(3.79)
where the natural response y n(t) only depends on the roots of the characteristic polynomial N(s), whereas the forced response y f(t) only depends on the fraction  $$\frac {1}{s}$$ introduced by U(s). Consider the following possibilities.
  1. 1.

    If p i < 0 for all i = 1, …, k, then  $$\lim _{t\to \infty }(c_1\;\mathrm {e}^{p_1 t}+c_2\;\mathrm {e}^{p_2 t}+\cdots +c_{k}\;\mathrm {e}^{p_{k} t})=0$$ and y(t) → q(t) + y f(t) as t → ∞.

     
  2. 2.

    If p i > 0 for at least one i = 1, …, k, then  $$(c_1\;\mathrm {e}^{p_1 t}+c_2\;\mathrm {e}^{p_2 t}+\cdots +c_{k}\;\mathrm {e}^{p_{k} t})\to \infty $$ as t → ∞ and, hence, y(t) → ∞.

     

Example 3.9

Consider the following second-order differential equation also given in (3.53):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot y+2\zeta \omega_n \dot y+\omega_n^2 y=k\omega_n^2 u, y(0)=y_0,\dot y(0)=\dot y_0.{} \end{array} \end{aligned} $$
(3.80)
If ζ > 1, then the only two roots, p 1 and p 2, of the characteristic polynomial  $$N(s)=s^2+2\zeta \omega _n s+\omega _n^2$$ are real and different:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&amp;\displaystyle =&amp;\displaystyle -\zeta \omega_n+\omega_n\sqrt{\zeta^2-1},\\ p_2&amp;\displaystyle =&amp;\displaystyle -\zeta \omega_n-\omega_n\sqrt{\zeta^2-1},\\ p_1&amp;\displaystyle \neq&amp;\displaystyle p_2. \end{array} \end{aligned} $$
Also, both roots are negative:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&\displaystyle <&\displaystyle 0, p_2<0. \end{array} \end{aligned} $$
Although this is obvious for p 2, for p 1 a little observation is required: notice that  $$-\zeta \omega _n+\omega _n\sqrt {\zeta ^2}=0$$ , and since  $$-\zeta \omega _n+\omega _n\sqrt {\zeta ^2}> -\zeta \omega _n+\omega _n\sqrt {\zeta ^2-1}$$ , then  $$-\zeta \omega _n+\omega _n\sqrt {\zeta ^2-1}<0$$ . In this case q(t) = 0 because Q(s) = 0, since N 0(s) = 1, i.e., N 0(s) has no roots. In Fig. 3.19 the solution y(t) of the differential equation in (3.80) is shown, when ζ > 1, u = A and the initial conditions are zero. Notice that y(t) does not present oscillations in this case. Recall that, when − 1 < ζ < 1, the oscillation frequency is equal to the imaginary part of both roots. Hence, when ζ > 1, there is no oscillation, because the imaginary part of both roots is zero; thus, the oscillation frequency is also zero.
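The partial fraction constants of this section can also be obtained numerically, for instance with scipy.signal.residue. The sketch below applies it to this example for illustrative values of ζ > 1, ω n, k, and A (these values and the use of SciPy are assumptions, not part of the original text).

# Sketch: expansion constants of Y(s) = k*wn^2/(s^2 + 2*zeta*wn*s + wn^2) * (A/s), zeta > 1.
# zeta, wn, k and A are illustrative values.
import numpy as np
from scipy import signal

zeta, wn, k, A = 2.0, 2.0, 1.0, 1.0
N = [1.0, 2.0*zeta*wn, wn**2]        # characteristic polynomial N(s)
num = [k * wn**2 * A]                # numerator of Y(s)
den = np.polymul(N, [1.0, 0.0])      # N(s) multiplied by s, the factor from U(s) = A/s
r, p, _ = signal.residue(num, den)
print(p)                             # poles: 0 and the two real, distinct, negative roots
print(r)                             # the constants f, c1, c2 of an expansion like (3.78)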
Fig. 3.19 Solution of the second-order differential equation in (3.80), when ζ > 1, and in (3.85), when ζ = 1, with u = A and zero initial conditions. (a) Time response. (b) Location of poles when ζ > 1

Example 3.10

Consider again the mass-spring-damper system in Example 3.8. Recall that (3.74) describes the mass movement. Assume that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \zeta= \frac{b}{2\sqrt{mK}}>1. \end{array} \end{aligned} $$
Then, according to the previous example, the roots p 1 and p 2 of the characteristic polynomial  $$s^2+2\zeta \omega _n s+\omega _n^2 $$ are real, different, and negative. Notice that this case arises when the viscous friction coefficient b > 0 is very large, or when the spring stiffness constant K > 0 or the mass m is very small. Also notice that one root ( $$p_1=-\zeta \omega _n+\omega _n\sqrt {\zeta ^2-1}<0$$ ) approaches the origin as ζ > 1 becomes larger. According to the discussion above in the present section, this means that the movement of the mass is slower as ζ grows, because it takes more time for the function  $$\mathrm {e}^{p_1 t}$$ to vanish (despite the function  $$\mathrm {e}^{p_2 t}$$ vanishing faster, because  $$p_2=-\zeta \omega _n-\omega _n\sqrt {\zeta ^2-1}<0$$ moves farther to the left of the origin). This explains why slower responses are depicted in Fig. 3.19 as ζ > 1 becomes larger.

Example 3.11

Consider the following differential equation:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot y+c \dot y+d y=e u, y(0)=y_0,\dot y(0)=\dot y_0, \end{array} \end{aligned} $$
with c, d and e some real constants. The roots p 1 and p 2 of the characteristic polynomial N(s) = s 2 + cs + d are given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&amp;\displaystyle =&amp;\displaystyle -\frac{c}{2}+\frac{\sqrt{c^2-4d}}{2},\\ p_2&amp;\displaystyle =&amp;\displaystyle -\frac{c}{2}-\frac{\sqrt{c^2-4d}}{2}. \end{array} \end{aligned} $$
The following cases have to be considered:
  • If d < 0, then:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&amp;\displaystyle =&amp;\displaystyle -\frac{c}{2}+\frac{\sqrt{c^2+4\;abs(d)}}{2},\\ p_2&amp;\displaystyle =&amp;\displaystyle -\frac{c}{2}-\frac{\sqrt{c^2+4\;abs(d)}}{2}, \end{array} \end{aligned} $$
    and both roots are real and different, with one positive and the other negative, no matter what the value of c. It is shown, in Example 7.​5 of Chap. 7, that this case corresponds to a mass-spring-damper system with a negative spring stiffness constant and that this is obtained in practice when a simple pendulum works around its inverted configuration (also called unstable).
  • If c > 0 and d > 0, the case studied in Sect. 3.3 is retrieved when 1 > ζ > 0, and the case studied in the present section is retrieved when ζ > 1.

  • If c < 0 and d > 0, with abs(c) small and d large, the case studied in Sect. 3.3 is retrieved when − 1 < ζ < 0. The situation when ζ < −1 is also obtained in this case when abs(c) is large and d is small, but both roots are positive and different:
      $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&\displaystyle =&\displaystyle -\zeta \omega_n+\omega_n\sqrt{\zeta^2-1}>0,\\ p_2&\displaystyle =&\displaystyle -\zeta \omega_n-\omega_n\sqrt{\zeta^2-1}>0,\\ p_1&\displaystyle \neq&\displaystyle p_2. \end{array} \end{aligned} $$
  • The cases when ζ = −1 and ζ = 1 are studied in the next section.

3.4.2 Real and Repeated Roots

Suppose that N(s) has one root that is repeated k times and it is different from zero, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} N(s)&amp;\displaystyle =&amp;\displaystyle N_0(s)(s-p)^k, \end{array} \end{aligned} $$
with p ≠ 0 and N 0(s) a polynomial that contains the remaining n − k roots in N(s), and none of them is located at s = 0. According to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, the following must be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s){=}\frac{B(s)}{N(s)}\frac{A}{s}=Q(s){+}\frac{c_{k}}{(s{-}p)^k}{+} \frac{c_{k{-}1}}{(s{-}p)^{k{-}1}}{+}\cdots{+} \frac{c_{2}}{(s{-}p)^2}{+}\frac{c_{1}}{s{-}p}{+}\frac{f}{s},\\ {} \end{array} \end{aligned} $$
(3.81)
where c i, i = 1, …, k, and f represent real constants to be computed, whereas Q(s) represents the fractions corresponding to roots of N 0(s). One method of computing c k is by multiplying both sides of (3.81) by the factor  $$(s-p)^k$$ and evaluating at s = p to obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} c_k=\left.(s-p)^k\frac{B(s)A}{N(s)s}\right|{}_{s=p}. \end{array} \end{aligned} $$
On the other hand, the constants c i, i = 1, …, k − 1, can be computed by multiplying both sides of (3.81) by the factor  $$(s-p)^k$$ , differentiating k − i times with regard to s and evaluating at s = p to obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} c_i=\frac{1}{(k-i)!}\left.\frac{d^{k-i}}{ds^{k-i}}\left(\frac{B(s)A}{N(s)s}\right)\right|{}_{s=p}, i=1,\dots,k-1. \end{array} \end{aligned} $$
Finally, the constant f is computed by multiplying both sides of (3.81) by the factor s and evaluating at s = 0 to find:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} f=\left.\frac{B(s)A}{N(s)}\right|{}_{s=0}. \end{array} \end{aligned} $$
Using the tabulated inverse Laplace transform [4], Ch. 32:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{1}{(s-a)^j}\Bigg\}=\frac{t^{j-1}}{(j-1)!}\;\mathrm{e}^{at}, j=1,2,3,\dots, 0!=1, \end{array} \end{aligned} $$
the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&amp;\displaystyle =&amp;\displaystyle q(t)+c_{k}\frac{t^{k-1}}{(k-1)!}\;\mathrm{e}^{pt}+ c_{k-1}\frac{t^{k-2}}{(k-2)!}\;\mathrm{e}^{pt}+\cdots\\ &amp;\displaystyle +&amp;\displaystyle c_{2}t\;\mathrm{e}^{pt}+c_{1}\;\mathrm{e}^{pt}+ f, \end{array} \end{aligned} $$
where  $$q(t)=\mathcal {L}^{-1}\{Q(s)\}$$ . Notice that the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&amp;\displaystyle =&amp;\displaystyle y_n(t)+y_f(t),\\ y_n(t)&amp;\displaystyle =&amp;\displaystyle q(t)+c_{k}\frac{t^{k-1}}{(k-1)!}\;\mathrm{e}^{pt}+ c_{k-1}\frac{t^{k-2}}{(k-2)!}\;\mathrm{e}^{pt}+\cdots\\ &amp;\displaystyle &amp;\displaystyle + c_{2}t\;\mathrm{e}^{pt}+c_{1}\;\mathrm{e}^{pt},{}\\ y_f(t)&amp;\displaystyle =&amp;\displaystyle f. \end{array} \end{aligned} $$
(3.82)
It is remarked again that y n(t) only depends on the roots of the characteristic polynomial N(s) and y f(t) only depends on the fraction  $$\frac {1}{s}$$ , introduced by U(s). Consider the following possibilities:
  1. 1.
    If p < 0, it is useful to compute the following limit, where j is any positive integer number:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}t^j\;\mathrm{e}^{pt}=\lim_{t\to\infty}\frac{t^j}{\mathrm{e}^{-pt}}, \end{array} \end{aligned} $$
    which represents an indetermination of the form ∞/∞, because − pt → +∞ as t → ∞. Then, L'Hopital's rule [12], pp. 303, can be employed:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}t^j\;\mathrm{e}^{pt}&amp;\displaystyle =&amp;\displaystyle \lim_{t\to\infty}\frac{t^j}{\mathrm{e}^{-pt}}= \lim_{t\to\infty}\frac{\frac{dt^j}{dt}}{\frac{d\mathrm{e}^{-pt}}{dt}} =\lim_{t\to\infty}\frac{j\;t^{j-1}}{-p\;\mathrm{e}^{-pt}}. \end{array} \end{aligned} $$
    As an indetermination appears again, L’Hopital’s rule can be applied several times to obtain:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}t^j\;\mathrm{e}^{pt}&amp;\displaystyle =&amp;\displaystyle \lim_{t\to\infty}\frac{t^j}{\mathrm{e}^{-pt}}= \lim_{t\to\infty}\frac{\frac{dt^j}{dt}}{\frac{d\mathrm{e}^{-pt}}{dt}} =\lim_{t\to\infty}\frac{j\;t^{j-1}}{-p\;\mathrm{e}^{-pt}},\\ &amp;\displaystyle =&amp;\displaystyle \lim_{t\to\infty}\frac{j(j-1)\;t^{j-2}}{(-p)^2\;\mathrm{e}^{-pt}}=\cdots =\lim_{t\to\infty}\frac{j!}{(-p)^j\;\mathrm{e}^{-pt}}=0. \end{array} \end{aligned} $$
    Applying this result to the solution obtained above, it is concluded that:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\left(c_{k}\frac{t^{k-1}}{(k-1)!}\;\mathrm{e}^{pt}+ c_{k-1}\frac{t^{k-2}}{(k-2)!}\;\mathrm{e}^{pt}+\cdots + c_{2}t\;\mathrm{e}^{pt}+c_{1}\;\mathrm{e}^{pt}\right)=0, \end{array} \end{aligned} $$
    and y(t) → q(t) + y f(t) as t → ∞ for any p < 0, no matter how close to zero p is.
     
  2. 2.
    If p > 0, it is clear that:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\left(c_{k}\frac{t^{k-1}}{(k-1)!}\;\mathrm{e}^{pt}+ c_{k-1}\frac{t^{k-2}}{(k-2)!}\;\mathrm{e}^{pt}+\cdots + c_{2}t\;\mathrm{e}^{pt}+c_{1}\;\mathrm{e}^{pt}\right)\to \infty, \end{array} \end{aligned} $$
    as t → ∞ and, hence, y(t) → ∞.
     
  3. 3.
    Finally, consider the case when the characteristic polynomial N(s) has k real and repeated roots at p = 0, i.e., N(s) = N 0(s)s k, where N 0(s) is a polynomial containing the remaining n − k roots of N(s). Then, according to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, in this case the following must be written:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)&amp;\displaystyle =&amp;\displaystyle \frac{B(s)A}{N(s)s}=Q(s)+\frac{c_{k+1}}{s^{k+1}}+ \frac{c_{k}}{s^{k}}+\frac{c_{k-1}}{s^{k-1}}+\cdots+ \frac{c_{2}}{s^2}+\frac{c_{1}}{s}, \end{array} \end{aligned} $$
    where Q(s) contains fractions corresponding to roots of N 0(s). Using the tabulated inverse Laplace transform [4], Ch. 32:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{1}{s^j}\Bigg\}=\frac{t^{j-1}}{(j-1)!}, j=1,2,3,\dots,0!=1,{} \end{array} \end{aligned} $$
    (3.83)
    yields:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&amp;\displaystyle =&amp;\displaystyle q(t)+c_{k+1}\frac{t^{k}}{k!}\;+ c_{k}\frac{t^{k-1}}{(k-1)!}\;+\cdots+ c_{2}t\;+c_{1},{} \end{array} \end{aligned} $$
    (3.84)
    where  $$q(t)=\mathcal {L}^{-1}\{Q(s)\}$$ . In this case:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} y_n(t)&amp;\displaystyle =&amp;\displaystyle q(t)+ c_{k}\frac{t^{k-1}}{(k-1)!}\;+\cdots+ c_{2}t\;+c_{1},\\ y_f(t)&amp;\displaystyle =&amp;\displaystyle c_{k+1}\frac{t^{k}}{k!}, \end{array} \end{aligned} $$
    since the natural response only depends on the k real and repeated roots at s = 0 that N(s) has, which, according to (3.83), only introduce terms included in y n(t). It is clear that y n(t) → ∞ and y(t) → ∞ when t → ∞ for k ≥ 2, and that y n(t) is a constant if k = 1. On the other hand, the forced response y f(t) represents the integral of u(t) iterated k times.
     

Example 3.12

Consider the following second-order differential equation, also given in (3.53):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot y+2\zeta \omega_n \dot y+\omega_n^2 y=k\omega_n^2 u, y(0)=y_0,\dot y(0)=\dot y_0.{} \end{array} \end{aligned} $$
(3.85)
If ζ = 1, then both roots, p 1 and p 2, of the characteristic polynomial  $$N(s)=s^2+2\zeta \omega _n s+\omega _n^2$$ are real, repeated, and negative:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&amp;\displaystyle =&amp;\displaystyle p_2=-\zeta \omega_n. \end{array} \end{aligned} $$
The solution y(t) of the differential equation in (3.85) is depicted in Fig. 3.19, when ζ = 1, u = A and the initial conditions are zero. Notice that y(t) does not exhibit oscillations in this case because both roots have a zero imaginary part. This case, ζ = 1, is at the edge between an oscillatory response ζ < 1 and a response without oscillations ζ > 1. In fact, the case ζ = 1 represents the fastest response without oscillations because, when ζ > 1, the response is slower as ζ increases.
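A quick numerical comparison of the critically damped case ζ = 1 with an overdamped case ζ > 1 is sketched below; ω n, k, and A are illustrative values and NumPy/SciPy are assumed to be available.

# Sketch: time to reach 90 % of the final value kA for zeta = 1 and zeta = 2.
# wn, k and A are illustrative values.
import numpy as np
from scipy import signal

wn, k, A = 2.0, 1.0, 1.0
T = np.linspace(0.0, 8.0, 4000)
for zeta in (1.0, 2.0):
    G = signal.TransferFunction([k * wn**2], [1.0, 2.0*zeta*wn, wn**2])
    t, y = signal.step(G, T=T)
    t90 = t[np.argmax(A*y >= 0.9*k*A)]           # first time the response reaches 90 % of kA
    print(zeta, t90)                             # larger zeta gives a slower response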

Example 3.13

Consider again the mass-spring-damper system in Example 3.8. Recall that (3.74) describes the movement of the mass. Assume that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \zeta= \frac{b}{2\sqrt{mK}}=1. \end{array} \end{aligned} $$
Then, in this case, the characteristic polynomial  $$s^2+2\zeta \omega _n s+\omega _n^2 $$ only has a real and negative root:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p&amp;\displaystyle =&amp;\displaystyle -\zeta \omega_n, \end{array} \end{aligned} $$
which is repeated twice. The mass position x(t) evolves exactly as y(t) in Fig. 3.19. This class of response may be very useful if a fast mass movement is required, but without oscillations.

Example 3.14

Suppose that there is neither a spring nor a damper in Fig. 3.18, i.e., assuming that b = 0 and K = 0, the differential equation in (3.73) becomes:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} m\ddot x=f.{} \end{array} \end{aligned} $$
(3.86)
Notice that in this case the characteristic polynomial is s 2, which only has a real root at p = 0 repeated twice. Now suppose that all of the initial conditions are zero and that the mass receives a small disturbance force described by:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} f=\left\{\begin{array}{cc} \varepsilon_0, &amp; 0\leq t\leq t_1\\ 0, &amp; \mathrm{otherwise} \end{array} \right.,{} \end{array} \end{aligned} $$
(3.87)
where ε 0 > 0 and t 1 > 0 are small constant numbers. By direct integration of (3.86), it is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot x(t)=\frac{1}{m}\int_0^t f(\tau) d\tau+\dot x(0). \end{array} \end{aligned} $$
Again, integrating yields (assuming that  $$\dot x(0)=0$$ ):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} x(t)&amp;\displaystyle =&amp;\displaystyle \frac{1}{m}\int_0^t\left\{\int_0^r f(\tau) d\tau\right\} dr+ x(0),\\ x(t)&amp;\displaystyle =&amp;\displaystyle \frac{1}{m}\int_0^t\left\{\int_0^r f(\tau) d\tau\right\} dr, x(0)=0. \end{array} \end{aligned} $$
Replacing (3.87), the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} x(t)&amp;\displaystyle =&amp;\displaystyle \left\{\begin{array}{cc} \frac{1}{2m}\varepsilon_0 t^2, &amp;\displaystyle 0\leq t\leq t_1\\ \frac{1}{2m}\varepsilon_0 t_1^2+\frac{1}{m}\varepsilon_0 t_1(t-t_1), &amp;\displaystyle t&gt;t_1 \end{array} \right.. \end{array} \end{aligned} $$
Notice that x(t) → ∞ as t → ∞ (see Fig. 3.20) despite f = 0 for t > t 1, i.e., despite the forced response being zero for all t > t 1. This means that the natural response x n(t) → ∞ when t → ∞, which corroborates the results obtained in the present section for real roots at zero that are repeated at least twice. The reader may resort to everyday experience to verify that a slight hit (f in (3.87)) on the mass m is enough for the mass to begin moving and never stop (x(t) → ∞) if no friction exists between the mass and the floor.
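The closed-form expression above can be checked by integrating (3.86) numerically; m, ε 0, and t 1 below are illustrative values and NumPy/SciPy are assumed to be available.

# Sketch: the mass of (3.86) under the pulse force (3.87), integrated numerically.
# m, eps0 and t1 are illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

m, eps0, t1 = 1.0, 0.5, 1.0
def rhs(t, z):
    x, xd = z
    force = eps0 if t <= t1 else 0.0
    return [xd, force/m]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.001)
x_num = sol.y[0, -1]
x_ref = eps0*t1**2/(2.0*m) + eps0*t1*(20.0 - t1)/m   # closed-form value for t > t1
print(x_num, x_ref)                                  # the position keeps growing after t1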
Fig. 3.20 Mass position in Fig. 3.18 when it is disturbed and neither a spring nor a damper exists

Example 3.15

Consider the second-order differential equation given in (3.53):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot y+2\zeta \omega_n \dot y+\omega_n^2 y=k\omega_n^2 u, y(0)=y_0,\dot y(0)=\dot y_0. \end{array} \end{aligned} $$
If ζ = −1, then both roots, p 1 and p 2, of the characteristic polynomial  $$N(s)=s^2+2\zeta \omega _n s+\omega _n^2$$ are real, repeated, and positive:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} p_1&\displaystyle =&\displaystyle p_2=-\zeta \omega_n>0. \end{array} \end{aligned} $$

3.4.3 Complex Conjugate and Nonrepeated Roots

Consider the following second-degree polynomial:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2+bs+a=(s-p_1)(s-p_2), p_1=c_1+jd_1, p_2=c_2+jd_2, \end{array} \end{aligned} $$
where p 1 and p 2 are complex but not conjugate roots, i.e., c 1, c 2, d 1, d 2 are real numbers such that c 1 ≠ c 2 if d 1 = −d 2, or d 1 ≠ −d 2 if c 1 = c 2. We have that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} (s-p_1)(s-p_2)&\displaystyle =&\displaystyle s^2-(p_1+p_2)s+p_1p_2,\\ b=-(p_1+p_2)&\displaystyle =&\displaystyle -(c_1+c_2)-j(d_1+d_2), \\ a=p_1p_2&\displaystyle =&\displaystyle c_1c_2-d_1d_2+j(c_2d_1+c_1d_2). \end{array} \end{aligned} $$

Note that, because of conditions c 1 ≠ c 2 if d 1 = −d 2 or d 1 ≠ − d 2 if c 1 = c 2, one of the coefficients a or b is a complex number, i.e., the polynomial s 2 + bs + a has at least one complex coefficient. In the previous sections, we have seen that physical systems have characteristic polynomials whose coefficients are given only in terms of properties that can be quantified using real numbers, i.e., mass, inertia, friction coefficients, stiffness constant, electrical resistance, inductance, capacitance, etc. Thus, the characteristic polynomials of physical systems cannot have a complex root without including its corresponding complex conjugate. This means that in automatic control we are only interested in characteristic polynomials that have pairs of complex conjugate roots.

Suppose that N(s) has k pairs of complex conjugate and nonrepeated roots, i.e., according to Sect. 3.3, the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} N(s)&amp;\displaystyle =&amp;\displaystyle N_0(s)\prod_{i=1}^{i=k}[(s-\sigma_i)^2+\omega_{di}^2], \end{array} \end{aligned} $$
where σ i ≠ σ j or ω di ≠ ω dj if i ≠ j, whereas N 0(s) is a polynomial containing the remaining n − 2k roots of N(s), none of them located at s = 0. Then, according to the partial fraction expansion method [2], Ch. 4, [3], Ch. 7, in this case the following must be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{B(s)}{N(s)}\frac{A}{s}&amp;\displaystyle =&amp;\displaystyle Q(s)+\sum_{i=1}^{i=k}\frac{F_is+C_i}{[(s-\sigma_i)^2+\omega_{di}^2]}+\frac{f}{s}, \end{array} \end{aligned} $$
where Q(s) contains fractions corresponding to roots of N 0(s), f is a constant to be computed as in the previous sections, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} f=\left.\frac{B(s)A}{N(s)}\right|{}_{s=0}, \end{array} \end{aligned} $$
whereas F i and C i, i = 1, …, k, are constants to be computed as B and C were computed in Sect. 3.3. Hence, using (3.60) the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t) &amp;\displaystyle =&amp;\displaystyle y_n(t)+y_f(t),{}\\ y_n(t)&amp;\displaystyle =&amp;\displaystyle q(t)+\sum_{i=1}^{i=k}\beta_i\mathrm{e}^{-\zeta_i\omega_{ni} t}\sin\left(\omega_{di} t+\phi_i\right),\\ y_f(t)&amp;\displaystyle =&amp;\displaystyle f,\end{array} \end{aligned} $$
(3.88)
where β i, ω ni, ζ i and ϕ i, i = 1, …, k, are real constants (see Sect. 3.3), whereas  $$q(t)=\mathcal {L}^{-1}\{Q(s)\}$$ . The evolution of the solution in (3.88) satisfies one of the following cases.
  1. 1.

    If 0 < ζ i < 1 for all i = 1, …, k, i.e., if the real parts of all of the complex conjugate roots are negative, − ζ i ω ni < 0 for all i = 1, …, k, then y(t) oscillates such that  $$\lim _{t\to \infty }\sum _{i=1}^{i=k}\beta _i\mathrm {e}^{-\zeta _i\omega _{ni} t}\sin \left (\omega _{di} t+\phi _i\right )=0$$ and y(t) → q(t) + y f(t) as t → ∞.

     
  2. 2.

    If − 1 < ζ i < 0 for at least one i = 1, …, k, i.e., if the real part is positive, − ζ i ω ni > 0 for at least one of the complex conjugate roots i = 1, …, k, then y(t) oscillates such that y n(t) and y(t) grow without a limit as time increases.

     
  3. 3.

    If ζ i = 0 for all i = 1, …, k, i.e., if the real parts of all the complex conjugate and nonrepeated roots are zero, − ζ i ω ni = 0 for all i = 1, …, k, then  $$\sum _{i=1}^{i=k}\beta _i\mathrm {e}^{-\zeta _i\omega _{ni} t}\sin \left (\omega _{di} t+\phi _i\right )$$ neither vanishes nor grows without a limit as time increases. Thus, although y(t) may not grow without a limit (depending on the behavior of q(t)), it does not converge to y f(t). Notice that for this situation to occur, it is enough that ζ i = 0 for at least one i and 0 < ζ i < 1 for all of the remaining i.

     

It is important to point out that, in the cases when ζ ≥ 1 or ζ ≤−1, real roots are obtained and one of the cases in Sects. 3.4.1 or 3.4.2 is retrieved.

Example 3.16

The mathematical model of the mechanical system composed of two bodies and three springs depicted in Fig. 3.21 has been obtained in Example 2.​4 in Chap. 2. From this result, the case when the spring at the left is not present can be studied. This is achieved by assuming that K 1 = 0, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle \ddot x_1 +\frac{b}{m_1}(\dot x_{1}-\dot x_{2}) +\frac{K_2}{m_1}(x_1-x_2) =\frac{1}{m_1}F(t),\\ &amp;\displaystyle &amp;\displaystyle \ddot x_2-\frac{b}{m_2}(\dot x_{1}-\dot x_{2})+\frac{K_3}{m_2}x_{2} -\frac{K_2}{m_2}(x_1-x_2) =0. \end{array} \end{aligned} $$
Applying the Laplace transform (3.2) to each one of these differential equations and assuming that all of the initial conditions are zero, the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{1}{m_1}F(s)+\left(\frac{b}{m_1}s+\frac{K_2}{m_1}\right)X_2(s)}{s^2+\frac{b}{m_1}s+\frac{K_2}{m_1}},\\ X_2(s)&amp;\displaystyle =&amp;\displaystyle \frac{\left(\frac{b}{m_2}s+\frac{K_2}{m_2}\right)X_1(s)}{s^2+\frac{b}{m_2}s+\frac{K_2+K_3}{m_2}}. \end{array} \end{aligned} $$
Replacing the second of these equations in the first one, and rearranging, it is found that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{1}{m_1}\left(s^2+\frac{b}{m_2}s+\frac{K_2+K_3}{m_2}\right)F(s)}{\left(s^2+\frac{b}{m_1}s+\frac{K_2}{m_1}\right) \left(s^2+\frac{b}{m_2}s+\frac{K_2+K_3}{m_2}\right)-\left(\frac{b}{m_1}s+\frac{K_2}{m_1}\right)\left(\frac{b}{m_2}s+\frac{K_2}{m_2}\right)}. \end{array} \end{aligned} $$
To simplify the required algebra, assume that b = 0, then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{1}{m_1}\left(s^2+\frac{K_2+K_3}{m_2}\right)}{\left(s^2+\frac{K_2}{m_1}\right) \left(s^2+\frac{K_2+K_3}{m_2}\right)-\frac{K_2^2}{m_1m_2}}F(s), \end{array} \end{aligned} $$
or:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{1}{m_1}\left(s^2+\frac{K_2+K_3}{m_2}\right)}{s^4+\left(\frac{K_2}{m_1}+\frac{K_2+K_3}{m_2}\right)s^2+\frac{K_2K_3}{m_1m_2}}F(s).{} \end{array} \end{aligned} $$
(3.89)
Performing the indicated operation, it is found that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^4+\left(\frac{K_2}{m_1}+\frac{K_2+K_3}{m_2}\right)s^2+\frac{K_2K_3}{m_1m_2}=(s^2+c)(s^2+e), \end{array} \end{aligned} $$
for some numbers c and e such that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} c+e=\frac{K_2}{m_1}+\frac{K_2+K_3}{m_2}, ce=\frac{K_2K_3}{m_1m_2}, \end{array} \end{aligned} $$
i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} c&\displaystyle =&\displaystyle \frac{\frac{K_2}{m_1}+\frac{K_2+K_3}{m_2}+\sqrt{ \left(\frac{K_2}{m_1}-\frac{K_2+K_3}{m_2}\right)^2+\frac{4K_2^2}{m_1m_2} }}{2}>0,\\ e&\displaystyle =&\displaystyle \frac{\frac{K_2}{m_1}+\frac{K_2+K_3}{m_2}-\sqrt{ \left(\frac{K_2}{m_1}-\frac{K_2+K_3}{m_2}\right)^2+\frac{4K_2^2}{m_1m_2} }}{2}>0. \end{array} \end{aligned} $$
Then, the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{1}{m_1}\left(s^2+\frac{K_2+K_3}{m_2}\right)}{(s^2+c)(s^2+e)}F(s).{} \end{array} \end{aligned} $$
(3.90)
As e and c are positive and different, it is ensured that the characteristic polynomial in (3.90) has two pairs of complex conjugate and different roots (in fact, the roots are imaginary conjugate and different). Hence, mass m 1, whose position is represented by x 1(t), moves by oscillating and it never stops. However, the amplitude of this oscillatory movement does not grow without a limit. These oscillations are performed at two angular frequencies defined by  $$\omega _{d1}=\omega _{n1}=\sqrt {c}$$ and  $$\omega _{d2}=\omega _{n2}=\sqrt {e}$$ . Recall that the damping of these oscillations is zero.
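The two frequencies √c and √e can be checked numerically against the roots of the characteristic polynomial in (3.89); m 1, m 2, K 2, and K 3 below are illustrative values and NumPy is assumed to be available.

# Sketch: oscillation frequencies of Example 3.16 versus the roots of s^4 + S s^2 + K2*K3/(m1*m2).
# m1, m2, K2 and K3 are illustrative values.
import numpy as np

m1, m2, K2, K3 = 1.0, 2.0, 3.0, 4.0
S = K2/m1 + (K2 + K3)/m2
D = np.sqrt((K2/m1 - (K2 + K3)/m2)**2 + 4.0*K2**2/(m1*m2))
c, e = (S + D)/2.0, (S - D)/2.0
roots = np.roots([1.0, 0.0, S, 0.0, K2*K3/(m1*m2)])   # characteristic polynomial of (3.89)
print(np.sqrt(c), np.sqrt(e))                          # wd1 and wd2
print(np.sort(np.abs(roots.imag)))                     # |imaginary parts| of the four roots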
Fig. 3.21 A system with two bodies and three springs

3.4.4 Complex Conjugate and Repeated Roots

Assume that the characteristic polynomial N(s) has a pair of complex conjugate roots that are repeated k > 1 times, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} N(s)&amp;\displaystyle =&amp;\displaystyle N_0(s)(s^2+2\zeta\omega_n s+\omega_n^2)^{k}=N_0(s)[(s-\sigma)^2+\omega_d^2]^{k}, \end{array} \end{aligned} $$
where N 0(s) is a polynomial that includes the remaining n − 2k roots of N(s), without any of them located at s = 0. Then, the partial fraction expansion method establishes that in this case [2], Ch. 4:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{B(s)}{N(s)}\frac{A}{s}&amp;\displaystyle =&amp;\displaystyle Q(s)+\frac{F_1s+C_1}{[(s-\sigma)^2+\omega_d^2]^{k}}+ \frac{F_2s+C_2}{[(s-\sigma)^2+\omega_d^2]^{k-1}}+\cdots\\ &amp;\displaystyle &amp;\displaystyle +\frac{F_{k-1}s+C_{k-1}}{[(s-\sigma)^2+\omega_d^2]^{2}} +\frac{F_{k}s+C_{k}}{(s-\sigma)^2+\omega_d^2}+\frac{f}{s}, \end{array} \end{aligned} $$
where Q(s) contains fractions corresponding to the roots of N 0(s). In the case of real roots, it was found that nonrepeated roots introduce functions such as  $$\mathrm {e}^{pt}$$ , where p is the corresponding root, and that a root repeated j times introduces functions of the form  $$t^{j-1}\mathrm {e}^{pt}$$ . In the case of complex conjugate and repeated roots, a similar situation occurs. No formal proof is presented here because of the cumbersome notation; only intuitive ideas are given to explain the resulting functions.
Using the inverse Laplace transform in (3.60), it was found that a complex conjugate nonrepeated root introduces the time function:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{Bs+C}{(s-\sigma)^2+\omega_d^2}\Bigg\}&amp;\displaystyle =&amp;\displaystyle \beta \mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi), \end{array} \end{aligned} $$
for some constants β and ϕ. Then, in the case of complex conjugate and repeated roots, y(t) is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&amp;\displaystyle =&amp;\displaystyle y_n(t)+y_f(t),{}\\ y_n(t)&amp;\displaystyle =&amp;\displaystyle q(t)+\beta_{k} t^{k-1}\mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_{k})+\beta_{k-1} t^{k-2}\mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_{k-1})\\ &amp;\displaystyle &amp;\displaystyle +\cdots+\beta_2 t\mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_2)+\beta_1 \mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_1),\\ y_f(t)&amp;\displaystyle =&amp;\displaystyle f, \end{array} \end{aligned} $$
(3.91)
where  $$q(t)=\mathcal {L}^{-1}\{Q(s)\}$$ . The solution in (3.91) has one of the following behaviors.
  1. 1.

    If the complex conjugate root has a positive real part σ = −ζω n > 0, i.e., if − 1 < ζ < 0, it is easy to see that y n(t) → ∞ and y(t) → ∞ as t → ∞.

     
  2. 2.
    If the complex conjugate root has a negative real part σ = −ζω n < 0, i.e., if 0 < ζ < 1, it is possible to proceed as in Sect. 3.4.2 to compute the following limit:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}t^j\;\mathrm{e}^{\sigma t}\sin{}(\omega_d\;t+\phi)=0, \end{array} \end{aligned} $$
    for any positive integer j and any real negative σ. Then, for complex conjugate and repeated roots with a negative real part:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle \lim_{t\to\infty}\left(\beta_{k} t^{k-1}\mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_{k})+\beta_{k-1} t^{k-2}\mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_{k-1})\right.\\ &amp;\displaystyle &amp;\displaystyle \left.+\cdots+\beta_2 t\mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_2)+\beta_1 \mathrm{e}^{\sigma t}\sin{}(\omega_d t+\phi_1)\right)= 0, \end{array} \end{aligned} $$
    (3.92)
    and y(t) → q(t) + y f(t) as t → ∞.
     
  3. 3.
    If the root has a zero real part, i.e., σ = 0, then:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&amp;\displaystyle =&amp;\displaystyle y_n(t)+y_f(t),\\ y_n(t)&amp;\displaystyle =&amp;\displaystyle q(t)+\beta_{k} t^{k-1}\sin{}(\omega_d t+\phi_{k})+\beta_{k-1} t^{k-2}\sin{}(\omega_d t+\phi_{k-1})+\cdots\\ &amp;\displaystyle &amp;\displaystyle +\beta_2 t\sin{}(\omega_d t+\phi_2)+\beta_1 \sin{}(\omega_d t+\phi_1).\\ y_f(t)&amp;\displaystyle =&amp;\displaystyle f. \end{array} \end{aligned} $$
    Notice that, in this case:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle \beta_{k} t^{k-1}\sin{}(\omega_d t+\phi_{k})+\beta_{k-1} t^{k-2}\sin{}(\omega_d t+\phi_{k-1})+\cdots\\ &amp;\displaystyle &amp;\displaystyle +\beta_2 t\sin{}(\omega_d t+\phi_2)+\beta_1 \sin{}(\omega_d t+\phi_1)\to\infty, \end{array} \end{aligned} $$
    (3.93)
    as time increases in the case when the root is repeated at least twice, i.e., if k > 1. But if this root is not repeated, i.e., if k = 1, then:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle \beta_{k} t^{k-1}\sin{}(\omega_d t+\phi_{k})+\beta_{k-1} t^{k-2}\sin{}(\omega_d t+\phi_{k-1})+\cdots\\ &amp;\displaystyle &amp;\displaystyle +\beta_2 t\sin{}(\omega_d t+\phi_2)+\beta_1 \sin{}(\omega_d t+\phi_1)=\beta_1 \sin{}(\omega_d t+\phi_1), \end{array} \end{aligned} $$
    (3.94)
    is an oscillatory function whose amplitude neither grows nor vanishes, i.e., although y(t) does not grow without a limit (depending on q(t)) it will never reach y f(t) in a steady state.
     

3.4.5 Conclusions

In the previous sections it has been assumed that all of the initial conditions are zero, i.e., that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} -\frac{P(s)}{N(s)}=0.\end{array} \end{aligned} $$
If some of the initial conditions are not zero, then the previous expression is not valid and the solutions y(t) found in the previous sections have to be completed by adding the function:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} w(t)&amp;\displaystyle =&amp;\displaystyle \mathcal{L}^{-1}\Bigg\{-\frac{P(s)}{N(s)}\Bigg\}.\end{array} \end{aligned} $$
As the denominator of:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} W(s)&amp;\displaystyle =&amp;\displaystyle \mathcal{L}\{w(t)\}=-\frac{P(s)}{N(s)}, \end{array} \end{aligned} $$
is still the characteristic polynomial N(s), then, according to the partial fraction expansion method, w(t) contains the same functions of time contained in the natural response y n(t) found in every case studied in the previous sections. This means that the complete natural response in every one of the cases studied in the previous sections must also include w(t).
It is said that the differential equation in (3.76) or, equivalently, the transfer function  $$G(s)=\frac {B(s)}{N(s)}$$ , is:
  • Stable if the natural response tends toward zero as time increases.

  • Unstable if the natural response grows without a limit as time increases.

  • Marginally stable if the natural response neither grows without limit nor tends toward zero as time increases.

Assume that all of the cases studied in the previous sections, i.e., regarding the roots of N(s), appear simultaneously. Recall that the roots of N(s) represent the poles of G(s). The following can be concluded:

Conditions for the Stability of a Transfer Function

  1. 1.

    If all poles of G(s) have negative real parts then G(s) is stable .

     
  2. 2.

    If all poles of G(s) have negative real parts, except for some nonrepeated poles having zero real parts then the transfer function G(s) is marginally stable .

     
  3. 3.

    If at least one pole of G(s) has a positive real part, then the transfer function G(s) is unstable .

     
  4. 4.

    If there is at least one pole of G(s) with a zero real part which is repeated at least twice, then G(s) is unstable.

     

On the other hand, it has also been seen that y f(t) depends on the input u(t) and that, in fact, both of them are represented by "similar" time functions if the characteristic polynomial has no root at s = 0, i.e., if G(s) has no pole at s = 0. This means that, in a control system, the input variable u(t) can be used to specify the value that the output y(t) is desired to reach. This is accomplished as follows. If the transfer function is stable, i.e., if y n(t) → 0, then y(t) → y f(t); hence, the only thing that remains is to ensure that y f(t) = u. The conditions to achieve this when u = A is a constant are established in the following.

Conditions to Ensure  $$\lim _{t\to \infty }y(t)=A$$

The complete response y(t) reaches the desired output u = A as time increases, i.e., y f(t) = A and y n(t) → 0, if G(s) is stable and the coefficients of the independent terms of the polynomials B(s) and N(s) are equal, i.e., if B(0) = N(0). This can be verified using the final value theorem (3.4):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}y(t)=\lim_{s\to 0}sY(s)=\lim_{s\to 0}s\frac{B(s)}{N(s)}\frac{A}{s}=\frac{B(0)}{N(0)}A=A.{} \end{array} \end{aligned} $$
(3.95)

Under these conditions, it is said that G(s) is a transfer function with unitary gain in a steady state. When u is not a constant and it is desired that y(t) → u(t) as t → ∞, G(s) is also required to be stable, i.e., y n(t) → 0, but some additional conditions must be satisfied. The determination of these conditions and how to satisfy them is the subject of the study presented in the subsequent chapters of this book (see Sect. 4.4).
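The stability conditions and the steady-state value in (3.95) can both be checked programmatically from the coefficients of B(s) and N(s). The sketch below is only an illustration with assumed coefficients (the marginal case with repeated imaginary-axis poles is not fully distinguished here), and NumPy/SciPy are assumed to be available.

# Sketch: classify G(s) = B(s)/N(s) from its poles and check (3.95) for a constant input u = A.
# The coefficients and A are illustrative values.
import numpy as np
from scipy import signal

A = 1.0
B = [2.0]                        # B(s) = 2, so B(0) = N(0) = 2: unitary gain in a steady state
N = [1.0, 3.0, 2.0]              # N(s) = (s + 1)(s + 2)
poles = np.roots(N)

if np.all(poles.real < 0):
    verdict = "stable"
elif np.any(poles.real > 0):
    verdict = "unstable"
else:
    verdict = "marginally stable (unstable if a pole on the imaginary axis is repeated)"
print(verdict)

t, y = signal.step(signal.TransferFunction(B, N), T=np.linspace(0.0, 20.0, 2000))
print(A*y[-1], np.polyval(B, 0.0)/np.polyval(N, 0.0)*A)   # both close to A, as in (3.95)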

Example 3.17

Consider the situation presented in Example 3.14. From this example, an experiment can be designed that allows us to know whether a system, a differential equation or a transfer function is stable or unstable:

If the system is originally at “rest,” apply a pulse disturbance to it and observe its evolution:
  • If the system is stable then it “moves” and after a while it comes back to the configuration where it was originally at “rest.”

  • If the system is unstable, then it “moves” more and more such that it goes far away from the configuration where it was originally at “rest.”

Example 3.18

Consider the mass-spring-damper system studied in Example 3.8, but now assume that the spring is not present. Replacing K = 0 in (3.73) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} m\ddot x+b\dot x&amp;\displaystyle =&amp;\displaystyle f. \end{array} \end{aligned} $$
Suppose that a force is applied according to:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} f=k_p(x_d-x),{} \end{array} \end{aligned} $$
(3.96)
where k p is a positive constant, x d is a constant standing for the desired position, and x is the mass position. Combining these expressions yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} m\ddot x+b\dot x&amp;\displaystyle =&amp;\displaystyle k_p(x_d-x), \end{array} \end{aligned} $$
and rearranging:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot x+\frac{b}{m}\dot x+\frac{k_p}{m}x&amp;\displaystyle =&amp;\displaystyle \frac{k_p}{m}x_d. \end{array} \end{aligned} $$
Using the Laplace transform and assuming that all of the initial conditions are zero, the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} G(s)=\frac{X(s)}{X_d(s)}=\frac{\frac{k_p}{m}}{s^2+\frac{b}{m}s+\frac{k_p}{m}}. \end{array} \end{aligned} $$
As an exercise, the reader is to find the roots of the characteristic polynomial  $$s^2+\frac {b}{m}s+\frac {k_p}{m}$$ to corroborate that both roots have a negative real part if  $$\frac {b}{m}>0$$ and  $$\frac {k_p}{m}>0$$ . Then, using (3.95), it is found that the position reached in a steady state:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}x(t)=\lim_{s\to 0}sX(s)=\lim_{s\to 0}s\frac{\frac{k_p}{m}}{s^2+\frac{b}{m}s+\frac{k_p}{m}}\frac{x_d}{s}=\frac{\frac{k_p}{m}}{\frac{k_p}{m}}x_d=x_d, \end{array} \end{aligned} $$
equals the desired position x d. The reason for this result can be explained using, again, everyday experience: when x = x d, the force produced according to (3.96) is f = 0; hence, the mass can stop and remain at that position. Moreover, if x < x d, then f > 0 and the mass increases its position x approaching x d (see Fig. 3.18 to recall that x increases in the same direction where f is positive). If x > x d, then f < 0 and the mass position x decreases, approaching x d again.
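A minimal simulation of this example is sketched below; m, b, k p, and x d are illustrative values and NumPy/SciPy are assumed to be available.

# Sketch: the mass-damper of Example 3.18 under the force f = kp*(xd - x).
# m, b, kp and xd are illustrative values.
import numpy as np
from scipy import signal

m, b, kp, xd = 1.0, 1.0, 4.0, 0.5
G = signal.TransferFunction([kp/m], [1.0, b/m, kp/m])   # X(s)/Xd(s)
t, x = signal.step(G, T=np.linspace(0.0, 20.0, 2000))
x = xd * x
print(x[-1], xd)                                        # the position settles at the desired xd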

As an exercise, the reader is to verify that, in the case where a spring is present, i.e., when K > 0, then  $$\lim_{t\to\infty} x(t)\neq x_d$$ if x d ≠ 0. Notice, however, that again this can be explained using everyday experience:

If x = x d then, according to (3.96), the external force applied to the mass is zero, f = 0, and the friction force is also zero,  $$-b\dot x=0$$ , if it is assumed that the mass is at rest. However, if x = x d ≠ 0 then the force of the spring on the mass, − Kx, is different from zero; hence, the mass will abandon the position x = x d ≠ 0. The mass will reach a position x such that the force of the spring and the force f in (3.96) exactly cancel each other out, i.e., where k p(x d − x) = Kx.

3.5 Poles and Zeros in Higher-Order Systems

Systems of orders greater than two are known as higher-order systems. In contrast to first- and second-order systems, in higher-order systems it is not possible to perform a detailed graphical study of the solution y(t). The main reason for this is the complexity of the expressions arising when the transfer function has three or more poles. Hence, it is important to approximate a higher-order system using a system of smaller order. There are two ways of accomplishing this: i) the approximate cancellation of some poles with some zeros of the corresponding transfer function, and ii) neglecting the effect of “fast” poles. Some examples are presented in the following.

3.5.1 Approximate Pole-Zero Cancellation and Reduced Order Models

Consider the following second-order system:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k(s-d)}{(s-p_1)(s-p_2)}U(s).{} \end{array} \end{aligned} $$
(3.97)
Suppose that  $$U(s)=\frac{A}{s}$$ , p 1 ≠ p 2, p 1 < 0, p 2 < 0, d < 0, and p 1 p 2 ≈−kd, so that the transfer function in (3.97) has approximately unitary gain in a steady state. Using partial fraction expansion:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k(s-d)}{(s-p_1)(s-p_2)}\frac{A}{s}=\frac{B}{s-p_1}+\frac{C}{s-p_2}+\frac{D}{s}.{} \end{array} \end{aligned} $$
(3.98)
Multiplying both sides by the factor (s − p 1) and evaluating at s = p 1 yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B=\left.\frac{k(s-d)A}{(s-p_2)s}\right|{}_{s=p_1}=\frac{k(p_1-d)A}{(p_1-p_2)p_1}. \end{array} \end{aligned} $$
Multiplying both sides of (3.98) by the factor (s − p 2) and evaluating at s = p 2 yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C=\left.\frac{k(s-d)A}{(s-p_1)s}\right|{}_{s=p_2}=\frac{k(p_2-d)A}{(p_2-p_1)p_2}. \end{array} \end{aligned} $$
Multiplying both sides of (3.98) by the factor s and evaluating at s = 0 yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} D=\left.\frac{k(s-d)A}{(s-p_1)(s-p_2)}\right|{}_{s=0}=-\frac{kdA}{p_1p_2}. \end{array} \end{aligned} $$
If p 1 ≈ d < 0 then, according to p 1 p 2 ≈−kd, we have k ≈−p 2 and, hence:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B&amp;\displaystyle \approx&amp;\displaystyle 0,\\ C&amp;\displaystyle \approx&amp;\displaystyle \frac{k(p_2-d)A}{(p_2-d)p_2}=\frac{kA}{p_2},\\ D&amp;\displaystyle \approx&amp;\displaystyle -\frac{kA}{p_2}, \end{array} \end{aligned} $$
to finally obtain:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)\approx \frac{kA}{p_2}\mathrm{e}^{p_2t}-\frac{kA}{p_2}. \end{array} \end{aligned} $$
The reader can verify that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)= \frac{kA}{p_2}\mathrm{e}^{p_2t}-\frac{kA}{p_2}, \end{array} \end{aligned} $$
is the solution of:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s-p_2}U(s),{} \end{array} \end{aligned} $$
(3.99)
with  $$U(s)=\frac{A}{s}$$ and k = −p 2. Thus, the following is concluded. If one pole and one zero of a transfer function are very close to each other, then they approximately cancel each other out, yielding a reduced-order transfer function. This means that (3.99) can be used instead of (3.97) to obtain very similar results. The advantages of using the model in (3.99) are: i) it is a model of smaller order than (3.97), and ii) it has no zero. The advantage of using a reduced-order model that approximately describes a higher-order system is that the response of the reduced-order system can often be handled as that of a first- or a second-order system. On the other hand, a zero in a transfer function modifies the time response in a manner that is not easy to quantify; hence, a transfer function that has no zeros is preferable.

It is important to say that the simplification obtained by the cancellation of one pole and one zero is useful only if the pole and the zero have negative real parts. This is because one pole with a positive real part, which is not exactly cancelled because of parameter uncertainty, has a dangerous effect that becomes larger as time increases.
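The approximation can be visualized with a short numerical comparison (a sketch with hypothetical values of k, d, p 1, and p 2 chosen so that p 1 ≈ d and p 1 p 2 ≈−kd; tf and step require the Control System Toolbox):

k=5.25; d=-1; p1=-1.05; p2=-5;                 % hypothetical values: p1 close to d, p1*p2 close to -k*d
Gfull=tf(k*[1 -d],conv([1 -p1],[1 -p2]));      % k(s-d)/((s-p1)(s-p2)), as in (3.97)
Gred=tf(k,[1 -p2]);                            % k/(s-p2), reduced-order model as in (3.99)
step(Gfull,'k-',Gred,'k--',5)                  % both step responses are nearly the same
legend('full model (3.97)','reduced model (3.99)')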

3.5.2 Dominant Poles and Reduced Order Models

Consider the following transfer function:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s+\frac{1}{RC}}\frac{d}{s+a}U(s),{} \end{array} \end{aligned} $$
(3.100)
where  $$U(s)=\frac{A}{s}$$ . Using partial fraction expansion, the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s+\frac{1}{RC}}\frac{d}{s+a}\frac{A}{s}= \frac{B}{s+\frac{1}{RC}}+\frac{C}{s+a}+\frac{D}{s}.{} \end{array} \end{aligned} $$
(3.101)
Multiplying both sides by the factor  $$s+\frac {1}{RC}$$ and evaluating at  $$s=-\frac {1}{RC}$$ :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B=\left.k\frac{d}{s+a}\frac{A}{s}\right|{}_{s=-\frac{1}{RC}}=k\frac{d}{a-\frac{1}{RC}}\frac{A}{\left(-\frac{1}{RC}\right)}. \end{array} \end{aligned} $$
Multiplying both sides of (3.101) by the factor s + a and evaluating at s = −a:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C=\left.k\frac{d}{s+\frac{1}{RC}}\frac{A}{s}\right|{}_{s=-a}=k\frac{d}{-a+\frac{1}{RC}}\frac{A}{(-a)}. \end{array} \end{aligned} $$
Multiplying both sides of (3.101) by the factor s and evaluating at s = 0:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} D=\left.k\frac{d}{s+\frac{1}{RC}}\frac{A}{(s+a)}\right|{}_{s=0}=k\frac{d}{\frac{1}{RC}}\frac{A}{a}. \end{array} \end{aligned} $$
If  $$a\gg \frac {1}{RC}>0$$ , then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B&amp;\displaystyle \approx&amp;\displaystyle -k\frac{d}{a}\frac{A}{\frac{1}{RC}}, \\ C&amp;\displaystyle \approx&amp;\displaystyle k\frac{dA}{a^2}\ll D. \end{array} \end{aligned} $$
Hence, using (3.101) and neglecting C, the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)\approx -k\frac{d}{a}\frac{A}{\frac{1}{RC}}\mathrm{e}^{-\frac{1}{RC}t}+k\frac{d}{\frac{1}{RC}}\frac{A}{a}. \end{array} \end{aligned} $$
The reader can verify that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)= -k\frac{d}{a}\frac{A}{\frac{1}{RC}}\mathrm{e}^{-\frac{1}{RC}t}+k\frac{d}{\frac{1}{RC}}\frac{A}{a}, \end{array} \end{aligned} $$
is the solution of:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{kd}{a(s+\frac{1}{RC})}U(s),{} \end{array} \end{aligned} $$
(3.102)
with  $$U(s)=\frac{A}{s}$$ . Thus, (3.102) can be used instead of (3.100). The condition  $$a\gg \frac {1}{RC}$$ is interpreted as “the pole at s = −a is very fast compared with the pole at  $$s=-\frac {1}{RC}$$ ”. The pole at  $$s=-\frac {1}{RC}$$ is known as the dominant pole because its effect is the most important one in the system response. The following is concluded. If one pole is much faster than the others, then a reduced-order transfer function can be obtained by neglecting the fastest pole and keeping only the dominant (the slowest) poles. An accepted criterion is to neglect the fast poles whose real parts are at least five times larger in magnitude than the real parts of the dominant poles (those to be kept). Notice, however, that it is not just a matter of simply eliminating the factor s + a from the denominator of the transfer function: the constant a still appears in the denominator of (3.102) because a is necessary to preserve the steady-state gain of the transfer function in (3.100).
A simple way of obtaining (3.102) from (3.100) is the following:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{k}{s+\frac{1}{RC}}\frac{d}{s+a}U(s)=\frac{k}{s+\frac{1}{RC}}\frac{d}{a(\frac{1}{a}s+1)}U(s).{} \end{array} \end{aligned} $$
(3.103)
If a is very large it can be assumed that  $$\frac {1}{a}s\ll 1$$ ; hence:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)\approx\frac{kd}{a(s+\frac{1}{RC})}U(s), \end{array} \end{aligned} $$
which is the expression in (3.102).

Finally, it is important to say that order reduction, as presented above, is only valid if the neglected pole has a negative real part. If the neglected pole has a positive real part, then the contribution of this pole will grow to infinity as time increases, no matter how small C is. Thus, the contribution of such a pole cannot be neglected.

Example 3.19

According to Exercise 9 and Example 2.​20 in Chap. 2, the mathematical model of a permanent magnet brushed DC motor is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} L_{a}\frac{di_{a}}{dt} &amp;\displaystyle =&amp;\displaystyle \upsilon-R_{a}i_{a}-k_{e}\omega,\\ J\frac{d\omega }{dt} &amp;\displaystyle =&amp;\displaystyle k_{m}i_{a}-B\omega, {} \end{array} \end{aligned} $$
(3.104)
where ω is the motor velocity (also see (10.​9) and (10.​10) in Chap. 10). Suppose that this motor drives a hydraulic mechanism which, by centrifugal effect, produces a water flow, q i, proportional to the motor velocity, i.e., q i = γω, where γ is a positive constant. Finally, this water flow enters the tank in Example 3.2 in the present chapter, whose mathematical model is given in (3.28), rewritten here for ease of reference:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{dh}{d t}+a h=k q_i\\ a=\frac{1}{RC}, k=\frac{1}{C}. \end{array} \end{aligned} $$
Using the Laplace transform (3.2) and assuming that all the initial conditions are zero, it is not difficult to verify that the corresponding equations become:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} I(s)&amp;\displaystyle =&amp;\displaystyle \frac{1/L_a}{s+\frac{R_a}{L_a}}(V(s)-k_e\omega(s)),{} \end{array} \end{aligned} $$
(3.105)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(s)&amp;\displaystyle =&amp;\displaystyle \frac{k_m/J}{s+\frac{B}{J}}I(s), {} \end{array} \end{aligned} $$
(3.106)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} H(s)&amp;\displaystyle =&amp;\displaystyle \frac{1/C}{s+\frac{1}{RC}}Q_i(s).{} \end{array} \end{aligned} $$
(3.107)
Combining (3.105) and (3.106) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(s)&amp;\displaystyle =&amp;\displaystyle \frac{k_m/J}{s+\frac{B}{J}}\frac{1/L_a}{s+\frac{R_a}{L_a}}(V(s)-k_e\omega(s)). \end{array} \end{aligned} $$
According to (3.103), the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(s)&amp;\displaystyle =&amp;\displaystyle \frac{k_m/J}{\frac{B}{J}\left(\frac{J}{B}s+1\right)}\frac{1/L_a}{\frac{R_a}{L_a}\left(\frac{L_a}{R_a}s+1\right)}(V(s)-k_e\omega(s)). \end{array} \end{aligned} $$
In small motors, the inductance L a and the viscous friction coefficient B are small; hence,  $$\frac {L_a}{R_a} \ll \frac {J}{B}$$ . Thus, it can be assumed that  $$\frac {L_a}{R_a}s\ll 1$$ and the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(s)&amp;\displaystyle =&amp;\displaystyle \frac{1}{s+\frac{B}{J}}\frac{k_m}{JR_a}(V(s)-k_e\omega(s)). \end{array} \end{aligned} $$
Rearranging terms, this expression can be rewritten as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(s)&\displaystyle =&\displaystyle \frac{\frac{k_m}{JR_a}}{s+\left(\frac{B}{J}+\frac{k_mk_e}{JR_a}\right)}V(s). \end{array} \end{aligned} $$
Replacing this and Q i(s) = γω(s) in (3.107) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} H(s)&\displaystyle =&\displaystyle \frac{\frac{1}{C}}{s+\frac{1}{RC}}\gamma \frac{\frac{k_m}{JR_a}}{s+\left(\frac{B}{J}+\frac{k_mk_e}{JR_a}\right)}V(s). \end{array} \end{aligned} $$
Proceeding again as in (3.103), the following can be written:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} H(s)&\displaystyle =&\displaystyle \frac{\frac{\gamma}{C}}{\frac{1}{RC}\left(RCs+1\right)} \frac{\frac{k_m}{JR_a}}{\left(\frac{B}{J}+\frac{k_mk_e}{JR_a}\right)\left(\frac{1}{\frac{B}{J}+\frac{k_mk_e}{JR_a}}s+1\right)}V(s). \end{array} \end{aligned} $$
If the tank cross-section is large enough, the motor will reach its nominal velocity a long time before the water level in the tank increases significantly. This can be stated by saying that “the tank time constant is very large compared with the motor time constant,” i.e.,  $$RC\gg \frac {1}{\frac {B}{J}+\frac {k_mk_e}{JR_a}}$$ . Then, it is possible to say that  $$\frac {1}{\frac {B}{J}+\frac {k_mk_e}{JR_a}}s\ll 1$$ ; hence, the following approximation is valid:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} H(s)&\displaystyle =&\displaystyle \frac{\frac{\gamma}{C}}{\frac{1}{RC}\left(RCs+1\right)} \frac{\frac{k_m}{JR_a}}{\left(\frac{B}{J}+\frac{k_mk_e}{JR_a}\right)}V(s), \end{array} \end{aligned} $$
or:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} H(s)&\displaystyle =&\displaystyle \frac{\frac{1}{C}}{s+\frac{1}{RC}} \frac{\frac{k_m}{JR_a}\gamma}{\left(\frac{B}{J}+\frac{k_mk_e}{JR_a}\right)}V(s). \end{array} \end{aligned} $$
Then, defining:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k_1=\frac{\frac{k_m}{JR_a}\gamma}{\frac{B}{J}+\frac{k_mk_e}{JR_a}}, \end{array} \end{aligned} $$
and comparing with (3.107), the expression in (3.139) is justified, i.e., that the water flow is proportional, through a constant k 1, to the voltage applied to the motor. This is possible if: 1) the motor electrical time constant is much smaller than the motor mechanical time constant, i.e., if  $$\frac {L_a}{R_a} \ll \frac {J}{B}$$ , and 2) the tank time constant is very large compared with the complete motor time constant, i.e., if  $$RC\gg \frac {1}{\frac {B}{J}+\frac {k_mk_e}{JR_a}}$$ . Finally, it is important to say that this procedure is valid because  $$\frac {B}{J}+\frac {k_mk_e}{JR_a}>0$$ and  $$\frac {R_a}{L_a}>0$$ , i.e., the neglected poles are negative.
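A numerical sketch of this order reduction is presented below using hypothetical motor and tank parameters (they are not taken from the example; tf and step require the Control System Toolbox):

% hypothetical parameters: small La and B, large RC
La=1e-3; Ra=1; ke=0.05; km=0.05; J=1e-4; B=1e-5;
R=100; C=0.1; gamma=0.01;
% full model: V -> omega (second order, from (3.104)) in series with omega -> h
Gm=tf(km/(J*La),[1 Ra/La+B/J (Ra*B+km*ke)/(J*La)]);   % omega(s)/V(s)
Gt=tf(1/C,[1 1/(R*C)]);                               % H(s)/Qi(s), Eq. (3.107)
Gfull=Gt*gamma*Gm;                                    % H(s)/V(s), third order
% reduced model: both motor poles neglected, only the tank pole is kept
k1=(km*gamma/(J*Ra))/(B/J+km*ke/(J*Ra));
Gred=Gt*k1;                                           % first order
step(Gfull,Gred,60)                                   % the two responses are almost identical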

3.5.3 Approximating the Transient Response of Higher-Order Systems

The simplicity of first- and second-order systems allows us to compute their exact solutions. However, in the case of systems represented by differential equations of an order greater than 2, also called higher-order systems, the complexity of the problem prohibits us from obtaining their exact solutions for control purposes. The traditional treatment of higher-order systems in classical control has been to approximate their response as a kind of extrapolation of responses obtained in first- and second-order systems.

According to the studies presented in Sects. 3.1 and 3.3, the poles of first- and second-order systems can be located on the s complex plane, as depicted in Fig. 3.22. Then, the distance of a pole to the origin, s = 0, is representative of the response time: the larger this distance, the shorter the response time. The angle between the imaginary axis and the location vector of the pole is representative of how damped the response is: the closer this angle is to zero, the more oscillatory the response; conversely, the closer this angle is to 90°, the less oscillatory the response.
Fig. 3.22

Relative pole location on the s plane. ζ 1 < ζ 2 and ω n1 < ω n2 < ω n3

The responses of two first-order systems are compared in Fig. 3.23. The location of the respective real poles is depicted in Fig. 3.22. It is observed that a faster response is obtained as the pole is located farther to the left of the origin.
Fig. 3.23

Relative response of two systems with real poles (see Fig. 3.22). Dashed: pole at s = d. Continuous: pole at s = e

The responses of three second-order systems are compared in Fig. 3.24. The location of the respective pair of complex conjugate poles is depicted in Fig. 3.22. It is observed that (i) a shorter response time is achieved as poles are located farther from the origin, and (ii) a less oscillatory response is achieved as the poles are located closer to the real axis.
Fig. 3.24

Relative response of three systems with complex conjugate poles (see Fig. 3.22). Continuous: poles at s = b and  $$s=\bar b$$ . Dash–dot: poles at s = a and  $$s=\bar a$$ . Dashed: poles at s = c and  $$s=\bar c$$

The above results are applied to higher-order systems by considering the worst case, i.e., by observing the “slowest” pole (the closest pole to the origin) and the “least damped” pole (the closest pole to the imaginary axis).

3.6 The Case of Sinusoidal Excitations

Consider the n-order arbitrary transfer function:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Y(s)}{U(s)}&amp;\displaystyle =&amp;\displaystyle G(s)=\frac{E(s)}{N(s)},{}\\ N(s)&amp;\displaystyle =&amp;\displaystyle s^{n}+a_{n-1}s^{n-1}+\cdots+a_{1}s+a_{0},\\ E(s)&amp;\displaystyle =&amp;\displaystyle b_{0}+b_{1}s+\cdots+ b_{m}s^{m}, \end{array} \end{aligned} $$
(3.108)
where n ≥ m and  $$u(t)=A\sin {}(\omega t)$$ , i.e.,  $$U(s)=A\frac {\omega }{s^2+\omega ^2}$$ . For the sake of simplicity, assume that N(s) has no roots on the imaginary axis. Replacing U(s) in (3.108) and using partial fraction expansion:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=G(s)A\frac{\omega}{s^2+\omega^2}=Y_n(s)+\frac{Cs+D}{s^2+\omega^2},{} \end{array} \end{aligned} $$
(3.109)
where  $$Y_n(s)=\mathcal {L}\{y_n(t)\}$$ is the natural response, i.e., those terms arising from the roots of N(s). Recall that  $$\lim_{t\to\infty}y_n(t)=0$$ if G(s) is stable. As N(s) has been assumed not to have any roots on the imaginary axis, G(s) has no poles at s = ±jω. Multiplying both sides of (3.109) by the factor s 2 + ω 2 and evaluating at s = jω, it is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega AG(j\omega)=j\omega C+D. \end{array} \end{aligned} $$
Equating real and imaginary parts yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C=A\mathrm{Im}(G(j\omega)), D=\omega A\mathrm{Re}(G(j\omega)), \end{array} \end{aligned} $$
where G() = Re(G()) + jIm(G()). Hence:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Cs+D}{s^2+\omega^2}=A\mathrm{Im}(G(j\omega))\frac{s}{s^2+\omega^2}+A\mathrm{Re}(G(j\omega)) \frac{\omega}{s^2+\omega^2}. \end{array} \end{aligned} $$
Using the tabulated Laplace transforms:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}\{\cos{}(\omega t)\}=\frac{s}{s^2+\omega^2}, \mathcal{L}\{\sin{}(\omega t)\}= \frac{\omega}{s^2+\omega^2}, \end{array} \end{aligned} $$
the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\left\{\frac{Cs+D}{s^2+\omega^2}\right\}=A[\mathrm{Im}(G(j\omega))\cos{}(\omega t)+\mathrm{Re}(G(j\omega)) \sin{}(\omega t)]. \end{array} \end{aligned} $$
Using Fig. 3.25 and the trigonometric identity,  $$\sin {}(\alpha ) \cos {}(\beta )+\cos {}(\alpha )\sin {}(\beta )=\sin {}(\alpha +\beta )$$ , the following is obtained:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\left\{\frac{Cs+D}{s^2+\omega^2}\right\}&amp;\displaystyle =&amp;\displaystyle A\vert G(j\omega)\vert\;[\sin{}(\phi)\cos{}(\omega t)+\cos{}(\phi) \sin{}(\omega t)],{}\\ &amp;\displaystyle =&amp;\displaystyle B\;\sin{}(\omega t+\phi), \\ B&amp;\displaystyle =&amp;\displaystyle A\vert G(j\omega)\vert,\\ \phi&amp;\displaystyle =&amp;\displaystyle \arctan\left(\frac{\mathrm{Im}(G(j\omega))}{\mathrm{Re}(G(j\omega))}\right),\\ \vert G(j\omega)\vert&amp;\displaystyle =&amp;\displaystyle \sqrt{\mathrm{Re}^2(G(j\omega))+\mathrm{Im}^2(G(j\omega))}.\end{array} \end{aligned} $$
(3.110)
Then, according to (3.109), the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)=y_n(t)+y_f(t),{} \end{array} \end{aligned} $$
(3.111)
where the forced response is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y_f(t)&amp;\displaystyle =&amp;\displaystyle B\;\sin{}(\omega t+\phi), {} \end{array} \end{aligned} $$
(3.112)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} B&amp;\displaystyle =&amp;\displaystyle A\vert G(j\omega)\vert,{} \end{array} \end{aligned} $$
(3.113)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \phi&amp;\displaystyle =&amp;\displaystyle \arctan\left(\frac{\mathrm{Im}(G(j\omega))}{\mathrm{Re}(G(j\omega))}\right),{} \end{array} \end{aligned} $$
(3.114)
whereas y n(t) → 0 as t →∞ if G(s) is stable. Hence, if the input u(t) of an n −order linear system G(s) is a sine function of time, then the output y(t) is also, in a steady state, a sine function of time with the same frequency, but with an amplitude and a phase that are, in general, different from the amplitude and the phase of the signal at the input.
Fig. 3.25

Triangle defined by phase ϕ

The ratio between the output and the input amplitudes at frequency ω, i.e.,  $$\frac {B}{A}=\vert G(j\omega )\vert $$ , is known as the magnitude of the transfer function, whereas the difference between the phase of the signal at the output and the phase of the signal at the input, ϕ, is known as the phase of the transfer function. Then, the following is concluded, which is an important result for the purposes of Chap. 6:

The frequency response is defined as the description of:
  • How the ratio of the output amplitude and the input amplitude changes as the frequency of the signal at the input changes.

  • How the difference between the phases of the signals at the input and the output changes as the frequency of the signal at the input changes.
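In practice, this description is obtained by simply evaluating G(jω) over a range of frequencies. The following base MATLAB sketch does this for G(s) = 10∕(s + 5), the transfer function also used in Example 3.21 below:

w=logspace(-1,3,400);            % frequencies in rad/s
G=10./(1i*w+5);                  % G(jw) for G(s)=10/(s+5)
mag=abs(G);                      % amplitude ratio B/A at each frequency
ph=angle(G)*180/pi;              % phase in degrees
subplot(2,1,1), semilogx(w,mag), ylabel('|G(j\omega)|')
subplot(2,1,2), semilogx(w,ph), ylabel('phase [deg]'), xlabel('\omega [rad/s]')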

Example 3.20 (Taken from [2], Ch.4)

Consider the spring-mass-damper system depicted in Fig. 3.18. Assume that there is no damper, i.e., b = 0, and the following external force  $$f=F_0\sin {}(\omega t)$$ is applied, where  $$\omega =\sqrt {\frac {K}{m}}$$ . It is desired to compute the mass position x(t) if x(0) = 0 and  $$\dot x(0)=0$$ . It is not necessary to compute the numerical value of constants appearing in x(t).

Solution.

The differential equation describing the situation in Fig. 3.18 when b = 0 is:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} m \ddot x+K x=f, f=F_0\sin{}(\omega t), \omega=\sqrt{\frac{K}{m}}.{} \end{array} \end{aligned} $$
(3.115)
Applying the Laplace transform to (3.115) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2 X(s)+\frac{K}{m} X(s)=\frac{1}{m}F(s). \end{array} \end{aligned} $$
Notice that this expression has the form:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2 X(s)+\omega_n^2 X(s)=\gamma\omega_n^2F(s), \end{array} \end{aligned} $$
where  $$\omega _n=\sqrt {\frac {K}{m}}=\omega $$ ,  $$\gamma =\frac {1}{K}$$ . As:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} F(s)=\frac{F_0\omega}{s^2+\omega^2}, \end{array} \end{aligned} $$
then:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X(s)&amp;\displaystyle =&amp;\displaystyle \frac{\gamma\omega^3F_0}{(s^2 +\omega^2)^2}. \end{array} \end{aligned} $$
Using partial fraction expansion:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} X(s)&amp;\displaystyle =&amp;\displaystyle \frac{As+B}{(s^2 +\omega^2)^2}+\frac{Cs+D}{s^2 +\omega^2}. \end{array} \end{aligned} $$
Thus, according to Sect. 3.4.4:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} x(t)=\beta_1 t\sin{}(\omega t+\phi_1)+\beta_2 \sin{}(\omega t+\phi_2), \end{array} \end{aligned} $$
where β 1, β 2, ϕ 1, ϕ 2 are some real constants. The solution x(t) is shown in Fig. 3.26 when  $$\omega =\sqrt {\frac {K}{m}}=10$$ [rad/s], m = 1[kg] and F 0 = 1[N]. This phenomenon is known as resonance and it means that the output reaches very large values despite the input being a bounded sine function of time. Notice that this may happen even if the characteristic polynomial has no roots with positive real parts nor roots located at s = 0. Thus, this is a different phenomenon from that described in Example 3.14. Also notice that resonance appears when a system is poorly damped (ζ ≈ 0) and the input is an oscillatory signal whose frequency is the same as (or very close to) the system natural frequency. As the mass position grows without bound, resonance is a phenomenon that is considered to be dangerous.
Fig. 3.26

Position in a mass-spring-damper system under resonance conditions

Figure 3.26 has been obtained using the MATLAB/Simulink simulation diagram presented in Fig. 3.27. The signal generator block is programmed to have a sine wave form, unit amplitude and 10[rad/s] as frequency. Once the simulation stops, the following MATLAB code is executed in an m-file to draw Fig. 3.26:
Fig. 3.27

MATLAB/Simulink simulation diagram for results in Fig. 3.26

nn=length(InputOutput(:,2));
n=nn-1;
Ts=20/n;
t=0:Ts:20;
plot(t,InputOutput(:,2),'k-');
axis([0 20 -1 1])
xlabel('t [s]')
ylabel('x [m]')
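Alternatively, the same response can be reproduced without Simulink by integrating (3.115) numerically with ode45. The following sketch uses the same values as above, i.e., m = 1[kg], K = 100[N/m] (so that ω = 10[rad/s]), and F 0 = 1[N]:

m=1; K=100; F0=1; w=sqrt(K/m);                    % w=10 rad/s
f=@(t,z)[z(2); (F0*sin(w*t)-K*z(1))/m];           % z=[x; xdot], Eq. (3.115) with b=0
[t,z]=ode45(f,[0 20],[0; 0]);                     % zero initial conditions
plot(t,z(:,1),'k-')
axis([0 20 -1 1])
xlabel('t [s]')
ylabel('x [m]')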

Example 3.21

Consider a linear system with transfer function:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{Y(s)}{U(s)}=G(s)=\frac{10}{s+5}, y(0)=0. \end{array} \end{aligned} $$
Suppose that  $$u(t)=A\sin {}(5t+90^{\circ })=A\cos {}(5t)$$ , with A = 1, so that  $$U(s)=A\frac {s}{s^2+\omega ^2}$$ with ω = 5. As G(s) only has one pole at s = −5, proceeding as in (3.109),  $$Y_n(s)=\frac {h}{s+5}$$ where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} h=\left.(s+5)\frac{10}{s+5}\frac{s}{s^2+\omega^2}\right|{}_{s=-5}=\frac{10\times (-5)}{5^2+5^2}=-1. \end{array} \end{aligned} $$
Hence, according to (3.111), (3.112), (3.113), (3.114):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle -\mathrm{e}^{-5t}+B\;\sin{}(5 t+90^{\circ}+\phi), \\ B&\displaystyle =&\displaystyle A\vert G(j\omega)\vert,\quad\phi=\arctan\left(\frac{\mathrm{Im}(G(j\omega))}{\mathrm{Re}(G(j\omega))}\right). \end{array} \end{aligned} $$
i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} G(j\omega)&amp;\displaystyle =&amp;\displaystyle \frac{10}{j\omega+5}\left(\frac{-j\omega+5}{-j\omega+5}\right),\\ &amp;\displaystyle =&amp;\displaystyle \frac{10(-j\omega+5)}{25+\omega^2},\\ &amp;\displaystyle =&amp;\displaystyle \frac{10\sqrt{25+\omega^2}}{\sqrt{25+\omega^2}^2}\angle\arctan\left(\frac{-\omega}{5}\right),\\ \vert G(j\omega)\vert&amp;\displaystyle =&amp;\displaystyle \frac{10}{\sqrt{25+\omega^2}}, \phi=\arctan\left(\frac{-\omega}{5}\right). \end{array} \end{aligned} $$
Evaluating at the applied frequency, ω = 5:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \vert G(j\omega)\vert&amp;\displaystyle =&amp;\displaystyle \frac{10}{\sqrt{25+25}}=\frac{2}{\sqrt{2}}=\sqrt{2}, \phi=-45^{\circ}. \end{array} \end{aligned} $$
Thus:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} y(t)&\displaystyle =&\displaystyle -\mathrm{e}^{-5t}+ \sqrt{2}\;\sin{}(5 t+45^{\circ}). \end{array} \end{aligned} $$
This response is depicted in Fig. 3.28. Notice that the difference between y f(t) and y(t) at t = 0.2 = 1∕5[s] is equal to 1.38 − 1.015 = 0.365 ≈e−5(0.2), i.e., the magnitude of the natural response y n(t) = −e−5t evaluated at t = 0.2[s]. This fact corroborates the above results.
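These numbers can be reproduced with a few lines of base MATLAB:

w=5; A=1;
G=10/(1i*w+5);               % G(jw) at the applied frequency
B=A*abs(G)                   % amplitude of y_f(t): sqrt(2)=1.4142
phi=angle(G)*180/pi          % phase of G(jw): -45 degrees
exp(-5*0.2)                  % magnitude of y_n(t) at t=0.2 s: 0.3679, close to 1.38-1.015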
Fig. 3.28

Time response of a linear differential equation when excited with a sinusoidal function of time. Dash–dot:  $$u(t)=A\sin {}(5t+90^{\circ })$$ , A = 1. Dashed:  $$y_f(t)= \sqrt {2}\;\sin {}(5 t+45^{\circ })$$ . Continuous:  $$y(t)=-\mathrm {e}^{-5t}+\sqrt {2}\;\sin {}(5 t+45^{\circ })$$

Figure 3.28 has been obtained using the MATLAB/Simulink diagram presented in Fig. 3.29. The sine wave block is programmed to be a time-based block, with unit amplitude, zero bias, 5[rad/s] as the frequency, and 1.57[rad], i.e., 90°, as the phase. The sine wave 1 block is programmed to be a time-based block, with 1.4142 as amplitude, zero bias, 5[rad/s] as the frequency, and 1.57∕2[rad], i.e., 45°, as the phase. Once the simulation stops, the following MATLAB code is executed in an m-file to draw Fig. 3.28:
Fig. 3.29

MATLAB/Simulink diagram for the results in Fig. 3.28

nn=length(InputOutput(:,1));
n=nn-1;
Ts=2/n;
t=0:Ts:2;
plot(t,InputOutput(:,1),'k-.',t,InputOutput(:,2),'k-');
hold on
plot(t,InputOutput(:,3),'k--');
axis([0 2 -1.5 1.5])
xlabel('time [s]')
hold off

3.7 The Superposition Principle

Every linear differential equation satisfies the superposition principle. Moreover, the fact that a differential equation satisfies the superposition principle is accepted as proof that the differential equation is linear. To simplify the exposition of ideas, the superposition principle is presented in the following only when all the initial conditions are zero.

Superposition Principle

Consider the following n −order linear ordinary differential equation with constant coefficients:
 $$\displaystyle \begin{aligned} y^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_{1}\dot{y}+a_{0}y=b_{0}u+b_{1}\dot{u}+\cdots+ b_{m}u^{(m)},{} \end{aligned} $$
(3.116)
where n ≥ m. Assume that all of the initial conditions are zero. Also assume that y 1(t) is the solution of (3.116) when u 1(t) is applied at the input and y 2(t) is the solution of (3.116) when u 2(t) is applied at the input. Then, α 1 y 1(t) + α 2 y 2(t), where α 1 and α 2 are arbitrary constants, is the solution of (3.116) when α 1 u 1(t) + α 2 u 2(t) is applied at the input.
One way of verifying this result is presented in the following. As the differential equation in (3.116) is linear and all the initial conditions are zero, then it can be expressed in terms of the corresponding transfer function:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=G(s)U(s).{} \end{array} \end{aligned} $$
(3.117)
This means that it can be written as follows:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Y_1(s)&amp;\displaystyle =&amp;\displaystyle G(s)U_1(s), \\ Y_2(s)&amp;\displaystyle =&amp;\displaystyle G(s)U_2(s). \end{array} \end{aligned} $$
Multiplying these expressions by α 1 and α 2, respectively, and adding them yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \alpha_1 Y_1(s)+\alpha_2 Y_2(s)&amp;\displaystyle =&amp;\displaystyle \alpha_1G(s)U_1(s)+\alpha_2G(s)U_2(s),\\ &amp;\displaystyle =&amp;\displaystyle G(s)(\alpha_1 U_1(s)+\alpha_2U_2(s)). \end{array} \end{aligned} $$
This is possible because α 1 and α 2 are constants. This expression corroborates that α 1 y 1(t) + α 2 y 2(t) is the solution of (3.117) and, hence, of (3.116), when α 1 u 1(t) + α 2 u 2(t) is the input.
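The superposition principle can also be checked numerically. The following sketch uses an arbitrary stable transfer function and arbitrary inputs (lsim requires the Control System Toolbox):

G=tf(10,[1 5]);                         % an arbitrary stable transfer function
t=(0:0.001:2)';
u1=sin(5*t); u2=ones(size(t));          % two different inputs
a1=2; a2=-3;                            % arbitrary constants
y1=lsim(G,u1,t); y2=lsim(G,u2,t);       % responses to each input separately
y=lsim(G,a1*u1+a2*u2,t);                % response to the combined input
max(abs(y-(a1*y1+a2*y2)))               % essentially zero (numerical round-off only)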

Example 3.22

Obtain the indicated voltage at impedance Z 4(s) in Fig. 3.30a. This circuit can be simplified using the so-called source transformation theorem.
Fig. 3.30

Electric circuit studied in Example 3.22

Theorem 3.1 (Source Transformation [5, 11], Chap. 9, pp. 61)

  • When an impedance, Z(s), and a voltage source, V fv(s), are series-connected between two terminals a and b, they can be replaced by a current source, I fc(s), parallel-connected to the same impedance, Z(s). The magnitude of the current source is given as I fc(s) = V fv(s)∕Z(s).

  • When an impedance, Z(s), and a current source, I fc(s), are parallel-connected between two terminals a and b, they can be replaced by a voltage source, V fv(s), series-connected to the same impedance, Z(s). The magnitude of the voltage source is given as V fv(s) = Z(s)I fc(s).

Applying the first part of this theorem to the voltage sources V 1(s) and V 2(s), the circuit in Fig. 3.30b is obtained, where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} I_1(s)=\frac{V_1(s)}{Z_1(s)}, I_2(s)=\frac{V_2(s)}{Z_5(s)}.{} \end{array} \end{aligned} $$
(3.118)
It is clear that Z 1(s), Z 2(s), and Z 5(s) are parallel-connected and their equivalent impedance is given as (see (2.​69)):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Z_a(s)=\frac{1}{\frac{1}{Z_1(s)}+\frac{1}{Z_2(s)}+\frac{1}{Z_5(s)}}. \end{array} \end{aligned} $$
Using this and combining the current sources, the circuit in Fig. 3.30c is obtained. Now, another important result in network analysis is employed: the current divider.

Fact 3.1 (Current Divider [5], pp. 38)

When two parallel-connected impedances are connected to a current source, the current flowing through either of the impedances is given by the value of the current source multiplied by the impedance opposite to the one where the current is to be computed, divided by the sum of the two impedances.

Applying this result in addition to Ohm’s Law to the circuit in Fig. 3.30c, the following expressions are found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} I_3(s)&amp;\displaystyle =&amp;\displaystyle \frac{Z_a(s)}{Z_a(s)+Z_3(s)+Z_4(s)}(I_1(s)+I_2(s)),\\ V_{z4}(s)&amp;\displaystyle =&amp;\displaystyle Z_4(s)I_3(s), \end{array} \end{aligned} $$
where V z4(s) is the voltage at the impedance Z 4(s). Combining these expressions yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_{z4}(s)&amp;\displaystyle =&amp;\displaystyle \frac{Z_a(s)Z_4(s)}{Z_a(s)+Z_3(s)+Z_4(s)}(I_1(s)+I_2(s)). \end{array} \end{aligned} $$
Finally, using (3.118), the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_{z4}(s)&amp;\displaystyle =&amp;\displaystyle \frac{Z_a(s)Z_4(s)}{Z_a(s)+Z_3(s)+Z_4(s)}\left(\frac{V_1(s)}{Z_1(s)}+\frac{V_2(s)}{Z_5(s)}\right). \end{array} \end{aligned} $$
It is concluded that the voltage at Z 4(s) can be obtained by computing the voltage at Z 4(s) due to each one of the voltage sources, i.e., V 1(s) or V 2(s), with the other source put in short circuit (i.e., a zero value is assigned), and adding both results. This is exactly what the superposition principle establishes. It is interesting to note that the superposition principle has been established above in the present section assuming that the different applied inputs are added directly and, after that, applied to the system. However, this example shows that the superposition principle is valid in linear systems even when the voltage sources, V 1(s) and V 2(s), cannot be added directly (see Fig. 3.30a).

Example 3.23

It is required to compute the voltage at the impedance Z 3(s) in the circuit shown in Fig. 3.31a. First, this circuit is simplified to obtain the circuit in Fig. 3.31b. Then, the source transformation theorem can be employed (see the previous example) to obtain the circuit in Fig. 3.31c, where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} I_1(s)=\frac{V_1(s)}{Z_1(s)}.{} \end{array} \end{aligned} $$
(3.119)
Fig. 3.31

Electric circuit studied in Example 3.23

As impedances Z 1(s), Z 2(s), and Z 4(s) are parallel-connected, the equivalent impedance is given as (see (2.​69)):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} Z_a(s)=\frac{1}{\frac{1}{Z_1(s)}+\frac{1}{Z_2(s)}+\frac{1}{Z_4(s)}}. \end{array} \end{aligned} $$
The equivalent impedance Z a(s) is parallel connected to the current source I 1(s); hence, the second part of the source transformation theorem can be used (see the previous example) to obtain the circuit in Fig. 3.31d, where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_3(s)&amp;\displaystyle =&amp;\displaystyle Z_a(s)I_1(s),\\ I(s)&amp;\displaystyle =&amp;\displaystyle \frac{V_3(s)-V_2(s)}{Z_a(s)+Z_3(s)},\\ V_{z3}(s)&amp;\displaystyle =&amp;\displaystyle Z_3(s)I(s), \end{array} \end{aligned} $$
where V z3(s) is the voltage at the impedance Z 3(s). Combining these expressions and using (3.119) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} V_{z3}(s)&amp;\displaystyle =&amp;\displaystyle \frac{Z_3(s)}{Z_a(s)+Z_3(s)}(V_3(s)-V_2(s)),\\ &amp;\displaystyle =&amp;\displaystyle \frac{Z_3(s)}{Z_a(s)+Z_3(s)}(Z_a(s)I_1(s)-V_2(s)),\\ &amp;\displaystyle =&amp;\displaystyle \frac{Z_3(s)}{Z_a(s)+Z_3(s)}\left(\frac{Z_a(s)}{Z_1(s)}V_1(s)-V_2(s)\right). \end{array} \end{aligned} $$
It is concluded, again, that the voltage at the impedance Z 3(s) can be obtained by computing the voltage at Z 3(s) due to each one of the voltage sources, i.e., V 1(s) or V 2(s), with the other source put in a short circuit (i.e., its value is set to zero), and adding the two results. This example also shows that the superposition principle is valid in linear systems, even when the voltage sources V 1(s) and V 2(s) cannot be added directly (see Fig. 3.31a).

3.8 Controlling First- and Second-Order Systems

The simplicity of the first- and second-order systems allows us to design feedback controllers whose effects can be explained relying on the exact solution of the closed-loop system. This is shown in the present section by applying this methodology to some physical systems.

3.8.1 Proportional Control of Velocity in a DC Motor

According to Chap. 10, the mathematical model of a permanent magnet brushed DC motor is given in (10.​9), (10.​10), which are repeated here for ease of reference:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} L\;\frac{di}{dt}&amp;\displaystyle =&amp;\displaystyle u-R\;i-n\;k_e\;\omega,\\ J\dot \omega&amp;\displaystyle =&amp;\displaystyle - b\;\omega+n\;k_{m}\;i-T_p.{} \end{array} \end{aligned} $$
(3.120)
The reader is encouraged to see Chap. 10 for an explanation of variables in this model. For the purposes of this example, it suffices to say that ω, u, i, T p represent the load angular velocity, the applied voltage, the electric current through the armature circuit, and an external torque disturbance respectively, and all of the parameters are positive. If the motor is small, then it is widely accepted that L ≈ 0 can be assumed, and the following is obtained:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} 0&amp;\displaystyle =&amp;\displaystyle u-R\;i-n\;k_e\;\omega\Rightarrow i=\frac{u-n\;k_e\;\omega}{R}. \end{array} \end{aligned} $$
(3.121)
Replacing the electric current in (3.120) and rearranging:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} J\dot \omega+ \left(b+\;\frac{n^2\;k_{m}\;k_e}{R}\right)\;\omega=\;\frac{n\;k_{m}}{R}u-T_p.{} \end{array} \end{aligned} $$
(3.122)
Consider the following proportional velocity controller :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_p(\omega_d-\omega),{} \end{array} \end{aligned} $$
(3.123)
where k p is a constant known as the proportional gain, and ω d represents the desired velocity given as a step signal of magnitude A, i.e.,  $$\omega _d(s)=\frac {A}{s}$$ . The closed-loop system is obtained by replacing u, given by the proportional controller, in the previous equation, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} J\dot \omega+ \left(b+\;\frac{n^2\;k_{m}\;k_e}{R}\right)\;\omega=\;\frac{n\;k_{m}}{R}k_p(\omega_d-\omega)-T_p.{} \end{array} \end{aligned} $$
(3.124)
Rearranging terms yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot \omega+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\; \omega=\;\frac{n\;k_{m}k_p}{JR}\omega_d-\frac{1}{J}T_p. \end{array} \end{aligned} $$
This is a first-order system with two inputs. Then, we can apply superposition, i.e., define two signals ω 1 and ω 2 such that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(t)&amp;\displaystyle =&amp;\displaystyle \omega_1(t)+\omega_2(t),\\ \dot \omega_1&amp;\displaystyle +&amp;\displaystyle \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\; \omega_1=\;\frac{n\;k_{m}k_p}{JR}\omega_d, \\ \dot \omega_2&amp;\displaystyle +&amp;\displaystyle \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\; \omega_2=-\frac{1}{J} T_p. \end{array} \end{aligned} $$
First, analyze the equation for ω 1, applying the Laplace transform with zero initial conditions:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_1(s)&amp;\displaystyle =&amp;\displaystyle G_1(s)\omega_d(s). \\ G_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{ \;\frac{n\;k_{m}k_p}{JR}}{ s+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)}.{} \end{array} \end{aligned} $$
(3.125)
Notice that this transfer function is identical to:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{b_2}{s+b_1},{} \end{array} \end{aligned} $$
(3.126)
if we define:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} b_1=\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\neq 0, &amp;\displaystyle &amp;\displaystyle b_2=\frac{n\;k_{m}k_p}{JR}\neq 0.\end{array} \end{aligned} $$
Notice that this expression has the same structure as the transfer function in (3.23). Hence, if the parameters and variables b 1, b 2, ω 1 and ω d are suitably associated with those in (3.23), the solution ω 1(t) is the same as in (3.16), i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_1(t)&amp;\displaystyle =&amp;\displaystyle -\frac{b_2 A}{b_1}\mathrm{e}^{-b_1t}+\frac{b_2A}{b_1}+\omega_1(0)\mathrm{e}^{-b_1t},\end{array} \end{aligned} $$
Stability is ensured if, and only if, the only pole of G 1(s), located at s = −b 1, is negative, i.e., − b 1 < 0, which implies that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k_p&gt;-\frac{JR}{n\;k_{m}}\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right).{} \end{array} \end{aligned} $$
(3.127)
If this is the case, then the final value of the velocity can be computed using the final value theorem as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\omega_1(t)&amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\omega_1(s),\\ &amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\frac{ \;\frac{n\;k_{m}k_p}{JR}}{ s+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)}\frac{A}{s}=\frac{b_2A}{b_1}. \end{array} \end{aligned} $$
Recall that A represents the magnitude of the desired velocity. As it is desirable for the final value of the velocity to be close to A, it is also desirable for  $$\frac {b_2A}{b_1}$$ to have the same sign as A, i.e., b 2∕b 1 > 0. As b 1 > 0, because of the stability requirement, then b 2 > 0 must also be true. This is ensured if k p > 0, which is in complete agreement with (3.127). Hence, we can conclude that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\omega_1(t)&amp;\displaystyle =&amp;\displaystyle \frac{b_2A}{b_1}=\frac{\frac{n\;k_{m}k_p}{JR}}{ \frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}}A&lt;A,{} \end{array} \end{aligned} $$
(3.128)
which means that the final velocity remains below the desired velocity. This result may also be explained using everyday experience as follows. Assume for a moment that  $$\lim_{t\to\infty}\omega_1(t)=A$$ . Then, according to (3.123):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_p(A-\omega)=0, \end{array} \end{aligned} $$
the voltage applied at the motor terminals is zero. Hence, the motor tends to stop moving, i.e., ω 1 decreases to zero and cannot stay at the desired value A. This means that ω 1 < A will be obtained again. Hence, it is not possible to keep ω 1 = A forever, which justifies the result in (3.128).
The difference between the final velocity and A is known as the steady-state error e ss. We remark that, according to (3.128), the steady-state error can be arbitrarily reduced by increasing k p > 0, although a zero steady-state error will never be accomplished. On the other hand, if a specific steady-state error e ss is required, the corresponding value for k p can be computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} e_{ss}&amp;\displaystyle =&amp;\displaystyle A-\frac{\frac{n\;k_{m}k_p}{JR}}{ \frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}}A,\\ &amp;\displaystyle =&amp;\displaystyle \frac{\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}}{ \frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}}A,\\ k_p&amp;\displaystyle =&amp;\displaystyle \left[\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right)\frac{A}{e_{ss}}-\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right)\right] \frac{JR}{n\;k_{m}} . \end{array} \end{aligned} $$
Notice that to ensure k p > 0, the steady-state error is required to be a fraction of the desired velocity, i.e., e ss = ρA with 0 < ρ < 1.
Now, consider the expression for ω 2. Using the Laplace transform:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_2(s)&amp;\displaystyle =&amp;\displaystyle G_2(s)T_p(s),\\ G_2(s)&amp;\displaystyle =&amp;\displaystyle \frac{-\frac{1}{J} }{s+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)}. \end{array} \end{aligned} $$
Notice that G 2(s) can be written as  $$G_2(s)=\frac {b_3}{s+b_1}$$ , with  $$b_3=-\frac {1}{J} $$ , i.e., a transfer function such as that in (3.126). Thus, we may proceed as before to conclude that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\omega_2(t)&amp;\displaystyle =&amp;\displaystyle \frac{-\frac{1}{J} }{\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}}d, T_p(s)=\frac{d}{s}. \end{array} \end{aligned} $$
The important observation in this expression is that the final value of ω 2 can be reduced in magnitude by increasing k p > 0. This is good news, as ω 2 is the velocity deviation due to the external disturbance T p; hence, it is desirable to keep ω 2 close to or equal to zero, if possible. In this case, the deviation can be rendered arbitrarily small by increasing k p > 0, but a zero deviation will never be achieved with a proportional controller.
As the problem at hand is a first-order system, the transient response is completely described by the time constant:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \tau=\frac{1}{b_1}=\frac{1}{\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}}. \end{array} \end{aligned} $$
Recall that a faster response is obtained as τ is shorter. This means that a faster response is obtained as k p > 0 is chosen to be larger. Moreover, if all the system parameters are known and a desired time constant τ is specified, then the required proportional gain can be exactly computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k_p=\frac{JR}{n\;k_{m}}\left[\frac{1}{\tau}-\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right)\right] . \end{array} \end{aligned} $$
This means that, to ensure k p > 0, the closed-loop system is required to be faster than the open-loop system, i.e.:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{1}{\tau}-\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right)&gt;0. \end{array} \end{aligned} $$
The velocity responses of a DC motor, when using two different proportional gains k p = 12 and k p = 40, are shown in Fig. 3.32. This was performed using the MATLAB/Simulink diagram shown in Fig. 3.33 to represent the closed-loop system in (3.124). It is assumed that J = 1, nk m∕R = 1 and  $$b+\;\frac {n^2\;k_{m}\;k_e}{R}=7$$ . These values were selected merely to obtain representative, clearly appreciable results for the proportional control of velocity. Block ω d is a step applied at t = 0 with a zero initial value and 2 as a final value. The external disturbance T p is a step applied at t = 0.5[s] with a zero initial value and 7 as a final value. Notice that the use of a larger k p results in a faster response, i.e., T 2 < T 1, a smaller steady-state error, i.e., a larger steady-state response β, and a smaller deviation δ due to a constant external torque disturbance.
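The same responses can also be obtained without Simulink by integrating the closed-loop equation (3.124) directly with ode45. A sketch using the parameter values listed above:

J=1; nkmR=1; beq=7;                     % J, n*km/R, and b+n^2*km*ke/R
wd=2; d=7;                              % desired velocity and disturbance magnitude
for kp=[12 40]
    rhs=@(t,w)(nkmR*kp*(wd-w)-beq*w-d*(t>=0.5))/J;   % Eq. (3.124), Tp applied at t=0.5 s
    [t,w]=ode45(rhs,[0 1],0);
    plot(t,w), hold on
end
plot([0 1],[wd wd],'k-.'), hold off     % desired velocity
xlabel('time [s]'), ylabel('\omega [rad/s]')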
Fig. 3.32

Velocity response in the proportional control of velocity. Continuous: k p = 40 is used. Dashed: k p = 12 is used. Dash–dot: desired velocity

Fig. 3.33

MATLAB/Simulink diagram for the results in Fig. 3.32

The following is MATLAB code that is executed in an m-file to draw Fig. 3.32 after simulation in Fig. 3.33 stops:

nn=length(InputOutput(:,1));
n=nn-1;
Ts=1/n;
t=0:Ts:1;
plot(t,InputOutput(:,1),'k-.',t,InputOutput(:,3),'k-');
hold on
plot(t,InputOutput(:,2),'k--');
axis([-0.02 1 0 2.5])
xlabel('time [s]')
hold off

3.8.2 Proportional Position Control Plus Velocity Feedback for a DC Motor

Consider the motor model in (3.122), but, now, expressed in terms of position θ, i.e.,  $$\dot \theta =\omega $$ :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} J\ddot \theta+ \left(b+\;\frac{n^2\;k_{m}\;k_e}{R}\right)\;\dot\theta=\;\frac{n\;k_{m}}{R}u-T_p.{} \end{array} \end{aligned} $$
(3.129)
Assume that the voltage applied at the motor terminals is computed according to the following expression:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_p(\theta_d-\theta)-k_v\dot \theta, \end{array} \end{aligned} $$
where k v is a constant known as the velocity gain, and θ d represents the desired position, which is assumed to be given as a step signal with magnitude A, i.e.,  $$\theta _d(s)=\frac {A}{s}$$ . This represents a controller known as proportional position control plus velocity feedback . The closed-loop equation is found by replacing u in (3.129) and rearranging terms, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} J\ddot \theta&\displaystyle +&\displaystyle \left(b+\;\frac{n^2\;k_{m}\;k_e}{R}\right)\;\dot\theta=\;\frac{n\;k_{m}}{R}\left(k_p(\theta_d-\theta)-k_v\dot \theta\right)-T_p,{}\\ \ddot \theta&\displaystyle +&\displaystyle \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}\right)\;\dot\theta+\frac{n\;k_{m}\;k_p}{JR}\theta=\;\frac{n\;k_{m}\;k_p}{JR}\theta_d-\frac{1}{J}T_p, \end{array} \end{aligned} $$
(3.130)
Using superposition and the Laplace transform yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \theta(s)&\displaystyle =&\displaystyle \theta_1(s)+\theta_2(s),\\ \theta_1(s)&\displaystyle =&\displaystyle G_1(s)\theta_d(s),\quad\theta_2(s)=G_2(s)T_p(s),\\ G_1(s)&\displaystyle =&\displaystyle \frac{\frac{n\;k_{m}\;k_p}{JR}}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}\right)\;s+\frac{n\;k_{m}\;k_p}{JR}},\\ G_2(s)&\displaystyle =&\displaystyle \frac{-\frac{1}{J}}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}\right)\;s+\frac{n\;k_{m}\;k_p}{JR}}. \end{array} \end{aligned} $$
First, analyze the equation for θ 1. By equating:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}\right)\;s+\frac{n\;k_{m}\;k_p}{JR}=s^2+2\zeta\omega_ns+\omega_n^2, \end{array} \end{aligned} $$
it is possible to write:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} G_1(s)&\displaystyle =&\displaystyle \frac{\omega_n^2}{s^2+ 2\zeta\omega_ns+\omega_n^2},{}\\ \omega_n&\displaystyle =&\displaystyle \sqrt{\frac{n\;k_{m}\;k_p}{JR}},\quad\zeta=\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}\right)\frac{1}{2\sqrt{\frac{n\;k_{m}\;k_p}{JR}}}. \end{array} \end{aligned} $$
(3.131)
Notice that, under these conditions, the inverse Laplace transform of θ 1(s) = G 1(s)θ d(s), is identical to the function presented in (3.57). Hence, stability is ensured if k p > 0, to ensure that both ω n and ζ are real numbers, and:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}&gt;0, \end{array} \end{aligned} $$
to ensure that ζ > 0. If this is the case, then we can use the final value theorem to compute the final position as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty} \theta_1(t)&amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\theta_1(s),\\ &amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\frac{\omega_n^2}{s^2+ 2\zeta\omega_ns+\omega_n^2}\frac{A}{s}=A. \end{array} \end{aligned} $$
Thus, the motor position reaches the constant desired position in a steady state. On the other hand, according to (3.131), we conclude that:
  • The closed-loop system is faster as k p > 0 is chosen to be larger. This is because ω n increases as k p > 0 increases. However, as k p > 0 increases the system becomes more oscillatory. This is because ζ decreases as k p increases.

  • The system response is less oscillatory as k v > 0 is chosen to be larger. This is because ζ increases as k v increases.

This means that the response can be rendered as fast and damped as desired merely by suitably selecting both k p and k v.
Now, consider the equation for θ 2. Notice that this variable represents the position deviation produced by an external torque disturbance. Hence, it is desirable for θ 2 to be close or equal to zero, if possible. Notice that G 2(s) is stable if G 1(s) is stable, because both transfer functions have the same characteristic polynomial. Thus, we can compute the final value of θ 2 using the final value theorem when the disturbance is a step signal, i.e.,  $$T_p(s)=\frac {d}{s}$$ :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\theta_2(t)&\displaystyle =&\displaystyle \lim_{s\to0}s\theta_2(s),\\ &\displaystyle =&\displaystyle \lim_{s\to0}s\frac{-\frac{1}{J}}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_v}{JR}\right)\;s+\frac{n\;k_{m}\;k_p}{JR}}\frac{d}{s},\\ &\displaystyle =&\displaystyle \frac{-\frac{1}{J}}{\frac{n\;k_{m}\;k_p}{JR}}d. \end{array} \end{aligned} $$
This means that the steady-state deviation is not zero. However, the good news is that this deviation can be arbitrarily reduced in magnitude by increasing k p. We conclude that a faster closed-loop system also results in smaller position deviations due to constant external torque disturbances. This is corroborated by the simulations shown in Fig. 3.34, which were performed using the MATLAB/Simulink diagram in Fig. 3.35. There, the closed-loop system in (3.130) is simulated using J = 1,  $$b+\frac {n^2k_{m}k_e}{R}=7$$ , and nk m∕R = 70. These values were employed merely because they allow representative, clearly appreciable, results to be obtained. The block theta_d is a step applied at t = 0 with a zero initial value and 2 as a final value. The external disturbance T p is a step applied at t = 0.5[s] with a zero initial value and 7 × 70 as a final value. Once simulation in Fig. 3.35 stops, the following MATLAB code is executed using an m-file to draw Fig. 3.34:
Fig. 3.34

Proportional control of the position with velocity feedback. Using k p = 40 and k v = 0.45 (continuous) results in a faster response, with the same damping, than using k p = 12 and k v = 0.2 (dashed). See (3.132)

Fig. 3.35

MATLAB/Simulink diagram used to obtain results in Fig. 3.34

nn=length(InputOutput(:,1));
n=nn-1;
Ts=1/n;
t=0:Ts:1;
plot(t,InputOutput(:,1),'k-.',t,InputOutput(:,3),'k-');
hold on
plot(t,InputOutput(:,2),'k--');
axis([-0.02 1 0 2.75])
xlabel('time [s]')
ylabel('Position [rad]')
hold off
Notice that, in the case where some t r and M p are specified, the use of the expressions in (3.71) allows the required ω n and ζ to be computed. These data and (3.131) allow the corresponding controller gains to be computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k_p&amp;\displaystyle =&amp;\displaystyle \frac{JR}{n\;k_{m}}\omega_n^2, k_v=\left[2\zeta\omega_n-\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right)\right]\frac{JR}{n\;k_{m}} .{} \end{array} \end{aligned} $$
(3.132)
Using this procedure together with t r = 0.039[s], M p = 29.9%, for the continuous line in Fig. 3.34, results in k p = 40 and k v = 0.45, whereas t r = 0.071[s], M p = 29.9%, for the dashed line, results in k p = 12 and k v = 0.2.
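This gain computation is easily coded. The following sketch uses the parameter values of the simulation above and the standard second-order relations between (M p, t r) and (ζ, ω n), which are assumed to be those given in (3.71):

J=1; nkmR=70; beq=7;                            % J, n*km/R, and b+n^2*km*ke/R
tr=0.039; Mp=29.9;                              % specified rise time [s] and overshoot [%]
z=-log(Mp/100)/sqrt(pi^2+log(Mp/100)^2);        % damping ratio from the overshoot
wn=(pi-atan(sqrt(1-z^2)/z))/(tr*sqrt(1-z^2));   % natural frequency from the rise time
kp=(J/nkmR)*wn^2                                % Eq. (3.132): approximately 40
kv=(2*z*wn-beq/J)*(J/nkmR)                      % Eq. (3.132): approximately 0.45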

3.8.3 Proportional–Derivative Position Control of a DC Motor

Consider the motor model in (3.129) in a closed loop with the following proportional–derivative (PD) controller :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_p(\theta_d-\theta)+k_d\frac{d}{dt}(\theta_d-\theta), \end{array} \end{aligned} $$
i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot \theta&\displaystyle {+}&\displaystyle \left(\frac{b}{J}{+}\;\frac{n^2\;k_{m}\;k_e}{JR}{+}\frac{n\;k_{m}\;k_d}{JR}\right)\;\dot\theta{+}\frac{n\;k_{m}\;k_p}{JR}\theta{=}\;\frac{n\;k_{m}\;k_p}{JR}\theta_d{+}\frac{n\;k_{m}\;k_d}{JR}\dot\theta_d{-}\frac{1}{J}T_p, \end{array} \end{aligned} $$
where k d is known as the derivative gain. Using superposition and the Laplace transform yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \theta(s)&amp;\displaystyle =&amp;\displaystyle \theta_1(s)+\theta_2(s),\\ \theta_1(s)&amp;\displaystyle =&amp;\displaystyle G_1(s)\theta_d(s),\theta_2(s)=G_2(s)T_p(s),\\ G_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{n\;k_{m}\;k_d}{JR}s+\frac{n\;k_{m}\;k_p}{JR}}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_d}{JR}\right)\;s+\frac{n\;k_{m}\;k_p}{JR}},\\ G_2(s)&amp;\displaystyle =&amp;\displaystyle \frac{-\frac{1}{J}}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_d}{JR}\right)\;s+\frac{n\;k_{m}\;k_p}{JR}}. \end{array} \end{aligned} $$
Notice that except for the term  $$\frac {n\;k_{m}\;k_d}{JR}s$$ in the numerator of G 1(s) and the use of k d instead of k v in the denominator, we have exactly retrieved the same closed-loop equations as in the previous example. Thus, we can proceed exactly as in the previous example to conclude:
  • The expressions in (3.131) hold again, with k d in place of k v.

  • Stability is ensured if k p > 0 and:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_d}{JR}&gt;0. \end{array} \end{aligned} $$
  • If no external disturbance is present, then the position reaches the constant desired position in a steady state.

  • The closed-loop system is faster as k p > 0 is chosen to be larger. This is because ω n increases as k p > 0 increases. However, as k p > 0 increases, the system becomes more oscillatory. This is because ζ decreases as k p increases.

  • The system response is less oscillatory as k d > 0 becomes larger. This is because ζ increases as k d increases.

  • The response can be rendered as fast and damped as desired merely by suitably selecting both k p and k d.

  • If a step external torque disturbance  $$T_p(s)=\frac {d}{s}$$ is present, then a position deviation exists in a steady state and it is given as:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\theta_2(t) &amp;\displaystyle =&amp;\displaystyle \frac{-\frac{1}{J}}{\frac{n\;k_{m}\;k_p}{JR}}d. \end{array} \end{aligned} $$
On the other hand, to study the transient response notice that the transfer function G 1(s) can be rewritten as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} G_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{k_d}{k_p}\omega_n^2s+\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2},\\ &amp;\displaystyle =&amp;\displaystyle \frac{k_d}{k_p}s\frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}+ \frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}, \end{array} \end{aligned} $$
where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_n=\sqrt{\frac{n\;k_{m}\;k_p}{JR}},\zeta=\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}\;k_d}{JR}\right)\frac{1}{2\sqrt{\frac{n\;k_{m}\;k_p}{JR}}}, {} \end{array} \end{aligned} $$
(3.133)
have been used. Hence:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \theta_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{k_d}{k_p}s\frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}\theta_d(s)+ \frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}\theta_d(s).{} \end{array} \end{aligned} $$
(3.134)
As  $$\theta _d(s)=\frac {A}{s}$$ , notice that the inverse Laplace transform of the second term in this expression is the function presented in (3.57), whereas the inverse Laplace transform of the first term is the time derivative of the function in (3.57) multiplied by the factor  $$\frac {k_d}{k_p}$$ . This means that, although the shape of θ 1(t) is similar to that of the function in (3.57), there are some differences. For instance, formulas for the rise time and overshoot given in (3.71) are only approximate in the present control problem.
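To visualize this difference, the following minimal sketch compares the step response of the complete G 1(s) with that of the pure second-order term. It assumes that the Control System Toolbox (providing tf and step) is available, and it uses the numerical values employed later in Fig. 3.36:
J=1; bJ=7; nkmJR=70; kp=40; kd=0.45;         % values used in Fig. 3.36
wn=sqrt(nkmJR*kp); z=(bJ+nkmJR*kd)/(2*wn);   % from (3.133)
G2nd=tf(wn^2,[1 2*z*wn wn^2]);               % second-order term alone
G1=tf([(kd/kp)*wn^2 wn^2],[1 2*z*wn wn^2]);  % complete G1(s), including the zero
step(2*G1,2*G2nd,0.5)                        % step of amplitude 2, as in Fig. 3.36
legend('complete G_1(s)','second-order term only')
The extra zero produces a smaller rise time and a larger overshoot, which is why the formulas in (3.71) are only approximate here.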
In the case in which some t r and M p are specified, the use of the expressions in (3.71) allows only approximate values for the required ω n and ζ to be computed. These data and (3.133) allow approximate values for the corresponding controller gains to be computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k_p&amp;\displaystyle =&amp;\displaystyle \frac{JR}{n\;k_{m}}\omega_n^2, k_d=\left[2\zeta\omega_n-\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right)\right]\frac{JR}{n\;k_{m}} . \end{array} \end{aligned} $$
Some simulation results are presented in Fig. 3.36 when J = 1,  $$b+\frac {n^2k_{m}k_e}{R}=7$$ , and  $$\frac{n\;k_{m}}{R}=70$$ . The corresponding MATLAB/Simulink diagram is shown in Fig. 3.37. The block theta_d is a step applied at t = 0 with a zero initial value and 2 as a final value. The external disturbance T p is a step applied at t = 0.5[s] with a zero initial value and 7 × 70 as a final value. When the simulation in Fig. 3.37 stops, the following MATLAB code is executed in an m-file to draw Fig. 3.36:
nn=length(InputOutput(:,1));
n=nn-1;
Ts=1/n;
t=0:Ts:1;
plot(t,InputOutput(:,2),'k-.',t,InputOutput(:,3),'k-');
hold on
plot(t,InputOutput(:,1),'k--');
axis([-0.02 1 0 3])
xlabel('time [s]')
ylabel('Position [rad]')
hold off
Fig. 3.36

Continuous: PD control law  $$u=k_p(\theta _d-\theta )+k_d \frac {d}{dt}(\theta _d-\theta )$$ , with k p = 40, k d = 0.45, is used. Dashed: control law  $$u=k_p(\theta _d-\theta )-k_v\dot \theta $$ , with k p = 40, k v = 0.45, is used. Dash–dot: desired position θ d

Fig. 3.37

MATLAB/Simulink diagram used to obtain results in Fig. 3.36

The dashed line represents the desired response, i.e., the inverse Laplace transform of:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}\theta_d(s),{} \end{array} \end{aligned} $$
(3.135)
which is identical to the response obtained with the controller  $$u=k_p(\theta _d-\theta )-k_v\dot \theta $$ , introduced in the previous section, with k p = 40, k v = 0.45. The continuous line represents the inverse Laplace transform of the complete expression for θ 1(s) in (3.134), i.e., the response when a PD position controller is used with k p = 40, k d = 0.45. Notice that the continuous line in Fig. 3.36 has a smaller rise time and a larger overshoot than the dashed line. The reason for this is that the PD control law  $$u=k_p(\theta _d-\theta )+k_d\frac {d}{dt}(\theta _d-\theta )$$ includes the term  $$k_d\frac {d\theta _d}{dt}$$ , which is not present in the control law  $$u=k_p(\theta _d-\theta )-k_v\dot \theta $$ . Recall that θ d is not a constant but a step signal, whose time derivative is very large at the instant when the step is applied. This contributes a large voltage spike at that instant, which results in a faster response and a larger overshoot of the step response when a PD controller is employed.

Notice that a nonzero steady-state deviation results when a step external torque disturbance appears at t = 0.5[s]. We stress that this deviation is the same in both responses, the continuous and the dashed line, because the same k p is used.

3.8.4 Proportional–Integral Velocity Control of a DC Motor

Consider again the velocity control problem in a permanent magnet brushed DC motor. Now assume that the following proportional–integral (PI) controller is employed:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_p(\omega_d-\omega)+k_i\int_0^t(\omega_d-\omega)dr, \end{array} \end{aligned} $$
where k i is a constant known as the integral gain, whereas ω d is a step signal standing for the desired velocity, i.e.,  $$\omega _d(s)=\frac {A}{s}$$ . The closed-loop system is obtained by replacing u, given above, in the model (3.122), i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} J\dot \omega+ \left(b+\;\frac{n^2\;k_{m}\;k_e}{R}\right)\;\omega=\;\frac{n\;k_{m}}{R}\left(k_p(\omega_d-\omega)+k_i\int_0^t(\omega_d-\omega)dr\right)-T_p. \end{array} \end{aligned} $$
Differentiating once with respect to time and rearranging yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot \omega{+} \left(\frac{b}{J}{+}\;\frac{n^2\;k_{m}\;k_e}{JR}{+}\frac{n\;k_{m}k_p}{JR}\right)\;\dot \omega{+}\frac{n\;k_{m}k_i}{JR}\omega{=}\;\frac{n\;k_{m}k_p}{JR}\dot\omega_d{+}\frac{n\;k_{m}k_i}{JR}\omega_d{-}\frac{1}{J}\dot T_p. \\ {} \end{array} \end{aligned} $$
(3.136)
This is a second-order system with two inputs. Hence, superposition has to be used, i.e., two signals ω 1 and ω 2 are defined such that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(t)&amp;\displaystyle =&amp;\displaystyle \omega_1(t)+\omega_2(t),\\ \ddot \omega_1&amp;\displaystyle +&amp;\displaystyle \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;\dot \omega_1+\frac{n\;k_{m}k_i}{JR}\omega_1=\;\frac{n\;k_{m}k_p}{JR}\dot\omega_d+\frac{n\;k_{m}k_i}{JR}\omega_d, \\ \ddot \omega_2&amp;\displaystyle +&amp;\displaystyle \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;\dot \omega_2+\frac{n\;k_{m}k_i}{JR}\omega_2=-\frac{1}{J}\dot T_p. \end{array} \end{aligned} $$
First, analyze the equation for ω 1. Applying the Laplace transform with zero initial conditions:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_1(s)&amp;\displaystyle =&amp;\displaystyle G_1(s)\omega_d(s). \\ G_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{ \;\frac{n\;k_{m}k_p}{JR}s+\frac{n\;k_{m}k_i}{JR}}{ s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;s+\frac{n\;k_{m}k_i}{JR} }.{} \end{array} \end{aligned} $$
(3.137)
Since the characteristic polynomial is second-degree, we proceed to equate:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;s+\frac{n\;k_{m}k_i}{JR} =s^2+2\zeta\omega_ns+\omega_n^2, \end{array} \end{aligned} $$
to find:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_n=\sqrt{\frac{n\;k_{m}k_i}{JR}}&gt;0, \ \zeta=\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\frac{1}{2\sqrt{\frac{n\;k_{m}k_i}{JR}}}. {} \end{array} \end{aligned} $$
(3.138)
These expressions require k i > 0 to ensure that ω n and ζ are real numbers. On the other hand, ζ > 0 is true if k p is chosen to satisfy:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} k_p&gt;-\frac{JR}{n\;k_{m}}\left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}\right).\end{array} \end{aligned} $$
Hence, G 1(s) is a stable transfer function and, thus, the velocity final value can be computed using the final value theorem:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\omega_1(t)&amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\omega_1(s),\\ &amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\frac{ \;\frac{n\;k_{m}k_p}{JR}s+\frac{n\;k_{m}k_i}{JR}}{ s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;s+\frac{n\;k_{m}k_i}{JR} }\frac{A}{s},\\ &amp;\displaystyle =&amp;\displaystyle \frac{ \frac{n\;k_{m}k_i}{JR}}{ \frac{n\;k_{m}k_i}{JR} }A=A. \end{array} \end{aligned} $$
It is concluded that the desired velocity is reached in a steady state, an important property of PI velocity control.
This result can be explained using everyday experience. As ω 1 = A is achieved in a steady state, the error signal e = A − ω(t) is expected to behave as depicted in Fig. 3.38. Recall that the integral operation  $$\int _0^t(\omega _d-\omega _1(r))dr$$ represents the area (positive) under the function shown in Fig. 3.38. This means that the above integral operation remains constant at a positive value when ω 1 = ω d = A (because the integrand becomes zero in these conditions). Then, according to  $$u=k_p(\omega _d-\omega )+k_i\int _0^t(\omega _d-\omega )dr $$ , a positive voltage given by  $$u=k_i\int _0^t(\omega _d-\omega )dr $$ continues to be applied at the motor terminals. This allows the motor to stay rotating at the desired velocity, i.e., the condition ω 1 = ω d = A holds forever.
Fig. 3.38

The integral  $$ \int _0^t(\omega _d-\omega _1(r))dr$$ is represented by the shadowed area under function e = ω d − ω 1. In this figure it is assumed that ω d = A > ω 1(0)

On the other hand, to study the transient response, notice that the transfer function in (3.137) can be rewritten as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} G_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{k_p}{k_i}\omega_n^2s+\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2},\\ &amp;\displaystyle =&amp;\displaystyle \frac{k_p}{k_i}s\frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}+ \frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}, \end{array} \end{aligned} $$
where (3.138) has been used. Hence, according to (3.137):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{k_p}{k_i}s\frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}\omega_d(s)+ \frac{\omega_n^2}{s^2+ 2\zeta\omega_n\;s+\omega_n^2}\omega_d(s). \end{array} \end{aligned} $$
As  $$\omega _d(s)=\frac {A}{s}$$ , notice that the inverse Laplace transform of the second term in this expression is the function presented in (3.57), whereas the inverse Laplace transform of the first term is the time derivative of the function in (3.57) multiplied by the factor  $$\frac {k_p}{k_i}$$ . This means that, although the shape of ω 1(t) is similar to that of the function in (3.57), there are some differences. For instance, the rise time and overshoot formulas given in (3.71) are only approximate in the present control problem. This is a similar situation to that found in the study of the transient response of PD position control of a DC motor. Despite this, the expressions in (3.138) are still valid and they can be used to conclude the following:
  • A faster response is achieved as k i > 0 is chosen to be larger. This is because ω n increases as k i > 0 is increased. However, a more oscillatory response is obtained as k i > 0 is chosen to be larger. This is because ζ decreases as k i increases.

  • The oscillation can be reduced by increasing k p > 0. This is because ζ increases as k p > 0 increases.

These observations can be used as a tuning procedure to select both k p and k i for a PI velocity controller. Note that this is a trial and error procedure.
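A minimal sketch of this trial and error procedure is the following. It assumes the Control System Toolbox; the plant values are those used later in Fig. 3.39 and the gain combinations are merely illustrative:
J=1; bJ=7; nkmJR=70;                   % plant values as in Fig. 3.39
for gains=[0.7 10; 0.7 40; 2.0 40]'    % each column is [kp; ki]
    kp=gains(1); ki=gains(2);
    G1=tf(nkmJR*[kp ki],[1, bJ+nkmJR*kp, nkmJR*ki]);   % (3.137)
    step(2*G1), hold on                % desired velocity A = 2
end
hold off, legend('k_p=0.7, k_i=10','k_p=0.7, k_i=40','k_p=2.0, k_i=40')
Increasing k i from 10 to 40 speeds up the response but renders it more oscillatory, whereas increasing k p from 0.7 to 2.0 damps the oscillations, as stated above.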
Finally, consider the expression for ω 2:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot \omega_2&amp;\displaystyle +&amp;\displaystyle \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;\dot \omega_2+\frac{n\;k_{m}k_i}{JR}\omega_2=-\frac{1}{J}\dot T_p. \end{array} \end{aligned} $$
Using the Laplace transform yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_2(s)&amp;\displaystyle =&amp;\displaystyle \frac{ -\frac{1}{J}s}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;s+\frac{n\;k_{m}k_i}{JR}} T_p(s). \end{array} \end{aligned} $$
Then, the final value of ω 2 can be computed as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \lim_{t\to\infty}\omega_2(t)&amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\omega_2(s),\\ &amp;\displaystyle =&amp;\displaystyle \lim_{s\to0}s\frac{ -\frac{1}{J}s}{s^2+ \left(\frac{b}{J}+\;\frac{n^2\;k_{m}\;k_e}{JR}+\frac{n\;k_{m}k_p}{JR}\right)\;s+\frac{n\;k_{m}k_i}{JR}} \frac{d}{s}=0, \end{array} \end{aligned} $$
when the external torque disturbance is a step signal, i.e.,  $$T_p(s)=\frac {d}{s}$$ . Recall that it has already been ensured that the characteristic polynomial in this expression has two roots with a negative real part. Hence, the steady-state velocity deviation due to a constant external torque disturbance is zero, which is very good news.
The closed-loop response of velocity under PI control is shown in Fig. 3.39, where J = 1,  $$b+\frac {n^2k_{m}k_e}{R}=7$$ , and  $$\frac{n\;k_{m}}{R}=70$$ are used. These values are employed merely because they allow representative, clear, responses to be obtained. The corresponding MATLAB/Simulink diagram is shown in Fig. 3.40. The block ω d is a step applied at t = 0 with a zero initial value and 2 as a final value. The external disturbance T p is a step applied at t = 0.5[s] with a zero initial value and 0.4 × 70 as a final value. Once the simulation stops, the following MATLAB code is executed in an m-file to draw Fig. 3.39:
nn=length(InputOutput(:,1));
n=nn-1;
Ts=1/n;
t=0:Ts:1;
plot(t,InputOutput(:,1),'k--',t,InputOutput(:,2),'k-');
axis([-0.02 1 0 2.5])
xlabel('time [s]')
ylabel('Velocity [rad/s]')
Fig. 3.39

PI velocity control when a constant external disturbance appears at t = 0.5[s]. The PI controller gains are k p = 0.7 and k i = 10. Continuous: velocity ω(t). Dashed: desired velocity ω d

Fig. 3.40

MATLAB/Simulink diagram to obtain results in Fig. 3.39

Notice that the velocity reaches its desired value before and after a constant external torque disturbance appears at t = 0.5[s].

3.8.5 Proportional, PI, and PID Control of First-Order Systems

Consider the water level system studied in Example 3.2 and depicted in Fig. 3.3. Suppose that a water pump is employed to produce a water flow q i that is proportional to the voltage u applied at motor terminals actuating on the pump, i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} q_i=k_1 u,{} \end{array} \end{aligned} $$
(3.139)
where k 1 is a positive constant. Combining (3.139) and (3.28), the following is obtained:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{dh}{d t}+a h=kk_1u. \end{array} \end{aligned} $$
Notice that this differential equation can also be represented by a transfer function with the form:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} G(s)=\frac{Y(s)}{U(s)}=\frac{b_2}{s+b_1}.{} \end{array} \end{aligned} $$
(3.140)
Hence, if a proportional level controller is used, conclusions identical to those for the proportional control of velocity are obtained and, if a PI level controller is used, conclusions identical to those for the proportional–integral velocity controller are obtained. Moreover, the same conclusions are valid for any first-order plant whose transfer function has the structure in (3.140).
On the other hand, first-order systems represented by (3.140) do not require a controller with derivative action. This can be explained as follows. Suppose that the following proportional–integral–derivative (PID) controller:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_pe+k_d\frac{de}{dt}+k_i\int_0^te(r)dr, e=y_d-y \end{array} \end{aligned} $$
is used for the plant in (3.140), i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot y+b_1 y&amp;\displaystyle =&amp;\displaystyle b_2\left(k_pe+k_d\frac{de}{dt}+k_i\int_0^te(r)dr\right),\\ (1+b_2k_d)\dot y+(b_1+b_2k_p)y&amp;\displaystyle =&amp;\displaystyle b_2k_py_d+b_2k_d\dot y_d+b_2k_i\int_0^te(r)dr, \end{array} \end{aligned} $$
and differentiating once with respect to time:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} (1+b_2k_d)\ddot y+(b_1+b_2k_p)\dot y+b_2k_iy=b_2k_d\ddot y_d+b_2k_p\dot y_d+b_2k_iy_d. \end{array} \end{aligned} $$
The corresponding closed-loop transfer function is:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} H(s)=\frac{Y(s)}{Y_d(s)}=\frac{b_2k_ds^2+b_2k_ps+b_2k_i}{(1+b_2k_d)s^2+(b_1+b_2k_p)s+b_2k_i}.{} \end{array} \end{aligned} $$
(3.141)
Notice that the characteristic polynomial is of second degree. Thus, only two free parameters are necessary to assign the corresponding two roots to any location in the s complex plane, i.e., to achieve any desired transient response. However, three free parameters appear in this characteristic polynomial, i.e., k p, k d, and k i. Hence, one of them, namely k d, is not necessary.

On the other hand, the transfer function in (3.141) has numerator and denominator polynomials of the same degree, i.e., it has the same number of poles and zeros. It has been shown in Examples 3.6 and 3.7 that the time response of a system whose transfer function has the same number of poles and zeros is discontinuous when a step input is applied. It is important to remark that such a response is not possible, for instance, in a velocity control system, because the velocity cannot be discontinuous: this would require a very large voltage to be applied at the motor terminals during a very short time interval. Mathematically, this is explained by the time derivative of the desired value y d (a discontinuous step signal), which is introduced by the derivative action of the controller. However, such large voltage spikes are not possible in practice. Thus, an unpredictable response is expected. Moreover, in the case of a velocity control system, the derivative action on the velocity (equivalent to measuring acceleration) results in a strong amplification of noise, as velocity measurements are recognized to be noisy signals.

Thus, it is concluded that no performance improvement is expected when using a PID controller instead of a PI controller for first-order plants.
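A minimal sketch illustrating this conclusion is the following. It assumes the Control System Toolbox; the plant parameters b 1, b 2 and the controller gains are hypothetical values chosen only for illustration:
b1=1; b2=2;                  % hypothetical first-order plant (3.140)
kp=3; ki=10;
for kd=[0 0.5]               % PI (kd = 0) versus PID (kd > 0)
    H=tf(b2*[kd kp ki],[1+b2*kd, b1+b2*kp, b2*ki]);   % (3.141)
    step(H), hold on
end
hold off, legend('PI, k_d = 0','PID, k_d = 0.5')
With k d = 0 the two closed-loop poles can still be placed freely by means of k p and k i, whereas k d > 0 only adds a zero that makes the step response start with a jump, in accordance with the discussion above.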

3.9 Case Study: A DC-to-DC High-Frequency Series Resonant Power Electronic Converter

In Sect. 2.​5, Chap. 2 the mathematical model of a DC-to-DC high-frequency series resonant power electronic converter was obtained. This model is given in (2.​142)–(2.​144) and it is rewritten here for ease of reference:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} L\frac{di}{dt}&amp;\displaystyle =&amp;\displaystyle -v-v_0sign(i)+E(t),{} \end{array} \end{aligned} $$
(3.142)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C\frac{dv}{dt}&amp;\displaystyle =&amp;\displaystyle i,{} \end{array} \end{aligned} $$
(3.143)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} C_0\frac{dv_0}{dt}&amp;\displaystyle =&amp;\displaystyle abs(i)-\frac{v_0}{R},{} \end{array} \end{aligned} $$
(3.144)
where:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} sign(i)=\left\{\begin{array}{cc}+1,&amp;i&gt;0\\ -1,&amp;i&lt;0\end{array}\right.,{} \end{array} \end{aligned} $$
(3.145)
whereas abs(i) stands for the absolute value of i. The operation of this circuit is studied in the present section through the solution of the mathematical model in (3.142)–(3.144). Performing a coordinate change for the variables involved is very useful. Hence, the following variables are defined (see Sect. 2.​5, Chap. 2, for an explanation of the meaning of the parameters involved):
 $$\displaystyle \begin{aligned} \begin{array}{rcl} z_1&amp;\displaystyle =&amp;\displaystyle \frac{i}{E}\sqrt{\frac{L}{C}}, z_2=\frac{v}{E}, z_3=\frac{v_0}{E},\tau=\frac{t}{\sqrt{LC}}.{} \end{array} \end{aligned} $$
(3.146)
Replacing this in (3.142), (3.143), (3.144) yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} L\frac{d\left(E\sqrt{\frac{C}{L}}z_1 \right)}{d \left(\tau\sqrt{LC}\right)}&amp;\displaystyle =&amp;\displaystyle -Ez_2-Ez_3sign(z_1)+E(t),\\ C\frac{d\left(Ez_2\right)}{d\left(\tau\sqrt{LC}\right)}&amp;\displaystyle =&amp;\displaystyle z_1E\sqrt{\frac{C}{L}},\\ C_0\frac{d(Ez_3)}{d\left(\tau\sqrt{LC}\right)}&amp;\displaystyle =&amp;\displaystyle abs\left(E\sqrt{\frac{C}{L}}z_1 \right)-\frac{Ez_3}{R}. \end{array} \end{aligned} $$
After some simplifications, the following is found:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot z_1&amp;\displaystyle =&amp;\displaystyle -z_2 -z_3sign(z_1)+u,{} \end{array} \end{aligned} $$
(3.147)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot z_2&amp;\displaystyle =&amp;\displaystyle z_1,{} \end{array} \end{aligned} $$
(3.148)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \alpha\dot z_3&amp;\displaystyle =&amp;\displaystyle abs(z_1)-\frac{z_3}{Q},{} \end{array} \end{aligned} $$
(3.149)
where the dot “⋅” represents differentiation with respect to the normalized time τ, whereas α = C 0∕C,  $$Q=R\sqrt {C/L}$$ and u is a variable taking only the two possible values + 1 (when E(t) = +E) and − 1 (when E(t) = −E). The output capacitor C 0 is commonly chosen to be very large compared with the value of the resonant capacitor C, because this ensures that z 3, i.e., v 0, remains approximately constant during several cycles of the resonant current z 1, i.e., i. Hence, under this assumption (a constant z 3), (3.149) can be neglected and (3.147), (3.148), suffice to represent the evolution of the resonant variables z 1 and z 2, at least for several oscillations.
For operation at resonance, the transistors Q 1, Q 3 are turned on when z 1 > 0, i.e., when i > 0; hence, u = +1 because E(t) = +E in this case, according to Figs. 2.​34a, 2.​35 and 2.​36. On the other hand, the transistors Q 2, Q 4 are turned on when z 1 < 0, i.e., when i < 0, and hence u = −1 because E(t) = −E in this case. One way of achieving this is by designing u = sign(z 1), where sign(z 1) = +1 if z 1 > 0 and sign(z 1) = −1 if z 1 < 0. This means that, according to (3.147) and (3.148), the evolution of the resonant variables can be described as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot z_1&amp;=&amp;-z_2+v_e, {} \end{array} \end{aligned} $$
(3.150)
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot z_2&amp;=&amp;z_1,{} \\ v_e&amp;=&amp;\left\{\begin{array}{cc}1-z_3,&amp;Q_1,Q_3\mathrm{ turned-on}\\ -(1-z_3),&amp;Q_2,Q_4\mathrm{ turned-on}\end{array}\right., \end{array} \end{aligned} $$
(3.151)
where z 3 is assumed to be constant. Using the variable change z 4 = z 2 − v e in the previous equations yields  $$\dot z_4=\dot z_2=z_1$$ , because a constant v e implies that  $$\dot v_e=0$$ , and  $$\ddot z_4=\dot z_1=-z_2+v_e=-z_4$$ , i.e.,:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot z_4+z_4&amp;\displaystyle =&amp;\displaystyle 0.{} \end{array} \end{aligned} $$
(3.152)
This is a second-order linear ordinary differential equation with constant coefficients such as that defined in (3.53) with ζ = 0 and ω n = 1. As the excitation, or input, of this differential equation is zero, then A = 0 and the complete solution is given by the natural response alone. Hence, according to Sect. 3.3, i.e., expressions in (3.62) and (3.63), the solution of (3.152) is given as:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} z_4(\tau)&amp;\displaystyle =&amp;\displaystyle p(\tau)=\mathcal{L}^{-1}\left\{\frac{z_{40}(s+2\zeta\omega_n)+ \dot z_{40}}{s^2+2\zeta\omega_n s+\omega_n^2}\right\}=\mathcal{L}^{-1}\left\{\frac{z_{40}s+ \dot z_{40}}{s^2+1}\right\},\\ &amp;\displaystyle =&amp;\displaystyle \mathcal{L}^{-1}\left\{\frac{z_{40}s}{s^2+1}+\frac{ \dot z_{40}}{s^2+1}\right\}=z_{40}\cos{}(\tau)+\dot z_{40}\sin{}(\tau),{} \end{array} \end{aligned} $$
(3.153)
where the following Laplace transform pairs have been employed [4], Ch. 32:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{L}^{-1}\Bigg\{\frac{s} {s^2+a^2}\Bigg\}=\cos{}(a \tau),\\ \mathcal{L}^{-1}\Bigg\{\frac{a} {s^2+a^2}\Bigg\}=\sin{}(a\tau), \end{array} \end{aligned} $$
whereas z 40 = z 4(0) = z 2(0) − v e and  $$\dot z_{40}=\dot z_4(0)=z_1(0)$$ . Differentiating (3.153) once with respect to τ yields:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \dot z_4(\tau)&amp;\displaystyle =&amp;\displaystyle -z_{40}\sin{}(\tau)+\dot z_{40}\cos{}(\tau).{} \end{array} \end{aligned} $$
(3.154)
Then, using (3.153) and (3.154), it is easy to verify that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} z_4^2(\tau)+\dot z_4^2(\tau)&amp;\displaystyle =&amp;\displaystyle z_{40}^2+\dot z_{40}^2.{} \end{array} \end{aligned} $$
(3.155)
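Indeed, substituting (3.153) and (3.154) and expanding the squares, the cross terms  $$2z_{40}\dot z_{40}\sin{}(\tau)\cos{}(\tau)$$ cancel and, since  $$\sin ^2(\tau )+\cos ^2(\tau )=1$$ :
 $$\displaystyle \begin{aligned} \begin{array}{rcl} z_4^2(\tau)+\dot z_4^2(\tau)&amp;\displaystyle =&amp;\displaystyle (z_{40}\cos{}(\tau)+\dot z_{40}\sin{}(\tau))^2+(-z_{40}\sin{}(\tau)+\dot z_{40}\cos{}(\tau))^2,\\ &amp;\displaystyle =&amp;\displaystyle z_{40}^2+\dot z_{40}^2. \end{array} \end{aligned} $$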
If the solution of (3.152) is plotted on the phase plane  $$\dot z_4-z_4$$ , then a circle is obtained whose center is located at origin  $$(z_4,\dot z_4)=(0,0)$$ , and the radius is determined by the initial conditions, i.e.,  $$\sqrt {z_{40}^2+\dot z_{40}^2}$$ . Moreover, according to z 4 = z 2 − v e and  $$\dot z_4=z_1$$ , it is concluded that the solution of (3.150), (3.151), when plotted on the phase plane z 1 − z 2, is a circle centered at (z 2, z 1) = (v e, 0), with a radius  $$\sqrt {(z_{20}-v_e)^2+ z_{10}^2}$$ where z 20 = z 2(0) and z 10 = z 1(0). It is important to say that, according to (3.150) and (3.151), the values of z 1 and z 2 cannot be discontinuous because that would require infinite values for  $$\dot z_1$$ and  $$\dot z_2$$ respectively, which is not possible as the right-hand sides of (3.150) and (3.151) cannot take infinite values. This means that, when combining the solutions in both regions, i.e., when z 1 > 0 and when z 1 < 0, the initial conditions on each region must be equal to the final values reached in the previous region. Hence, Fig. 3.41 is obtained. The closed trajectory shown in this figure indicates that the resonant variables z 2, z 1, i.e., v and i, have sustained oscillations. It should be noted that this closed trajectory is clockwise-oriented because, according to (3.151), z 2 grows when z 1 > 0, i.e., if  $$\dot z_2&gt;0$$ then z 2 grows, and z 2 decreases when z 1 < 0. Notice that v e can only take two different values, 1 − z 3 when z 1 > 0, and − (1 − z 3) when z 1 < 0. Thus, the only way in which the situation represented in Fig. 3.41 can be obtained is that z 3 = 1, i.e., that v e = 0, in both regions. This means that the output voltage is equal to the power supply voltage, i.e., v 0 = E.
Fig. 3.41

Evolution of the resonant variables z 2 and z 1 when u = sign(z 1)
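This behavior can be reproduced numerically. The following minimal MATLAB sketch integrates the complete normalized model (3.147)–(3.149) with u = sign(z 1); the values of α, Q, and the initial condition are hypothetical, chosen only for illustration:
alpha=5; Q=2;            % alpha = C0/C, Q = R*sqrt(C/L), assumed values
f=@(t,z)[-z(2)-z(3)*sign(z(1))+sign(z(1));   % (3.147) with u = sign(z1)
         z(1);                               % (3.148)
         (abs(z(1))-z(3)/Q)/alpha];          % (3.149)
[t,z]=ode45(f,[0 150],[0.5; 0; 0]);          % arbitrary initial condition
subplot(1,2,1), plot(z(:,2),z(:,1)), xlabel('z_2'), ylabel('z_1'), axis equal
subplot(1,2,2), plot(t,z(:,3)), xlabel('\tau'), ylabel('z_3')
The trajectory on the z 2 − z 1 plane approaches a closed circle like that in Fig. 3.41, while z 3 tends toward 1, in agreement with the conclusion that v 0 = E when operating at resonance.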

An important application of DC-to-DC high-frequency series resonant power electronic converters is to deliver output voltages that are smaller than the power supply voltage, i.e., v 0 < E or z 3 < 1. Define a function s = z 1 − γz 2, with γ a positive constant, and assign u = sign(s). This defines the four regions shown in Fig. 3.42. Notice that the transistors Q 1, Q 3 are turned on when u = +1, i.e., above the straight line z 1 = γz 2, but the electric current flows through them only when z 1 > 0 as, even when Q 1, Q 3 are turned on, the electric current flows through the diodes D 1, D 3 when z 1 < 0 (the electric current only flows in one direction through any of the transistors Q 1, Q 2, Q 3, Q 4). On the other hand, the transistors Q 2, Q 4 are turned on when u = −1, i.e., below the straight line z 1 = γz 2, but the electric current flows through them only when z 1 < 0, as, despite Q 2, Q 4 being turned on, the electric current flows through diodes D 2, D 4 when z 1 > 0.
Fig. 3.42

Evolution of the resonant variables z 2 and z 1 when u = sign(s) with s = z 1 − γz 2

The resonant variables still evolve according to (3.150) and (3.151), but now v e takes the following constant values:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} v_e&amp;\displaystyle =&amp;\displaystyle \left\{\begin{array}{cc}1-z_3,&amp;\displaystyle Q_1,Q_3\mathrm{ in saturation}, \ (z_1&gt;\gamma z_2,\ z_1&gt;0)\\ -(1-z_3),&amp;\displaystyle Q_2,Q_4\mathrm{ in saturation},\ (z_1&lt;\gamma z_2,\ z_1&lt;0)\\ 1+z_3,&amp;\displaystyle D_1,D_3\mathrm{ conduct},\ (z_1&gt;\gamma z_2,\ z_1&lt;0)\\ -(1+z_3),&amp;\displaystyle D_2,D_4\mathrm{ conduct},\ (z_1&lt;\gamma z_2,\ z_1&gt;0) \end{array}\right. \end{array} \end{aligned} $$
This means that, again, and according to arguments between (3.150) and the paragraph after (3.155), the resonant variables describe circles on the z 2 − z 1 plane. Radii of these circles are determined by the initial conditions in each region, which are equal to the final values in the previous region. The center of these circles is located at the following points, depending on the operation region:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} (z_2,z_1)&amp;\displaystyle =&amp;\displaystyle (1-z_3,0), z_1&gt;\gamma z_2,\ z_1&gt;0,\\ (z_2,z_1)&amp;\displaystyle =&amp;\displaystyle (-1+z_3,0), z_1&lt;\gamma z_2,\ z_1&lt;0,\\ (z_2,z_1)&amp;\displaystyle =&amp;\displaystyle (1+z_3,0), z_1&gt;\gamma z_2,\ z_1&lt;0,\\ (z_2,z_1)&amp;\displaystyle =&amp;\displaystyle (-1-z_3,0), z_1&lt;\gamma z_2,\ z_1&gt;0. \end{array} \end{aligned} $$
Figure 3.42 depicts the corresponding situation when γ = 3. Notice that, again, the resonant variables z 2, z 1, i.e., v and i, describe sustained oscillations. To obtain the closed trajectory shown in Fig. 3.42, we may proceed as follows. Propose any z 3 < 1 and any point in the z 2 − z 1 plane as an initial condition. Using a pair of compasses, draw circles centered at the above indicated points according to the respective operation region. According to the previous discussion and (3.151), these circles are clockwise-oriented. The reader will realize that a spiral appears, similar to the dashed line in Fig. 3.42, which finally ends on a closed trajectory. The fact that this closed trajectory is reached from any initial point on the z 2 − z 1 plane is indicative that it is stable or attractive. It is possible to proceed analogously using some z 3 > 1 to verify that a closed trajectory is also reached in such a case. This indicates that it is not possible for a series resonant converter to deliver, in a steady state, an output voltage v 0 larger than the power supply voltage E, i.e., z 3 > 1 is not possible in a steady state. If an output voltage v 0 > E is desired, then the parallel resonant converter shown in Fig. 2.34b is a suitable alternative.
On the other hand, arcs with a center at (z 2, z 1) = (1 − z 3, 0) and (z 2, z 1) = (−1 − z 3, 0), shown in Fig. 3.42, are drawn one after the other. Then, if both of them were centered at the origin (z 2, z 1) = (0, 0), they would together span an angle of 180°. However, using basic geometry in Fig. 3.42 it is possible to conclude that the arcs with center at (z 2, z 1) = (1 − z 3, 0) and (z 2, z 1) = (−1 − z 3, 0) together describe an angle smaller than 180° despite completing a semicircle on both resonant variables z 2, z 1. Hence, the four arcs composing the whole closed trajectory in Fig. 3.42 together describe an angle smaller than 360° despite a complete cycle being described on both resonant variables. Notice that each one of these arcs is covered at a frequency that is still ω n = 1, but, according to the above description, less time is now required to complete an oscillation as the four arcs together describe a smaller angle at the same angular frequency. Thus, the resonant variables oscillate at a frequency ω o, which is larger than ω n = 1, i.e., ω o > ω n. Using (3.146), it is possible to verify that:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_{nt}&amp;\displaystyle =&amp;\displaystyle \frac{\omega_n}{\sqrt{LC}}=\frac{1}{\sqrt{LC}},\omega_{ot}=\frac{\omega_o}{\sqrt{LC}}&gt;\omega_{nt}, \end{array} \end{aligned} $$
where ω nt and ω ot stand, respectively, for the resonance frequency and the operation frequency, expressed with respect to the real time t. Hence, it is concluded that the circuit must operate at frequencies greater than the resonance frequency to deliver output voltages smaller than the power supply voltage, v 0 < E, i.e., z 3 < 1. This is the reason why, when the transistors are activated according to u = sign(s), the circuit in Figs. 2.34a and 2.35 is designated a DC-to-DC high-frequency series resonant power electronic converter. Further information on DC-to-DC resonant power electronic converters can be found in [8], where converters of this class (series and parallel) are designed, constructed, controlled, and experimentally tested, and in [7], where the method used here to analyze series resonant converters by drawing arcs of circles is studied. On the other hand, the use of the phase plane z 2 − z 1 to study the operation of resonant power converters was introduced for the first time in [9] and [10].

3.10 Summary

The systems to be controlled in engineering, and every component in a closed-loop control system, are described by ordinary, linear differential equations with constant coefficients. Hence, for a control engineer, the study of differential equations must be oriented toward the understanding of the properties of the differential equations that shape the solution waveform.

The solution of an ordinary, linear differential equation with constant coefficients is given as the addition of the natural and the forced responses. The natural response only depends on the roots of the differential equation characteristic polynomial and it is always present, even when the excitation function (or input) is zero. On the other hand, the forced response depends on the excitation function (or input), i.e., the forced response is zero if the excitation function is zero. The forced response can be rendered equal to the excitation function, or input. Furthermore, if the differential equation is stable, i.e., if the natural response tends toward zero, the complete response converges to the forced response. Hence, the input of a differential equation can be chosen to be equal to the desired behavior of the differential equation solution.

Thus, the rationale behind control system design is the following. Given a system or plant to be controlled, i.e., a differential equation, choose a controller, i.e., another differential equation, such that when feedback connects the two differential equations, a new equivalent closed-loop differential equation is obtained with the following properties: 1) it is stable, 2) the forced response is equal to the input applied to the closed-loop system, i.e., the desired input of the closed-loop system, and 3) the natural response vanishes as fast and damped as desired.

The features listed in the previous paragraph are accomplished as follows:

  1. 1.

    Stability is determined exclusively by the roots of the differential equation characteristic polynomial or, equivalently, by the poles of the corresponding transfer function (see Sect. 3.4).

     
  2. 2.

    The steady-state response is achieved by rendering the forced response equal to the input applied to the closed-loop system. It is explained in Sect. 3.4 that, in the case of constant inputs, this depends on the constant terms of the numerator and denominator polynomials of the closed-loop transfer function.

     
  3. 3.

    The transient response refers to the suitable shaping of the natural response waveform and is achieved by suitably assigning the closed-loop poles.

     

Controller design for arbitrary plants is studied in Chaps. 5, 6, 7 using the root locus method in addition to the frequency response and the state space approach. The main idea is to render the closed-loop system stable, so that the steady-state response is equal to the closed-loop system input and the transient response suitably shaped.

3.11 Review Questions

  1. 1.

    Why is a transfer function unstable if it has one pole with a zero real part that is repeated two or more times?

     
  2. 2.

    It has been said that the forced response has the same waveform as the input. Explain why this is not ensured if the transfer function has poles with a zero real part. Give an example of an input and a transfer function where this happens.

     
  3. 3.

    When an incandescent lamp is turned on it becomes hot. However, its temperature does not grow to infinity, but it stops at a certain value. Of the differential equations that have been studied in this chapter, which is the one that better describes the evolution in time of the incandescent lamp temperature? Why?

     
  4. 4.

    What is the relationship between a transfer function and a differential equation?

     
  5. 5.

    What are the conditions that determine the stability of a transfer function?

     
  6. 6.

    What are the necessary assumptions for a permanent magnet brushed DC motor to behave as an integrator or a double integrator?

     
  7. 7.

    What do you think the natural response waveform of a second-order differential equation whose characteristic polynomial has two real, repeated, and negative roots looks like?

     
  8. 8.

    It has been explained that the time functions composing the natural response are determined by the transfer function poles, i.e., roots of the characteristic polynomial, but what is the effect of the transfer function zeros? What do you think determines the coefficients when using partial fraction expansion to apply the inverse Laplace transform? (see Sects. 3.1 and 3.5).

     
  9. 9.

    Indicate the zones of the s complex plane where the poles have to be located to achieve (i) fast but not oscillatory responses, (ii) fast and oscillatory responses, (iii) permanent oscillatory responses, and (iv) unstable responses.

     

3.12 Exercises

  1. 1.
    Consider the water level system in Example 3.2. The response in Fig. 3.43 is obtained when a constant water flow q i is applied and the tank has a constant cross-section of 0.5[m2]. Find the values of both R and q i.
    Fig. 3.43

    Water level when a constant water flow is applied

     
  2. 2.

    Consider a proportional level control, i.e., when q i = k 1 k p(h d − h), with h d the constant desired level. Use the results in the previous exercise to find the gain k p such that the steady-state error is less than or equal to 20% of the desired value and the time constant is less than or equal to one third of the time constant in the previous exercise, i.e., the open-loop time constant. Find the maximal value of q i that must be supplied. This is important for selecting the water pump to be employed. Suppose that the opening of the output valve is increased such that its hydraulic resistance decreases to 50%. Find the increment or the decrement, as a percentage, of the level steady-state error and the new time constant.

     
  3. 3.

    Consider a proportional–integral level control. Show that oscillations in the water level appear if an adequate gain k i > 0 is employed. Use your everyday experience to explain this phenomenon.

     
  4. 4.
    Given the following differential equation:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \ddot y +127 \dot y+570 y=3990 u, y(0)=0,\dot y(0)=0, u=1, \end{array} \end{aligned} $$
    find values of ζ, ω n and k in (3.53). With these values, write an expression for y(t).
     
  5. 5.
    Consider the mass-spring-damper system described in Example 3.8. The response shown in Fig. 3.44 is obtained when the constant force f=0.5[N] is applied. Find the values of b, K, and m.
    Fig. 3.44

    Mass position in a mass-spring-damper system when a constant force is applied at the input. The dotted line stands for the final value of the mass position

     
  6. 6.
    Consider the following transfer functions:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} Y_1(s)&amp;\displaystyle =&amp;\displaystyle G_1(s)U(s), Y_2(s)=G_2(s)U(s),\\ G_1(s)&amp;\displaystyle =&amp;\displaystyle \frac{ab}{(s+a)(s+b)}, G_2(s)=\frac{b}{s+b}, a&gt;0,\ b&gt;0, \end{array} \end{aligned} $$
    where U(s) = A∕s, with A a constant. Perform simulations where a takes values that increase from small to large (with respect to b) and compare graphically y 1(t) and y 2(t). Explain what is observed.
     
  7. 7.
    According to Sect. 3.1, it is known that from any initial condition the following differential equation  $$\dot y+7y=2$$ has a solution given as  $$y(t)=E\mathrm {e}^{-7t}+D$$ , where E and D are some constants that are not going to be computed in this exercise. Proceeding similarly, find the solution for each one of the following differential equations.
     $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle \ddot y+4\dot y+y=2,\\ &amp;\displaystyle &amp;\displaystyle -\ddot y+\dot y+10 y=3t,\\ &amp;\displaystyle &amp;\displaystyle \ddot y+7\dot y=2\sin{}(t),\\ &amp;\displaystyle &amp;\displaystyle \ddot y+3\dot y+2y=2e^{-t},\\ &amp;\displaystyle &amp;\displaystyle \ddot y+2\dot y+2y=2\cos{}(2t)-4\sin{}(2t),\\ &amp;\displaystyle &amp;\displaystyle \ddot y-4y=8t^2-4,\\ &amp;\displaystyle &amp;\displaystyle y^{(4)}+11y^{(3)}+28\ddot y=10. \end{array} \end{aligned} $$
     
  8. 8.
    For each one of the following differential equations, find the value of  $$\lim _{t\to \infty }y(t)$$ . Notice that the same result can be obtained, no matter what the initial conditions are.
     $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle y^{(3)}+\ddot y-2\dot y+y=8,\\ &amp;\displaystyle &amp;\displaystyle \ddot y+\dot y-10 y=\cos{}(5t),\\ &amp;\displaystyle &amp;\displaystyle \ddot y+a\dot y+by=kA, a&gt;0, \ b&gt;0,\ a,b,A,k\ \mathrm{are constants}.\end{array} \end{aligned} $$
     
  9. 9.
    Which one of the following differential equations has a solution with a smaller overshoot and which one has a solution with a smaller rise time?
     $$\displaystyle \begin{aligned} \begin{array}{rcl} &amp;\displaystyle &amp;\displaystyle 3\ddot y+3\dot y+6y=2A,\\ &amp;\displaystyle &amp;\displaystyle \ddot y+4\dot y+10y=9A, \end{array} \end{aligned} $$
    A is a constant.
     
  10. 10.

    Compute the poles of a second-order system with a 25% overshoot and 0.2[s] rise time.

     
  11. 11.

    Consider the proportional position control with velocity feedback of the following DC motor:  $$\ddot \theta +0{.}4\dot \theta =30u$$ , where θ is the angular position, whereas u is the applied voltage. Find the controller gains to achieve a 0.01[s] rise time and a 10% overshoot.

     
  12. 12.
    Consider the solution of a second-order differential equation when a step input is applied and assuming zero initial conditions, which is presented in (3.57). Find the maxima and minima of this response and solving for the first maximum, show that overshoot can be computed as:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} M_p(\%)&amp;\displaystyle =&amp;\displaystyle 100 \times \mathrm{e}^{-\frac{\zeta}{\sqrt{1-\zeta^2}}\pi}.\end{array} \end{aligned} $$
    Find the times when the natural response is zero and solving for the first of these times, show that:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} t_r&amp;\displaystyle =&amp;\displaystyle \frac{1}{\omega_d}\left[\pi-\arctan\left(\frac{\sqrt{1-\zeta^2}}{\zeta}\right)\right]. \end{array} \end{aligned} $$
     
  13. 13.
    Consider the following expressions:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} Y(s)=\frac{as+b}{s^2+c}U(s), Z(s)=\frac{d}{s^2+c}U(s), \end{array} \end{aligned} $$
    where U(s) = A∕s, with A, a, b, c, d positive constants. Express y(t) as a function of z(t).
     
  14. 14.
    Consider the following transfer function:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} G(s)=\frac{c}{s^2+bs+c}. \end{array} \end{aligned} $$
    What must be done with b or c to achieve a faster system response? What must be done with b or c to achieve a less oscillatory response?
     
  15. 15.
    Consider the solution y(t) of an arbitrary nth-order linear differential equation with constant coefficients.
    • If the input is bounded, what are the conditions to ensure that y(t) is also bounded for all time, i.e., to ensure that y(t) does not become infinite?

    • What are the conditions for the forced response to be the only component of y(t) that remains as time increases?

    • Assume that the input is zero. What are the different behaviors for y(t) as time increases and under what conditions do they appear?

    • Assume that the input is a polynomial of time. Show that the solution is not a polynomial with a larger degree than the polynomial at the input if all the roots of the characteristic polynomial have a negative real part.

    • Assume that the input is given as the addition of several sine and cosine functions of time with different frequencies. Using the results in Sect. 3.6 and the superposition principle, can you explain the following fundamental result for linear differential equations [6, pp. 389]? “If the input is any periodic function with period T and all the roots of the characteristic polynomial have negative real parts, then as t →∞ the solution y(t) is also a periodic function with period T, although with a waveform that may be different from that of the input.” Recall the concept of the Fourier series.

     
  16. 16.
    Verify the following equations:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} i)&amp;\displaystyle &amp;\displaystyle \frac{5}{s^2+s-2}=\frac{-5/3}{s+2}+\frac{5/3}{s-1},\\ ii)&amp;\displaystyle &amp;\displaystyle \frac{2}{s(s+2)(s-1)}=\frac{1/3}{s+2}+\frac{2/3}{s-1}+\frac{-1}{s},\\ iii)&amp;\displaystyle &amp;\displaystyle \frac{1}{s^3-s^2-s+1}=\frac{1/2}{(s-1)^2}+\frac{-1/4}{s-1}+\frac{1/4}{s+1},\\ iv)&amp;\displaystyle &amp;\displaystyle \frac{1}{s^4-s^3-s^2+s}=\frac{1}{s}+\frac{1/2}{(s-1)^2}+\frac{-3/4}{s-1}+\frac{-1/4}{s+1},\\ v)&amp;\displaystyle &amp;\displaystyle \frac{2}{s^3+2s}=\frac{1}{s}+\frac{-s}{s^2+2},\\ vi)&amp;\displaystyle &amp;\displaystyle \frac{3}{s^4-3s^3}=\frac{-1}{s^3}+\frac{-1/3}{s^2}+\frac{-1/9}{s}+\frac{1/9}{s-3},\\ vii)&amp;\displaystyle &amp;\displaystyle \frac{8}{s(s+4)(s-4)}=\frac{-1/2}{s}+\frac{1/4}{s+4}+\frac{1/4}{s-4},\\ viii)&amp;\displaystyle &amp;\displaystyle \frac{8}{s^2-16}=\frac{1}{s-4}+\frac{-1}{s+4},\\ ix)&amp;\displaystyle &amp;\displaystyle \frac{4}{s^4+s^3+2s^2}=\frac{2}{s^2}+\frac{-1}{s}+\frac{s-1}{s^2+s+2},\\ x)&amp;\displaystyle &amp;\displaystyle \frac{4}{s^5+3s^4+4s^3+4s^2+3s}=\frac{2}{(s+1)^3}+\frac{2}{(s+1)^2}+\frac{1}{s+1}+\frac{-s-1}{s^2+1},\\ xi)&amp;\displaystyle &amp;\displaystyle \frac{3s^2+3s-2}{s^3+2s^2}=\frac{-1}{s^2}+\frac{2}{s}+\frac{1}{s+2},\\ xii)&amp;\displaystyle &amp;\displaystyle \frac{s+2}{s^2(s+1)^2}=\frac{2}{s^2}+\frac{-3}{s}+\frac{1}{(s+1)^2}+\frac{3}{s+1}. \end{array} \end{aligned} $$
     
  17. 17.

    An induction oven is basically an inductor L with a heat-resistant material inside containing some metal to melt down. A capacitor C is series-connected to the inductor to improve the power factor. A high-frequency alternating voltage is applied at the circuit terminals that induces eddy currents in metal, producing heat and, hence, melting it down.

    This device constitutes a series RLC circuit where the resistance R is represented by the equivalent electrical resistance of metal to melt down. The high-frequency alternating voltage applied to the circuit is given as a square wave described as u = E sign(i), where i is the electric current through the circuit, E is a positive constant and sign(i) = +1 if i ≥ 0 or sign(i) = −1 if i < 0.
    • Assume that the initial electric current is zero. Show that, provided that the damping coefficient is less than unity, i.e., 0 < ζ < 1, the electric current through the inductor is zero periodically with period π∕ω d, where  $$\omega _d=\omega _n\sqrt {1-\zeta ^2}$$ ,  $$\omega _n=\frac {1}{\sqrt {LC}}$$ , and  $$\zeta =\frac {R}{2L\omega _n}$$ .

    • According to u = E sign(i), the applied voltage, u, changes value when i = 0. Using this and v c(0) as the initial voltage at the capacitor, find an expression for the voltage at the capacitor that will be useful the next time that i = 0.

    • Propose any values for R, L, and C such that 0 < ζ < 1. Assuming that the voltage at the capacitor and the current through the inductor are not discontinuous when the value of u changes (why?), write a computer program to use iteratively the formula in the previous item to compute voltage at the capacitor after u has changed several times. Verify that the voltage at the capacitor, when i = 0, converges to a constant.

    • Perform a simulation of the series RLC circuit when u = E sign(i), to verify the result in the previous item.

     
  18. 18.
    According to Example 3.19, when the armature inductance is negligible, the mathematical model of a DC motor is given as:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(s)&amp;\displaystyle =&amp;\displaystyle \frac{\frac{k_m}{JR}}{s+\left(\frac{B}{J}+\frac{k_mk_e}{JR}\right)}V(s), \end{array} \end{aligned} $$
    where ω(s) and V (s) are the Laplace transforms of the motor velocity and voltage applied at the motor terminals respectively.
    • Assume that the applied voltage is zero and that the initial velocity is not zero. Notice that applying a zero voltage at motor terminals is equivalent to putting a short circuit at the armature motor terminals and that a non-zero-induced voltage is present if the velocity is not zero. Depict the natural response under these conditions. What can be done to force the natural response to vanish faster? What happens with the electric current? Can you explain why the natural response vanishes faster if the armature resistance is closer to zero? Do you know the meaning of the term “dynamic braking”? Why does the time constant depend on motor inertia? What does this mean from the point of view of Newton’s Laws of Mechanics?

    • Assume that a voltage different from zero is applied when the initial velocity is zero. Depict the system response. Why does the final velocity not depend on the motor inertia J? Give an interpretation from the point of view of physics.

     
  19. 19.
    Consider the electric circuit in Fig. 3.45. This circuit is called a phase-lag network. The corresponding mathematical model is given in (2.​146) and is rewritten here for ease of reference:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} V_0(s)=\frac{R_2}{R_1+R_2}\frac{s+\frac{1}{R_2C}}{s+\frac{1}{(R_1+R_2)C}}V_i(s)+\frac{R_1}{R_1+R_2}\frac{1}{s+\frac{1}{(R_1+R_2)C}}v_c(0), \end{array} \end{aligned} $$
    where v c(0) is the initial voltage at the capacitor. Show that the solution is given as:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} v_0(t)=A+\frac{R_1}{R_1+R_2}(v_c(0)-A)\mathrm{e}^{-at}, a=\frac{1}{(R_1+R_2)C}, \end{array} \end{aligned} $$
    and use this result to explain the response in Fig. 3.46 when v i(t) is the square wave represented by the dash-dot line, v c(0) = 0, R 1 = 2[Ω], R 2 = 1[Ω] and C = 1∕33[F].
    Fig. 3.45

    Phase-lag network

    Fig. 3.46

    Time response of the circuit in Fig. 3.45

     
  20. 20.

    Repeat the previous exercise for all circuits in Fig. 2.​39. Use MATLAB/Simulink to draw the time response and verify your results.

     
  21. 21.
    Consider the simple pendulum in Fig. 3.47 when the external torque T(t) = 0 and friction is negligible. The mathematical model is given as:
     $$\displaystyle \begin{aligned} \begin{array}{rcl} ml^2\ddot \theta+mgl\theta=0, \end{array} \end{aligned} $$
    which is valid only when the pendulum remains close to the downward vertical configuration, i.e., θ is “small,” θ ≈ 0 (see Example 7.​5, Chap. 7). Find the required pendulum length l to produce oscillations with a period of 1 s.
    Fig. 3.47

    Simple pendulum