© Springer International Publishing AG, part of Springer Nature 2019
Victor Manuel Hernández-Guzmán and Ramón Silva-Ortigoza, Automatic Control with Experiments, Advanced Textbooks in Control and Signal Processing, https://doi.org/10.1007/978-3-319-75804-6_1

1. Introduction

Victor Manuel Hernández-Guzmán1  and Ramón Silva-Ortigoza2
(1)
Universidad Autonoma de Queretaro, Facultad de Ingenieria, Querétaro, Querétaro, Mexico
(2)
Instituto Politécnico Nacional, CIDETEC, Mexico City, Mexico
 

1.1 The Human Being as a Controller

Everybody has been a part of a control system at some time. Some examples of this are driving a car, balancing a broomstick on a hand, walking or standing up without falling, taking a glass to drink water, and so on. These control systems, however, are not automatic control systems, as a person is required to perform a role in them. To explain this idea, in this section some more technical examples of control systems are described in which a person performs a role.

1.1.1 Steering a Boat

A sailing boat is depicted in Fig. 1.1. There, the boat is heading in a direction that is different from the desired course indicated by a compass. A sailor, or human pilot, compares these directions to obtain a deviation. Based on this information, she/he decides what order must be sent to her/his arms to apply a suitable torque to the rudder wheel. Then, through a mechanical transmission, the rudder angle is modified, rendering it possible, by action of the water flow hitting the rudder, to apply a torque to the boat. Finally, by the action of this torque, the boat rotates toward the desired course. This succession of actions is continuously repeated until the boat is heading on the desired course. A block diagram of this process is shown in Fig. 1.2.
Fig. 1.1

Steering a boat

Fig. 1.2

Actions performed when steering a ship

In this block diagram, the fundamental concept of feedback in control systems is observed. Feedback means to feed again and refers to the fact that the resulting action of the control system, i.e., the boat's actual course, is measured to be compared with the desired course and, on the basis of such a comparison, a corrective action (torque on the rudder wheel) is commanded again, i.e., fed again, trying to render the deviation between the desired and actual courses zero. It is said that the human being performs as a controller: she/he evaluates the actual deviation and, based on this information, commands a corrective order until the actual course reaches the desired course. The arms act as actuators; notice that the rudder wheel and the mechanical transmission suitably amplify the torque generated by the arms to actuate the rudder.

This control system is not an automatic control system because a human being is required to perform the task. Suppose that a large ship is engaged in a long trip, i.e., traveling between two harbors on two different continents. In such a case, it is preferable to replace the human brain by a computer. Moreover, as the ship is very heavy, a powerful motor must be used to actuate the rudder. Thus, a machine (the computer) must be used to control the ship by controlling another machine (the rudder motor). In such a case, this control system becomes an automatic control system.

1.1.2 Video Recording While Running

In some instances, video recording must be performed while a video camera is moving, for example, when it is placed on a car driving on an irregular terrain, on a boat on the sea surface, or attached to a cameraman who is running to record a scene that is moving. The latter situation is depicted in Fig. 1.3. The cameraman must direct the video camera toward the scene. However, because of the irregular terrain and the natural movement of the cameraman who is running, the arms transmit an undesired vibration to the video camera. A consequence of this vibration is deterioration of the image recording. Although the cameraman may try to minimize this effect by applying a corrective torque to the video camera, as shown in Fig. 1.4, the natural limitations of the human being render it difficult to record well-defined images.
Fig. 1.3

Video recording while running

Fig. 1.4

Actions performed when video recording while running

Thus, it is necessary to replace the human being in this control system with more precise mechanisms, i.e., the design of an automatic control system is required. This requires the use of a computer to perform the comparison and decision tasks, and the combined use of tracks and a motor to actuate the camera. Because of the high capability of these machines to perform fast and precise actions, high-quality images can be recorded, as the vibration induced on the video camera can be suitably compensated for [1, 2].

1.2 Feedback Is Omnipresent

The main feature of the control systems introduced in the previous section, and of the automatic control systems with which this book is concerned, is feedback. This is the capability of a control system to command corrective actions until the desired response is accomplished. On the other hand, feedback is not an invention of human beings: it is rather a concept that human beings have learned from nature, where feedback is omnipresent. The examples in this section are intended to explain this.

1.2.1 A Predator–Prey System

This is a fundamental feedback system in all ecosystems. Predators need to eat prey to survive. Thus, if there are many predators, the number of prey diminishes, because many predators require lots of food. A reduced number of prey then causes the number of predators to diminish as well, because of the lack of food. As the number of predators diminishes, the number of prey increases, because there are fewer predators to eat prey. Hence, there will be a point in time where the number of prey is so large and the number of predators is so small that the number of predators begins to increase, because of the abundance of food. Thus, at some point in the future, the number of predators will be large and the number of prey will be small, and the process repeats over and over again.

Feedback exists in this process because the number of predators depends on the number of prey and vice versa. Notice that, because of this process, the number of predators and prey are kept within a range that renders possible sustainability of the ecosystem. Too many predators may result in prey extinction which, eventually, will also result in predator extinction. On the other hand, too many prey may result in extinction of other species the prey eat and, hence, prey and predator extinction results again.

The reader may wonder whether it is possible for the numbers of predators and prey to reach constant values instead of oscillating. Although this is not common in nature, the question is: why? This class of question can be answered using control theory, i.e., the study of (feedback) automatic control systems.
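The oscillation described above can be made concrete with the classical Lotka–Volterra predator–prey model. This model and all parameter values below are illustrative assumptions, not taken from this chapter; the sketch integrates the equations with a simple forward Euler scheme:

```python
# Lotka-Volterra predator-prey model, integrated with forward Euler.
# All parameter values are illustrative, not taken from the text.

def simulate(prey0=40.0, pred0=9.0, a=0.1, b=0.02, c=0.3, d=0.01,
             dt=0.002, steps=50_000):
    # a: prey birth rate, b: predation rate,
    # c: predator death rate, d: predator births per prey eaten
    x, y = prey0, pred0
    history = []
    for _ in range(steps):
        dx = (a - b * y) * x      # prey reproduce but are eaten by predators
        dy = (d * x - c) * y      # predators starve unless prey abound
        x += dt * dx
        y += dt * dy
        history.append((x, y))
    return history

hist = simulate()
prey = [p for p, _ in hist]
print(f"prey count oscillates between {min(prey):.1f} and {max(prey):.1f}")
```

The equilibrium (c/d, a/b) = (30, 5) makes both derivatives zero, but trajectories starting elsewhere orbit around it instead of settling there, which mirrors the sustained oscillation described in the text.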

1.2.2 Homeostasis

Homeostasis is the ability of an organism to maintain its internal equilibrium. This means that variables such as arterial blood pressure, oxygen, CO2 and glucose concentration in blood, in addition to the relations among carbohydrates, proteins, and fats, for instance, are kept constant at levels that are good for health.

1.2.2.1 Glucose Homeostasis [3, 4]

Glucose concentration in the blood must be constrained to a narrow band of values. Glucose in the blood is controlled by the pancreas by modifying the concentrations of insulin and glucagon (see Fig. 1.5). When the glucose concentration increases, the pancreas delivers more insulin and less glucagon, which has the following effects: (i) it favors transportation of glucose from blood to cells, (ii) it increases demand for glucose in cells, (iii) it stimulates the liver for glucose consumption to produce glycogen, fats, and proteins. The effect of this set of actions is a reduction of glucose concentration in the blood to safe, healthy levels.
Fig. 1.5

Glucose homeostasis: regulation of glucose in the blood

On the other hand, if the glucose concentration in the blood diminishes, the pancreas delivers more glucagon and less insulin, which has the following effects: (i) it stimulates the liver cells to produce glucose, which is delivered into the blood, (ii) it stimulates the degradation of fats into fatty acids and glycerol, which are delivered into the blood, (iii) it stimulates the liver to produce glucose from glycogen, which is delivered into the blood. The effect of this set of actions is an increase in the glucose concentration in the blood to safe, healthy levels.

The glucose regulation mechanisms described above are important because the blood glucose level changes several times within a day: it increases after meals and it decreases between meals because cells use or store glucose during these periods of time. Thus, it is not difficult to imagine that glucose homeostasis performs as a perturbed control system equipped with an efficient regulator: the pancreas.

1.2.2.2 Psychological Homeostasis [3]

According to this concept, homeostasis regulates internal changes for both physiological and psychological reasons, which are called necessities. Thus, the life of an organism can be defined as the constant search for and equilibrium between necessities and their satisfaction. Every action searching for such an equilibrium is a behavior.

1.2.2.3 Body Temperature [3, 5]

Human beings measure their body temperatures using temperature sensors in their brains and bodies.

A body temperature decrement causes a reduction of blood supply to the skin to avoid heat radiation from the body to the environment, and the metabolic rate is increased by the body shivering to avoid hypothermia.

When the body temperature increases, the sweat glands in the skin are stimulated to secrete sweat onto the skin which, when it evaporates, cools the skin and blood (Fig. 1.6).
Fig. 1.6

Regulation of body temperature

On the other hand, a human being is considered to have a fever when the body temperature is above 38 °C. Fever, however, is a body's natural defense mechanism against infectious diseases, as high temperatures help the human body to overcome the microorganisms that produce diseases. This, however, results in body weakness, because of the energy employed in this process. Thus, when this natural defense is not enough, medical assistance is required.

1.3 Real-Life Applications of Automatic Control

In this section, some examples of real-life applications are presented to intuitively understand the class of technological problems with which automatic control is concerned, how they are approached, and to stress the need for automatic control.

1.3.1 A Position Control System

The task to be performed by a position control system is to force the position or orientation of a body, known as the load, to track a desired position or orientation. This is the case, for instance, of large parabolic antennae, which have to track satellites while communicating with them. A simplified description of this problem is presented in the following, where the effect of gravity is neglected because movement is performed in a horizontal plane (Fig. 1.7).
Fig. 1.7

A position control system

A permanent magnet brushed DC motor is used as the actuator. The motor shaft is coupled to the load shaft by means of a gear box. The assembly works as follows. If a positive voltage is applied at the motor terminals, then a counter-clockwise torque is applied to the load. Hence, the load starts moving counter-clockwise. If a negative voltage is applied at the motor terminals, then a clockwise torque is applied to the load and the load starts moving clockwise. If a zero voltage is applied at the motor terminals, then a zero torque is applied to the load and the load has a tendency to stop.

Let θ represent the actual angular position of the load. The desired load position is assumed to be known and is designated θ_d. The control system objective is that θ approaches θ_d as fast as possible. According to the working principle described above, one way to accomplish this is by applying at the motor terminals an electric voltage v that is computed according to the following law:
$$ v = k_p e, $$
(1.1)
where e = θ_d − θ is the system error and k_p is some positive constant. The mathematical expression in (1.1) is to be computed using some low-power electronic equipment (a computer or a microcontroller, for instance), and a power amplifier must also be included to satisfy the power requirements of the electric motor. It is assumed in this case that the power amplifier has a unit voltage gain, but that the electric current gain is much larger (see Chap. 10, Sect. 10.2). According to Fig. 1.8, one of the following situations may appear:
  • If θ < θ_d, then v > 0 and the load moves counter-clockwise such that θ approaches θ_d.

  • If θ > θ_d, then v < 0 and the load moves clockwise such that θ approaches θ_d again.

  • If θ = θ_d, then v = 0, the load does not move, and θ = θ_d stands forever.

Fig. 1.8

The load must always move such that θ → θ_d. (a) θ_d > θ, v > 0, load moves counter-clockwise. (b) θ_d < θ, v < 0, load moves clockwise

According to this reasoning, it is concluded that the law presented in (1.1) for computing the voltage to be applied at the motor terminals has the potential to work well in practice.
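The three situations listed above can be checked directly from (1.1). The numbers below are arbitrary illustrative values, not taken from the text:

```python
# Sign behavior of the proportional control law v = kp * e of Eq. (1.1).
# Any positive kp reproduces the three cases; these values are illustrative.

kp = 2.0
theta_d = 1.0   # desired load angle [rad]

for theta in (0.5, 1.5, 1.0):          # below, above, and at the reference
    e = theta_d - theta
    v = kp * e
    if v > 0:
        action = "counter-clockwise torque: theta increases toward theta_d"
    elif v < 0:
        action = "clockwise torque: theta decreases toward theta_d"
    else:
        action = "zero torque: theta stays at theta_d"
    print(f"theta={theta:.1f}: v={v:+.1f} -> {action}")
```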

The expression in (1.1) is known as a control law or, simply, as a controller. A block diagram showing the component interconnections in the position control system is presented in Fig. 1.9. Notice that the construction of the controller in (1.1) requires knowledge of the actual load position θ (also known as the system output or plant output), which has to be used to compute the voltage v to be applied at the motor terminals (also known as the plant input). This fact defines the basic control concepts of feedback and a closed-loop system. This means that the control system compares the actual load position (θ, the system output) with the desired load position (θ_d, the system desired output or system reference) and applies, at the motor terminals, a voltage v that depends on the difference between these variables (see (1.1)).
Fig. 1.9

Block diagram of a position control system

The system error is defined as e = θ_d − θ. Hence, the steady-state error is zero because, as explained above, θ_d − θ = 0 can stand forever. However, the term steady state means that this is achieved once enough time has elapsed for the load to stop moving (i.e., when the system steady-state response is reached). Hence, a zero steady-state error does not describe how the load position θ evolves as it approaches θ_d. This evolution is known as the system transient response. Some examples of the possible shapes of the transient response are shown in Fig. 1.10. If k_p in (1.1) is larger, then the voltage v applied at the motor terminals is larger and, thus, the torque applied to the load is also larger, forcing the load to move faster. This means that less time is required for θ to reach θ_d. However, because of a faster load movement combined with the load inertia, θ may reach θ_d with a nonzero load velocity $\dot\theta \neq 0$. As a consequence, the load position continues growing and the sign of θ_d − θ changes. Thus, the load position θ may perform several oscillations around θ_d before the load stops moving. It is concluded that k_p has an important effect on the transient response and it must be computed such that the transient response behaves as desired (a fast response without oscillations). Moreover, sometimes this requirement cannot be satisfied just by adjusting k_p, and the control law in (1.1) must be modified, i.e., another controller must be used (see Chaps. 5, 6, 7 and 11). Furthermore, the steady-state error may be different from zero (θ ≠ θ_d when the motor stops) because of torque disturbances (static friction at either the motor or the load shaft may make the load deviate from its desired position). This means that even the search for a zero or, at least, a small enough steady-state error may be a reason to select a new controller.
Fig. 1.10

Three possible transient responses in a one-degree-of-freedom position control system
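The effect of k_p on the transient response can be reproduced with a short simulation. The load model below (an inertia with viscous friction, acc = (k_p(θ_d − θ) − b·vel)/J) and all numeric values are illustrative assumptions, not taken from the text:

```python
# Effect of kp on the transient response of a position control loop,
# using a hypothetical inertia-plus-friction load and forward Euler.
# J, b, and the kp values are illustrative assumptions.

def step_response(kp, J=1.0, b=1.0, theta_d=1.0, dt=0.001, t_end=20.0):
    theta, vel = 0.0, 0.0
    overshoot = 0.0
    for _ in range(int(t_end / dt)):
        acc = (kp * (theta_d - theta) - b * vel) / J   # Eq. (1.1) drives the load
        vel += dt * acc
        theta += dt * vel
        overshoot = max(overshoot, theta - theta_d)    # how far theta passes theta_d
    return theta, overshoot

for kp in (0.2, 5.0):
    final, ov = step_response(kp)
    print(f"kp={kp}: final theta={final:.3f}, overshoot={ov:.3f}")
```

A small k_p gives a slow response with no overshoot, while a large k_p reaches θ_d faster but overshoots and oscillates, as in Fig. 1.10.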

Stability is a very important property that all control systems must possess. Consider the case of a simple pendulum (see Fig. 1.11). If the pendulum desired position is θ_d = 0, it suffices to let the pendulum move (under a zero external torque, T(t) = 0) to find that the pendulum oscillates until the friction effect forces it to stop at θ = 0. It is said that the pendulum is stable at θ_d = 0 because it reaches this desired position in a steady state when starting from any initial position that is close enough. On the other hand, if the desired position is θ_d = π, it is clear that, because of the effect of gravity, the pendulum always moves far away from that position, no matter how close to θ_d = π the initial position is selected. It is said that the pendulum is unstable at θ_d = π. Notice that, according to this intuitive description, the position control system described above is unstable if the control law in (1.1) employs a constant negative value for k_p: in such a case, the load position θ would move far away from θ_d. Thus, k_p also determines the stability of the closed-loop control system and it must be selected such that closed-loop stability is ensured. It must be remarked that in the case of an unstable closed-loop system (when using a negative k_p), a zero steady-state error will never be accomplished, even though the control law in (1.1) indicates that the motor stops when θ = θ_d. The reason is that, even if θ = θ_d from the beginning, the position measurements θ always have a significant noise content, which would render θ ≠ θ_d in (1.1), and this would be enough to move θ far away from θ_d.
Fig. 1.11

Simple pendulum
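The instability caused by a negative k_p can also be seen numerically on a hypothetical inertia-plus-friction load (acc = (k_p(θ_d − θ) − b·vel)/J, all values illustrative): starting almost exactly at the reference, the position diverges instead of staying there.

```python
# With a negative kp the closed loop is unstable: a tiny initial
# deviation (e.g., measurement noise) grows without bound.
# The load model and all numbers are illustrative assumptions.

kp, J, b, theta_d = -1.0, 1.0, 1.0, 0.0
theta, vel = 1e-6, 0.0      # start almost exactly at the reference
dt = 0.001
for _ in range(30_000):     # simulate 30 seconds
    acc = (kp * (theta_d - theta) - b * vel) / J
    vel += dt * acc
    theta += dt * vel

print(f"theta after 30 s: {theta:.3e}")   # far from theta_d = 0
```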

Another important factor to be taken into account in a position control system is the desired load trajectory. It is clear that tracking a constant value of θ_d is easier than the case where θ_d changes very fast, i.e., when the first or the second time derivative of θ_d is large. Hence, the control law in (1.1) must be designed such that the closed-loop control system behaves satisfactorily under any of the situations described above. When θ_d corresponds to none of these situations, it is assumed that the control system will behave correctly if it behaves well in the situations described above. This is the main idea behind the design of the system steady-state error studied in Chap. 4.

Thus, the three basic specifications for a closed-loop control system are the transient response, the steady-state response (or steady-state error), and stability. A controller must be designed such that, by satisfying these three fundamental specifications, a fast and well-damped system response is obtained, the load position reaches the desired position in the steady state, and the closed-loop system is stable. To achieve these goals, the automatic control techniques studied in this book require knowledge and study of the mathematical model of the whole closed-loop control system. According to Chap. 2, this mathematical model is given as ordinary differential equations, which are assumed to be linear with constant coefficients. This is the reason why Chap. 3 is concerned with the study of this class of differential equations. The main idea is to identify the properties of differential equations that determine their stability, in addition to their transient and steady-state responses. This allows the design of a controller as a component that suitably modifies the properties of a differential equation such that the closed-loop differential equation behaves as desired. This is the rationale behind the automatic control system design tools presented in this book.

The control techniques studied in this book can be grouped as classical or modern. Classical control techniques are presented in Chaps. 3, 4, 5, and 6, and there are two different approaches: time response techniques (Chap. 5) and frequency response techniques (Chap. 6). Classical control techniques rely on the use of the Laplace transform to solve and analyze ordinary linear differential equations. Classical time response techniques study the solution of differential equations on the basis of transfer function pole and zero locations (Chap. 3), and the main control design tool is the Root Locus (Chap. 5). Classical frequency response techniques exploit the fundamental idea behind the Fourier transform: (linear) control systems behave as filters; a system response is basically obtained by filtering the command signal, which is applied at the control system input. This is why the fundamental analysis and design tools in this approach are Bode and polar plots (Chap. 6), which are widely employed to analyze and design linear filters (low-pass, high-pass, band-pass, etc.). Some experimental applications of the classical control techniques are presented in Chaps. 9, 10, 11, 12, 13, and 14.

On the other hand, the modern control techniques studied in this book are known as the state variables approach (Chap. 7) which, contrary to classical control tools, allow the study of the internal behavior of a control system. This means that the state variables approach provides more information about the system to be controlled, which can be exploited to improve performance. Some examples of the experimental application of this approach are presented in Chaps. 15 and 16.

1.3.2 Robotic Arm

A robotic arm is shown in Fig. 1.12. A robotic arm can perform several tasks, for instance:
  • Take a piece of some material from one place to another to assemble it together with other components to complete complex devices such as car components.

  • Track a pre-established trajectory in space to solder two pieces of metal or to paint surfaces. Assume that the pre-established trajectory is given as six coordinates parameterized by time, i.e., three for the robot tip position [x_d(t), y_d(t), z_d(t)] and three for the robot tip orientation [α_1d(t), α_2d(t), α_3d(t)]. Thus, the control objective is that the actual robot tip position [x(t), y(t), z(t)] and orientation [α_1(t), α_2(t), α_3(t)] reach their desired values as time grows, i.e., that:
$$ \begin{aligned} \lim_{t\to\infty}[x(t),y(t),z(t)] &= [x_d(t),y_d(t),z_d(t)],\\ \lim_{t\to\infty}[\alpha_{1}(t),\alpha_{2}(t),\alpha_{3}(t)] &= [\alpha_{1d}(t),\alpha_{2d}(t),\alpha_{3d}(t)]. \end{aligned} $$
Fig. 1.12

A commercial robotic arm (with permission of Crustcrawler Robotics)

To be capable of efficiently accomplishing these tasks in three-dimensional space, a robotic arm has to be composed of at least seven bodies joined by at least six joints, three for position and three for orientation. Then, it is said that the robotic arm has at least six degrees of freedom. However, to simplify the exposition of ideas, the two-degrees-of-freedom robotic arm depicted in Fig. 1.13 is considered in the following.
Fig. 1.13

A two-degrees-of-freedom robotic arm

Two bodies, called the arm and the forearm, move in a coordinated fashion to force the robotic arm tip to track a desired trajectory in space to perform the tasks described above. To achieve this goal, two permanent magnet brushed DC motors are employed. The first motor is placed at the shoulder, i.e., at the point in Fig. 1.13 where the x and y axes intersect. The stator of this motor is fixed at some point that never moves (the robot base), whereas the motor shaft is fixed to the arm. The second motor is placed at the elbow, i.e., at the point joining the arm and the forearm. The stator of this motor is fixed to the arm, whereas the shaft is fixed to the forearm. This allows the arm to move freely with respect to the robot base and the forearm to move freely with respect to the arm. Hence, any point can be reached by the robot tip, as long as it belongs to the plane where the robot moves and is placed within the robot's reach.

One way to define the trajectory to be tracked by the robot tip is as follows. The robot tip is manually taken through all the points defining the desired trajectory. While this is performed, the corresponding angular positions at the shoulder and elbow joints are measured and recorded. These data are used as the desired positions for the motors at each joint, i.e., θ_1d and θ_2d. Then, the robot is made to track the desired trajectory by using a control scheme similar to that described in Sect. 1.3.1 for each motor. Two main differences exist between the control law in (1.1) and the control law used for the motor at the shoulder, with angular position θ_1:
$$ v_1 = k_{p1}(\theta_{1d}-\theta_1) - k_{d1}\dot\theta_1 + k_{i1}\int_0^t(\theta_{1d}(r)-\theta_1(r))\,dr, $$
(1.2)
and for the motor at the elbow, with angular position θ_2:
$$ v_2 = k_{p2}(\theta_{2d}-\theta_2) - k_{d2}\dot\theta_2 + k_{i2}\int_0^t(\theta_{2d}(r)-\theta_2(r))\,dr. $$
(1.3)
These differences are the velocity feedback term, with the general form $-k_d\dot\theta$, and the integral term, with the general form $k_i\int_0^t(\theta_d(r)-\theta(r))\,dr$. Notice that the term $-k_d\dot\theta$ has the same form as viscous friction, i.e., $-b\dot\theta$ with b the viscous friction coefficient. Hence, adding velocity feedback to the control laws in (1.2) and (1.3) is useful to increase the system damping, i.e., to reduce the oscillations pointed out in Sect. 1.3.1, allowing fast robot movements without oscillations. On the other hand, the integral term ensures that the position θ reaches the desired position θ_d, despite the presence of gravity effects. This can be seen as follows. Suppose that:
$$ v_1 = k_{p1}(\theta_{1d}-\theta_1) - k_{d1}\dot\theta_1, $$
(1.4)
$$ v_2 = k_{p2}(\theta_{2d}-\theta_2) - k_{d2}\dot\theta_2, $$
(1.5)
are used instead of (1.2) and (1.3). If θ_1d = θ_1 and $\dot\theta_1 = 0$, then v_1 = 0. Hence, if gravity exerts a torque at the shoulder joint, this motor cannot compensate for such a torque (because v_1 = 0) and, thus, θ_1d ≠ θ_1 results. On the other hand, if θ_1d ≠ θ_1, the term $k_{i1}\int_0^t(\theta_{1d}(r)-\theta_1(r))\,dr$ adjusts the voltage v_1 until θ_1d = θ_1 again, and this can remain forever because the integral is not zero despite its integrand being zero. A similar analysis yields the same results for the joint at the elbow, i.e., the use of (1.2) and (1.3) is well justified.

The main problem in the use of (1.2) and (1.3) is the selection of the controller gains k_p1, k_p2, k_d1, k_d2, k_i1, and k_i2, and automatic control theory has been developed to solve this kind of problem. Several ways of selecting these controller gains are presented in this book and are known as proportional-integral-derivative (PID) control tuning methods.
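The role of the integral term argued above can be illustrated numerically. The single-joint model below (an inertia driven by the motor voltage minus a constant gravity torque, with unit gains) and all gain values are illustrative assumptions, not taken from the text; the sketch compares the final position with and without the integral term:

```python
# Integral action vs. a constant gravity torque, as argued for Eqs. (1.2)-(1.5).
# Hypothetical single-joint model: J*acc = v - tau_g. All values illustrative.

def settle(kp, kd, ki, tau_g=0.5, J=1.0, theta_d=1.0, dt=0.001, t_end=40.0):
    theta, vel, integral = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = theta_d - theta
        integral += dt * e
        v = kp * e - kd * vel + ki * integral   # ki = 0 gives the PD law of Eq. (1.4)
        vel += dt * (v - tau_g) / J             # gravity torque opposes the motor
        theta += dt * vel
    return theta

pd_final  = settle(kp=4.0, kd=3.0, ki=0.0)   # PD only, as in Eq. (1.4)
pid_final = settle(kp=4.0, kd=3.0, ki=2.0)   # PID, as in Eq. (1.2)
print(f"PD  final position: {pd_final:.3f} (error {1 - pd_final:.3f})")
print(f"PID final position: {pid_final:.3f} (error {1 - pid_final:.3f})")
```

The PD law settles where kp·e balances the gravity torque, leaving a constant error e = tau_g/kp, while the integral term keeps adjusting v until the error is zero.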

1.3.3 Automatic Steering of a Ship

Recall Sect. 1.1.1, where the course of a boat was controlled by a human being. This situation was depicted in Figs. 1.1 and 1.2. Consider now the automatic control problem formulated at the end of Sect. 1.1.1, where the course of a large ship is to be controlled using a computer and a permanent magnet brushed DC motor to actuate the rudder. The block diagram of this control system is depicted in Fig. 1.14.
Fig. 1.14

Block diagram for the automatic steering of a ship

Mimicking Sect. 1.3.1, controller 1 is designed to perform the following computation:
$$ \theta_d = k_{p1}(\text{desired course} - \text{actual ship course}). $$
(1.6)
The desired course angle shown in Fig. 1.1 is defined as positive. Thus, if k_p1 is positive, θ_d is also positive. Suppose that the rudder angle θ, defined as positive when described as in Fig. 1.1, reaches θ_d > 0. Then, water hitting the rudder produces a torque T_2 on the ship, which is defined as positive when applied as in Fig. 1.1. This torque is given as a function δ(θ, s) depending on the rudder angle θ and the water speed s. The torque T_2 produces a ship rotation such that the actual course approaches the desired course.

The reader can verify, following the above sequence of ideas, that in the case where the desired course is negative, it is again reached by the actual course if k_p1 > 0. This means that a positive k_p1 is required to ensure that the control system is stable. Moreover, for reasons similar to those in the position control system, as k_p1 > 0 becomes larger, the ship rotates faster and several oscillations may appear before settling at the desired course. On the contrary, as k_p1 > 0 becomes smaller, the ship rotates more slowly and the desired course is reached after a longer period of time. Thus, the transient behavior of the actual course is similar to that shown in Fig. 1.10 as k_p1 > 0 is changed. Finally, it is not difficult to verify that the control system is unstable if k_p1 is negative.

In this system controller 2 is designed to perform the following computation:
$$ u = k_{p2}(\theta_d - \theta), \quad k_{p2} > 0, $$
(1.7)
whereas the power amplifier just amplifies u by a positive factor to obtain the voltage v to be applied to the motor actuating the rudder. Following the ideas in Sect. 1.3.1, it is not difficult to realize that θ reaches θ_d as time grows because k_p2 > 0, whereas instability is produced if k_p2 < 0. As explained before, the ship may oscillate if k_p1 > 0 is large. Moreover, for similar reasons, the rudder may also oscillate several times before θ reaches θ_d if k_p2 > 0 is large. A combination of these oscillatory behaviors may result in closed-loop system instability despite k_p1 > 0 and k_p2 > 0. Thus, additional terms must be included in the expressions (1.6) and (1.7) for controller 1 and controller 2. The question is, what terms? Finding an answer to this class of questions is the reason why control theory has been developed and why this book has been written. See Chap. 14 for a practical control problem that is analogous to the control problem in this section.
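The nested structure of (1.6) and (1.7) can be sketched numerically. The ship and rudder dynamics below are invented toy models (a rudder motor integrating u, and a hull inertia with drag driven by the rudder angle), so only the qualitative behavior matters, not the numbers:

```python
# Two nested proportional loops, Eqs. (1.6) and (1.7), on a toy ship model.
# All dynamics and coefficients are illustrative assumptions.

def sail(kp1, kp2, course_d=1.0, dt=0.01, t_end=200.0):
    course, rate, theta = 0.0, 0.0, 0.0   # ship heading, turn rate, rudder angle
    for _ in range(int(t_end / dt)):
        theta_d = kp1 * (course_d - course)      # controller 1: Eq. (1.6)
        u = kp2 * (theta_d - theta)              # controller 2: Eq. (1.7)
        theta += dt * u                          # motor drives the rudder angle
        rate += dt * (0.2 * theta - 0.5 * rate)  # water torque on the hull + drag
        course += dt * rate
    return course

print(f"final course: {sail(kp1=1.0, kp2=1.0):.3f}")   # approaches course_d = 1
```

With both gains positive and moderate, the inner loop steers the rudder toward θ_d while the outer loop steers the ship toward the desired course; making either gain large or negative in this sketch reproduces the oscillatory or unstable behavior discussed above.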

1.3.4 A Gyro-Stabilized Video Camera

Recall the problem described at the end of Sect. 1.1.2, i.e., using a computer to perform the comparison and decision tasks, and a combination of some tracks and a motor to actuate on the camera, to compensate for video camera vibration when recording a scene that is moving away. To fully solve this problem, the rotational motion of the camera must be controlled about its three main orientation axes. However, since the control of each axis is similar to that of the others, for the sake of simplicity only the control of a single axis is described next. The block diagram of such an automatic control system is presented in Fig. 1.15.
/epubstore/G/V-M-Guzman/Automatic-Control-With-Experiments/OEBPS/images/454499_1_En_1_Chapter/454499_1_En_1_Fig15_HTML.png
Fig. 1.15

A gyro-stabilized video camera control system

A gimbal is used to provide pivoted support to the camera, allowing it to rotate about three orthogonal axes; see Fig. 1.16. A measurement device known as a gyroscope, or gyro for short, measures the inertial angular velocity $\omega$ of the camera. This is not the velocity measured with respect to the cameraman's arms (or the vehicle to which the video camera is attached), but the camera velocity measured with respect to a coordinate frame that is fixed in space. Thus, this velocity is independent of the movement of the cameraman (or of the vehicle to which the video camera is attached). As a simple example, consider the camera orientation control problem about a single axis. Controller 2 may be designed to perform the following mathematical operation:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_{p2}(\omega_d-\omega),{} \end{array} \end{aligned} $$
(1.8)
where $k_{p2}$ is a positive number. Suppose that $\omega_d = 0$; then $u = -k_{p2}\omega$. This means that if, because of the vibration induced by a negative torque $T_T < 0$, i.e., $-T_T > 0$, the camera is instantaneously moving with a large positive velocity $\omega > 0$, then a negative voltage $u = -k_{p2}\omega < 0$ is commanded to the motor, which results in a negative generated torque $T_2 < 0$ intended to compensate for $-T_T > 0$, i.e., resulting in a zero camera angular velocity again. The reader can verify that a similar result is obtained if a negative camera velocity $\omega < 0$ is produced by a $T_T > 0$. Notice that, according to (1.8), $u$ adjusts until the camera velocity $\omega = \omega_d = 0$. Hence, the vibration produced by the disturbance torque $T_T$ is eliminated. Although solving this problem is the main goal of this control system, the camera must also be able to record the scene when pointed in another direction if required. Recall that the scene is moving. This is the main reason for Controller 1 which, for simplicity, is assumed to perform the following mathematical operation:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_d=k_{p1}(\text{desired scene direction}-\text{actual camera direction}), \end{array} \end{aligned} $$
where $k_{p1}$ is a positive constant. Hence, Controller 1 determines the angular velocity that the camera must reach to keep track of the desired scene. Observe that $\omega_d$ adjusts until the actual camera direction equals the desired scene direction. However, according to the above definition, $\omega_d$ would be zero when the actual camera direction equals the desired scene direction. Notice that this is not possible when these variables are equal but not constant, i.e., when the camera must keep moving at a nonzero velocity. For this reason, the following expression is preferable for Controller 1:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} \omega_d&\displaystyle =&\displaystyle k_{p1}(\text{desired scene direction}-\text{actual camera direction})\\ &\displaystyle &\displaystyle +k_{i1}\int_0^t(\text{desired scene direction}-\text{actual camera direction})dt. \end{array} \end{aligned} $$
This allows $\omega_d \neq 0$ even when the actual camera direction equals the desired scene direction, because an integral is constant, but not necessarily zero, when its integrand is zero. Moreover, computing $u$ as in (1.8) forces the camera velocity $\omega$ to reach $\omega_d$ even if the latter is not zero, in a manner similar to the actual position reaching the desired position in Sect. 1.3.1. However, computing $u$ as in (1.8) would result in a zero voltage applied at the motor terminals when $\omega_d = \omega$ and, hence, the motor would tend to stop, i.e., $\omega_d = \omega$ cannot be maintained when $\omega_d \neq 0$. For this reason, the following expression is preferable for Controller 2:
 $$\displaystyle \begin{aligned} \begin{array}{rcl} u=k_{p2}(\omega_d-\omega)+k_{i2}\int_0^t(\omega_d-\omega)dt. \end{array} \end{aligned} $$
This allows the voltage $u$ commanded to the motor to be different from zero even when $\omega_d = \omega$. Thus, this expression for Controller 2 computes a voltage commanded to the motor to force the camera velocity to reach the desired velocity $\omega_d$, which is computed by Controller 1 such that the actual camera direction tracks the direction of the desired scene. Again, the selection of the controller gains $k_{p1}$, $k_{i1}$, $k_{p2}$, $k_{i2}$ is one of the reasons why automatic control theory has been developed.
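The behavior of the two nested PI controllers can be checked numerically. The following Python sketch is purely illustrative and does not come from the book: the camera axis is modeled by the hypothetical dynamics $J\dot{\omega} = k_t u + T_T$ with a constant disturbance torque $T_T$, the desired scene direction moves at a constant rate, and all gains and parameters are assumed values chosen only to make the idea concrete.

```python
# Hypothetical sketch (not from the book): both PI controllers above acting
# on one camera axis. Assumed plant: J*omega' = kt*u + T_T, direction' = omega,
# with a constant disturbance torque T_T and illustrative gain values.

def run(T_T=-0.1, rate=0.2, dt=0.001, T=30.0,
        J=0.05, kt=1.0, kp1=4.0, ki1=2.0, kp2=5.0, ki2=20.0):
    direction, omega = 0.0, 0.0
    int_dir, int_vel = 0.0, 0.0          # the two integral terms
    t = 0.0
    for _ in range(int(T / dt)):
        desired = rate * t               # the scene moves at a constant rate
        e = desired - direction
        int_dir += dt * e
        omega_d = kp1 * e + ki1 * int_dir          # Controller 1 (outer loop)
        ev = omega_d - omega
        int_vel += dt * ev
        u = kp2 * ev + ki2 * int_vel               # Controller 2 (inner loop)
        omega += dt * (kt * u + T_T) / J           # camera angular dynamics
        direction += dt * omega
        t += dt
    return desired - direction, omega

err, om = run()
print(err, om)  # tracking error near zero; omega settles near the scene rate
```

Despite the constant disturbance torque and the fact that the scene keeps moving, the integral terms drive the tracking error to zero while $\omega$ settles at the required nonzero value, which is exactly what the proportional-only law (1.8) could not achieve.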
/epubstore/G/V-M-Guzman/Automatic-Control-With-Experiments/OEBPS/images/454499_1_En_1_Chapter/454499_1_En_1_Fig16_HTML.png
Fig. 1.16

A gimbal

1.4 Nomenclature in Automatic Control

The reader is referred to Fig. 1.17 for a graphical explanation of the concepts defined below.
  • Plant. The mechanism, device, or process to be controlled.

  • Output. The variable or property of the plant that must be controlled.

  • Input. The variable or signal that, when adjusted, produces important changes in the plant output.

  • Desired output or reference. The signal that represents the behavior desired at the plant output.

  • Feedback. The use of plant output measurements to verify whether the plant behaves as the desired output or reference.

  • Controller. The device that computes the value of the signal to be used as the plant input to force the plant output to behave as the desired output or reference.

  • System. An assembly of devices with connections among them. The plant is also designated as a system.

  • Control system. An assembly of devices that includes a plant and a controller.

  • Closed-loop system. A system provided with feedback. In a closed-loop system, the output of every system component is the input of some other system component; thus, it is important to specify which input or output is being referred to.

  • Open-loop system. A system that is not provided with feedback. Although the plant may be naturally provided with some kind of feedback, it is common to designate it as an open-loop system when it is not controlled using feedback.

  • Measurement system. A system devoted to measuring the signals required to implement a closed-loop system. Its main component is a sensor.

  • Actuator. The device that applies the input signal to the plant. It requires high power levels to work.

  • Power amplifier. A device that receives a weak signal from the controller and delivers a powerful signal to the actuator.

  • External disturbance. A signal, external to the control system, that has deleterious effects on the performance of the closed-loop system. It may appear at the plant input or output, or even at any other component of the control system.

/epubstore/G/V-M-Guzman/Automatic-Control-With-Experiments/OEBPS/images/454499_1_En_1_Chapter/454499_1_En_1_Fig17_HTML.png
Fig. 1.17

Nomenclature employed in a closed-loop control system

1.5 History of Automatic Control

The previous sections of this chapter described the objectives pursued when designing a control system, along with some useful ideas on how to achieve them. In this section, a brief history of automatic control is presented. The reader is expected to realize that the development of fundamental tools and concepts in automatic control has been motivated by practical engineering problems arising from the introduction of new technologies [10]. Throughout this exposition, the reader is referred to the specific chapters of this book where the corresponding tools and ideas are further studied. The content of the present section is based on information reported in [6]; the reader is referred to that work for more details on the history of automatic control.

Automatic control has existed and has been applied for more than 2000 years. It is known that some water clocks were built by Ktesibios by 270 BC, in addition to some ingenious mechanisms built at Alexandria and described by Heron. However, from the engineering point of view, the first important advance in automatic control is attributed to James Watt, who in 1789 introduced a velocity regulator for his improved steam engine. Despite its importance, Watt's velocity regulator had several problems. Sometimes the velocity oscillated instead of remaining constant at the desired value or, even worse, sometimes the velocity increased without limit. Working toward a solution to these problems, between 1826 and 1851, J.V. Poncelet and G.B. Airy showed that it was possible to use differential equations to represent the steam engine and the velocity regulator working together (see Chap. 2 for physical system modeling).

By those years, mathematicians knew that the stability of a differential equation was determined by the location of the roots of the corresponding characteristic polynomial equation, and they also knew that instability appeared if some root had a positive real part (see Chap. 3, Sect. 3.4). However, it was not simple to compute the roots of a polynomial equation, and sometimes it was not even possible. In 1868, J.C. Maxwell showed how to establish the stability of steam engines equipped with Watt's velocity regulator just by analyzing the coefficients of the system's differential equation. Nevertheless, this result was only useful for second-, third-, and fourth-order differential equations. Later, between 1877 and 1895, E.J. Routh and A. Hurwitz independently conceived a method of determining the stability of systems of arbitrary order, solving the problem that Maxwell had left open. This method is now known as the Routh criterion or the Routh–Hurwitz criterion (see Chap. 4, Sect. 4.3).

Many applications related to automatic control were reported throughout the nineteenth century. Among the most important were temperature control, pressure control, level control, and the velocity control of rotating machines. Moreover, several applications were reported where steam was used to move large guns and as the actuator in steering systems for large ships [9, 11]. It was during this period that the terms servo-motor and servo-mechanism were introduced in France to describe a movement generated by a servo, or slave, device. However, despite this success, most controllers were of the simple on–off type. People such as E. Sperry and M.E. Leeds realized that performance could be improved by smoothly adjusting the power supplied to the plant as the controlled variable approached its desired value. In 1922, N. Minorsky presented a clear analysis of position control systems and introduced what we now know as the PID controller (see Chap. 5, Sect. 5.2.5). This controller was conceived after observing how a ship's human pilot controls the heading [12].

On the other hand, distortion in amplifiers had posed many problems to telephony companies since 1920. It was then that H.S. Black found that distortion is reduced if a small quantity of the signal at the amplifier output is fed back to its input. During this work, Black was helped by H. Nyquist who, in 1932, published these experiences in a report entitled "Regeneration Theory," in which he established the basis of what we now know as Nyquist analysis (see Chap. 6, Sect. 6.4).

During the period 1935–1940, telephony companies wanted to increase the bandwidth of their communication systems to increase the number of users. To accomplish this, it was necessary for the telephone lines to have a good frequency response characteristic (see Chap. 6). Motivated by this problem, H. Bode studied the relationship between a given attenuation characteristic and the minimum associated phase shift. As a result, he introduced the concepts of gain margin and phase margin (see Chap. 6, Sect. 6.5), and he began to consider the point (−1, 0) in the complex plane as the critical point, instead of the point (+1, 0) introduced by Nyquist. A detailed description of Bode's work appeared in 1945 in his book "Network Analysis and Feedback Amplifier Design."

During World War II, work on control systems focused on some important problems [8]. The search for solutions to these problems motivated the development of new ideas on mechanism control. G.S. Brown of the Massachusetts Institute of Technology showed that electrical and mechanical systems can be represented and manipulated using block diagrams (see Chap. 4, Sect. 4.1), and in 1943 A.C. Hall showed that, by defining the blocks as transfer functions (see Chap. 3), it was possible to find the equivalent transfer function of the complete system. Then, the Nyquist stability criterion could be used to determine gain and phase margins.

Researchers at the Massachusetts Institute of Technology employed phase lead circuits (see Chap. 11, Sect. 11.2.2) in the direct path to improve the performance of the closed-loop system, whereas in the UK several internal loops were employed to modify the response of the closed-loop system.

By the end of World War II, the frequency response techniques, based on the Nyquist methodology and Bode diagrams, were well established, describing control system performance in terms of bandwidth, resonant frequency, phase margin, and gain margin (see Chap. 6). The alternative approach relied on the solution of the corresponding differential equations using the Laplace transform, describing control system performance in terms of rise time, overshoot, steady-state error, and damping (see Chap. 3, Sect. 3.3). Many engineers preferred the latter approach because the results were expressed in "real" terms. However, this approach had the drawback that there was no simple technique allowing designers to relate changes in parameters to changes in the system response. It was precisely the root locus method [7] (see Chap. 5, Sects. 5.1 and 5.2), introduced between 1948 and 1950 by W. Evans, that allowed designers to overcome these obstacles. Hence, what we now know as the classical control techniques were well established by that time, and they were oriented toward single-input, single-output systems represented by ordinary, linear differential equations with constant coefficients.

Then, the era of supersonic and space flight arrived. It became necessary to employ detailed physical models represented by differential equations that could be linear or nonlinear. Engineers working in the aerospace industry found, following the ideas of Poincaré, that it was possible to formulate general differential equations as a set of first-order differential equations: the state variable approach was conceived (see Chap. 7). The main promoter of this approach was R. Kalman, who introduced the concepts of controllability and observability around 1960 (see Sect. 7.7). Kalman also introduced what is today known as the Kalman filter, which was successfully employed for the guidance of the Apollo space capsule, and the linear quadratic regulator, or LQR control, an optimal controller that trades off the speed of the system response against the input effort required to achieve it.

After 1960, the state space approach was the dominant subject for about two decades, leading I. Horowitz, who continued to work on frequency response methods, to write in 1984: "modern PhD.s seem to have a poor understanding of even such a fundamental concept of bandwidth and not the remotest idea of its central importance in feedback theory. It is amazing how many are unaware that the primary reason for feedback in control is uncertainty."

In fact, control systems in practice are subject to parameter uncertainties, external disturbances, and measurement noise. An important advantage of the classical control methods was that they were better suited than the state variable approach to coping with these problems. This is because classical control design methods are naturally based on concepts such as bandwidth and gain and phase margins (see Chap. 6, Sects. 6.5 and 6.6.1), which represent a measure of the robustness of the closed-loop system. Furthermore, it was quickly realized that the powerful results stemming from the state variable approach were difficult to apply to general industrial problems, because exact models of processes are difficult, and sometimes impossible, to obtain. In this respect, K. Astrom and P. Eykhoff wrote in 1971 that an important feature of the classical frequency response methods is that they constitute a powerful technique for system identification, allowing transfer functions to be obtained that are accurate enough to be used in design tasks. In modern control, the models employed are parametric models in terms of state equations, and this has motivated interest in parameter estimation and related techniques. Moreover, the state variable approach has proven to be a very powerful tool for the analysis and design of nonlinear systems.

Finally, new problems have arisen since then in the study of control systems theory, motivating the introduction of diverse new control techniques, some of them still under development. For instance, the nonlinearities found in servo-mechanisms have motivated the study of nonlinear control systems. The control of supersonic aircraft, which operate under wide variations in temperature, pressure, velocity, etc., has motivated the development of adaptive control techniques. The use of computers in modern navigation systems has resulted in the introduction of discrete-time control systems, and so on.

1.6 Experimental Prototypes

It has been stated in the previous sections that automatic control has developed to solve important engineering problems. However, teaching automatic control techniques requires students to experimentally apply their new knowledge of the subject, and it is not feasible to accomplish this using industrial facilities or high-technology laboratories. This is the reason why the teaching of automatic control relies on the construction of experimental prototypes. An experimental prototype is a device with two main features: (i) it is simple enough to be built and put into operation using low-cost components, and (ii) its model is complex enough for some interesting properties to appear, such that the application of the control techniques under study can be demonstrated. A list of the experimental prototypes used in this book is presented in the following, and the specific control techniques tested on them are indicated:
  • Electronic oscillators based on operational amplifiers and bipolar junction transistors (Chap. 9). Frequency response, Nyquist stability criterion, and Routh stability criterion.

  • Permanent magnet brushed DC motors (Chaps. 10 and 11). Several basic controller designs using time response: proportional, proportional–derivative, proportional–integral, proportional–integral–derivative, phase lead controllers, two-degrees-of-freedom controllers.

  • Mechanism with flexibility (Chap. 12). Frequency response for experimental identification and root locus for controller design.

  • Magnetic levitation system (Chap. 13). PID controller design using linear approximation of a nonlinear system and the root locus method.

  • Ball and beam system (Chap. 14). Design of a multi-loop control system using frequency response (Nyquist criterion and Bode diagrams) and the root locus method.

  • Furuta pendulum (Chap. 15). Design of a linear state feedback controller using the state variable approach. A linear approximation of a nonlinear system is employed.

  • Inertia wheel pendulum (Chap. 16). Design of two state feedback controllers. One of these controllers is designed on the basis of the complete nonlinear model of the inertia wheel pendulum and is employed to introduce the reader to the control of nonlinear systems.

1.7 Summary

In the present chapter, the main ideas behind closed-loop control have been explained, and the objectives pursued when designing a control system have been described. A brief history of automatic control has also been presented to show the reader that the concepts and tools in control systems have been motivated by the need to solve important technological problems. This historical review has been related to the content of this book.

1.8 Review Questions

  1. What objectives can the reader give for automatic control?

  2. Can the reader make a list of equipment at home that employs feedback?

  3. Investigate how a pendulum-based clock works. How do you think that feedback appears in the working principle of these clocks?

  4. Why is Watt's velocity regulator for a steam engine historically important?

  5. What does instability of a control system mean?

  6. What do you understand by the term "fast response"?

  7. Why is it stated that an inverted pendulum is unstable?

  8. Why did the frequency response approach develop before the time response approach?