1.1 The Human Being as a Controller
Everybody has been part of a control system at some time: driving a car, balancing a broomstick on a hand, walking or standing up without falling, taking a glass to drink water, and so on. These control systems, however, are not automatic control systems, because a person is required to perform a role in them. To illustrate this idea, this section describes some more technical examples of control systems in which a person performs a role.
1.1.1 Steering a Boat

[Figure: Steering a boat]
[Figure: Actions performed when steering a ship]
In this block diagram, the fundamental concept of feedback in control systems can be observed. Feedback means to feed again: the resulting action of the control system, i.e., the boat's actual course, is measured and compared with the desired course and, on the basis of this comparison, a corrective action (a torque on the rudder wheel) is commanded again, i.e., fed again, trying to render the deviation between the desired and actual courses zero. It is said that the human being performs as a controller: she/he evaluates the actual deviation and, based on this information, commands a corrective order until the actual course reaches the desired course. The arms act as actuators, but notice that the rudder wheel and the mechanical transmission suitably amplify the torque generated by the arms to actuate the rudder.
This control system is not an automatic control system because a human being is required to perform the task. Suppose that a large ship is engaged in a long trip, i.e., traveling between two harbors on two different continents. In such a case, it is preferable to replace the human brain by a computer. Moreover, as the ship is very heavy, a powerful motor must be used to actuate the rudder. Thus, a machine (the computer) must be used to control the ship by controlling another machine (the rudder motor). In such a case, this control system becomes an automatic control system.
1.1.2 Video Recording While Running

[Figure: Video recording while running]
[Figure: Actions performed when video recording while running]
Thus, it is necessary to replace the human being in this control system with more precise mechanisms, i.e., the design of an automatic control system is required. This requires the use of a computer to perform the comparison and decision tasks, and the combined use of tracks and a motor to actuate the camera. Because of the high capability of these machines to perform fast and precise actions, high-quality images can be recorded, as vibration induced on the video camera can be suitably compensated for [1, 2].
1.2 Feedback Is Omnipresent
The main feature of the control systems introduced in the previous section, and of the automatic control systems with which this book is concerned, is feedback: the capability of a control system to command corrective actions until the desired response is accomplished. Feedback, however, is not an invention of human beings: it is rather a concept that human beings have learned from nature, where feedback is omnipresent. The examples in this section are intended to explain this.
1.2.1 A Predator–Prey System
This is a fundamental feedback system in all ecosystems. Predators need to eat prey to survive. If there are many predators, the number of prey diminishes, because many predators require a lot of food. The reduced number of prey then causes the number of predators to diminish as well, because of the lack of food. As the number of predators diminishes, the number of prey increases, because there are fewer predators to eat the prey. Hence, there is a point in time where the number of prey is so large and the number of predators is so small that the number of predators begins to increase, because of the abundance of food. Thus, at some point in the future, the number of predators will be large and the number of prey will be small again, and the process repeats over and over.
Feedback exists in this process because the number of predators depends on the number of prey and vice versa. Notice that, because of this process, the number of predators and prey are kept within a range that renders possible sustainability of the ecosystem. Too many predators may result in prey extinction which, eventually, will also result in predator extinction. On the other hand, too many prey may result in extinction of other species the prey eat and, hence, prey and predator extinction results again.
The reader may wonder whether it is possible for the numbers of predators and prey to reach constant values instead of oscillating. Although this is not common in nature, the question is: why? This class of question can be answered using Control Theory, i.e., the study of (feedback) automatic control systems.
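The oscillation described above can be sketched numerically with the classical Lotka–Volterra predator–prey model. The equations and all parameter values below are illustrative assumptions made for demonstration, not taken from this chapter:

```python
# Euler-integrated Lotka-Volterra model: prey grow and are eaten,
# predators starve without prey. All parameters are illustrative.
def simulate(prey0, pred0, steps=15000, dt=0.001,
             a=1.0, b=0.5, c=0.5, d=1.0):
    prey, pred = prey0, pred0
    history = []
    for _ in range(steps):
        dprey = (a - b * pred) * prey   # prey growth limited by predation
        dpred = (c * prey - d) * pred   # predator growth fed by prey
        prey += dprey * dt
        pred += dpred * dt
        history.append((prey, pred))
    return history

history = simulate(prey0=3.0, pred0=1.0)
prey_peak = max(p for p, _ in history)
prey_low = min(p for p, _ in history)
# The populations oscillate around the equilibrium instead of settling.
print(prey_low, prey_peak)
```

Making the populations settle at constant values instead of oscillating amounts to adding damping to this feedback loop, which is exactly the kind of question Control Theory addresses.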
1.2.2 Homeostasis
Homeostasis is the ability of an organism to maintain its internal equilibrium. This means that variables such as arterial blood pressure, oxygen, CO2 and glucose concentration in blood, in addition to the relations among carbohydrates, proteins, and fats, for instance, are kept constant at levels that are good for health.
1.2.2.1 Glucose Homeostasis [3, 4]

[Figure: Glucose homeostasis: regulation of glucose in the blood]
On the other hand, if the glucose concentration in the blood diminishes, the pancreas delivers more glucagon and less insulin, which has the following effects: (i) it stimulates the liver cells to produce glucose, which is delivered into the blood; (ii) it stimulates the degradation of fats into fatty acids and glycerol, which are delivered into the blood; (iii) it stimulates the liver to produce glucose from glycogen, which is delivered into the blood. The effect of this set of actions is to increase the glucose concentration in the blood to safe, healthy levels.
The glucose regulation mechanisms described above are important because the blood glucose level changes several times within a day: it increases after meals and it decreases between meals because cells use or store glucose during these periods of time. Thus, it is not difficult to imagine that glucose homeostasis performs as a perturbed control system equipped with an efficient regulator: the pancreas.
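The day-long behavior just described can be mimicked with a toy feedback simulation, in which the pancreas acts as a proportional regulator that pulls the glucose level back toward a setpoint after each meal. Every number here is an arbitrary illustration, not physiological data:

```python
# Toy sketch of glucose regulation as a disturbed feedback loop.
# All numbers are illustrative assumptions, not physiological data.
def regulate(setpoint=90.0, steps=600, dt=1.0, gain=0.05):
    glucose = setpoint
    trace = []
    for t in range(steps):
        meal = 5.0 if (t % 200 == 0 and t > 0) else 0.0  # periodic disturbance
        correction = gain * (setpoint - glucose)         # "pancreas" action
        glucose += (correction + meal) * dt
        trace.append(glucose)
    return trace

trace = regulate()
# Glucose jumps after each "meal" but is pulled back toward the setpoint.
print(max(trace), trace[-1])
```

The point of the sketch is the structure, not the numbers: a perturbed plant plus an efficient regulator keeps the controlled variable near its healthy level.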
1.2.2.2 Psychological Homeostasis [3]
According to this concept, homeostasis regulates internal changes arising for both physiological and psychological reasons, which are called necessities. Thus, the life of an organism can be defined as the constant search for an equilibrium between necessities and their satisfaction. Every action searching for such an equilibrium is a behavior.
1.2.2.3 Body Temperature [3, 5]
Human beings measure their body temperatures using temperature sensors in their brains and bodies.
A body temperature decrement causes a reduction of blood supply to the skin to avoid heat radiation from the body to the environment, and the metabolic rate is increased by the body shivering to avoid hypothermia.

[Figure: Regulation of body temperature]
On the other hand, a human being is considered to have fever when the body temperature rises above 38 °C. Fever, however, is the body's natural defense mechanism against infectious diseases, as high temperatures help the human body to overcome the microorganisms that produce disease. This results in body weakness, because of the energy employed in the process. When this defense is not enough, medical assistance is required.
1.3 Real-Life Applications of Automatic Control
In this section, some examples of real-life applications are presented to intuitively understand the class of technological problems with which automatic control is concerned, how they are approached, and to stress the need for automatic control.
1.3.1 A Position Control System

[Figure: A position control system]
A permanent magnet brushed DC motor is used as the actuator. The motor shaft is coupled to the load shaft by means of a gear box. The assembly works as follows. If a positive voltage is applied at the motor terminals, then a counter-clockwise torque is applied to the load. Hence, the load starts moving counter-clockwise. If a negative voltage is applied at the motor terminals, then a clockwise torque is applied to the load and the load starts moving clockwise. If a zero voltage is applied at the motor terminals, then a zero torque is applied to the load and the load has a tendency to stop.

- If θ < θd, then v > 0 and the load moves counter-clockwise such that θ approaches θd.
- If θ > θd, then v < 0 and the load moves clockwise such that θ approaches θd again.
- If θ = θd, then v = 0, the load does not move, and θ = θd is maintained.

[Figure: The load must always move such that θ → θd. (a) θd > θ, v > 0, the load moves counter-clockwise. (b) θd < θ, v < 0, the load moves clockwise]
According to this reasoning, it is concluded that the law presented in (1.1) for computing the voltage applied at the motor terminals has the potential to work well in practice.
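A quick numerical sketch can support this conclusion. Here the motor and load are replaced by a crude double-integrator model with viscous friction and unit gains, and the voltage is computed with a proportional law matching the sign reasoning above. The plant model and every constant are illustrative assumptions, not the book's model, which is developed in later chapters:

```python
# Proportional position control, v = kp * (theta_d - theta), applied to a
# crude motor/load model (double integrator with viscous friction).
# The plant model and all constants here are illustrative assumptions.
def final_position(kp, theta_d=1.0, steps=20000, dt=0.001, friction=0.5):
    theta, omega = 0.0, 0.0   # initial position and velocity
    for _ in range(steps):
        v = kp * (theta_d - theta)     # proportional law, as reasoned above
        accel = v - friction * omega   # simplified motor/load dynamics
        omega += accel * dt
        theta += omega * dt
    return theta

print(final_position(kp=4.0))    # settles close to theta_d = 1.0
print(final_position(kp=-1.0))   # negative gain: the load runs away
```

Experimenting with the gain kp reproduces the behavior described in the text: a positive gain drives θ toward θd, a larger gain reacts faster but oscillates more, and a negative gain makes the loop unstable.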

[Figure: Block diagram of a position control system]
[Figure: Three possible transient responses in a one-degree-of-freedom position control system]
[Figure: Simple pendulum]
Another important factor to be taken into account in a position control system is the desired load trajectory. Tracking a constant value of θd is clearly easier than the case where θd changes very fast, i.e., when the first or the second time derivative of θd is large. Hence, the control law in (1.1) must be designed such that the closed-loop control system behaves satisfactorily in any of these situations. When θd differs from all of these situations, it is assumed that the control system will behave correctly if it behaves well in each of the situations described above. This is the main idea behind the study of the system's steady-state error in Chap. 4.
Thus, the three basic specifications for a closed-loop control system are the transient response, the steady-state response (or steady-state error), and stability. A controller must be designed such that, by satisfying these three fundamental specifications, a fast and well-damped system response is obtained, the load position reaches the desired position in the steady state, and the closed-loop system is stable. To achieve these goals, the automatic control techniques studied in this book require knowledge and study of the mathematical model of the whole closed-loop control system. As explained in Chap. 2, this mathematical model is given as ordinary differential equations, which are assumed to be linear with constant coefficients. This is the reason why Chap. 3 is concerned with the study of this class of differential equations. The main idea is to identify the properties of differential equations that determine their stability as well as their transient and steady-state responses. This allows a controller to be designed as a component that suitably modifies the properties of a differential equation such that the closed-loop differential equation behaves as desired. This is the rationale behind the automatic control system design tools presented in this book.
The control techniques studied in this book can be grouped as classical or modern. Classical control techniques are presented in Chaps. 3, 4, 5, and 6, and there are two different approaches: time response techniques (Chap. 5) and frequency response techniques (Chap. 6). Classical control techniques rely on the use of the Laplace transform to solve and analyze ordinary linear differential equations. Classical time response techniques study the solution of differential equations on the basis of the locations of transfer function poles and zeros (Chap. 3), and the main control design tool is the Root Locus (Chap. 5). Classical frequency response techniques exploit the fundamental idea behind the Fourier transform: (linear) control systems behave as filters; a system response is basically obtained by filtering the command signal applied at the control system input. This is why the fundamental analysis and design tools in this approach are Bode and polar plots (Chap. 6), which are widely employed to analyze and design linear filters (low-pass, high-pass, band-pass, etc.). Some experimental applications of the classical control techniques are presented in Chaps. 9, 10, 11, 12, 13, and 14.
On the other hand, the modern control technique studied in this book is known as the state variables approach (Chap. 7) which, contrary to classical control tools, allows the study of the internal behavior of a control system. This means that the state variables approach provides more information about the system to be controlled, which can be exploited to improve performance. Some examples of the experimental application of this approach are presented in Chaps. 15 and 16.
1.3.2 Robotic Arm
- Take a piece of some material from one place to another, to assemble it together with other components into complex devices such as car components.
- Track a pre-established trajectory in space, to solder two pieces of metal or to paint surfaces. Assume that the pre-established trajectory is given as six coordinates parameterized by time, i.e., three for the robot tip position [xd(t), yd(t), zd(t)] and three for the robot tip orientation [α1d(t), α2d(t), α3d(t)]. Thus, the control objective is that the actual robot tip position [x(t), y(t), z(t)] and orientation [α1(t), α2(t), α3(t)] reach their desired values as time grows, i.e., that [x(t), y(t), z(t)] → [xd(t), yd(t), zd(t)] and [α1(t), α2(t), α3(t)] → [α1d(t), α2d(t), α3d(t)] as t → ∞.

[Figure: A commercial robotic arm (with permission of Crustcrawler Robotics)]
[Figure: A two-degrees-of-freedom robotic arm]
Two bodies, called the arm and the forearm, move in a coordinated fashion to force the robotic arm tip to track a desired trajectory in space to perform the tasks described above. To achieve this goal, two permanent magnet brushed DC motors are employed. The first motor is placed at the shoulder, i.e., at the point in Fig. 1.13 where the x and y axes intersect. The stator of this motor is fixed at some point that never moves (the robot base), whereas the motor shaft is fixed to the arm. The second motor is placed at the elbow, i.e., at the point joining the arm and forearm. The stator of this motor is fixed to the arm, whereas the shaft is fixed to the forearm. This allows the arm to move freely with respect to the robot base and the forearm to move freely with respect to the arm. Hence, any point can be reached by the robot tip, as long as it belongs to the plane where the robot moves and is within the robot's reach.
The following PID control laws can be used to compute the voltages v1 and v2 applied at the motor terminals:

v1 = kp1(θ1d − θ1) + kd1 d(θ1d − θ1)/dt + ki1 ∫ (θ1d − θ1) dt,   (1.2)

v2 = kp2(θ2d − θ2) + kd2 d(θ2d − θ2)/dt + ki2 ∫ (θ2d − θ2) dt,   (1.3)

where θ1, θ2 are the actual joint positions and θ1d, θ2d their desired values.
The main problem in the use of (1.2) and (1.3) is the selection of the controller gains kp1, kp2, kd1, kd2, ki1, ki2, and automatic control theory has been developed to solve this kind of problem. Several ways of selecting these controller gains are presented in this book; they are known as proportional–integral–derivative (PID) control tuning methods.
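To make the PID structure concrete, here is a sketch of a discrete-time PID update applied to a single joint. The first-order joint model, the time step, and the hand-picked gains are all illustrative assumptions; systematic tuning is the subject of later chapters:

```python
# Discrete-time PID controller, v = kp*e + ki*integral(e) + kd*de/dt,
# driving a crude one-joint model. All constants are illustrative.
def pid_step(e, state, kp, ki, kd, dt):
    integral, prev_e = state
    integral += e * dt                 # accumulate the error (I term)
    derivative = (e - prev_e) / dt     # finite-difference error rate (D term)
    v = kp * e + ki * integral + kd * derivative
    return v, (integral, e)

def run_joint(kp=8.0, ki=4.0, kd=3.0, theta_d=1.0, dt=0.001, steps=20000):
    theta, omega = 0.0, 0.0
    state = (0.0, theta_d - theta)     # zero integral, current error
    for _ in range(steps):
        e = theta_d - theta
        v, state = pid_step(e, state, kp, ki, kd, dt)
        omega += (v - 0.5 * omega) * dt   # simplified joint dynamics
        theta += omega * dt
    return theta

print(run_joint())   # the joint settles near theta_d = 1.0
```

Changing kp, ki, and kd changes how fast the joint converges and how much it oscillates, which is why gain selection (tuning) is the central practical problem.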
1.3.3 Automatic Steering of a Ship

[Figure: Block diagram for the automatic steering of a ship]

The reader can verify, following the above sequence of ideas, that in the case where the desired course is negative, it is again reached by the actual course if kp1 > 0. This means that a positive kp1 is required to ensure that the control system is stable. Moreover, for reasons similar to those in the position control system, the larger kp1 > 0 is, the faster the ship rotates, and several oscillations may appear before it settles at the desired course. On the contrary, the smaller kp1 > 0 is, the more slowly the ship rotates, and the desired course is reached only after a long period of time. Thus, the transient behavior of the actual course is similar to that shown in Fig. 1.10 as kp1 > 0 is changed. Finally, it is not difficult to verify that the control system is unstable if kp1 is negative.

1.3.4 A Gyro-Stabilized Video Camera

[Figure: A gyro-stabilized video camera control system]
[Figure: A gimbal]
1.4 Nomenclature in Automatic Control
- Plant. The mechanism, device, or process to be controlled.
- Output. The variable or property of the plant that must be controlled.
- Input. The variable or signal that, when adjusted, produces important changes in the plant output.
- Desired output or reference. The signal that represents the behavior desired at the plant output.
- Feedback. The use of plant output measurements to verify whether the plant behaves as the desired output or reference.
- Controller. The device that computes the value of the signal to be used as the plant input to force the plant output to behave as the desired output or reference.
- System. An assembly of devices with connections among them. The plant is also designated as a system.
- Control system. An assembly of devices that includes a plant and a controller.
- Closed-loop system. A system provided with feedback. In a closed-loop system, the output of every system component is the input of some other system component; thus, it is important to specify which input or output is referred to.
- Open-loop system. A system that is not provided with feedback. Although the plant may naturally possess some kind of internal feedback, it is common to designate it as an open-loop system when it is not controlled using feedback.
- Measurement system. A system devoted to measuring the signals required to implement a closed-loop system. Its main component is a sensor.
- Actuator. A device that applies the input signal to the plant. It is a device that works at high power levels.
- Power amplifier. A device that receives a weak signal from a controller and delivers a powerful signal to the actuator.
- External disturbance. A signal external to the control system that has deleterious effects on the performance of the closed-loop system. This signal may appear at the plant input or output, or even at any component of the control system.

[Figure: Nomenclature employed in a closed-loop control system]
1.5 History of Automatic Control
The previous sections of this chapter described the objectives pursued when designing a control system, along with some useful ideas on how to achieve them. In this section, a brief history of automatic control is presented. The reader is expected to realize that the development of fundamental tools and concepts in automatic control has been motivated by practical engineering problems arising when introducing new technologies [10]. In this exposition, the reader is referred to specific chapters in this book where the corresponding tools and ideas are further studied. The content of the present section is based on information reported in [6]; the reader is referred to that work for more details on the history of automatic control.
Automatic control has existed and has been applied for more than 2000 years. It is known that some water clocks were built by Ktesibios around 270 BC, in addition to some ingenious mechanisms built at Alexandria and described by Heron. However, from the engineering point of view, the first important advance in automatic control is attributed to James Watt, who in 1789 introduced a velocity regulator for his improved steam engine. Despite its importance, however, Watt's velocity regulator had several problems. Sometimes the velocity oscillated instead of remaining constant at a desired value or, even worse, sometimes the velocity increased without limit. Working toward a solution of these problems, between 1826 and 1851, J.V. Poncelet and G.B. Airy showed that it was possible to use differential equations to represent the steam engine and the velocity regulator working together (see Chap. 2 for physical system modeling).
By those years, mathematicians knew that the stability of a differential equation was determined by the location of the roots of the corresponding characteristic polynomial equation, and they also knew that instability appeared if some root had a positive real part (see Chap. 3, Sect. 3.4). However, it was not simple to compute the roots of a polynomial equation, and sometimes it was not even possible. In 1868, J.C. Maxwell showed how to establish the stability of steam engines equipped with Watt's velocity regulator just by analyzing the coefficients of the system's differential equation. Nevertheless, this result was only useful for second-, third-, and fourth-order differential equations. Later, between 1877 and 1895, and independently, E.J. Routh and A. Hurwitz conceived a method of determining the stability of systems of arbitrary order, solving the problem that Maxwell had left open. This method is now known as the Routh criterion or the Routh–Hurwitz criterion (see Chap. 4, Sect. 4.3).
Many applications related to automatic control were reported throughout the nineteenth century. Among the most important were temperature control, pressure control, level control, and the velocity control of rotative machines. On the other hand, several applications were reported where steam was used to move large guns and as the actuator in steering systems for large ships [9, 11]. It was during this period that the terms servo-motor and servo-mechanism were introduced in France to describe a movement generated by a servo or a slave device. However, despite this success, most controllers were simple on–off devices. People such as E. Sperry and M.E. Leeds realized that performance could be improved by smoothly adjusting the power supplied to the plant as the controlled variable approached its desired value. In 1922, N. Minorsky presented a clear analysis of position control systems and introduced what we now know as the PID controller (see Chap. 5, Sect. 5.2.5). This controller was conceived after observing how a ship's human pilot controls the heading [12].
On the other hand, distortion in amplifiers had posed many problems for telephony companies since 1920. It was then that H.S. Black found that distortion is reduced if a small quantity of the signal at the amplifier output is fed back to its input. During this work, Black was helped by H. Nyquist who, in 1932, published these experiences in a report entitled "Regeneration Theory," where he established the basis of what we now know as Nyquist analysis (see Chap. 6, Sect. 6.4).
During the period 1935–1940, telephony companies wanted to increase the bandwidth of their communication systems to increase the number of users. To accomplish this, it was necessary for the telephone lines to have a good frequency response characteristic (see Chap. 6). Motivated by this problem, H. Bode studied the relationship between a given attenuation characteristic and the minimum associated phase shift. As a result, he introduced the concepts of gain margin and phase margin (see Chap. 6, Sect. 6.5), and he began to consider the point (−1, 0) in the complex plane as the critical point, instead of the point (+1, 0) introduced by Nyquist. A detailed description of Bode's work appeared in 1945 in his book "Network Analysis and Feedback Amplifier Design."
During World War II, work on control systems focused on several important problems [8]. The search for solutions to these problems motivated the development of new ideas on mechanism control. G.S. Brown of the Massachusetts Institute of Technology showed that electrical and mechanical systems can be represented and manipulated using block diagrams (see Chap. 4, Sect. 4.1), and A.C. Hall showed in 1943 that, by defining the blocks as transfer functions (see Chap. 3), it was possible to find the equivalent transfer function of the complete system. Then, the Nyquist stability criterion could be used to determine gain and phase margins.
Researchers at the Massachusetts Institute of Technology employed phase lead circuits (see Chap. 11, Sect. 11.2.2) in the direct path to improve the performance of the closed-loop system, whereas in the UK several internal loops were employed to modify the response of the closed-loop system.
By the end of World War II, the frequency response techniques, based on the Nyquist methodology and Bode diagrams, were well established, describing control system performance in terms of bandwidth, resonant frequency, phase margin, and gain margin (see Chap. 6). The alternative approach relied on the solution of the corresponding differential equations using the Laplace transform, describing control system performance in terms of rise time, overshoot, steady-state error, and damping (see Chap. 3, Sect. 3.3). Many engineers preferred the latter approach because its results were expressed in "real" terms, but it had the drawback that there was no simple technique relating changes in parameters to changes in the system response. It was precisely the Root Locus method [7] (see Chap. 5, Sects. 5.1 and 5.2), introduced between 1948 and 1950 by W. Evans, that allowed designers to overcome this obstacle. Hence, what we now know as the classical control techniques were well established by that time, oriented toward single-input single-output systems represented by ordinary, linear differential equations with constant coefficients.
Then, the era of supersonic and space flights arrived. It was necessary to employ detailed physical models represented by differential equations that could be linear or nonlinear. Engineers working in the aerospace industries found, following the ideas of Poincaré, that it was possible to formulate general differential equations in terms of a set of first-order differential equations: the state variable approach was conceived (see Chap. 7). The main promoter of this approach was R. Kalman who introduced the concepts of controllability and observability around 1960 (see Sect. 7.7). Kalman also introduced what is today known as the Kalman filter, which was successfully employed for guidance of the Apollo space capsule, and the linear quadratic regulator, or LQR control, which is an optimal controller minimizing the system time response and the input effort required for it.
After 1960, the state space approach was the dominant subject for about two decades, leading I. Horowitz, who continued to work on frequency response methods, to write in 1984: "modern PhDs seem to have a poor understanding of even such a fundamental concept as bandwidth, and not the remotest idea of its central importance in feedback theory. It is amazing how many are unaware that the primary reason for feedback in control is uncertainty."
In fact, control systems in practice are subject to parameter uncertainties, external disturbances, and measurement noise. An important advantage of classical control methods was that they were better suited than the state variable approach to coping with these problems. This is because classical control design methods are naturally based on concepts such as bandwidth and gain and phase margins (see Chap. 6, Sects. 6.5 and 6.6.1), which represent a measure of the robustness of the closed-loop system. Furthermore, it was rapidly realized that the powerful results stemming from the state variable approach were difficult to apply to general industrial problems, because exact models of processes are difficult, and sometimes impossible, to obtain. In this respect, K. Astrom and P. Eykhoff wrote in 1971 that an important feature of the classical frequency response methods is that they constitute a powerful technique for system identification, allowing transfer functions to be obtained that are accurate enough to be used in design tasks. In modern control, the models employed are parametric models in terms of state equations, and this has motivated interest in parameter estimation and related techniques. Moreover, the state variable approach has been demonstrated to be a very powerful tool for the analysis and design of nonlinear systems.
Finally, new problems have arisen since then in the study of control systems theory, motivating the introduction of diverse new control techniques, some of them still under development. For instance, nonlinearities found in servo-mechanisms have motivated the study of nonlinear control systems. Control of supersonic aircraft, which operate under wide variations in temperature, pressure, velocity, etc., has motivated the development of adaptive control techniques. The use of computers in modern navigation systems has resulted in the introduction of discrete-time control systems, and so on.
1.6 Experimental Prototypes
- Electronic oscillators based on operational amplifiers and bipolar junction transistors (Chap. 9). Frequency response, Nyquist stability criterion, and Routh stability criterion.
- Permanent magnet brushed DC motors (Chaps. 10 and 11). Several basic controller designs using the time response: proportional, proportional–derivative, proportional–integral, proportional–integral–derivative, phase lead, and two-degrees-of-freedom controllers.
- Mechanism with flexibility (Chap. 12). Frequency response for experimental identification and root locus for controller design.
- Magnetic levitation system (Chap. 13). PID controller design using a linear approximation of a nonlinear system and the root locus method.
- Ball and beam system (Chap. 14). Design of a multi-loop control system using the frequency response (Nyquist criterion and Bode diagrams) and the root locus method.
- Furuta pendulum (Chap. 15). Design of a linear state feedback controller using the state variable approach. A linear approximation of a nonlinear system is employed.
- Inertia wheel pendulum (Chap. 16). Design of two state feedback controllers. One of these controllers is designed on the basis of the complete nonlinear model of the inertia wheel pendulum and is employed to introduce the reader to the control of nonlinear systems.
1.7 Summary
In the present chapter, the main ideas behind closed-loop control have been explained, and the objectives pursued when designing a control system have been described. A brief history of automatic control has been presented to show the reader that the concepts and tools of control systems have been motivated by the need to solve important technological problems. This historical review has been related to the content of this book.
1.8 Review Questions
1. What objectives can the reader give for automatic control?
2. Can the reader make a list of equipment at home that employs feedback?
3. Investigate how a pendulum-based clock works. How do you think feedback appears in the working principle of these clocks?
4. Why is Watt's velocity regulator for a steam engine historically important?
5. What does instability of a control system mean?
6. What do you understand by the term "fast response"?
7. Why is it stated that an inverted pendulum is unstable?
8. Why did the frequency response approach develop before the time response approach?