OPEN ACCESS
The performance of automatic control systems hinges on the parameters of the proportional–integral–derivative (PID) controller. Therefore, this paper attempts to determine the most suitable parameter values of the PID controller. For this purpose, the particle swarm optimization (PSO) was improved by introducing the flying time T and the adaptive weight ω, and the improved PSO (IPSO) was compared against the basic PSO and the PSO modified with both inertial weight and constriction factor (PSOω,χ). After that, the IPSO was applied to optimize the parameters of the PID controller. With a second-order inertia model as the control object, the parameters of the PID controller optimized by the IPSO were contrasted with those optimized by the traditional Ziegler–Nichols optimization method. The results show that the IPSO is faster and more accurate than the traditional approach. The research findings provide new insights into the optimization of the PID controller and the application of the PSO.
flying time, adaptive weight, constriction factor, improved particle swarm optimization (IPSO), proportional–integral–derivative (PID) controller
The proportional–integral–derivative (PID) controller is a robust and simple control loop feedback mechanism, which has been widely applied for digital PID control in the manufacturing of electronics, machines, chemicals and metal products. The control effect depends on the proportional gain constant kp, the integral gain ki and the derivative gain kd, while the control error hinges on the proportional, integral and derivative terms (denoted as P, I and D, respectively) of the object (Yu and Yan, 2006). However, it is very difficult to find the most suitable parameters of the PID controller. To overcome the difficulty, many new methods have been developed to determine the parameters that ensure the stable auto-optimization and adaptive control of the PID controller (Sun et al., 2004).
Traditionally, the parameters of the PID controller are optimized numerically or graphically using trial-and-error against Bode plots. Recent years have seen the proliferation of intelligent optimization algorithms in the parameter optimization of the PID controller, including but not limited to the genetic algorithm (GA), the ant colony algorithm (ACA), the seeker optimization algorithm (SOA) and particle swarm optimization (PSO) (Yu et al., 2013; Tao et al., 2012). Among them, the PSO, proposed by Kennedy and Eberhart (1998), has been extensively applied worldwide to optimize the parameters of the PID controller. On the upside, the PSO enjoys a simple structure, high accuracy, fast convergence and strong adaptability. In particular, the algorithm can be extended for multi-objective optimization. On the downside, the PSO suffers from slow convergence and undesirable accuracy in certain conditions (Atyabi and Samadzadegan, 2011; Meng et al., 2013).
Considering the above, this paper introduces the flying time T and the adaptive weight ω to the PSO algorithm, and compares the improved PSO (IPSO) against the basic PSO and the PSO modified with both inertial weight and constriction factor (PSOω,χ). The experimental results verify that the IPSO can achieve high accuracy and fast convergence. Next, the IPSO was applied to optimize the parameters of the PID controller (Ono and Nakayama, 2009). The control object is a second-order inertia model. The IPSO-based optimization outperformed the traditional parameter optimization method for the PID controller (the Ziegler–Nichols optimization method) (Gong et al., 2011; Yang et al., 2010; Yu and Cao, 2014).
The PSO was first intended for simulating the social behaviors of bird flocks or fish schools. The algorithm has been widely adopted for parameter optimization in high-dimensional spaces, thanks to its simple structure, search efficiency and fast global convergence.
2.1. The basic PSO
The basic PSO works by having a population (called a swarm) of z candidate solutions (called particles), which aim to approximate the global minimum x_{0} of the objective function f: R^{n}→R. These particles are moved around in the search space D according to a few simple formulae. The movements of the particles are guided by their own best known position X^{z(pbest)} in the search space as well as the entire swarm's best known position X^{gbest}. The position of particle z is determined by the solution of the objective function. In each iteration, the position is updated and represented by a vector X_{l}^{z}∈R^{n}.
The position of particle z is updated based on its current velocity V_{l}^{z}∈R^{n} and the previous position X_{l-1}^{z}:

$X_{l}^{z}=X_{l-1}^{z}+V_{l}^{z}$ (1)
The velocity V_{l}^{z} is updated by the formula below:

$V_{l}^{z}=V_{l-1}^{z}+c_{1}\Xi\left(X^{z\left(pbest\right)}-X_{l-1}^{z}\right)+c_{2}\Xi\left(X^{gbest}-X_{l-1}^{z}\right)$ (2)
where V_{l-1}^{z} is the previous velocity of particle z; Ξ is a diagonal matrix of random numbers in the interval [0, 1]; c_{1} is the cognitive parameter reflecting the effect of individual experience on the decision-making of the next particle; c_{2} is the social parameter reflecting the effect of social experience on the decision-making of the next particle. Together, the two parameters characterize the trend of the velocity update. Previous studies have recommended the following values for c_{1} and c_{2}: c_{1}=c_{2}=2, c_{1}=c_{2}=2.05, or c_{1}>c_{2} with c_{1}+c_{2}≤4.10.
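As a concrete illustration, the velocity update of Equation (2) can be sketched in Python; the diagonal random matrix Ξ is realized as element-wise uniform random factors (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_velocity(v_prev, x_prev, p_best, g_best, c1=2.0, c2=2.0):
    """Basic PSO velocity update, Eq. (2): inertia term plus the
    cognitive (personal best) and social (global best) pulls."""
    r1 = rng.uniform(0.0, 1.0, size=x_prev.shape)  # diagonal entries of Xi
    r2 = rng.uniform(0.0, 1.0, size=x_prev.shape)
    return v_prev + c1 * r1 * (p_best - x_prev) + c2 * r2 * (g_best - x_prev)
```

Note that when a particle already sits at both its personal best and the global best, the two attraction terms vanish and only the inertia term remains.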
Figure 1. Particle movement
It can be seen from Figure 1 and Equation (2) that the particle velocity is updated by three terms: the inertia of the previous velocity, the cognitive pull toward X^{z(pbest)} and the social pull toward X^{gbest}. The flow chart of the basic PSO is presented in Figure 2 below.
Figure 2. Flow chart of the basic PSO
It can be seen that the basic PSO contains the three key steps below:
(1) Initialize the swarm size z.
(2) Randomly select an X_{0}^{z} from the interval [X_{min}, X_{max}] that obeys the uniform distribution.
(3) Randomly select a V_{0}^{z} from the interval [V_{min}, V_{max}] that obeys the uniform distribution.
2.2. Variants
Some variants of the PSO have been developed to enhance its velocity, stability or convergence. A well-known variant is the PSO with inertial weight (PSOω), which either fixes or reduces the inertial weight. The basic idea is to balance the local and global searches with the addition of the inertial weight ω. The impact of ω on the velocity update of each particle can be expressed as:
$V_{l}^{z}=\omega V_{l-1}^{z}+c_{1} \Xi\left(X^{z(pbest)}-X_{l-1}^{z}\right)+c_{2} \Xi\left(X^{gbest}-X_{l-1}^{z}\right)$ (3)
The value of the inertial weight ω is positively correlated with the global search ability of the algorithm and negatively with the local search ability. In other words, a large inertial weight helps to avoid the local minimum trap and boost the global search, while a small inertial weight facilitates the accurate local search and promotes the convergence of the algorithm.
Many different PSO variants can be created with different weight update formulas, such as the linearly decreasing weight PSO, the adaptive weight PSO and the random weight PSO. For example, the linearly decreasing weight PSO can prevent the premature convergence and oscillation around the global optimum that afflict the basic PSO.
Currently, the most accepted inertial weight strategy is to establish ω∈[ω_{min}, ω_{max}] and reduce its value according to the number of the current iteration:

$\omega=\omega_{max}-\frac{\left(\omega_{max}-\omega_{min}\right)}{Itr_{max}}\, l$ (4)
where Itr_{max} is the maximum number of iterations. The recommended values are ω_{max}=0.9 and ω_{min}=0.4.
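A minimal sketch of the linearly decreasing weight schedule of Equation (4), using the recommended bounds (the function name is illustrative):

```python
def linear_weight(l, itr_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertial weight, Eq. (4):
    starts at w_max and falls to w_min at the last iteration."""
    return w_max - (w_max - w_min) / itr_max * l
```

Early iterations therefore favor the global search (large ω), and later iterations favor the local search (small ω).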
The basic PSO can be viewed as a special case in which the inertial weight is set to 1 throughout the iterations.
For better control of the particle velocity, the constriction factor χ can be introduced:

$V_{l}^{z}=\chi\left[\omega V_{l-1}^{z}+c_{1} \Xi\left(X^{z(pbest)}-X_{l-1}^{z}\right)+c_{2} \Xi\left(X^{gbest}-X_{l-1}^{z}\right)\right]$ (5)
where

$\chi=\frac{2}{\left|2-\varphi-\sqrt{\varphi^{2}-4\varphi}\right|}$ (6)

and φ=c_{1}+c_{2}>4.
The recommended value of χ is 0.729 with c_{1}=c_{2}=2.05. The PSO modified with both inertial weight and constriction factor is denoted as the PSOω,χ.
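The constriction factor of Equation (6) can be computed directly from c₁ and c₂; with c₁ = c₂ = 2.05 (so φ = 4.1) it reproduces the commonly cited value χ ≈ 0.729 (a sketch, not from the paper):

```python
import math

def constriction_factor(c1, c2):
    """Constriction factor chi, Eq. (6), valid when phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction factor requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```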
2.3. The IPSO
The basic PSO was improved with the addition of flying time T and adaptive weight ω, aiming to enhance the stability and convergence speed. The adaptive weight ω can be updated as:
$\omega(l)=C \cdot e^{-F(l)/F(l-1)}$ (7)

where ω(l) is the adaptive weight of the l-th iteration; F(l) is the global best fitness of the l-th iteration; C is the compressibility factor, which is a constant. The impact of ω(l) on the velocity update of each particle can be expressed as:

$V_{l}^{z}=\omega(l) \cdot V_{l-1}^{z}+c_{1} \Xi\left(X^{z(pbest)}-X_{l-1}^{z}\right)+c_{2} \Xi\left(X^{gbest}-X_{l-1}^{z}\right)$ (8)
The flying time T is updated according to the following expression:

$T=t \cdot\left(1-\frac{k \cdot l}{Itr_{max}}\right)$ (9)

where t is the initial flying time; k is an adjusting factor, which is a constant. The impact of T on the position update of each particle can be expressed as:

$X_{l}^{z}=X_{l-1}^{z}+T \cdot V_{l}^{z}$ (10)
The IPSO can be implemented in the following steps:
(1) Initialize the swarm size, particle positions and particle velocities
(2) Calculate the fitness of each particle.
(3) Compare the fitness of each particle with its personal best known fitness; if the new fitness is better, take it as the personal best.
(4) Compare the fitness of each particle with the global best known fitness; if the new fitness is better, take it as the global best.
(5) Update the velocity and position of each particle according to Equations (8) and (10).
(6) Output the solution if the termination condition is satisfied; otherwise, return to Step (2).
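The six steps above can be condensed into a minimal Python sketch of the IPSO. The sign in the adaptive-weight exponent, the guard against a zero previous fitness, and the velocity initialization are assumptions of this sketch, not prescriptions from the paper:

```python
import numpy as np

def ipso(f, dim, z=50, itr_max=200, c1=1.49, c2=1.49,
         t0=0.6, k=0.9, C=1.0, x_range=(-15.0, 15.0), seed=0):
    """Minimal IPSO sketch: adaptive weight (Eq. 7), velocity update (Eq. 8),
    flying time (Eq. 9) and position update (Eq. 10)."""
    rng = np.random.default_rng(seed)
    lo, hi = x_range
    x = rng.uniform(lo, hi, (z, dim))
    v = rng.uniform(lo, hi, (z, dim)) * 0.1      # assumed velocity init
    fit = np.array([f(p) for p in x])
    pbest, pbest_fit = x.copy(), fit.copy()
    g = int(np.argmin(fit))
    gbest, gbest_fit = x[g].copy(), float(fit[g])
    prev_best = gbest_fit
    for l in range(1, itr_max + 1):
        # Eq. (7): adaptive weight from the ratio of successive global bests
        w = C * np.exp(-gbest_fit / prev_best) if prev_best != 0 else C
        # Eq. (9): flying time shrinks linearly with the iteration count
        T = t0 * (1.0 - k * l / itr_max)
        r1 = rng.uniform(size=(z, dim))
        r2 = rng.uniform(size=(z, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (8)
        x = x + T * v                                              # Eq. (10)
        fit = np.array([f(p) for p in x])
        improved = fit < pbest_fit
        pbest[improved] = x[improved]
        pbest_fit[improved] = fit[improved]
        prev_best = gbest_fit
        g = int(np.argmin(pbest_fit))
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), float(pbest_fit[g])
    return gbest, gbest_fit
```

Running this sketch on a simple sphere objective contracts the swarm toward the origin within the 200 iterations used in the paper.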
2.4. Verification of the IPSO with classical functions
Five classical functions (Table 1) were selected to compare the IPSO with the basic PSO and the PSOω,χ. The pseudocodes of the basic PSO, the PSOω,χ and the IPSO are given in Tables 2~4, respectively. The parameter settings of the three PSOs are listed in Table 5. During the verification, the solutions of the classical functions were restricted to the ranges shown in Table 6. The results of the three PSOs on the five classical functions are recorded in Table 7. Figures 3~7 compare the results of all three PSOs obtained through 200 iterations.
Table 1. The selected classical function
Classical function

$f_{1}(x)=\sum_{i=1}^{10} x_{i}^{2}$

$f_{2}\left(x_{1}, x_{2}\right)=20+x_{1}^{2}+x_{2}^{2}-10\left[\cos \left(2 \pi x_{1}\right)+\cos \left(2 \pi x_{2}\right)\right]$

$f_{3}(x, y)=0.5+\frac{\sin ^{2} \sqrt{x^{2}+y^{2}}-0.5}{\left[1+0.001\left(x^{2}+y^{2}\right)\right]^{2}}$

$f_{4}(x)=418.9829\, n-\sum_{i=1}^{n} x_{i} \sin \left(\left|x_{i}\right|^{1 / 2}\right)$

$f_{5}\left(x_{1}, x_{2}\right)=100\left(x_{2}-x_{1}^{2}\right)^{2}+\left(1-x_{1}\right)^{2}$
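The five benchmarks in Table 1 are the standard Sphere, Rastrigin, Schaffer, Schwefel and Rosenbrock functions; a Python sketch of their definitions (each has a known minimum of zero, which makes them convenient optimizer tests):

```python
import numpy as np

def f1(x):                      # Sphere, minimum 0 at x = 0
    return float(np.sum(np.square(x)))

def f2(x1, x2):                 # 2-D Rastrigin, minimum 0 at (0, 0)
    return 20 + x1**2 + x2**2 - 10 * (np.cos(2*np.pi*x1) + np.cos(2*np.pi*x2))

def f3(x, y):                   # Schaffer, minimum 0 at (0, 0)
    r2 = x**2 + y**2
    return 0.5 + (np.sin(np.sqrt(r2))**2 - 0.5) / (1 + 0.001*r2)**2

def f4(x):                      # Schwefel, minimum ~0 at x_i = 420.9687
    x = np.asarray(x, dtype=float)
    return 418.9829*x.size - float(np.sum(x*np.sin(np.sqrt(np.abs(x)))))

def f5(x1, x2):                 # Rosenbrock, minimum 0 at (1, 1)
    return 100*(x2 - x1**2)**2 + (1 - x1)**2
```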
Table 2. Pseudocode for the basic PSO algorithm

Data: c_1, c_2, Z, Itr_max, ω_max, ω_min
Generate random X_0^z and V_0^z for the Z particles of the swarm;
Evaluate F(X_0^z) for each particle; the minimum value of F(X_0^z) is F_min;
Update X^{z(pbest)} and X^{gbest};
for l = 1 to Itr_max do
  for z = 1 to Z do
    Update V_l^z with Eq. (3);
    Update X_l^z with Eq. (1);
    Compute F(X_l^z) on the classical function;
    Update F_min;
    Update X^{z(pbest)};
  end
end
Update X^{gbest}; verify the stopping criteria; solution: X^{gbest}.
Table 3. Pseudocode for the PSO with ω and χ algorithm

Data: c_1, c_2, Z, Itr_max, χ, ω
Generate random X_0^z and V_0^z for the Z particles of the swarm;
Evaluate F(X_0^z) for each particle; the minimum value of F(X_0^z) is F_min;
Update X^{z(pbest)} and X^{gbest};
for l = 1 to Itr_max do
  for z = 1 to Z do
    Update V_l^z with Eq. (5);
    Update X_l^z with Eq. (1);
    Compute F(X_l^z) on the classical function;
    Update F_min;
    Update X^{z(pbest)};
  end
end
Update X^{gbest}; verify the stopping criteria; solution: X^{gbest}.
Table 4. Pseudocode for the improved PSO algorithm

Data: c_1, c_2, Z, Itr_max, t, k
Generate random X_0^z and V_0^z for the Z particles of the swarm;
Evaluate F(X_0^z) for each particle; the minimum value of F(X_0^z) is F_min;
Update X^{z(pbest)} and X^{gbest};
for l = 1 to Itr_max do
  for z = 1 to Z do
    Update V_l^z with Eq. (8);
    Update X_l^z with Eq. (10);
    Compute F(X_l^z) on the classical function;
    Update F_min;
    Update X^{z(pbest)};
  end
end
Update X^{gbest}; verify the stopping criteria; solution: X^{gbest}.
Table 5. The parameter settings of the three PSOs

Basic PSO: c_1 = 1.49, c_2 = 1.49, Z = 50, Itr_max = 200, ω_max = 0.91, ω_min = 0.45
PSO with ω and χ: c_1 = 1.49, c_2 = 1.49, Z = 50, Itr_max = 200, χ = 0.729, ω = 1
Improved PSO: c_1 = 1.49, c_2 = 1.49, Z = 50, Itr_max = 200, t = 0.6, k = 0.9
Table 6. The value ranges of the classical functions

$f_{1}(x)$: x ∈ [-15, 15]
$f_{2}\left(x_{1}, x_{2}\right)$: x_1, x_2 ∈ [-5, 5]
$f_{3}(x, y)$: x, y ∈ [-10, 10]
$f_{4}(x)$: x ∈ [-500, 500]
Table 7. The optimization values obtained by the three PSO methods

Classical function | Target value | Basic PSO | PSO with ω and χ | Improved PSO
$f_{1}(x)$ | 0 | 1.65e-04 | 8.70e-05 | 2.00e-09
$f_{2}\left(x_{1}, x_{2}\right)$ | 0 | 4.73e-04 | 2.86e-05 | 3.79e-06
$f_{3}(x, y)$ | 0 | 2.73e-08 | 1.47e-10 | 1.30e-11
$f_{4}(x)$ | 0 | 1.31e-04 | 2.67e-04 | 2.50e-08
$f_{5}\left(x_{1}, x_{2}\right)$ | 0 | 1.47e-05 | 2.73e-05 | 3.67e-08
As shown in Table 7, the IPSO outputted 2.00e-09 for f_{1}(x), 3.79e-06 for f_{2}(x_{1}, x_{2}), 1.30e-11 for f_{3}(x, y), 2.50e-08 for f_{4}(x) and 3.67e-08 for f_{5}(x_{1}, x_{2}), while the target values of f_{1}(x)~f_{5}(x_{1}, x_{2}) are all zero. Compared with the two contrastive algorithms, the IPSO came closest to the theoretical values. According to Figures 3~7, the IPSO achieved the fastest convergence, while the basic PSO converged the slowest. Suffice it to say that the IPSO can greatly enhance the solution quality.
Figure 3. F(l) value obtained by f_{1}(x) for each iteration of the three PSO methods
Figure 4. F(l) value obtained by f_{2}(x_{1}, x_{2}) for each iteration of the three PSO methods
Figure 5. F(l) value obtained by f_{3}(x,y) for each iteration of the three PSO methods
Figure 6. F(l) value obtained by f_{4}(x) for each iteration of the three PSO methods
Figure 7. F(l) value obtained by f_{5}(x) for each iteration of the three PSO methods
3.1. Parameter optimization problem of PID controller
In an industrial control system, the output of a control object exhibits an S-shaped rising curve under the action of a step signal. In this case, the output can be described by a second-order inertia transfer function:

$G(S)=\frac{K}{T_{1} S^{2}+T_{2} S+T_{3}}$ (11)
The PID controller is the most popular regulator tool in engineering. Since its birth 70 years ago, the PID controller has become the main technology of industrial control due to its simplicity, stability, reliability and flexibility. It is particularly suitable for objects that cannot be understood clearly with common theories. Based on system error, the PID control technology computes the control value based on the proportional, integral and derivative terms of the object. The control parameters of PID controller are detailed as follows.
(1) Proportional
The proportional action produces a control output proportional to the error signal. Upon detection of an error, the controller performs an action to counteract the error. The response speed and adjustment accuracy are positively correlated with the value of the proportional gain constant k_{p}. However, an overly large gain easily produces overshoot, which leads to oscillation and instability in a certain range.
(2) Integral
The integral action aims to eliminate the static error, thus enhancing the stability and response speed of the system. The effect of the integral action is negatively correlated with the integral time constant T_{s}: the greater the constant, the weaker the integral action, and the slower the elimination of the static error. Nevertheless, the integral action is likely to cause saturation in the initial phase, and worsen the overshoot in the response phase.
(3) Differential
The differential parameter reveals the variation trend of the error signal. To speed up system operation and shorten the adjustment time, the differential time constant T_{D} is introduced so that the controller acts before the change in the error signal fully takes place. The differential action counteracts the error variation in either direction and predicts the error in advance. Nonetheless, this action may force the response to stop early and lengthen the adjustment time.
The parameters of the PID controller should remain constant during the control process. Any variation in T_{D}, T_{s} and K_{P} will harm the control effect of the PID controller.
Suppose the error e and the control action u satisfy the following equation:

$u(t)=K_{p}\left[e(t)+\frac{1}{T_{s}} \int_{0}^{t} e(t)\, dt+T_{D} \frac{e(t)-e(t-1)}{dt}\right]$ (12)

where e(t) is the error function; u(t) is the control action at time t; dt is the sampling period; T_{D} is the differential time constant; T_{s} is the integral time constant; K_{P} is the proportional gain constant.
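A minimal discrete-time sketch of the control law in Equation (12); the function and argument names are illustrative, and the caller is assumed to carry the error integral and the previous error between sampling steps:

```python
def pid_step(e, e_prev, integral, dt, kp, ts, td):
    """One sampling step of the PID law, Eq. (12).
    Returns the control action u and the updated error integral."""
    integral = integral + e * dt       # accumulate the integral term
    derivative = (e - e_prev) / dt     # backward-difference derivative
    u = kp * (e + integral / ts + td * derivative)
    return u, integral
```

Called once per sampling period dt, with a very large T_s and T_D = 0 the law degenerates to pure proportional control, as expected from Eq. (12).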
Figure 8. Schematic diagram of traditional PID
The traditional PID controller is shown in Figure 8, where rin(t) is the set value and yout(t) is the output value. The goal of parameter optimization is to find the proper K_{P}, T_{s} and T_{D} of the PID controller, such that the fitness function $F=\int_{0}^{\infty} t\left|e(t)\right| dt$ is minimized. The fitness function is illustrated in Figure 9, where Step is the set point.
The parameter optimization of the PID controller is a complex nonlinear programming problem. So far, no mathematical formula can accurately express the relationship between the parameters of the PID controller and the objective function. To bridge the gap, the IPSO was applied to solve the problem. As shown in Figure 9, the error of the control system is the difference between the set value and the response. The Abs block takes the absolute value of the error; the Error block integrates the product of the clock signal (time) and the absolute error.
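Given sampled time points and error values from the simulation, the fitness F = ∫ t|e(t)| dt can be approximated numerically with the trapezoid rule (a sketch; the sampling grid and function name are assumptions, not from the paper):

```python
import numpy as np

def itae_fitness(t, e):
    """Approximate F = integral of t * |e(t)| dt over the sampled horizon."""
    y = t * np.abs(e)
    # trapezoid rule over possibly non-uniform time steps
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
```

This time-weighted absolute error penalizes errors that persist late in the response, which is why minimizing it favors fast, low-overshoot settling.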
Figure 9. PID simulation mode
3.2. Tuning results
The IPSObased parameter optimization of PID controller is explained in Figures 10 and 11. The goal is to obtain the best parameters that ensure the optimal PID control effect.
Figure 10. Flow chart of Optimizing PID parameters with the improved PSO
Figure 11. Schematic diagram of Optimizing PID parameters with the improved PSO
The optimization inputs include the following data: the parameters of the object (T_{1}=7.69×10^{3}, T_{2}=2.3×10^{5}, T_{3}=291 and K=3508) and the parameters of the IPSO (c_{1}=1.49, c_{2}=1.49, Z=50, Itr_{max}=200, t=0.6 and k=0.9). The control object is a second-order inertia model:
$G(S)=\frac{3508}{7.69 \cdot 10^{3} S^{2}+2.3 \cdot 10^{5} S+291}$ (13)
The output of the PID control system is the unit step response. To verify the effect of the IPSO-based optimization, the control effect of the IPSO-optimized parameters was contrasted with that of the parameters optimized by the traditional parameter optimization method for the PID controller (the Ziegler–Nichols optimization method).
Table 8. The Ziegler–Nichols tuning method

Control type | K_p | T_S | T_D
P | 0.5 K_u | – | –
PI | 0.45 K_u | T_u/1.2 | –
PD | 0.8 K_u | – | T_u/8
Classic PID | 0.6 K_u | T_u/2 | T_u/8
Pessen integral rule | 0.7 K_u | T_u/2.5 | 3T_u/20
Some overshoot | 0.33 K_u | T_u/2 | T_u/3
No overshoot | 0.2 K_u | T_u/2 | T_u/3
The Ziegler–Nichols optimization method is a heuristic method developed by John G. Ziegler and Nathaniel B. Nichols. It is performed by setting the I (integral) and D (derivative) gains to zero. The P (proportional) gain K_{p} is then increased (from zero) until it reaches the ultimate gain K_{u}, at which the output of the control loop exhibits stable and consistent oscillations. The K_{u} and the oscillation period T_{u} are then used to set the P, I and D gains depending on the controller type.
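The "Classic PID" row of Table 8 translates directly into code; K_u and T_u are assumed to have been measured as described above (the function name is illustrative):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic PID row of the Ziegler-Nichols table:
    Kp = 0.6*Ku, Ts = Tu/2, Td = Tu/8."""
    return 0.6 * ku, tu / 2.0, tu / 8.0
```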
Table 9. Pseudocode for optimizing the PID parameters with the improved PSO algorithm

Data: c_1, c_2, Z, Itr_max, t, k
Generate random X_0^z and V_0^z for the Z particles of the swarm;
Evaluate F(X_0^z) for each particle, with $F=\int_{0}^{\infty} t\left|e(t)\right| dt$;
The minimum value of F(X_0^z) is F_min; update X^{z(pbest)} and X^{gbest};
for l = 1 to Itr_max do
  for z = 1 to Z do
    Update V_l^z with Eq. (8);
    Update X_l^z with Eq. (10);
    Compute F(X_l^z);
    Update F_min;
    Update X^{z(pbest)};
  end
end
Update X^{gbest}; verify the stopping criteria; the solution X^{gbest} gives the optimized parameters.
Table 10. Parameters obtained by the improved PSO and the Ziegler–Nichols method

Parameter | Improved PSO | Z–N
K_P | 2.9 | 3.6
T_S | 1.8212 | 2.3076
T_D | 0.01978 | 0.05616
Time | 0.1509 s | 0.2599 s
The results of the comparison test are displayed in Table 10 and Figure 12. It can be seen from the table that the IPSO reached equilibrium faster than the traditional method. Figure 12 shows a small overshoot and fast convergence to the set value, indicating that the IPSO-based optimization outperformed the traditional method.
Figure 12. Response of PID control system
In order to optimize the control effect of the PID controller, this paper improves the PSO by replacing the fixed weight with an adaptive weight and introducing the flying time. Then, the IPSO was validated through comparison with the basic PSO and the PSOω,χ. After that, the IPSO was applied to optimize the parameters of the PID controller. With a second-order inertia model as the control object, the IPSO-optimized parameters of the PID controller were contrasted with those optimized by the traditional parameter optimization method (the Ziegler–Nichols optimization method). The comparison shows that the IPSO outperformed the traditional method in both convergence speed and accuracy. The research findings shed new light on the parameter optimization of the PID controller and the application of the PSO.
Atyabi A., Samadzadegan S. (2011). Particle swarm optimization: a survey. Applications of Swarm Intelligence, pp. 167-179.
Gong D. W., Zhang J. H., Zhang Y. (2011). Multi-objective particle swarm optimization for robot path planning in environment with danger sources. Journal of Computers, Vol. 6, No. 8, pp. 1554-1561. http://dx.doi.org/10.4304/jcp.6.8.15541561
Kennedy J. (1998). The behavior of particles. Evolutionary Programming VII, pp. 581-590. http://dx.doi.org/10.1007/BFb0040809
Meng L., Han P., Ren Y., Wang D. (2013). Design of PID controller based on multi-objective particle swarm optimization algorithm. Computer Simulation, Vol. 30, No. 7, pp. 388-391.
Ono S., Nakayama S. (2009). Multi-objective particle swarm optimization for robust optimization and its hybridization with gradient search. IEEE International Conference on Evolutionary Computation, pp. 1629-1636. http://dx.doi.org/10.1109/CEC.2009.4983137
Sun J., Feng B., Xu W. B. (2004). A global search strategy of quantum-behaved particle swarm optimization. IEEE Conf. on Cybernetics and Intelligent Systems, Piscataway, pp. 111-116. http://dx.doi.org/10.1109/ICCIS.2004.1460396
Tao X. M., Liu F. R., Liu Y., Tong Z. J. (2012). Multi-scale cooperative mutation particle swarm optimization algorithm. Journal of Software, Vol. 23, No. 7, pp. 1805-1815. http://dx.doi.org/10.1007/s1095701609591
Yang Z., Chen Z., Fan Z., Li X. (2010). Tuning of PID controller based on improved particle swarm optimization. Control Theory & Application, Vol. 27, No. 10, pp. 1345-1352.
Yu G., Liu G., Liu Z. F., Liu X. (2013). Multi-objective optimal planning of distributed generation based on quantum differential evolution algorithm. Power System Protection and Control, No. 14, pp. 66-72.
Yu S., Cao Z. (2014). Optimization of PID controller parameters based on seeker optimization algorithm. Computer Simulation, Vol. 31, No. 9, pp. 347-350.
Yu T. M., Yan D. S. (2006). Differential evolution algorithm for multi-objective optimization. Journal of Changchun University of Technology, Vol. 16, No. 4, pp. 77-80.