Design of Two-Dimensional Recursive Digital Filter Using Multi Particle Swarm Optimization Algorithm


Lakhdar Kaddouri* Amel B.H. Adamou-Mitiche Lahcene Mitiche

Laboratoire de Recherche Modélisation Simulation et Optimisations des Systèmes Complexes Réels, Université ZIANE Achour de Djelfa, Djelfa 17000, Algeria

Corresponding Author Email: kaddourilakhdar@gmail.com

Page: 559-566 | DOI: https://doi.org/10.18280/jesa.530415

Received: 11 June 2020 | Revised: 26 July 2020 | Accepted: 3 August 2020 | Available online: 30 September 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Particle Swarm Optimization (PSO) is an evolutionary algorithm widely used in optimization problems. It is characterized by fast convergence, which can cause the algorithm to stagnate in local optima. In the present paper, a new Multi-PSO algorithm for the design of two-dimensional infinite impulse response (IIR) filters is built. It is based on the standard PSO and uses a new initialization strategy that relies on two types of swarms: a principal swarm and auxiliary ones. To improve the performance of the algorithm, the search space is divided into several areas, which allows better coverage and leads to a better exploration of each zone separately. This solves the problem of premature convergence in standard PSO. The results obtained demonstrate the effectiveness of the Multi-PSO algorithm in optimizing the filter coefficients.

Keywords: 

2D filter, recursive filters, optimization, multi-PSO, stability

1. Introduction

The applications of two-dimensional (2-D) digital signal processing have developed rapidly in recent decades. This has led to a growing interest in the design of 2-D filters, driven by a variety of applications in different domains such as digital image, medical data, artificial vision, radar and sensor data processing [1].

The main functions of a filter are to remove unwanted parts of a signal, such as random noise, or to extract useful parts. For example, an image degraded by wideband noise can be improved without blurring its edges, and a magnetic recording can be made more readable by suppressing certain signals of large amplitude and low frequency [2].

Digital filters are typically classified into two groups: infinite impulse response (IIR), or recursive, filters and finite impulse response (FIR), or non-recursive, filters [3]. FIR filters are generally easier to implement because they are non-recursive and always stable (by definition). On the other hand, it is much more difficult to obtain linear phase responses and to control the overall frequency response with IIR filters. However, very sharp and narrow transition-band frequency responses can be easily realized with IIR filters, a feature that makes them suitable for a broad range of applications. Generally, IIR filters are capable of more accurate responses than those yielded by FIR filters [4], but they suffer from instability problems. To remedy this, some researchers have adopted certain solutions, as revealed in the literature [5-8].

To solve the instability problem in IIR design, important optimization-based methods have emerged, especially those based on evolutionary techniques (ETs) [4, 9-11]. The main idea of these methods is to treat the design of two-dimensional digital IIR filters as a constrained minimization problem.

Among these methods, particle swarm optimization (PSO) appears as a powerful tool for optimization problems. It uses only two mathematical equations with primitive operators and is conceptually very simple. Inspired by the social behavior of animals evolving in swarms, such as fish and birds [12], PSO is easy to implement in comparison with the genetic algorithm (GA) and has fewer parameters to adjust. It has already been applied successfully in many application areas, including function optimization [13, 14]. However, several studies [15] showed some problems in controlling the balance between exploration (global investigation of the search space) and exploitation (the fine search around a local optimum). This is why considerable efforts have addressed this problem [4, 10, 11].

In this paper, a novel Multi-PSO algorithm is used for the design of two-dimensional IIR filters. It is based on standard PSO, is characterized by the diversity of its particles without decreasing the convergence speed, and uses a new initialization strategy. This strategy relies on two types of swarms: the main swarm, denoted by S1, and s auxiliary ones, denoted by S2i, where $1 \leq i \leq s$. To enhance the performance of the algorithm, the search space is partitioned into several zones, and each S2i is initialized in one of these zones. This partitioning allows better coverage of the search space, thus leading to a better exploration of each zone separately. The table containing the best positions of all the auxiliary swarms composes the initial population of the main swarm. Once the main swarm is initialized, the auxiliary swarms are no longer used: the main swarm takes over and continues the search in the same way as the standard PSO algorithm until a stop criterion is reached.

2. Formulation of the Design Problem

Consider a two-dimensional IIR filter with its transfer function, H(z1, z2) being in the form [6]:

$H\left(z_{1}, z_{2}\right)=A \prod_{k=1}^{K} \frac{\sum_{j=0}^{J_{2}(k)} \sum_{i=0}^{J_{1}(k)} b_{i j}^{(k)} z_{1}^{-i} z_{2}^{-j}}{\sum_{j=0}^{I_{2}(k)} \sum_{i=0}^{I_{1}(k)} d_{i j}^{(k)} z_{1}^{-i} z_{2}^{-j}}$          (1)

where,

  • H(z1, z2): consists of a cascade of K filter sections,

  • $b_{i j}^{(k)}$ and $d_{i j}^{(k)}, k=\overline{1, K}$: are the unknown coefficients of each section,

  • A is a constant gain that may be left arbitrary during the design procedure,

  • $J_{1}^{(k)}, J_{2}^{(k)}, I_{1}^{(k)}$ and $I_{2}^{(k)}$ are integers; they must be chosen between 0 and 2 in order to ensure stability.

The design task for 2-D filters is to find the coefficients of the transfer function H(z1, z2) in Eq. (1) such that the magnitude function $M\left(\omega_{1}, \omega_{2}\right)=\left|H\left(e^{-j \omega_{1}}, e^{-j \omega_{2}}\right)\right|$ approximates the desired amplitude response $M_{d}\left(\omega_{1}, \omega_{2}\right)$ in some optimum sense [4].

Thus, the design of a 2-D recursive filter is equivalent to the following constrained minimization problem [1]:

$\text{Minimize error}=\sum_{n_{1}=0}^{N_{1}-1} \sum_{n_{2}=0}^{N_{2}-1}\left(M\left(\omega_{1}, \omega_{2}\right)-M_{d}\left(\omega_{1}, \omega_{2}\right)\right)^{2}$            (2)

where,

  • N1 and N2 are sampling numbers of the signal on two axes

  • $\omega_{i}=n_{i} \frac{\pi}{N_{i}}(i=1,2)$.

In this study, the model filter is defined as:

$M_{d}\left(\omega_{1}, \omega_{2}\right)=\begin{cases}1 & \text{if } \sqrt{\omega_{1}^{2}+\omega_{2}^{2}} \leq 0.08\pi \\ 0.5 & \text{if } 0.08\pi<\sqrt{\omega_{1}^{2}+\omega_{2}^{2}} \leq 0.12\pi \\ 0 & \text{otherwise}\end{cases}$             (3)

with N1=N2 =50 [4, 5].
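As an illustration, the desired response of Eq. (3) and the squared-error objective of Eq. (2) can be sampled on the design grid with a short NumPy sketch (our own illustration, not code from the paper):

```python
import numpy as np

# Desired amplitude response of Eq. (3) sampled on the N1 x N2 grid of Eq. (2),
# with omega_i = n_i * pi / N_i and N1 = N2 = 50 as in the paper.
N1 = N2 = 50
w1 = np.arange(N1) * np.pi / N1
w2 = np.arange(N2) * np.pi / N2
W1, W2 = np.meshgrid(w1, w2, indexing="ij")
R = np.sqrt(W1**2 + W2**2)

Md = np.where(R <= 0.08 * np.pi, 1.0, np.where(R <= 0.12 * np.pi, 0.5, 0.0))

def design_error(M, Md=Md):
    """Squared-error objective of Eq. (2) between a magnitude response M and Md."""
    return float(np.sum((M - Md) ** 2))
```

Any candidate magnitude response evaluated on the same grid can then be scored by `design_error`, which is the quantity the swarm minimizes.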

Figure 1. Desired amplitude response Md of the 2-D filter

To solve the stability problem, the work in [5] is exploited and a polynomial of the following form is used:

$D\left(s_{1}, s_{2}\right)=\sum_{j=0}^{I_{2}} \sum_{i=0}^{I_{1}} d_{i j} s_{1}^{i} s_{2}^{j}$               (4)

where,

(s1, s2) are the two Laplace-transform variables and (I1, I2) are two integers equal to 1 or 2. The coefficients dij are expressed in terms of nonzero real parameters [ql]. The relationship between the terms dij and the parameters [ql] is given by the analog network (containing n1 capacitors c1 = s1 and n2 capacitors c2 = s2) shown in Figure 2 [16].

Figure 2. Lossless and frequency-independent network

If the network is lossless and independent of frequency, its admittance Y is real and can be expressed as [5]:

$Y=\left[\begin{array}{cccccc}0& \mid & y_{12} & y_{13} & \cdots & y_{1 n} \\ --- & \mid & --- & --- & -- & --- \\ -y_{12} & \mid & 0 & y_{23} & \cdots & y_{2 n} \\ -y_{13} & \mid & -y_{23} & 0 & \cdots & y_{3 n} \\ \vdots & \mid & \vdots & \vdots & & \cdots \\ -y_{1 n} & \mid & -y_{2 n} & -y_{3 n} & \cdots & 0\end{array}\right] =\left[\begin{array}{ccc}Y_{11} & \mid & Y_{12} \\ --- & \mid & --- \\ -Y_{12}^{T} & \mid & Y_{22}\end{array}\right]$            (5)

The admittance of the network, terminated by a resistor at one end and n1 + n2 capacitors at the other, can be expressed as follows [5]:

$\hat{Y}\left(s_{1}, s_{2}, y_{k l}\right)=Y+\operatorname{diag}[1, \underbrace{s_{1}, \ldots, s_{1}}_{n_{1}}, \underbrace{s_{2}, \ldots, s_{2}}_{n_{2}}]$        (6)

According to network theory, the input admittance can be deduced as:

$Y_{i n}\left(s_{1}, s_{2}, y_{k l}\right)=1+\frac{Y_{12} \operatorname{adj}\left[\hat{Y}_{22}\left(s_{1}, s_{2}, y_{k l}\right)\right] Y_{12}^{T}}{\operatorname{det}\left[\hat{Y}_{22}\left(s_{1}, s_{2}, y_{k l}\right)\right]}$                  (7)

The determinant of the admittance matrix $\hat{Y}\left(s_{1}, s_{2}, y_{k l}\right)$ is the polynomial D(s1, s2, ykl). Because the lossless network is terminated by a resistor, D(s1, s2, ykl) is a strictly Hurwitz polynomial in the two variables s1 and s2 for any set of real parameter values $y_{k l}$, $1 \leq k<l \leq n$. This means that the polynomial D(s1, s2, ykl) obtained as the determinant of the admittance matrix of a lossless, frequency-independent network terminated by the capacitors s1, s2 and an input resistor satisfies the condition:

$D\left(s_{1}, s_{2}, y_{k l}\right) \neq 0$ for $\operatorname{Re}\left(s_{1}\right) \geq 0$ and $\operatorname{Re}\left(s_{2}\right) \geq 0$

The result obtained above is illustrated by the three-terminal network, terminated by a capacitor s1, a capacitor s2 and a resistor R = 1 Ohm, as shown in Figure 3. Its admittance matrix is [16]:

$\hat{Y}\left(s_{1}, s_{2}, y_{k l}\right)=\left[\begin{array}{ccc}1 & y_{12} & y_{13} \\ -y_{12} & s_{1} & y_{23} \\ -y_{13} & -y_{23} & s_{2}\end{array}\right]$                  (8)

Figure 3. Network of three terminals

The determinant of $\hat{Y}\left(s_{1}, s_{2}, y_{k l}\right)$ is [12]:

$\operatorname{det}\left(\hat{Y}\left(s_{1}, s_{2}, y_{k l}\right)\right)=y_{23}^{2}+y_{13}^{2} s_{1}+y_{12}^{2} s_{2}+s_{1} s_{2}$            (9)

and also:

$D\left(s_{1}, s_{2}\right)=\sum_{j=0}^{I_{2}} \sum_{i=0}^{I_{1}} d_{i j} s_{1}^{i} s_{2}^{j}=d_{00}+d_{10} s_{1}+d_{01} s_{2}+d_{11} s_{1} s_{2}$                (10)

By comparing (9) and (10), the following result is obtained:

$d_{00}=y_{23}^{2}, d_{10}=y_{13}^{2}, d_{01}=y_{12}^{2}$ and $d_{11}=1$

Put y23=q1; y13=q2; and y12=q3.

The coefficients dij are expressed by the parameters qi.

Case I: I1=I2=1, thus,

$d_{00}=q_{1}^{2}$

$d_{10}=q_{2}^{2}$

$d_{01}=q_{3}^{2}$

$d_{11}=1$
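The identity between the network determinant of Eq. (8) and the polynomial of Eq. (9) can be verified numerically; the following sketch (our own check, with arbitrary illustrative parameter values) compares a direct determinant computation against the closed-form expression:

```python
import numpy as np

# Numerical check of Eqs. (8)-(9): for the three-terminal network, the determinant
# of Y_hat(s1, s2) equals y23^2 + y13^2*s1 + y12^2*s2 + s1*s2.
y12, y13, y23 = 0.8, -0.6, 1.3     # arbitrary illustrative parameter values
s1, s2 = 0.7, 1.9                  # sample points on the positive real axis

Y_hat = np.array([[1.0,  y12,  y13],
                  [-y12, s1,   y23],
                  [-y13, -y23, s2 ]])

det_direct = np.linalg.det(Y_hat)
det_formula = y23**2 + y13**2 * s1 + y12**2 * s2 + s1 * s2
```

Both quantities agree to machine precision, confirming the Case I parameterization d00 = q1², d10 = q2², d01 = q3², d11 = 1.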

Let’s consider a lossless network terminated by two capacitors s1, one capacitor s2, and one input resistor (n1=2, n2=1) [16].

$\hat{Y}\left(s_{1}, s_{2}, y_{k l}\right)=\left[\begin{array}{cccc}1 & y_{12} & y_{13} & y_{14} \\ -y_{12} & s_{1} & y_{23} & y_{24} \\ -y_{13} & -y_{23} & s_{1} & y_{34} \\ -y_{14} & -y_{24} & -y_{34} & s_{2}\end{array}\right]$           (11)

By taking:

y12=q1, y13=q2, y14=q3, y23=q4, y24=q5 and y34=q6, the coefficients dij can be deduced as:

Case II (I1=2 and I2=1), thus,

$\begin{aligned} d_{00} &=\left(q_{1} q_{6}-q_{2} q_{5}+q_{3} q_{4}\right)^{2} \\ d_{10} &=q_{5}^{2}+q_{6}^{2} \\ d_{20} &=q_{3}^{2} \\ d_{01} &=q_{4}^{2} \\ d_{11} &=q_{1}^{2}+q_{2}^{2} \\ d_{21} &=1 \end{aligned}$

Similarly, when I1=2 and I2=2, we have [16]:

$\begin{aligned} d_{00} &=\left(q_{5} q_{10}+q_{6} q_{9}+q_{7} q_{8}\right)^{2}, \\ d_{10} &=\left(q_{1} q_{10}-q_{3} q_{7}+q_{4} q_{6}\right)^{2}+\left(q_{2} q_{10}-q_{3} q_{9}+q_{4} q_{8}\right)^{2}, \\ d_{20} &=q_{10}^{2}, \\ d_{01} &=\left(q_{1} q_{8}-q_{2} q_{6}+q_{3} q_{5}\right)^{2}+\left(q_{1} q_{9}-q_{2} q_{7}+q_{4} q_{5}\right)^{2}, \\ d_{11} &=q_{6}^{2}+q_{7}^{2}+q_{8}^{2}+q_{9}^{2}, \\ d_{21} &=q_{3}^{2}+q_{4}^{2}, \\ d_{02} &=q_{5}^{2}, \\ d_{12} &=q_{1}^{2}+q_{2}^{2}, \\ d_{22} &=1, \end{aligned}$
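The mapping from the free parameters ql to the denominator coefficients dij can be written directly from the formulas above; here is a small helper for the I1 = I2 = 2 case (a sketch of our own, using a 0-indexed parameter list):

```python
import numpy as np

def denom_coeffs_case_II2(q):
    """Map the ten free parameters q1..q10 to the denominator coefficients d_ij
    for the case I1 = I2 = 2, following the formulas in the text.
    q is a sequence of length 10 holding (q1, ..., q10)."""
    q1, q2, q3, q4, q5, q6, q7, q8, q9, q10 = q
    return {
        (0, 0): (q5*q10 + q6*q9 + q7*q8)**2,
        (1, 0): (q1*q10 - q3*q7 + q4*q6)**2 + (q2*q10 - q3*q9 + q4*q8)**2,
        (2, 0): q10**2,
        (0, 1): (q1*q8 - q2*q6 + q3*q5)**2 + (q1*q9 - q2*q7 + q4*q5)**2,
        (1, 1): q6**2 + q7**2 + q8**2 + q9**2,
        (2, 1): q3**2 + q4**2,
        (0, 2): q5**2,
        (1, 2): q1**2 + q2**2,
        (2, 2): 1.0,
    }
```

Every coefficient is a sum of squares, so dij ≥ 0 for any real q, which is what the network construction guarantees.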

It is worth noting that when D(s1, s2) satisfies the stability conditions, the polynomial D(z1, z2) deduced by the bilinear transformation also satisfies them [16]:

$\left.D\left(s_{1}, s_{2}\right)\right|_{s_{1}=\frac{1-z_{1}^{-1}}{1+z_{1}^{-1}},\; s_{2}=\frac{1-z_{2}^{-1}}{1+z_{2}^{-1}}}=D\left(z_{1}, z_{2}\right) \neq 0$ for $\left|z_{1}\right| \geq 1$ and $\left|z_{2}\right| \geq 1$           (12)

This means that the polynomial D(z1, z2) has no zeros outside the region [16]:

$\left\{\left(z_{1}, z_{2}\right) ;\left|z_{1}\right|<1\right.$ and $\left.\left|z_{2}\right|<1\right\}$         (13)
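The property in Eq. (12) can be illustrated numerically for Case I: with d00 = q1², d10 = q2², d01 = q3² and d11 = 1, the polynomial D(s1, s2) has no zeros in the closed right half-planes, so D(z1, z2) obtained by the bilinear substitution stays nonzero at sample points with |z1| ≥ 1 and |z2| ≥ 1. The sketch below (our own check, with arbitrary illustrative q values) samples such points, avoiding z = -1 where the bilinear map has a pole:

```python
import numpy as np

# Case I coefficients built from squares (strictly Hurwitz by construction).
q1, q2, q3 = 0.7, -1.2, 0.5        # arbitrary illustrative values
d00, d10, d01, d11 = q1**2, q2**2, q3**2, 1.0

def D_of_z(z1, z2):
    """Evaluate D(z1, z2) through the bilinear map s = (1 - z^-1)/(1 + z^-1)."""
    s1 = (1 - 1/z1) / (1 + 1/z1)
    s2 = (1 - 1/z2) / (1 + 1/z2)
    return d00 + d10*s1 + d01*s2 + d11*s1*s2

# Sample points with |z1| >= 1 and |z2| >= 1, keeping clear of z = -1.
angles = np.linspace(0.2, 2*np.pi - 0.2, 8)
vals = [abs(D_of_z(r1*np.exp(1j*t1), r2*np.exp(1j*t2)))
        for r1 in (1.0, 1.5, 3.0) for r2 in (1.0, 2.0)
        for t1 in angles for t2 in angles]
```

At every sampled point |D(z1, z2)| stays bounded away from zero, consistent with Eq. (13).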

For this investigation, two types of transfer function H(z1,z2) will be studied:

2.1 General transfer function of the form

$H\left(z_{1}, z_{2}\right)=h_{0} \frac{a_{00}+a_{01} z_{2}+a_{02} z_{2}^{2}+a_{10} z_{1}+a_{20} z_{1}^{2}+a_{11} z_{1} z_{2}+a_{12} z_{1} z_{2}^{2}+a_{21} z_{1}^{2} z_{2}+a_{22} z_{1}^{2} z_{2}^{2}}{\left(1+b_{1} z_{1}+c_{1} z_{2}+d_{1} z_{1} z_{2}\right)\left(1+b_{2} z_{1}+c_{2} z_{2}+d_{2} z_{1} z_{2}\right)}$       (14)

the magnitude $M\left(\omega_{1}, \omega_{2}\right)$ is given as:

$M\left(\omega_{1}, \omega_{2}\right)=\left|H\left(z_{1}, z_{2}\right)\right|$          (15)

with: $z_{1}=e^{-j \omega_{1}}$ and $z_{2}=e^{-j \omega_{2}}$

then 

$M\left(\omega_{1}, \omega_{2}\right)=H_{0}\left|\frac{A_{R}-j A_{I}}{\left(B_{1 R}-j B_{1 I}\right)\left(B_{2 R}-j B_{2 I}\right)}\right|$              (16)

with:

$A_{R}=a_{00}+a_{01} \cos \left(\omega_{2}\right)+a_{02} \cos \left(2 \omega_{2}\right)+a_{10} \cos \left(\omega_{1}\right)+a_{20} \cos \left(2 \omega_{1}\right)+a_{11} \cos \left(\omega_{1}+\omega_{2}\right)+a_{12} \cos \left(\omega_{1}+2 \omega_{2}\right)+a_{21} \cos \left(2 \omega_{1}+\omega_{2}\right)+a_{22} \cos \left(2 \omega_{1}+2 \omega_{2}\right)$

$A_{I}=a_{01} \sin \left(\omega_{2}\right)+a_{02} \sin \left(2 \omega_{2}\right)+a_{10} \sin \left(\omega_{1}\right)+a_{20} \sin \left(2 \omega_{1}\right)+a_{11} \sin \left(\omega_{1}+\omega_{2}\right)+a_{12} \sin \left(\omega_{1}+2 \omega_{2}\right)+a_{21} \sin \left(2 \omega_{1}+\omega_{2}\right)+a_{22} \sin \left(2 \omega_{1}+2 \omega_{2}\right)$

$B 1_{R}=1+b_{1} \cos \left(\omega_{1}\right)+c_{1} \cos \left(\omega_{2}\right)+d_{1} \cos \left(\omega_{1}+\omega_{2}\right)$

$B 1_{I}=b_{1} \sin \left(\omega_{1}\right)+c_{1} \sin \left(\omega_{2}\right)+d_{1} \sin \left(\omega_{1}+\omega_{2}\right)$

$B 2_{R}=1+b_{2} \cos \left(\omega_{1}\right)+c_{2} \cos \left(\omega_{2}\right)+d_{2} \cos \left(\omega_{1}+\omega_{2}\right)$

$B 2_{I}=b_{2} \sin \left(\omega_{1}\right)+c_{2} \sin \left(\omega_{2}\right)+d_{2} \sin \left(\omega_{1}+\omega_{2}\right)$

The optimized vector x is given as:

$x=\left(a_{01}, a_{02}, a_{10}, a_{11}, a_{12}, a_{20}, a_{21}, a_{22}, b_{1}, c_{1}, d_{1}, b_{2}, c_{2}, d_{2}, H_{0}\right)$
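For an implementation, the magnitude of Eq. (15) can also be evaluated directly from Eq. (14) with complex exponentials, which is equivalent to expanding the real and imaginary parts as above. The sketch below is our own illustration (not the paper's code) and normalizes a00 = 1, as done in Eq. (24):

```python
import numpy as np

def magnitude_general(x, N1=50, N2=50):
    """Magnitude response of the general transfer function of Eq. (14) on the
    design grid, evaluated directly with z1 = exp(-j*w1), z2 = exp(-j*w2).
    x = (a01, a02, a10, a11, a12, a20, a21, a22, b1, c1, d1, b2, c2, d2, H0),
    with a00 = 1 as in Eq. (24)."""
    a01, a02, a10, a11, a12, a20, a21, a22, b1, c1, d1, b2, c2, d2, H0 = x
    w1 = np.arange(N1) * np.pi / N1
    w2 = np.arange(N2) * np.pi / N2
    Z1 = np.exp(-1j * w1)[:, None]   # broadcast over rows (omega_1 axis)
    Z2 = np.exp(-1j * w2)[None, :]   # broadcast over columns (omega_2 axis)
    num = (1 + a01*Z2 + a02*Z2**2 + a10*Z1 + a20*Z1**2
           + a11*Z1*Z2 + a12*Z1*Z2**2 + a21*Z1**2*Z2 + a22*Z1**2*Z2**2)
    den = (1 + b1*Z1 + c1*Z2 + d1*Z1*Z2) * (1 + b2*Z1 + c2*Z2 + d2*Z1*Z2)
    return H0 * np.abs(num / den)
```

Feeding this function's output to the error of Eq. (2) gives the objective that the swarm minimizes over x.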

2.2 Transfer function with numerator as mirror image polynomial

A two-dimensional filter is given by its transfer function:

$H\left(z_{1}, z_{2}\right)=\frac{N\left(z_{1}, z_{2}\right)}{D\left(z_{1}, z_{2}\right)}$         (17)

The polynomial N(z1, z2) is expressed by the form [16]:

$N\left(z_{1}, z_{2}\right)=\left[\begin{array}{lll}1 & z_{1}^{-1} & z_{1}^{-2}\end{array}\right]\left[\begin{array}{lll}b_{00}^{(k)} & b_{01}^{(k)} & b_{02}^{(k)} \\ b_{10}^{(k)} & b_{11}^{(k)} & b_{10}^{(k)} \\ b_{02}^{(k)} & b_{01}^{(k)} & b_{00}^{(k)}\end{array}\right]\left[\begin{array}{c}1 \\ z_{2}^{-1} \\ z_{2}^{-2}\end{array}\right]$           (18)

It satisfies the conditions:

$b_{i, j}=b_{-i,-j}$ and $b_{-i, j}=b_{i,-j}$

In the case J1 = J2 = 2:

$H\left(z_{1}, z_{2}\right)=H_{0} \frac{z_{1}^{-1} z_{2}^{-1}\left(b_{11}+b_{01}\left(z_{1}+z_{1}^{-1}\right)+b_{10}\left(z_{2}+z_{2}^{-1}\right)+b_{02}\left(z_{1} z_{2}^{-1}+z_{2} z_{1}^{-1}\right)+b_{00}\left(z_{1} z_{2}+z_{1}^{-1} z_{2}^{-1}\right)\right)}{D_{1} \cdot D_{2}}$            (19)

where,

$D_{1}=1+b_{1} z_{1}^{-1}+c_{1} z_{2}^{-1}+d_{1} z_{1}^{-1} z_{2}^{-1}$

$D_{2}=1+b_{2} z_{1}^{-1}+c_{2} z_{2}^{-1}+d_{2} z_{1}^{-1} z_{2}^{-1}$

The magnitude $M\left(\omega_{1}, \omega_{2}\right)$ is given as:

$M\left(\omega_{1}, \omega_{2}\right)=H_{0}\left|\frac{A_{r}}{\left(B_{1 r}+j B_{1 i}\right)\left(B_{2 r}+j B_{2 i}\right)}\right|$          (20)

with:

$A_{r}=b_{11}+2 b_{01} \cos \left(\omega_{1}\right)+2 b_{10} \cos \left(\omega_{2}\right)+2 b_{02} \cos \left(\omega_{1}-\omega_{2}\right)+2 b_{00} \cos \left(\omega_{1}+\omega_{2}\right)$

$B_{1 r}=1+b_{1} \cos \left(\omega_{1}\right)+c_{1} \cos \left(\omega_{2}\right)+d_{1} \cos \left(\omega_{1}+\omega_{2}\right)$

$B_{1 i}=b_{1} \sin \left(\omega_{1}\right)+c_{1} \sin \left(\omega_{2}\right)+d_{1} \sin \left(\omega_{1}+\omega_{2}\right)$

$B_{2 r}=1+b_{2} \cos \left(\omega_{1}\right)+c_{2} \cos \left(\omega_{2}\right)+d_{2} \cos \left(\omega_{1}+\omega_{2}\right)$

$B_{2 i}=b_{2} \sin \left(\omega_{1}\right)+c_{2} \sin \left(\omega_{2}\right)+d_{2} \sin \left(\omega_{1}+\omega_{2}\right)$

The optimized vector x is then [16]:

$x=\left(b_{00}, b_{01}, b_{02}, b_{10}, b_{11}, b_{1}, c_{1}, d_{1}, b_{2}, c_{2}, d_{2}, H_{0}\right)$
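Since the mirror-image numerator makes A_r purely real, the magnitude of Eq. (20) can be computed directly from the cosine expansions above. The following is a minimal sketch of our own (not the paper's code):

```python
import numpy as np

def magnitude_mirror(x, N1=50, N2=50):
    """Magnitude response of Eq. (20) for the mirror-image-numerator filter.
    x = (b00, b01, b02, b10, b11, b1, c1, d1, b2, c2, d2, H0)."""
    b00, b01, b02, b10, b11, b1, c1, d1, b2, c2, d2, H0 = x
    w1 = np.arange(N1) * np.pi / N1
    w2 = np.arange(N2) * np.pi / N2
    W1, W2 = np.meshgrid(w1, w2, indexing="ij")
    # Real numerator A_r: the mirror-image symmetry turns exponentials into cosines.
    Ar = (b11 + 2*b01*np.cos(W1) + 2*b10*np.cos(W2)
          + 2*b02*np.cos(W1 - W2) + 2*b00*np.cos(W1 + W2))
    # Denominator factors B1 and B2 with their real and imaginary parts.
    B1 = (1 + b1*np.cos(W1) + c1*np.cos(W2) + d1*np.cos(W1 + W2)
          + 1j*(b1*np.sin(W1) + c1*np.sin(W2) + d1*np.sin(W1 + W2)))
    B2 = (1 + b2*np.cos(W1) + c2*np.cos(W2) + d2*np.cos(W1 + W2)
          + 1j*(b2*np.sin(W1) + c2*np.sin(W2) + d2*np.sin(W1 + W2)))
    return H0 * np.abs(Ar / (B1 * B2))
```

The resulting grid can again be scored with the squared-error objective of Eq. (2).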

3. PSO Algorithm

Particle Swarm Optimization (PSO) is an evolutionary algorithm that uses a population of candidate particles to develop an optimal solution to the problem.

It was proposed in 1995 by Eberhart and Kennedy [12, 17] and was inspired by the social behavior of animals evolving in swarms, such as fish and birds. These animals have relatively complex displacement dynamics, whereas each individual has only a limited "intelligence" and only a local knowledge of its position in the swarm. The local information and the memory of each individual are used to decide its displacement.

At the beginning of the algorithm, the swarm particles are randomly initialized in the search space of the problem. Then, at each iteration, every particle moves by combining the two linear components cited below. At iteration t + 1, the velocity and position vectors are calculated from Eqns. (21) and (22), respectively:

$v_{i, j}^{t+1}=w * v_{i, j}^{t}+C_{1} * r_{1}\left(pbest_{i, j}^{t}-x_{i, j}^{t}\right)+C_{2} * r_{2}\left(gbest_{i, j}^{t}-x_{i, j}^{t}\right)$         (21)

$x_{i, j}^{t+1}=x_{i, j}^{t}+v_{i, j}^{t+1} \quad j \in\{1,2,3, \ldots, D\}$           (22)

where,

w: a constant, called the inertia coefficient.

C1 and C2: two constants, called acceleration coefficients.

r1 and r2: two random numbers, chosen uniformly from the interval [0,1].

In a search space of dimension D, particle i of the swarm is modelled by its position vector $\overrightarrow{x_{i}}=\left(x_{i 1}, x_{i 2}, \ldots, x_{i D}\right)$ and its velocity vector $\overrightarrow{v_{i}}=\left(v_{i 1}, v_{i 2}, \ldots, v_{i D}\right)$. The quality of its position is determined by the value of the objective function at that point. The best local position through which the particle has already passed is denoted by $\overrightarrow{pbest}=\left(pbest_{i 1}, pbest_{i 2}, \ldots, pbest_{i D}\right)$.

The best global position reached by the swarm's particles [18] is denoted by $\overrightarrow{gbest}=\left(gbest_{1}, gbest_{2}, \ldots, gbest_{D}\right)$, referring to the general version of PSO, in which all the particles of the swarm are considered neighbors of particle i.

Some improvements have been made to the basic algorithm, especially in terms of divergence control, in particular the inclusion of the Vmax parameter, which limits the divergence of the particles. In addition, other studies on the dynamics of particles have attempted to analyze the algorithm when the convergence conditions of the swarm are ensured [19, 20].

The combination of the parameters w, c1 and c2 adjusts the balance between the diversification and intensification phases of the search process [14]. Clerc and Kennedy [21] showed that good convergence can be obtained by making these parameters interdependent. The use of a constriction coefficient $\chi$ (or constriction factor) makes it possible to better control the divergence of the swarm and to dispense with the definition of Vmax. This variant of PSO, widely used in the literature, is known as the canonical PSO. Using the constriction coefficient, Eq. (21) becomes:

$v_{i, j}(t+1)=\chi *\left(w * v_{i, j}(t)+\phi_{1} * r_{1} *\left(pbest_{i, j}-x_{i, j}(t)\right)+\phi_{2} * r_{2} *\left(gbest_{i, j}-x_{i, j}(t)\right)\right)$       (23)

with: $\chi=\frac{2}{\phi-2+\sqrt{\phi^{2}-4 \phi}}$ and $\phi=\phi_{1}+\phi_{2}, \phi>4$
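The update rules of Eqns. (21) and (22) translate into a few lines of code. The following minimal standard-PSO sketch is our own toy illustration (fixed inertia, sphere objective), not the paper's tuned setup:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal standard PSO implementing the update rules of Eqs. (21)-(22).
    Minimizes f over [lo, hi]^dim. Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # Eq. (21)
        x = x + v                                          # Eq. (22)
        cost = np.apply_along_axis(f, 1, x)
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[np.argmin(pcost)].copy()
    return gbest, float(pcost.min())

# Example: minimizing the sphere function in three dimensions.
best, err = pso(lambda p: float(np.sum(p**2)), dim=3)
```

In the filter-design context, f would be the error of Eq. (2) evaluated for a candidate coefficient vector x.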

3.1 The Multi-PSO algorithm

Multi-PSO is a new variant of the original particle swarm optimization algorithm in which two ideas are added. The first is to use two types of swarms, a principal one and auxiliary ones. The second is the partition of the search space into several zones. In other words, the algorithm is based on standard PSO with a new initialization strategy. This strategy relies on two kinds of swarms: the main swarm, denoted by S1, and s auxiliary ones, denoted by S2i, where 1 ≤ i ≤ s [18]. In the Multi-PSO algorithm, every auxiliary swarm is intended to discover an area of the search space by performing a number of iterations; its role ends when the best position of the zone is reached. More precisely, the populations of the auxiliary swarms are initialized randomly in the different zones, according to a partition already made of the search space. Then, the particles of each swarm perform n predefined generations in their area and, at the end, the best position found is saved in a vector array. The best positions obtained by all auxiliary swarms are thus grouped in a vector whose size equals the number of swarms initialized, and also the number of zones. All these best positions then compose the initial population of the main swarm. Once the main swarm is initialized, the auxiliary swarms are no longer used; the main swarm continues the search in the same way as the standard PSO algorithm until a stop criterion is reached.

3.2 Flowchart of the used program

3.3 Proposed algorithm

1: Input model filter (Md) as given in (3).

2: Start the program and load the initial global settings.

3: Generation of the initial population of the auxiliary swarm(i).

4: Execution of the auxiliary PSO for K iterative periods (update the velocity $\overrightarrow{v_{i}}$ and position $\overrightarrow{x_{i}}$ vectors via Eqns. (21) and (22), respectively, and take the best global position).

5: Save the best position of auxiliary swarm(i) in gbest (i).

6: If the auxiliary swarm(i) is not the last one, then increment i and go to 3.

7: Initialization of the main swarm by all best positions of auxiliary swarms.

8: Execution of the main PSO (update of velocity $\overrightarrow{v_{l}}$ and position $\overrightarrow{x_{l}}$ vectors in Eqns. (21) and (22) respectively and take the best global position).

9: Save the global best position.

10: Output the coefficients and show frequency magnitude response of the optimized filter.
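The steps above can be sketched end to end in code. The version below is our own self-contained illustration on a toy objective: it slices the search space into zones along the first coordinate (one possible partition; the paper does not specify the partitioning rule), runs one auxiliary swarm per zone, and seeds the main swarm with the collected best positions:

```python
import numpy as np

def multi_pso(f, dim, n_zones=4, aux_particles=10, aux_iters=40,
              main_iters=100, w=0.6, c1=1.7, c2=1.7, lo=-5.0, hi=5.0, seed=0):
    """Sketch of the proposed Multi-PSO. Illustrative parameters and zone
    partition (slices along the first coordinate), not the paper's setup."""
    rng = np.random.default_rng(seed)

    def run_pso(pop, iters):
        # Standard PSO (Eqs. (21)-(22)) started from the given population.
        x = pop.copy()
        v = np.zeros_like(x)
        pbest, pcost = x.copy(), np.apply_along_axis(f, 1, x)
        for _ in range(iters):
            g = pbest[np.argmin(pcost)]
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
            x = x + v
            cost = np.apply_along_axis(f, 1, x)
            improved = cost < pcost
            pbest[improved], pcost[improved] = x[improved], cost[improved]
        return pbest[np.argmin(pcost)], float(pcost.min())

    # Steps 3-6: one auxiliary swarm per zone; keep each zone's best position.
    edges = np.linspace(lo, hi, n_zones + 1)
    seeds = []
    for z in range(n_zones):
        pop = rng.uniform(lo, hi, (aux_particles, dim))
        pop[:, 0] = rng.uniform(edges[z], edges[z + 1], aux_particles)
        seeds.append(run_pso(pop, aux_iters)[0])

    # Steps 7-9: the main swarm starts from the auxiliary gbests.
    return run_pso(np.array(seeds), main_iters)

best, err = multi_pso(lambda p: float(np.sum(p**2)), dim=3)
```

For the filter design, f would be the error of Eq. (2) computed from the magnitude response of the candidate coefficient vector.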

4. Results of Simulation and Discussion

For the simulation of the program, the Multi-PSO algorithm parameters are presented in Table 1:

Table 1. Parameters of the Multi-PSO algorithm

| Parameters | Value |
| Number of auxiliary swarms | 10 |
| Population size of auxiliary swarm | 150 |
| Iterative periods of auxiliary PSO | 100 |
| Iterative periods of main PSO | 100 |
| Inertia coefficient w | 0.4 to 0.9 |
| Acceleration coefficients | C1 = C2 = 2.05 |
4.1 The general transfer function

$H\left(z_{1}, z_{2}\right)=h_{0} \frac{1+a_{01} z_{2}+a_{02} z_{2}^{2}+a_{10} z_{1}+a_{20} z_{1}^{2}+a_{11} z_{1} z_{2}+a_{12} z_{1} z_{2}^{2}+a_{21} z_{1}^{2} z_{2}+a_{22} z_{1}^{2} z_{2}^{2}}{\left(1+b_{1} z_{1}+c_{1} z_{2}+d_{1} z_{1} z_{2}\right)\left(1+b_{2} z_{1}+c_{2} z_{2}+d_{2} z_{1} z_{2}\right)}$          (24)

The optimized vector x:

$x=\left(a_{01}, a_{02}, a_{10}, a_{11}, a_{12}, a_{20}, a_{21}, a_{22}, b_{1}, c_{1}, d_{1}, b_{2}, c_{2}, d_{2}, h_{0}\right)$

x = (0.4083, -0.3521, 0.2272, 0.3112, -0.7478, -1.1144, 0.7196, 1.7258, -0.9279, -0.9274, 0.8765, -0.4518, -0.4249, -0.0175, 0.001)

Minimum error obtained after optimization is:

Error1 = 1.2216

4.2 Transfer function with a mirror image polynomial numerator

$H\left(z_{1}, z_{2}\right)=H_{0} \frac{z_{1}^{-1} z_{2}^{-1}\left(b_{11}+b_{01}\left(z_{1}+z_{1}^{-1}\right)+b_{10}\left(z_{2}+z_{2}^{-1}\right)+b_{02}\left(z_{1} z_{2}^{-1}+z_{2} z_{1}^{-1}\right)+b_{00}\left(z_{1} z_{2}+z_{1}^{-1} z_{2}^{-1}\right)\right)}{D_{1} \cdot D_{2}}$            (25)

where,

$D_{1}=1+b_{1} z_{1}^{-1}+c_{1} z_{2}^{-1}+d_{1} z_{1}^{-1} z_{2}^{-1}$

$D_{2}=1+b_{2} z_{1}^{-1}+c_{2} z_{2}^{-1}+d_{2} z_{1}^{-1} z_{2}^{-1}$

The amplitude is:

$M\left(\omega_{1}, \omega_{2}\right)=H_{0}\left|\frac{A_{r}}{\left(B 1_{r}+j B 1_{i}\right)\left(B 2_{r}+j B 2_{i}\right)}\right|$               (26)

Minimum error obtained after optimization:

Error2 = 1.1428

The optimized vector:

$x=\left(b_{00}, b_{01}, b_{02}, b_{10}, b_{11}, b_{1}, c_{1}, d_{1}, b_{2}, c_{2}, d_{2}, H_{0}\right)$

x = (0.2444, -0.9507, -0.3804, -1.4545, -0.1707, -0.0008, -0.1742, -0.6908, -0.3747, 0.1565, -0.5511, 0.0065)

Figure 4. Comparison of amplitude responses M(ω1, ω2) obtained with different algorithms: (a) HMAPSO; (b) proposed algorithm (Eq. (24)); (c) MEPSO; (d) SA-PSO; (e) proposed algorithm (Eq. (25)); (f) BBO–PSO

Note: in Figure 4, $F_{x}=\omega_{1}$ and $F_{y}=\omega_{2}$.

4.3 Discussion

In order to prove the efficiency of the proposed algorithm, it is compared with some important PSO-based works: MEPSO [4], HMAPSO [11], BBO–PSO [22] and SA-PSO [10], which have demonstrated good performance compared with other algorithms.

Table 2. Final results (filter coefficients and minimum error) with different algorithms

| Coefficients | BBO–PSO | SA-PSO | MEPSO | HMAPSO | Proposed Algorithm |
| $a_{01}$ | 0.3840 | 0.3069 | 0.3061 | 0.5815 | 0.4083 |
| $a_{02}$ | -0.8816 | -0.9806 | 0.9949 | 0.2207 | -0.3521 |
| $a_{10}$ | -0.4478 | 0.1681 | 0.3935 | 0.4387 | 0.2272 |
| $a_{11}$ | -0.7609 | -0.0431 | -0.0338 | 0.4045 | 0.3112 |
| $a_{12}$ | 1.1371 | -0.1820 | 0.6481 | -1.4084 | -0.7478 |
| $a_{20}$ | -0.1803 | -0.7270 | 1.2345 | -0.5720 | -1.1144 |
| $a_{21}$ | 0.2938 | -0.3249 | 0.5030 | -0.8418 | 0.7196 |
| $a_{22}$ | 0.2473 | 1.6358 | 0.4481 | 2.277 | 1.7258 |
| $b_{1}$ | -0.1249 | -1.4201 | -1.0239 | -0.9078 | -0.9279 |
| $b_{2}$ | -1.0553 | -0.9178 | 0.0342 | -0.9058 | -0.9274 |
| $c_{1}$ | -0.1890 | -0.6530 | -0.9605 | 0.8373 | 0.8765 |
| $c_{2}$ | -1.0594 | -0.9127 | -0.0371 | -0.9075 | -0.4518 |
| $d_{1}$ | -0.8986 | 1.0081 | 0.9523 | -0.9101 | -0.4249 |
| $d_{2}$ | 1.1366 | 0.8545 | -0.9056 | 0.8406 | -0.0175 |
| $h_{0}$ | 0.0013 | 0.0022 | 0.00034 | 0.00024 | 0.001 |
| Minimum Error | 2.8518 | 2.629 | 9.0005 | 1.58657 | 1.2216 |

  • Checking the Stability condition [4, 10, 11, 22]:

In order to verify the stability of the filters produced by MEPSO, HMAPSO, BBO–PSO and SA-PSO, the values of bk, ck and dk (k = 1, 2) in Table 2 are used.

Table 3. Stability checking table

| Algorithm | Resulting filter |
| MEPSO | unstable |
| SA-PSO | unstable |
| BBO–PSO | stable |
| HMAPSO | stable |
| Proposed Algorithm (Multi-PSO) | stable |
From Table 2, it can easily be verified that the proposed algorithm (Multi-PSO) ranks first in terms of optimization quality, quantified by the minimum error value (error = 1.2216); HMAPSO comes second with an error of 1.58657, followed by SA-PSO and BBO–PSO, with MEPSO last.

We can point out that for the SA-PSO and MEPSO algorithms, the instability problem is supposedly solved by an optimization that guarantees stability, whereas their resulting filters turn out to be unstable when the stability condition is checked (Table 3). On the other hand, the proposed algorithm always ensures filter stability.
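As a concrete sketch of such a check (an assumption on our part, not necessarily the paper's exact test procedure): a first-order denominator factor of the form 1 + b z1 + c z2 + d z1 z2 is commonly tested in the 2-D IIR design literature with the condition |b + c| - 1 < d < 1 - |b - c|. Reading the proposed filter's denominator coefficients in the order (b1, c1, d1, b2, c2, d2) of the optimized vector reported in Section 4.1:

```python
def section_is_stable(b, c, d):
    """Stability test for a first-order 2-D factor 1 + b*z1 + c*z2 + d*z1*z2,
    using the condition |b + c| - 1 < d < 1 - |b - c| found in related
    2-D IIR design works (a sketch, not the paper's exact procedure)."""
    return abs(b + c) - 1 < d < 1 - abs(b - c)

# Denominator coefficients of the proposed filter, read as (b, c, d) triples
# in the order of the optimized vector of Section 4.1 (our interpretation).
sections = [(-0.9279, -0.9274, 0.8765), (-0.4518, -0.4249, -0.0175)]
results = [section_is_stable(b, c, d) for b, c, d in sections]
```

Under this reading, both sections of the proposed filter pass the test, while, for example, the SA-PSO triple (b1, c1, d1) = (-1.4201, -0.6530, 1.0081) taken from Table 2 fails it, consistent with Table 3.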

According to the results obtained, it is clear that the frequency magnitude responses of the two methods (Eqns. (24) and (25)) of the proposed algorithm (see Figure 4) are very close to the model filter (see Figure 1).

In addition, the minimum errors obtained by our model after optimization (error1 and error2) are very small compared with the results given by the HMAPSO, SA-PSO, BBO–PSO and MEPSO algorithms. Moreover, our model is always stable.

5. Conclusion

In this study, a new Multi-PSO algorithm for the design of two-dimensional recursive digital IIR filters is investigated. It is characterized by the diversity of its particles and the use of two ideas. The first consists of using two types of swarms, a principal one and auxiliary ones. The second is the partition of the search space into several zones, leading to a better exploration of the problem. The results obtained with Multi-PSO are compared with other algorithms previously reported. They indicate that Multi-PSO-based methods exhibit better performance in all experiments and provide the best optimum solution during the search. In conclusion, the new ideas presented above open the gate to new methods that could be applied to other optimization algorithms, such as evolutionary algorithms. However, the proposed Multi-PSO takes a long simulation time, which is a disadvantage.

Nomenclature

2-D: two-dimensional
ETs: evolutionary techniques
FIR: finite impulse response
GA: genetic algorithm
HMAPSO: Hybrid Multiagent Particle Swarm Optimization
IIR: infinite impulse response
MEPSO: modified particle swarm optimizer
Min. Err: minimum error
PSO: particle swarm optimization

References

[1] Mladenov, V.M., Mastorakis, N.E. (2001). Design of two-dimensional recursive filters by using neural networks. IEEE Transactions on Neural Networks, 12(3): 585-590. https://doi.org/10.1109/72.925560

[2] Lu, W.S., Antoniou, A. (1992). Two-Dimensional Digital Filters. Marcel Dekker. INC, CRC Press. 

[3] Lim, J.S. (1990). Two-Dimensional Signal and Image Processing. PH, 195-196. 

[4] Das, S., Konar, A. (2007). A swarm intelligence approach to the synthesis of two-dimensional IIR filters. Engineering Applications of Artificial Intelligence, 20(8): 1086-1096. https://doi.org/10.1016/j.engappai.2007.02.004

[5] Ramamoorthy, P.A., Bruton, L.T. (1979). Design of stable two‐dimensional analogue and digital filters with applications in image processing. International Journal of Circuit Theory and Applications, 7(2). https://doi.org/10.1002/cta.4490070210

[6] Aly, S., Fahmy, M. (1978). Design of two-dimensional recursive digital filters with specified magnitude and group delay characteristics. IEEE Transactions on Circuits and Systems, 25(11): 908-915. https://doi.org/10.1109/TCS.1978.1084407

[7] O’Connor, B.T., Huang, T.S. (1978). Stability of general two-dimensional recursive digital filters. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(6): 550-560. https://doi.org/10.1109/TASSP.1978.1163162

[8] Xie, L.J. (2011). A criterion for Hurwitz polynomials and its applications. International Journal of Modern Education and Computer Science (IJMECS), 125(4): 38-40. https://doi.org/10.5815/ijmecs.2011.01.06

[9] Mastorakis, N.E., Gonos, I.F., Swamy, M.N.S. (2003). Design of two-dimensional recursive filters using genetic algorithms. IEEE Transactions on Circuits and Systems, 50(5): 634-639. https://doi.org/10.1109/TCSI.2003.811019

[10] Dhabal, S., Venkateswaran, P. (2014). Two-dimensional IIR filter design using simulated annealing based particle swarm optimization. Journal of Optimization, 2014: 1-10. https://doi.org/10.1155/2014/239721

[11] Kumar, R., Kumar, A. (2010). Design of two-dimensional infinite impulse response recursive filters using hybrid multiagent particle swarm optimization. Applied Artificial Intelligence, 24(4): 295-312. https://doi.org/10.1080/08839511003715204

[12] Kennedy, J., Eberhart, R.C. (1995). Particle Swarm Optimization. IEEE International Conference on Neural Networks, Perth, WA, Australia, Australia, pp. 1942-1948. https://doi.org/10.1109/ICNN.1995.488968

[13] Shi, Y., Eberhart, R.C. (1999). Empirical study of particle swarm optimization, in: Proceedings of Congress on Evolutionary Computation. Washington, DC, USA, pp. 1945-1949. https://doi.org/10.1109/CEC.1999.785511

[14] Shi, Y., Eberhart, R.C. (1998). A modified particle swarm optimizer. IEEE International Conference on Evolutionary Computation, Anchorage, AK, USA, pp. 69-73. https://doi.org/10.1109/ICEC.1998.699146

[15] Angeline, P.J. (1998). Evolutionary optimization versus particle swarm optimization and philosophy and performance difference. 7th Annual Conference on Evolutionary Programming, San Diego, USA, pp. 601-610. https://doi.org/10.1007/BFb0040811

[16] Shenoi, B.A. (1999). Magnitude and Delay Approximation of 1-D and 2-D Digital Filters. Springer-Verlag Berlin Heidelberg, 190-195. 1st edition. 

[17] Eberhart, R., Kennedy, J. (1995). A new optimizer using particle swarm theory. IEEE the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 39-43. https://doi.org/10.1109/MHS.1995.494215

[18] El Dor, A., Clerc, M., Siarry, P. (2012). A multi-swarm PSO using charged particles in a partitioned search space for continuous optimization. Computational Optimization and Applications, 53: 271-295. https://doi.org/10.1007/s10589-011-9449-4

[19] Kennedy, J. (1998). The behavior of particles. LNCS, Springer, In Proceedings of the 7th Conference on Evolutionary Computation, pp. 581-589. https://doi.org/10.1007/BFb0040809

[20] Ozcan, E., Mohan, C. (1999). Particle Swarm Optimization: surfing the waves. IEEE Congress on Evolutionary Computation, pp. 1939-1944. https://doi.org/10.1109/CEC.1999.785510

[21] Clerc, M., Kennedy, J. (2002). The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1): 58-73. https://doi.org/10.1109/4235.985692

[22] Lv, C., Yan, S., Cheng, G., Xu, L., Tian, X.Y. (2016). Design of two-dimensional IIR digital filters by using a novel hybrid optimization algorithm. Multidim. Syst. Sign. Process, 28: 1267-1281. https://doi.org/10.1007/s11045-016-0397-0