
Uyarlamalı süzgeçler (Adaptive filters)

Authors :
Aysel, Ridvan
Kayran, Ahmet H.
Other
Publication Year :
1994
Publisher :
Fen Bilimleri Enstitüsü, 1994.

Abstract

The adaptive filtering problem has many solutions, presented in the form of various recursive algorithms. The approaches used to derive these recursive algorithms for adaptive filters fall into three basic groups: (1) the approach based on Wiener filter theory, (2) the approach based on Kalman filter theory, and (3) the approach based on the method of least squares. The Wiener and Kalman filters underlying the first two approaches are founded on statistical concepts, that is, they make use of statistical prior knowledge. The approach based on the classical method of least squares, by contrast, is deterministic in nature and therefore differs from the other two. According to the method of least squares, three basically different classes of adaptive filtering algorithms can be defined, depending on the structure used to implement the adaptive filter: (1) the recursive least squares (RLS) algorithm, (2) the recursive least squares lattice (RLSL) algorithm, and (3) the QR decomposition-least squares (QRD-LS) algorithm. In this work, the least mean square (LMS) algorithm is derived as the approach based on Wiener filter theory, while the recursive least squares (RLS) and recursive least squares lattice (RLSL) algorithms are derived as approaches based on the method of least squares; these algorithms are then used in adaptive prediction and adaptive equalization experiments, and their various properties are examined. In the first chapter, adaptive filters are introduced, the three basic approaches noted above are discussed, and adaptive filter applications are briefly reviewed. In the second chapter, the adaptive filter structure is examined. The LMS algorithm is derived in the third chapter, the RLS algorithm in the fourth, and the RLSL algorithm in the fifth; in the sixth chapter these algorithms are applied in the two computer experiments mentioned above. In the seventh chapter, the experimental results are evaluated and interpreted.

Whenever there is a requirement to process signals that result from operation in an environment of unknown statistics, the use of an adaptive filter offers an attractive solution to the problem, as it usually provides a significant improvement in performance over the use of a fixed filter designed by conventional methods. The use of adaptive filters also provides new signal processing capabilities that would not be possible otherwise. Thus adaptive filters are successfully applied in fields as different as communications, control, radar, sonar, seismology, image processing and pattern recognition. The design of a Wiener filter requires a priori information about the statistics of the data to be processed. The filter is optimum only when the statistical characteristics of the input data match the a priori information on which the design of the filter is based. When this information is not known completely, however, it may not be possible to design the Wiener filter, or else the design may no longer be optimum. A straightforward approach used in such situations is the "estimate and plug" procedure. This is a two-stage process whereby the filter first "estimates" the statistical parameters of the relevant signals and then "plugs" the results so obtained into a nonrecursive formula for computing the filter parameters. For real-time operation, this procedure has the disadvantage of requiring excessively elaborate and costly hardware. A more efficient method is to use an adaptive filter.
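The "estimate and plug" idea can be illustrated with a minimal sketch, assuming a transversal (FIR) filter and hypothetical signal arrays u (filter input) and d (desired response); the function name and the sample-averaging scheme are illustrative and not taken from the thesis:

```python
import numpy as np

def estimate_and_plug_wiener(u, d, num_taps):
    """Illustrative 'estimate and plug' design of a transversal Wiener filter."""
    N = len(u)
    R = np.zeros((num_taps, num_taps))   # estimated correlation matrix of the tap inputs
    p = np.zeros(num_taps)               # estimated cross-correlation with the desired response
    for n in range(num_taps - 1, N):
        u_vec = u[n - num_taps + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
        R += np.outer(u_vec, u_vec)
        p += d[n] * u_vec
    R /= (N - num_taps + 1)
    p /= (N - num_taps + 1)
    # "Plug" the estimates into the normal equations R w = p.
    return np.linalg.solve(R, p)
```

The adaptive filters discussed next avoid this explicit estimation step and instead adjust the tap weights recursively from the data themselves.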
An adaptive filter relies for its operation on a recursive algorithm, which makes it possible for the filter to perform satisfactorily in an environment where complete knowledge of the relevant signal characteristics is not available. In a stationary environment, after successive iterations of the algorithm, it converges to the optimum Wiener solution in some statistical sense. In a nonstationary environment, the algorithm offers a tracking capability, whereby it can track time variations in the statistics of the input data, provided that the variations are sufficiently slow. There are many solutions to the adaptive filtering problem, represented by a variety of recursive algorithms, each of which offers desirable features of its own. Basically, three distinct methods may be identified for deriving recursive algorithms for the operation of adaptive filters:

1. Approach based on Wiener filter theory. A transversal filter is used as the structural basis for implementing the adaptive filter. The finite impulse response of such a filter is defined by a set of tap weights. For the case of stationary inputs, the mean squared error is precisely a second-order function of the tap weights in the transversal filter. The dependence of the mean squared error on the unknown tap weights may therefore be viewed as a multidimensional paraboloid with a uniquely defined bottom or minimum point. This paraboloid is identified as the error performance surface. The tap weights corresponding to the minimum point of the surface define the optimum Wiener solution. To develop a recursive algorithm for updating the tap weights of the adaptive transversal filter, the normal equation defining the optimum Wiener solution is first modified through the use of the method of steepest descent, a well-known technique in optimization theory. This modification requires the use of a gradient vector, the value of which depends on two parameters: the correlation matrix of the tap inputs in the transversal filter and the cross-correlation vector between the desired response and the same tap inputs. Next, instantaneous values of these correlations are used to derive an estimate of the gradient vector. The resulting algorithm is widely known as the least mean square (LMS) algorithm. This algorithm is simple and yet capable of achieving satisfactory performance under the right conditions. Its major limitations are a relatively slow rate of convergence and a sensitivity to variations in the eigenvalue spread, which is defined as the ratio of the maximum to minimum eigenvalues of the correlation matrix of the tap inputs. In a nonstationary environment, the orientation of the error performance surface varies continuously with time. In this case, the LMS algorithm has the added task of continually tracking the bottom of the error performance surface. Indeed, tracking will occur provided that the input data vary slowly compared to the learning rate of the LMS algorithm. In Chapter 3, the LMS algorithm is derived and its behavior in a stationary environment is studied.

2. Approach based on Kalman filter theory. The recursive solution to the Kalman filtering problem may be used to derive different recursive algorithms for updating the tap-weight vector of the adaptive transversal filter. These algorithms can provide a much faster rate of convergence than that attainable by the LMS algorithm. Their rate of convergence is essentially insensitive to the eigenvalue spread problem.
The basic limitation of these algorithms is their computational complexity, which is a direct consequence of the matrix formulation of the solution to the Kalman filtering problem. The development of Kalman filter theory and its use to derive recursive algorithms for adaptive transversal filters may be the subject of a different thesis.

3. Method of least squares. The theory of the Wiener and Kalman filters, described above, is based on statistical concepts. The approach based on the classical method of least squares differs from these two in that it is deterministic in its formulation right from the start. According to the method of least squares, an index of performance consisting of the sum of weighted error squares is minimized, where the error is defined as the difference between some desired response and the actual filter output. Depending on the structure used for implementing the adaptive filter, three basically different classes of adaptive filtering algorithms originating from the method of least squares may be identified:

(a) Recursive least squares algorithm. As with the adaptive filtering algorithms derived from the Wiener and Kalman filters, the recursive least squares (RLS) algorithm also assumes the use of a transversal filter as the structural basis of the adaptive filter. The derivation of the algorithm relies on a basic result in linear algebra known as the matrix inversion lemma. The RLS algorithm bears a close relationship to the Kalman algorithm for adaptive transversal filters. As such, the RLS algorithm enjoys the same virtues and suffers from the same limitation (computational complexity) as the Kalman algorithm. By exploiting certain properties that arise in the case of serialized data, various schemes have been developed to overcome the computational complexity of the RLS algorithm. One scheme is known as the fast RLS algorithm. Another particularly useful scheme is known as the fast transversal filters (FTF) algorithm, which uses a parallel combination of four transversal filters. These fast algorithms effectively retain the advantages of the ordinary RLS algorithm, yet their computational complexity is reduced to a level comparable to that of the simple LMS algorithm. The derivation of the RLS algorithm is presented in Chapter 4.

(b) Recursive least squares lattice algorithm. In this formulation of the recursive least squares problem, the issue of computational complexity is resolved by using a multistage lattice predictor as the structural basis for implementing the adaptive filter. This predictor consists of a cascade of stages, each in the form of a lattice; hence the name. An important property of the multistage lattice predictor is that its individual stages are decoupled from each other in a time-averaged sense. This property is exploited in the derivation of the recursive least squares lattice (RLSL) algorithm, which involves both time and order updates. The RLSL algorithm is rapidly convergent, robust and computationally efficient. Moreover, the modular structure of the multistage lattice predictor and its associated parts makes the RLSL algorithm well suited for implementation on a single silicon chip using very large scale integration (VLSI) technology. The derivation of the RLSL algorithm is presented in Chapter 5.

(c) QR decomposition-least squares algorithm. This algorithm consists of an iterative open-loop configuration, and as such it differs from all the other adaptive filtering algorithms discussed above, which involve the use of iterative closed-loop configurations.
This algorithm is stable, robust, rapidly convergent and computationally efficient. Furthermore, it may be implemented using systolic arrays, which represent a highly efficient, dedicated structure. The QRD-LS algorithm, including its implementation in systolic form, may be studied in another thesis. The origins of the various adaptive filtering algorithms briefly described in the preceding, and the basic structures used for their implementation, may be summarized as follows:

Wiener filter: least mean square (LMS) algorithm; structural basis: transversal filter.
Kalman filter: Kalman algorithm, assuming (a) a fixed state model or (b) a random walk state model; structural basis: transversal filter.
Method of least squares: recursive least squares (RLS), fast RLS and FTF algorithms; structural basis: transversal filters.
Method of least squares: recursive least squares lattice (RLSL) algorithm; structural basis: multistage lattice predictor.
Method of least squares: recursive QR decomposition-least squares (QRD-LS) algorithm; structural basis: systolic arrays.

The theory of a closed-loop realization of adaptive filters is built around a transversal filter that is characterized by an impulse response of finite duration and that does not require complete a priori knowledge of the statistics of the signals to be filtered. Two basic processes take place in an adaptive filter: (1) the adaptive process, which involves the automatic adjustment of the tap weights of the filter in accordance with some algorithm, and (2) the filtering process, which involves (i) multiplying the tap inputs by the corresponding set of tap weights resulting from the adaptive process to produce an estimate of the desired response, and (ii) generating an estimation error by comparing this estimate with the actual value of the desired response. The estimation error is in turn used to actuate the adaptive process, thereby closing the feedback loop. Various algorithms have been developed for implementing the adaptive process.

In Chapter 3, a widely used algorithm known as the least mean square (LMS) algorithm has been studied. A significant feature of the LMS algorithm is its simplicity; it does not require measurements of pertinent correlation functions, nor does it require matrix inversion. First, an old optimization technique known as the method of steepest descent, which is a recursive method for finding the minimum point of the error performance surface without knowledge of the error performance surface itself, has been briefly described, because this method provides a convenient route to the expression for the LMS algorithm. Then the important characteristics of the LMS algorithm operating in a stationary environment have been analyzed. This analysis not only helps develop a deep understanding of the LMS algorithm, but also gives invaluable insight into the operation of more complicated adaptive filtering algorithms. Finally, the following equations have been obtained for the LMS algorithm:

e(n) = d(n) - w^H(n) u(n)
w(n+1) = w(n) + μ u(n) e*(n)

In Chapter 4, the use of the method of least squares has been extended to develop a recursive algorithm for the design of adaptive transversal filters. The resulting algorithm is known as the recursive least squares (RLS) algorithm. The RLS algorithm yields a data-adaptive filter, in that for each set of input data there exists a different filter. The development of the RLS algorithm has begun by reviewing some basic relations that pertain to the method of least squares. Then, by exploiting a relation in matrix algebra known as the matrix inversion lemma, the RLS algorithm has been developed.
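Before the RLS recursions are listed below, the LMS update above can be made concrete with a minimal sketch (hypothetical function and array names; real-valued signals assumed, so the conjugation in e*(n) and the Hermitian transpose in w^H(n) reduce to plain products):

```python
import numpy as np

def lms_filter(u, d, num_taps, mu):
    """Minimal LMS sketch: e(n) = d(n) - w.u(n), then w <- w + mu * u(n) * e(n)."""
    w = np.zeros(num_taps)                       # tap-weight vector, initialized to zero
    e = np.zeros(len(u))                         # estimation-error sequence
    for n in range(num_taps - 1, len(u)):
        u_vec = u[n - num_taps + 1:n + 1][::-1]  # tap-input vector [u(n), ..., u(n-M+1)]
        e[n] = d[n] - w @ u_vec                  # estimation error
        w = w + mu * u_vec * e[n]                # LMS tap-weight update
    return w, e
```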
An important feature of the RLS algorithm is that it utilizes all the information contained in the input data, extending back to the instant of time when the algorithm is initiated. The resulting rate of convergence is therefore typically faster than that of the simple LMS algorithm. This improvement in performance, however, is achieved at the expense of a large increase in computational complexity. Finally, the following equations have been obtained for the RLS algorithm:

k(n) = λ^{-1} P(n-1) u(n) / (1 + λ^{-1} u^H(n) P(n-1) u(n))
α(n) = d(n) - w^H(n-1) u(n)
w(n) = w(n-1) + k(n) α*(n)
P(n) = λ^{-1} P(n-1) - λ^{-1} k(n) u^H(n) P(n-1)

The FTF algorithm uses a maximum of four transversal filters with a common input. In Chapter 5, another class of exact least squares algorithms based on a different structure, the multistage lattice predictor, which is modular in form, has been described. They are known collectively as least squares lattice (LSL) algorithms, involving both order-update and time-update recursions. These algorithms are as efficient as the FTF algorithm in that both essentially realize the same rate of convergence at the expense of a computational cost that increases linearly with the number of adjustable tap weights. The class of LSL algorithms is also just as robust as the RLS and FTF algorithms, in that they are all essentially insensitive to variations in the eigenvalue spread of the correlation matrix of the input signal. The lattice predictors considered in this chapter may operate in nonstationary environments, though with time-variant reflection coefficients. The class of RLSL algorithms is by no means the only one available for the design of adaptive lattice predictors. Indeed, various adaptive lattice algorithms have been derived in the literature for different signal processing applications. A particular class that deserves special mention is that of the gradient adaptive lattice (GAL) algorithms proposed by Griffiths for the recursive computation of the reflection coefficients of multistage lattice predictors. The development of these algorithms was inspired by the LMS algorithm. The main virtue of GAL algorithms is their reduced computational complexity compared to RLSL algorithms. In general, however, their performance is inferior to that of RLSL algorithms. This may not be surprising, since RLSL algorithms provide an exact solution to the linear least squares problem at each time step, whereas GAL algorithms invoke the use of approximations in their derivations. In this chapter, a detailed discussion of the RLSL algorithm using a posteriori estimation errors has been presented for serialized data. First, however, some relations and notations that pertain to forward and backward prediction-error filters have been reviewed. Finally, the following equations have been obtained for the RLSL algorithm using a posteriori estimation errors:

Δ_{m-1}(n) = λ Δ_{m-1}(n-1) + b_{m-1}(n-1) f*_{m-1}(n) / γ_{m-1}(n-1)
Γ_{f,m}(n) = -Δ_{m-1}(n) / B_{m-1}(n-1)
Γ_{b,m}(n) = -Δ*_{m-1}(n) / F_{m-1}(n)
f_m(n) = f_{m-1}(n) + Γ*_{f,m}(n) b_{m-1}(n-1)
b_m(n) = b_{m-1}(n-1) + Γ*_{b,m}(n) f_{m-1}(n)
F_m(n) = F_{m-1}(n) - |Δ_{m-1}(n)|² / B_{m-1}(n-1)
B_m(n) = B_{m-1}(n-1) - |Δ_{m-1}(n)|² / F_{m-1}(n)
γ_m(n-1) = γ_{m-1}(n-1) - |b_{m-1}(n-1)|² / B_{m-1}(n-1)
p_m(n) = λ p_m(n-1) + b_m(n) e*_m(n) / γ_m(n)
e_{m+1}(n) = e_m(n) - [p_m(n) / B_m(n)]* b_m(n)

In Chapter 6, the LMS, RLS and RLSL algorithms have been applied to computer experiments on adaptive prediction and adaptive equalization.
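To make the RLS recursions above concrete, the following minimal sketch could serve as the transversal-filter update inside an adaptive prediction or equalization experiment. Names are hypothetical, real-valued signals are assumed, and P(0) = δI is used as a common initialization choice that the abstract itself does not specify:

```python
import numpy as np

def rls_filter(u, d, num_taps, lam=0.99, delta=1e2):
    """Minimal RLS sketch following the recursions above (real-valued signals assumed).

    lam   : exponential forgetting factor lambda
    delta : P(0) = delta * I, an illustrative regularization choice
    """
    w = np.zeros(num_taps)
    P = delta * np.eye(num_taps)                 # estimate of the inverse correlation matrix
    e = np.zeros(len(u))
    for n in range(num_taps - 1, len(u)):
        u_vec = u[n - num_taps + 1:n + 1][::-1]  # tap-input vector [u(n), ..., u(n-M+1)]
        Pu = P @ u_vec / lam                     # lambda^{-1} P(n-1) u(n)
        k = Pu / (1.0 + u_vec @ Pu)              # gain vector k(n)
        alpha = d[n] - w @ u_vec                 # a priori estimation error alpha(n)
        w = w + k * alpha                        # tap-weight update
        P = (P - np.outer(k, u_vec @ P)) / lam   # update of P(n)
        e[n] = alpha
    return w, e
```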
In Chapter 7, the results that pertain to these algorithms have been derived, and the following observations have been obtained. For a fixed eigenvalue spread, increasing the step-size parameter μ reduces the number of iterations required for the LMS algorithm to reach steady-state conditions. An increase in μ, however, causes a corresponding increase in the misadjustment (and hence in the steady-state value of the average squared error). For a fixed step-size parameter μ, increasing the eigenvalue spread increases the number of iterations required to reach steady-state conditions and also increases the steady-state value of the average squared error. The RLS algorithm converges much faster than the LMS algorithm, and its rate of convergence is essentially independent of the eigenvalue spread. The steady-state value of the average squared error produced by the RLS algorithm is much smaller than in the case of the LMS algorithm; in both cases, however, it is sensitive to variations in the eigenvalue spread. The superior rate of convergence of the RLS over the LMS algorithm is realized only when the signal-to-noise ratio is high. The RLSL algorithm converges very rapidly; this rate of convergence is essentially the same as that of the RLS algorithm. Like the RLS algorithm, the rate of convergence of the RLSL algorithm is essentially insensitive to variations in the eigenvalue spread of the correlation matrix of the input signal. The RLS and LMS algorithms rely on the use of transversal filters for their implementation, whereas the RLSL algorithms have a modular structure. The computational complexity of an LSL algorithm increases linearly with the order M; in the RLS algorithm, on the other hand, it increases with M².
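As a small aid to interpreting these observations, the eigenvalue spread referred to above can be estimated directly from the tap-input data. The sketch below is illustrative (hypothetical names, real-valued data assumed) and simply computes χ(R) = λ_max / λ_min from a sample correlation matrix:

```python
import numpy as np

def eigenvalue_spread(u, num_taps):
    """Estimate the tap-input correlation matrix and return its eigenvalue spread
    chi(R) = lambda_max / lambda_min, the quantity governing LMS convergence above."""
    N = len(u)
    R = np.zeros((num_taps, num_taps))
    for n in range(num_taps - 1, N):
        u_vec = u[n - num_taps + 1:n + 1][::-1]
        R += np.outer(u_vec, u_vec)
    R /= (N - num_taps + 1)
    eigvals = np.linalg.eigvalsh(R)   # R is symmetric and nonnegative definite
    return eigvals.max() / eigvals.min()
```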

Details

Language :
Turkish
Database :
OpenAIRE
Accession number :
edsair.od.....10208..60a793d0eeee7873484e9be4a54c5528