
Advanced Digital Signal Processing (Modern Digital Signal Processing)
Chapter 3  Adaptive Filter

3.1 Introduction

Basic Form of the Adaptive Filter
- Input signal x(n); output signal y(n); supervising (desired) signal d(n); error e(n) = d(n) - y(n).
- An adaptive filter with adjustable parameters, whose coefficients are updated by an adaptive algorithm driven by the error e(n).
- Adaptive linear filter: the adaptive filter itself is linear.

Wiener Filter & Adaptive Linear Filter
- Wiener filter h(n): x(n) = s(n) + v(n); y(n) is an estimation of s(n); the statistics of s(n) and v(n) are known; h(n) is non-adjustable; the signals are stationary random signals; optimum criterion: MMSE.
- Adaptive linear filter w(n): y(n) is an estimation of d(n); the statistics of s(n) and v(n) are unknown, but a d(n) or e(n) is available; the signals may be deterministic, stationary, or non-stationary random signals; optimum criteria: MMSE or others.

The Classes of Adaptive Linear Filter
- By the length of the linear filter: FIR (always stable; good convergence properties; possibly linear-phase); IIR (probably less estimation error (residual) than FIR).
- By the structure of the linear filter: transversal; lattice (fast convergence; insensitive to finite word-length effects; modular structure).
- By the adaptive algorithm: least mean square (LMS); recursive least square (RLS); other variants of LMS or RLS.
- Only the transversal adaptive FIR filters are discussed in this chapter.

Performances of Adaptive Filter
- Convergence rate of the adaptive algorithm.
- Misadjustment.
- Computational complexity of the adaptive algorithm.
- Expected properties of the adaptive filter structure: high modularity, parallelism and concurrency (suitable for implementation with VLSI).
- Numerical stability and numerical accuracy.
- Robustness: the adaptive algorithm is insensitive to the initial values.

3.2 Transversal Adaptive FIR Filter

- Multiple-input adaptive linear combiner.
- Single-input adaptive FIR filter.
- Optimum solution (MMSE) of the adaptive FIR filter: the same form as the solution for the FIR Wiener filter.
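The optimum-solution equations themselves are not reproduced on these pages; the following is a hedged reconstruction in standard notation, assuming an input vector x(n) = [x(n), x(n-1), ..., x(n-L)]^T, weight vector w, autocorrelation matrix R = E{x(n)x^T(n)} and cross-correlation vector p = E{d(n)x(n)}:

\[
\xi(\mathbf{w}) = E\{e^{2}(n)\} = E\{d^{2}(n)\} - 2\,\mathbf{p}^{T}\mathbf{w} + \mathbf{w}^{T}\mathbf{R}\,\mathbf{w},
\qquad
\nabla_{\mathbf{w}}\,\xi = 2(\mathbf{R}\mathbf{w} - \mathbf{p}) = \mathbf{0}
\;\Rightarrow\;
\mathbf{w}^{*} = \mathbf{R}^{-1}\mathbf{p},
\]

which has the same form as the FIR Wiener solution.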

3.3 MSE Performance Surface

MSE Performance Function
- A quadratic function of the weights with a single global optimum.
- One weight: a parabola; two weights: a paraboloid; more than two weights: a hyper-paraboloid.
- L+1 weights: a hyper-paraboloid in an (L+2)-dimensional space (L+1 weight axes plus the MSE axis).

Weight Deviation Vector
- The weight deviation vector v(n) is the deviation of the weight vector w(n) from the optimal weight vector w*.
- Any departure of w(n) from w* causes an excess mean-square error with a quadratic form.
- The performance function in the v(n) coordinate system: the v(n) coordinate system is a shifting of the w(n) coordinate system.

Principal Axes Coordinate System
- The principal axes coordinate system is a rotation of the v(n) coordinate system.
- The performance function can be written in the natural coordinate system, the shifted coordinate system, or the principal axes coordinate system.

Performance Surface
- Searching the performance surface: the objective of adaptive algorithms is to search for the single optimum point of the performance surface from an arbitrary starting point.
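The corresponding formulas are not reproduced here; a standard way to write the performance function in the three coordinate systems, stated as a hedged reconstruction in the notation above, is:

\[
\xi = \xi_{\min} + (\mathbf{w} - \mathbf{w}^{*})^{T}\mathbf{R}\,(\mathbf{w} - \mathbf{w}^{*})
    = \xi_{\min} + \mathbf{v}^{T}\mathbf{R}\,\mathbf{v}
    = \xi_{\min} + \mathbf{v}'^{T}\boldsymbol{\Lambda}\,\mathbf{v}',
\]

with v = w - w*, v' = Q^T v, and R = Q Λ Q^T the eigen-decomposition of the input autocorrelation matrix (Λ diagonal with eigenvalues λ_0, ..., λ_L).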

3.4 LMS Adaptive Algorithm

The Gradient of the Performance Surface
- The gradient can be expressed in the natural coordinate system, the shifted coordinate system, or the principal axes coordinate system.

Steepest Descent Method (B. Widrow, 1959)
- Basic principle: search for the optimum point along the negative gradient direction; such a direction is the one with the steepest descent of the performance function.
- μ is the step size or adaptive constant. It governs the stability of the algorithm, the misadjustment, and the rate of convergence.
- Sufficient condition for convergence: μ must stay within a bound set by the largest eigenvalue of R (see the sketch below).
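The update formula and convergence condition are not reproduced on these pages; the usual statement, written here as a hedged sketch with the factor of 2 that comes from differentiating E{e^2(n)}, is:

\[
\mathbf{w}(n+1) = \mathbf{w}(n) - \mu\,\nabla\xi(n) = \mathbf{w}(n) + 2\mu\,[\mathbf{p} - \mathbf{R}\,\mathbf{w}(n)],
\qquad
0 < \mu < \frac{1}{\lambda_{\max}},
\]

where λ_max is the largest eigenvalue of R; if the factor of 2 is absorbed into the gradient definition, the bound reads 0 < μ < 2/λ_max.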

Transition Process
- The convergence takes place independently along each of the principal axes. As the iterative process advances, the rate of convergence on each axis is governed by a unique geometric ratio determined by the corresponding eigenvalue.
- Depending on μ, each mode can be unstable, stable but with damped vibration, or stable and converging gradually.
- The μ should be a balance between the stability and the convergence rate.
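In the principal axes coordinate system each mode evolves independently; with the same convention as in the sketch above, the geometric ratio mentioned here takes the form (hedged reconstruction):

\[
v'_{k}(n) = (1 - 2\mu\lambda_{k})^{n}\, v'_{k}(0), \qquad k = 0, 1, \dots, L,
\]

so |1 - 2μλ_k| > 1 makes a mode unstable, -1 < 1 - 2μλ_k < 0 gives a stable but damped-vibration (oscillating) mode, and 0 < 1 - 2μλ_k < 1 gives gradual monotonic convergence.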

Limitations of the Steepest Descent Algorithm
- The modification of the weights in each iteration is proportional to the gradient. When the weight deviation is very small, the weight modification in each iteration is also very small, hence the convergence rate of the steepest descent algorithm is slow.
- The steepest descent algorithm is not applicable if the statistics of the random signal are unknown.

Least Mean Square (LMS) Algorithm
- Estimation of the gradient: the instantaneous gradient estimate is used in place of the true gradient; it is an unbiased estimation.
- The LMS algorithm updates the weights with this instantaneous gradient estimate (a sketch of the recursion follows).
- Learning curves of a weight (for a deterministic signal and for a stationary random signal), and the average of 50 learning curves.
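The LMS update equations are not reproduced here; the sketch below shows the standard transversal LMS recursion y(n) = w^T(n)x(n), e(n) = d(n) - y(n), w(n+1) = w(n) + 2μ e(n) x(n) in NumPy. The function and parameter names are illustrative, not taken from the slides.

```python
import numpy as np

def lms_filter(x, d, num_taps, mu):
    """Transversal LMS adaptive FIR filter (standard form, sketch).

    x  : input signal, 1-D array
    d  : desired (supervising) signal, same length as x
    mu : step size; must be small enough for stability
    Returns the filter output y, the error e and the final weights w.
    """
    n_samples = len(x)
    w = np.zeros(num_taps)          # weight vector w(n)
    y = np.zeros(n_samples)
    e = np.zeros(n_samples)
    for n in range(num_taps, n_samples):
        x_vec = x[n - num_taps + 1 : n + 1][::-1]   # [x(n), x(n-1), ..., x(n-L)]
        y[n] = w @ x_vec                            # filter output y(n)
        e[n] = d[n] - y[n]                          # error e(n) = d(n) - y(n)
        w = w + 2 * mu * e[n] * x_vec               # instantaneous-gradient update
    return y, e, w
```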

Misadjustment
- The excess mean square error is the increase of the steady-state MSE above the minimum MSE caused by the randomly fluctuating weights.
- With some rational assumptions, it can be justified that, after the transition process, the misadjustment is proportional to the step size μ (see the formula below).
- The μ should be a trade-off between the rate of convergence and the misadjustment.
- Variable step-size algorithms, in which μ is reduced gradually along with the transition process, may sometimes be adopted to accommodate the requirements for both the convergence rate and the misadjustment.
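The misadjustment formula itself is not reproduced here; Widrow's classical result for the LMS update used above, quoted as a hedged reconstruction, is:

\[
M = \frac{\text{excess MSE}}{\xi_{\min}} \approx \mu\,\operatorname{tr}(\mathbf{R}),
\]

which makes the proportionality to the step size μ explicit (the exact constant depends on the update convention and on the assumptions made about the input).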

Comments on the LMS Algorithm
- Simplicity and low computation load.
- Relatively slow convergence rate and long transition process.
- The BP algorithm of the feed-forward neural network is a generalization of the LMS algorithm.

3.5 RLS Adaptive Algorithm

Least Square (LS) Estimation
- Transversal FIR filter.
- Optimum criterion: an accumulated (exponentially weighted) error function.
- λ: forgetting factor, 0 < λ ≤ 1; λ < 1 is suitable for non-stationary signals.
- LS estimation: the normal equation has the same form as the Wiener-Hopf equation, obtained by substituting the expectation with the weighted sum (see the equations below); it is an unbiased and consistent estimation when the observation noise is white.
- It is cumbersome and almost impractical to calculate the M×M inverse matrix R^{-1}(n) at each instant n, hence such an LS estimation is hardly used in real-time applications.
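The accumulated error function and the resulting normal equation are not reproduced here; a standard statement consistent with the description above (a hedged reconstruction, real-valued signals assumed) is:

\[
J(n) = \sum_{i=0}^{n} \lambda^{\,n-i} e^{2}(i), \qquad e(i) = d(i) - \mathbf{w}^{T}(n)\,\mathbf{x}(i),
\]

\[
\mathbf{R}(n)\,\mathbf{w}(n) = \mathbf{p}(n), \qquad
\mathbf{R}(n) = \sum_{i=0}^{n} \lambda^{\,n-i}\,\mathbf{x}(i)\,\mathbf{x}^{T}(i), \qquad
\mathbf{p}(n) = \sum_{i=0}^{n} \lambda^{\,n-i}\, d(i)\,\mathbf{x}(i),
\]

so w(n) = R^{-1}(n) p(n), the deterministic (time-averaged) counterpart of the Wiener-Hopf equation.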

Recursive Least Square (RLS) Algorithm

The Matrix Inversion Lemma
- Let A and B be two positive definite M×M matrices, D an N×N positive definite matrix, and C an M×N matrix, related as in the identity below; the inverse matrix of A can then be expressed in terms of B, C and D without directly inverting A.
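The lemma's equations are not reproduced here; the standard form consistent with the stated dimensions, written with transposes for real-valued data as a hedged reconstruction, is:

\[
\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\,\mathbf{D}^{-1}\mathbf{C}^{T}
\;\Rightarrow\;
\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\,\mathbf{C}\,\bigl(\mathbf{D} + \mathbf{C}^{T}\mathbf{B}\,\mathbf{C}\bigr)^{-1}\mathbf{C}^{T}\mathbf{B}.
\]

Applying it with A = R(n), B^{-1} = λR(n-1), C = x(n) and D = 1 gives the rank-one recursive update of R^{-1}(n) that the RLS algorithm exploits.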

Recursive Solution
- Recursive solution for R^{-1}(n), and recursive solution for w(n).

RLS Algorithm (one iteration; a sketch of the full recursion follows)
- Initialization.
- Gain vector.
- Prediction error.
- Weight updating.
- T(n) updating, where T(n) denotes the recursively maintained inverse matrix R^{-1}(n).
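Since the recursion formulas are not reproduced on these pages, the sketch below gives the standard exponentially weighted RLS recursion in NumPy, writing T(n) for R^{-1}(n); the initialization T(0) = δI and all names are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def rls_filter(x, d, num_taps, lam=0.99, delta=100.0):
    """Exponentially weighted RLS adaptive FIR filter (standard form, sketch).

    lam   : forgetting factor, 0 < lam <= 1
    delta : constant for the initialization T(0) = delta * I
    """
    n_samples = len(x)
    w = np.zeros(num_taps)               # weight vector w(n)
    T = delta * np.eye(num_taps)         # T(n) = R^{-1}(n), initialization
    y = np.zeros(n_samples)
    e = np.zeros(n_samples)
    for n in range(num_taps, n_samples):
        x_vec = x[n - num_taps + 1 : n + 1][::-1]
        # gain vector k(n) = T(n-1) x(n) / (lam + x(n)^T T(n-1) x(n))
        Tx = T @ x_vec
        k = Tx / (lam + x_vec @ Tx)
        # a priori prediction error
        e[n] = d[n] - w @ x_vec
        # weight updating
        w = w + k * e[n]
        # T(n) updating via the matrix inversion lemma
        T = (T - np.outer(k, x_vec @ T)) / lam
        y[n] = w @ x_vec
    return y, e, w
```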

Comparison between the LMS and RLS Algorithms
- The LMS is simpler and has less computational complexity than the RLS.
- The RLS usually converges more quickly than the LMS.
- The RLS is more suitable for non-stationary random signals.

3.6 Applications of Adaptive Filter

Adaptive Modeling (System Identification)
- Noise-free case: x(n) is a known input signal, often a pseudorandom (white) noise exerted on the unknown system deliberately; d(n) is the desired signal, i.e. the output of the unknown system.
- Noise-included case: N(n) is the observation noise, uncorrelated with x(n).
- This makes it possible to obtain a system model on-line (an illustrative sketch follows).
- Model: the adaptive filter. Linear model: an FIR or IIR filter with the corresponding adaptive algorithms. Nonlinear model: e.g. a feed-forward neural network (with the BP algorithm).
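As an illustration of the modeling setup, the sketch below identifies a made-up FIR "unknown system" with the lms_filter sketch defined earlier; the system coefficients, noise level and step size are all assumptions, not values from the slides.

```python
import numpy as np

# Unknown system to be identified (hypothetical FIR example)
h_true = np.array([0.8, -0.4, 0.2, 0.1])

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                        # pseudorandom white-noise input
d = np.convolve(x, h_true, mode="full")[:len(x)]     # output of the unknown system
d += 0.01 * rng.standard_normal(len(x))              # small observation noise N(n)

y, e, w = lms_filter(x, d, num_taps=4, mu=0.01)
print(w)   # should be close to h_true after convergence
```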

Adaptive Inverse Filtering (Inverse Modeling)
- Delayed inverse modeling. The delay makes the inverse system H^{-1}(z) causal.
- If H(z) is not minimum-phase (so that H^{-1}(z) is unstable), then an FIR adaptive filter can be used to approximate the inverse system.

Adaptive Channel Equalizer
- p(n): pilot signal (training signal), uncorrelated with x(n) and known by the receiver; its delayed version serves as the desired signal for the adaptive equalizer (a training-mode sketch follows).
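A minimal training-mode equalizer sketch along these lines, again reusing the lms_filter sketch; the channel, the delay and the step size are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.sign(rng.standard_normal(5000))               # pilot / training symbols (+/-1)
channel = np.array([1.0, 0.5, 0.25])                 # hypothetical channel H(z)
x = np.convolve(p, channel, mode="full")[:len(p)]    # received signal

delay = 8                                            # delay making the inverse causal
d = np.roll(p, delay)                                # delayed pilot as desired signal
d[:delay] = 0.0                                      # discard wrapped-around samples

y, e, w_eq = lms_filter(x, d, num_taps=16, mu=0.002)
# w_eq approximates a delayed inverse of the channel
```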

Adaptive Cancelling
- Basic principles: the primary signal d(n) contains the signal of interest plus an interference; the reference signal x(n) drives the adaptive filter, whose output y(n) (the secondary signal) is subtracted from d(n) to give the error or residual signal e(n).
- The reference signal x(n) should be correlated with all or some parts of the primary signal d(n). Only the correlated parts of x(n) and d(n) can be cancelled (a sketch follows).
- Examples: cancelling the maternal heartbeat in fetal electrocardiography (s(n): fetal electrocardiogram; x(n): maternal signal); cancelling noise in speech signals; adaptive echo canceller in long-distance telephone; adaptive notch filter (cancelling a single-frequency interference).
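A minimal adaptive noise-cancelling sketch of this principle, reusing the lms_filter sketch; the signal, the noise path and the parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))           # signal of interest s(n)
noise_src = rng.standard_normal(n)                     # noise source
noise_path = np.array([0.6, 0.3, 0.1])                 # hypothetical path to the primary sensor

d = s + np.convolve(noise_src, noise_path, mode="full")[:n]   # primary signal d(n)
x = noise_src                                          # reference x(n), correlated with the noise in d(n)

y, e, w = lms_filter(x, d, num_taps=8, mu=0.005)
# e(n) approximates s(n): the correlated noise component has been cancelled
```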

Active Noise Control (ANC)
- ANC system: primary noise d(n); reference input x(n); adaptive filter W(z) producing the secondary noise y(n); secondary sound path C(z); residual noise e(n).
- Filtered-x LMS algorithm: in an ANC system, the secondary sound path between the adaptive filter and the combiner makes x(n)e(n) an incorrect estimation of the performance-surface gradient, which will probably lead the LMS algorithm to be divergent. The filtered-x LMS algorithm rectifies the gradient estimation by filtering x(n) with an estimation of the secondary sound path (a sketch follows).
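A compact filtered-x LMS sketch under simplifying assumptions: the secondary path is represented only by its FIR estimate (in practice the estimate is measured separately), and all names and values are illustrative, not taken from the slides.

```python
import numpy as np

def fxlms(x, d, sec_path_est, num_taps, mu):
    """Filtered-x LMS controller for active noise control (sketch).

    x            : reference input x(n)
    d            : primary noise d(n) at the error sensor
    sec_path_est : FIR estimate of the secondary sound path C(z)
    Returns the residual noise e(n) and the controller weights w.
    """
    n_samples = len(x)
    Lc = len(sec_path_est)
    assert num_taps >= Lc, "sketch assumes the controller is at least as long as the path estimate"
    w = np.zeros(num_taps)
    y = np.zeros(n_samples)
    e = np.zeros(n_samples)
    # reference filtered through the secondary-path estimate ("filtered-x" signal)
    xf = np.convolve(x, sec_path_est, mode="full")[:n_samples]
    for n in range(num_taps, n_samples):
        x_vec = x[n - num_taps + 1 : n + 1][::-1]      # [x(n), ..., x(n-L)]
        xf_vec = xf[n - num_taps + 1 : n + 1][::-1]    # filtered-x regressor
        y[n] = w @ x_vec                               # controller output (anti-noise)
        # anti-noise reaching the error sensor through the secondary path;
        # the true path is approximated by its estimate in this sketch
        y_hist = y[n - Lc + 1 : n + 1][::-1]
        e[n] = d[n] - sec_path_est @ y_hist            # residual noise e(n)
        w = w + 2 * mu * e[n] * xf_vec                 # gradient rectified by the filtered x
    return e, w
```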
