
Chapter 2  The Solution of Least Squares Problems

2.1 Linear Least Squares Estimation
2.2 A Generalized "Pseudo-Inverse" Approach to Solving the Least-Squares Problem

2.1 Linear Least Squares Estimation

2.1.1 Example: Autoregressive Modelling

An autoregressive (AR) process is a random process which is the output of an all-pole filter when excited by white noise. The reason for this terminology is made apparent later. In this example, we deal in discrete time. An all-pole filter has a transfer function H(z) given by the expression


where z_i are the poles of the filter and h_i are the coefficients of the corresponding polynomial in z. Let W(z) and Y(z) denote the z-transforms of the input and output sequences, respectively. If W(z) = σ² (corresponding to a white noise input), then

or, for this specific case,

Thus equation (2.1.2) may be expressed as

We now wish to transform this expression into the time domain. Each of the time-domain signals of equation (2.1.3) is given by the corresponding inverse z-transform relationship as

and the input sequence corresponding to the z-transform quantity σ² is

where w_n is a white noise sequence with power σ². The left-hand side of equation (2.1.3) is the product of z-transforms. Thus, the time-domain representation of the left-hand side of equation (2.1.3) is the convolution of the respective time-domain representations. Thus, using equations (2.1.3) to (2.1.6) we have

or

Repeating this equation for m different values of the index i we have

So again, it makes sense to choose the h's in equation (2.1.5) so that the predicting term Yh is as close as possible to y_p in the 2-norm sense. Hence, as before, we choose h to satisfy

Notice that if the parameters h are known, the autoregressive process is completely characterized.
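As a concrete illustration of this fit, the following is a minimal Python/NumPy sketch that simulates an AR process and recovers its coefficients by least squares. The model convention y[n] = a_1 y[n-1] + … + a_p y[n-p] + w[n], the coefficient values and all variable names are illustrative assumptions, not taken from the text (whose h's may follow a different sign convention).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(2) coefficients (assumed, not from the text); chosen so the filter is stable.
a_true = np.array([0.75, -0.5])
p, N, sigma = len(a_true), 2000, 1.0

# Simulate y[n] = a1*y[n-1] + a2*y[n-2] + w[n], with w[n] white noise of power sigma^2.
y = np.zeros(N)
w = sigma * rng.standard_normal(N)
for n in range(p, N):
    y[n] = a_true @ y[n - p:n][::-1] + w[n]

# Stack the prediction equations: each row holds the p past samples,
# the target is the current sample.  This is the over-determined system Y h ≈ y_p.
Y = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
y_p = y[p:]

# Least-squares choice of the coefficients (closest in the 2-norm sense).
a_ls, *_ = np.linalg.lstsq(Y, y_p, rcond=None)
print("true:", a_true, " LS estimate:", a_ls)
```

With a long enough record the least-squares estimate should land close to the coefficients used in the simulation, which is the sense in which the fitted h's characterize the AR process.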

2.1.2 The Least-Squares Solution

We define our regression model corresponding to equation (2.1.11) as

and we wish to determine the value x_LS which solves

where A ∈ R^{m×n}, m > n, b ∈ R^m. The matrix A is assumed full rank.

We now discuss a few relevant points concerning the LS problem (a short numerical sketch follows this list):

· The system equation (2.1.12) is overdetermined, and hence no solution exists in the general case for which Ax = b exactly.

· Of all commonly used values of p for the norm ‖·‖_p in equation (2.1.12), p = 2 is the only one for which the norm is differentiable for all values of x. Thus, for any other value of p, the optimal solution is not obtainable by differentiation.

· Note that for Q orthonormal, we have (only for p = 2)

This fact is used to advantage later on.

· We define the minimum sum of squares of the residual ‖Ax_LS − b‖₂² as ρ²_LS.

· If r = rank(A) < n, then there is no unique x_LS which minimizes ‖Ax − b‖₂. However, the solution can be made unique by considering only that element of the set {x_LS ∈ R^n | ‖Ax_LS − b‖₂ = min} which has minimum norm.
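A minimal numerical sketch of the overdetermined problem discussed above, assuming a small random full-rank A ∈ R^{m×n} with m > n (all names illustrative): it compares the closed-form normal-equations solution with NumPy's built-in LS solver and computes ρ²_LS.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3                       # over-determined: more equations than unknowns
A = rng.standard_normal((m, n))   # assumed full rank (true with probability 1 here)
b = rng.standard_normal(m)

# Normal-equations solution x_LS = (A^T A)^{-1} A^T b  (full-rank case).
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Library solver (SVD based) should agree.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_ls, x_ref))          # True

# Minimum sum of squared residuals rho^2_LS = ||A x_LS - b||_2^2.
rho2 = np.sum((A @ x_ls - b) ** 2)
print(rho2)
```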

2.1.3 Interpretation of the Normal Equations

Equation (2.1.23) can be written in the form

or

where

is the least-squares error vector between Ax_LS and b; r_LS must be orthogonal to R(A) for the LS solution x_LS. Hence the name "normal equations". This fact gives an important interpretation to least-squares estimation, which we now illustrate for the 3×2 case.

Equation (2.1.11) may be expressed as

This interpretation may be augmented as follows. From the above we see that

Hence the point Ax_LS, which is in R(A), is given by

where P is the projector onto R(A). Thus, we see from another point of view that the least-squares solution is the result of projecting b (the observation) onto R(A).

There is a further point we wish to address in the interpretation of the normal equations. Substituting equation (2.1.26) into (2.1.25) we have

Thus, r_LS is the projection of b onto R(A)⊥. We can now determine the value ρ²_LS, which is the squared 2-norm of the LS residual:
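The projection interpretation is easy to check numerically. A short sketch, assuming a small random full-rank A in the 3×2 case used above: it forms the projector P = A(A^T A)^{-1} A^T onto R(A), verifies that Ax_LS = Pb, and verifies that the residual r_LS = b − Ax_LS is orthogonal to R(A).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 2))   # the 3x2 case used in the illustration
b = rng.standard_normal(3)

x_ls = np.linalg.solve(A.T @ A, A.T @ b)
P = A @ np.linalg.inv(A.T @ A) @ A.T          # projector onto R(A)

print(np.allclose(A @ x_ls, P @ b))           # Ax_LS is the projection of b onto R(A)
print(np.allclose(A.T @ (b - A @ x_ls), 0))   # residual orthogonal to R(A): normal equations

# rho^2_LS is the squared norm of the component of b in R(A)-perp.
r_ls = (np.eye(3) - P) @ b
print(np.allclose(np.sum(r_ls**2), np.sum((A @ x_ls - b)**2)))
```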

2.1.4 Properties of the LS Estimate

Here we consider the regression equation (2.1.11) again. It is reproduced below for convenience.

In order to discuss useful and interesting properties of the LS estimate we make the following assumptions:

A1: n is a zero-mean random vector with uncorrelated elements; i.e., E(nn^T) = σ²I.

A2: A is a constant matrix which is known with negligible error. That is, there is no uncertainty in A.

Under A1 and A2, we have the following properties of the LS estimate given by equation (2.1.26).

x_LS is an Unbiased Estimate of x_0, the True Value

To show this, we have from equation (2.1.26)

But from the regression equation (2.1.29), we realize that the observed data b are generated from the true values x_0 of x. Hence from equation (2.1.29)

Therefore E(x_LS) is given as

which follows because n is zero mean from assumption A1. Therefore the expectation of x_LS is its true value x_0, and x_LS is unbiased.

Covariance Matrix of x_LS

The definition of the covariance matrix cov(x_LS) of the non-zero-mean process x_LS is:

For these purposes we define E(x_LS) as

Substituting equations (2.1.34) and (2.1.26) in (2.1.33), we have

From assumption A2 we can move the expectation operator inside. Therefore,
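Under assumptions A1 and A2, both the unbiasedness of x_LS and the covariance of equation (2.1.36), which for white noise is the standard result cov(x_LS) = σ²(A^T A)^{-1}, can be checked by Monte Carlo simulation. A sketch with assumed A, x_0 and σ (all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, sigma = 20, 3, 0.5
A = rng.standard_normal((m, n))          # known constant matrix (assumption A2)
x0 = np.array([1.0, -2.0, 0.5])          # assumed true parameter vector

trials = 20000
X = np.empty((trials, n))
for t in range(trials):
    noise = sigma * rng.standard_normal(m)   # zero mean, E(nn^T) = sigma^2 I (assumption A1)
    b = A @ x0 + noise
    X[t] = np.linalg.solve(A.T @ A, A.T @ b)

print("empirical mean :", X.mean(axis=0))        # ~ x0, i.e. unbiased
print("theoretical cov:")
print(sigma**2 * np.linalg.inv(A.T @ A))
print("empirical cov  :")
print(np.cov(X, rowvar=False))
```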

x_LS is a BLUE

According to equation (2.1.26), we see that x_LS is a linear estimate since it is a linear transformation of b, where the transformation matrix is (A^T A)^{-1} A^T. Further, from the preceding section we see that x_LS is unbiased. With the following theorem, we show that x_LS is the best linear unbiased estimator (BLUE).

Probability Density Function of x_LS

It is a fundamental property of Gaussian-distributed random variables that any linear transformation of a Gaussian-distributed quantity is also Gaussian. From equation (2.1.26) we see that x_LS is a linear transformation of b, which is Gaussian by hypothesis. Since the Gaussian pdf is completely specified by the expectation and covariance, given respectively by equations (2.1.32) and (2.1.36), x_LS has the Gaussian pdf given by

We see that the elliptical joint confidence region of x_LS is the set of points ψ defined as

where k is some constant which determines the probability level that an observation will fall within ψ. Note that if the joint confidence region becomes elongated in any direction, then the variance of the associated components of x_LS becomes large. Let us rewrite the quadratic form in equation (2.1.44) as

Theorem 2

The least-squares estimate x_LS will have large variances if at least one of the eigenvalues of A^T A is small, where the associated eigenvectors have significant components along the x-axes.
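Theorem 2 can be illustrated numerically. Since cov(x_LS) = σ²(A^T A)^{-1} = σ² V diag(1/λ_i) V^T, a small eigenvalue of A^T A inflates the variance along the associated eigenvector. A sketch with an assumed, nearly collinear A (illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 0.1

# Nearly collinear columns -> one small eigenvalue of A^T A.
a1 = rng.standard_normal(50)
A = np.column_stack([a1, a1 + 1e-3 * rng.standard_normal(50)])

lam, V = np.linalg.eigh(A.T @ A)
cov = sigma**2 * np.linalg.inv(A.T @ A)

print("eigenvalues of A^T A:", lam)
print("variances of x_LS   :", np.diag(cov))   # large when an eigenvalue is small
```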

Maximum-Likelihood Property

In this vein, the least-squares estimate x_LS is the maximum-likelihood estimate of x_0. To show this property, we first investigate the probability density function of n = Ax − b, given for the more general case where cov(n) = Σ:

2.1.5 Linear Least-Squares Estimation and the Cramer-Rao Lower Bound

In this section we discuss the relationship between the Cramer-Rao lower bound (CRLB) and the linear least-squares estimate. We first discuss the CRLB itself, and then go on to discuss the relationship between the CRLB and linear least-squares estimation in white and coloured noise.

The Cramer-Rao Lower Bound

Here we assume that the observed data b are generated from the model (2.1.29), for the specific case when the noise n is a joint Gaussian zero-mean process. In order to address the CRLB, we consider a matrix J defined by

In our case, J is defined as a matrix of second derivatives related to equation (2.1.45). The constant terms preceding the exponent are not functions of x, and so are not relevant with regard to the differentiation. Thus we need to consider only the exponential term of equation (2.1.45). Because of the ln(·) operation, J reduces to the second-derivative matrix of the quadratic form in the exponent. This second-derivative matrix is referred to as the Hessian. The expectation operator of equation (2.1.46) is redundant in our specific case because all the second-derivative quantities are constant. Thus,

Using the analysis of the preceding sections, it is easy to show that

Least-Squares Estimation and the CRLB for White Noise

Using equation (2.1.45), we now evaluate the CRLB for data generated according to the linear regression model of (2.1.11), for the specific case of white noise where Σ = σ²I. That is, if we observe data which obey the model (2.1.11), what is the lowest possible variance on the estimates given by equation (2.1.26)? From (2.1.48),

Least-Squares Estimation and the CRLB for Coloured Noise

In this case, we consider Σ to be an arbitrary covariance matrix, i.e., E(nn^T) = Σ. By substituting equation (2.1.45) and evaluating, we can easily show that the Fisher information matrix J for this case is given by

We now develop the version of the covariance matrix of the LS estimate corresponding to equation (2.1.36) for the coloured-noise case. Suppose we use the normal equation (2.1.23) to produce the estimate x_LS for this coloured-noise case. Using the same analysis as before, except using E[(b − Ax_0)(b − Ax_0)^T] = Σ instead of σ²I, we get:

Notice that in the coloured-noise case, when the noise is pre-whitened as in equation (2.1.53), the resulting matrix cov(x_LS) is equivalent to J^{-1} in equation (2.1.51), which is the corresponding form of the CRLB; i.e., equality with the bound is now achieved, provided the noise is pre-whitened.

Hence, in the presence of coloured noise with known covariance matrix, pre-whitening the noise before applying the linear least-squares estimation procedure also results in an MVUE of x. We have seen this is not the case when the noise is not pre-whitened.
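A sketch of the pre-whitening idea, under the assumption that Σ is known: factor Σ = LL^T (Cholesky), multiply both sides of b = Ax_0 + n by L^{-1} so that the transformed noise is white, and apply ordinary least squares. The resulting estimate has covariance (A^T Σ^{-1} A)^{-1}, matching J^{-1}, the CRLB form referred to in equation (2.1.51). The particular A, x_0 and Σ below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 30, 3
A = rng.standard_normal((m, n))
x0 = np.array([1.0, 0.0, -1.0])

# An arbitrary (illustrative) positive-definite noise covariance Sigma.
B = rng.standard_normal((m, m))
Sigma = B @ B.T + m * np.eye(m)

L = np.linalg.cholesky(Sigma)             # Sigma = L L^T
n_col = L @ rng.standard_normal(m)        # coloured noise with covariance Sigma
b = A @ x0 + n_col

# Pre-whiten: solve the ordinary LS problem  min || L^{-1} A x - L^{-1} b ||_2.
Aw = np.linalg.solve(L, A)
bw = np.linalg.solve(L, b)
x_pw, *_ = np.linalg.lstsq(Aw, bw, rcond=None)

# Covariance of the pre-whitened LS estimate equals J^{-1} = (A^T Sigma^{-1} A)^{-1}.
J = A.T @ np.linalg.solve(Sigma, A)
print("estimate             :", x_pw)
print("CRLB (J^-1) diagonal :", np.diag(np.linalg.inv(J)))
```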

2.2 A Generalized "Pseudo-Inverse" Approach to Solving the Least-Squares Problem

2.2.1 Least Squares Solution Using the SVD

Previously we have seen that the LS problem may be posed as

where the observation b is generated from the regression model b = Ax_0 + n. For the case where A is full rank, we saw that the solution x_LS which solves the LS problem is given by the normal equation

We are given A ∈ R^{m×n}, m > n and rank(A) = r ≤ n. If the SVD of A is given as UΣV^T, then the pseudo-inverse A⁺ of A is defined by

The matrix Σ⁺ is related to Σ in the following way. If

then

Theorem

When A is rank-deficient, the unique solution x_LS minimizing ‖Ax − b‖₂ such that ‖x‖₂ is minimum is given by

where A⁺ is defined by equation (2.2.3). Further, we have
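A sketch of the SVD construction of A⁺ for a rank-deficient A (the example matrix is an illustrative assumption): following the standard Moore-Penrose construction, only the r nonzero singular values are inverted, Σ is transposed, and A⁺ = VΣ⁺U^T. The result is compared against NumPy's np.linalg.pinv and checked to give the minimum-norm LS solution.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, r = 6, 4, 2                          # rank-deficient: rank(A) = r < n
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)                # A = U Sigma V^T
tol = max(m, n) * np.finfo(float).eps * s[0]
s_plus = np.array([1.0 / si if si > tol else 0.0 for si in s])

Sigma_plus = np.zeros((n, m))              # Sigma^+ : transposed shape, inverted nonzero sigmas
Sigma_plus[:len(s), :len(s)] = np.diag(s_plus)

A_plus = Vt.T @ Sigma_plus @ U.T           # A^+ = V Sigma^+ U^T
print(np.allclose(A_plus, np.linalg.pinv(A)))     # matches the library pseudo-inverse

# x_LS = A^+ b is the minimum-norm minimizer of ||Ax - b||_2.
x_ls = A_plus @ b
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)     # lstsq also returns the min-norm solution
print(np.allclose(x_ls, x_ref))
```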

2.2.2 Interpretation of the Pseudo-Inverse

Geometrical Interpretation

Let us now take another look at the geometry of least squares, by considering a simple LS problem for the case A ∈ R^{2×1}. We again see that x_LS is the solution which corresponds to projecting b onto R(A). In fact, substituting into the expression Ax_LS, we get

But, for the specific case where m > n, we know from our previous discussion on linear least squares that

where P is the projector onto R(A). Comparing equations (2.2.18) and (2.2.19), and noting that the projector is unique, we have

Thus, the matrix AA⁺ is a projector onto R(A).

This may also be seen in a different way, as follows. Using the definition of A⁺, we have

where I_r is the r×r identity and U_r = [u_1, …, u_r]. From our discussion on projectors, we know U_r U_r^T is also a projector onto R(A), which is the same as the column space of A.
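Both claims, that AA⁺ is the orthogonal projector onto R(A) and that it equals U_r U_r^T, can be verified directly. A short sketch with an assumed rank-deficient A:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 5, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank(A) = r

A_plus = np.linalg.pinv(A)
P = A @ A_plus                                   # candidate projector onto R(A)

U, s, Vt = np.linalg.svd(A)
Ur = U[:, :r]                                    # first r left singular vectors span R(A)

print(np.allclose(P, P @ P))                     # idempotent
print(np.allclose(P, P.T))                       # symmetric (orthogonal projector)
print(np.allclose(P, Ur @ Ur.T))                 # equals U_r U_r^T
print(np.allclose(P @ A, A))                     # leaves R(A) unchanged
```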

Relationship of the Pseudo-Inverse Solution to the Normal Equations

Suppose A ∈ R^{m×n}, m > n. The normal equations give us

but the pseudo-inverse gives:

In the full-rank case, these two quantities must be equal. We can indeed show this is the case as follows. We let

be the eigendecomposition (ED) of A^T A, and we let the SVD of A^T be defined as

Using these relations we have

as desired, where the last line follows from the relation above. Thus, for the full-rank case with m > n, A⁺ = (A^T A)^{-1} A^T. In a similar way, we can also show that A⁺ = A^T (AA^T)^{-1} for the case m < n.
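The full-rank identities above are easy to confirm numerically; a sketch with assumed random full-rank matrices (tall for m > n, wide for m < n):

```python
import numpy as np

rng = np.random.default_rng(8)

# Tall, full column rank: A^+ = (A^T A)^{-1} A^T.
A = rng.standard_normal((7, 3))
print(np.allclose(np.linalg.pinv(A), np.linalg.inv(A.T @ A) @ A.T))

# Wide, full row rank: A^+ = A^T (A A^T)^{-1}.
B = rng.standard_normal((3, 7))
print(np.allclose(np.linalg.pinv(B), B.T @ np.linalg.inv(B @ B.T)))
```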

The Pseudo-Inverse as a Generalized Linear System Solver

If we are willing to accept the
