Python High-Dimensional Data Analysis: Teaching Slides

Contents

Chapter 1  Basis of Matrix Calculation
Chapter 2  The Solution of Least Squares Problems
Chapter 3  Principal Component Analysis
Chapter 4  Partial Least Squares Analysis
Chapter 5  Regularization
Chapter 6  Transfer Method

Chapter 1  Basis of Matrix Calculation

1.1 Fundamental Concepts
1.2 The Most Basic Matrix Decomposition
1.3 Singular Value Decomposition (SVD)
1.4 The Quadratic Form

1.1 Fundamental Concepts

The purpose of this chapter is to review important fundamental concepts in linear algebra, as a foundation for the rest of the course. We first discuss the fundamental building blocks, such as an overview of matrix multiplication from a "big block" perspective, linear independence, subspaces and related ideas, rank, etc., upon which the rigor of linear algebra rests. We then discuss vector norms, and various interpretations of the matrix multiplication operation.

1.1.1 Notation

Throughout this course, we shall indicate that a matrix A is of dimension m×n, and whose elements are taken from the set of real numbers, by the notation A ∈ R^{m×n}. This means that the matrix A belongs to the Cartesian product of the real numbers, taken m×n times, one for each element of A. In a similar way, the notation A ∈ C^{m×n} means the matrix is of dimension m×n, and the elements are taken from the set of complex numbers. By the matrix dimension m×n, we mean A consists of m rows and n columns.

Similarly, the notation a ∈ R^m (C^m) implies a vector of dimension m whose elements are taken from the set of real (complex) numbers. By "dimension of a vector", we mean its length, i.e., that it consists of m elements.

Also, we shall indicate that a scalar a is from the set of real (complex) numbers by the notation a ∈ R (C). Thus, an uppercase bold character denotes a matrix, a lowercase bold character denotes a vector, and a lowercase non-bold character denotes a scalar.

By convention, a vector by default is taken to be a column vector. Further, for a matrix A, we denote its i-th column as a_i. We also imply that its j-th row is a_j^T, even though this notation may be ambiguous, since it may also be taken to mean the transpose of the j-th column. The context of the discussion will help to resolve the ambiguity.

1.1.2 "Bigger-Block" Interpretations of Matrix Multiplication

Let us define the matrix product C as

C = AB,  A ∈ R^{m×k}, B ∈ R^{k×n}, C ∈ R^{m×n}.    (1.1.1)

Inner-Product Representation

If a and b are column vectors of the same length, then the scalar quantity a^T b is referred to as the inner product of a and b. If we define a_i^T ∈ R^k as the i-th row of A and b_j ∈ R^k as the j-th column of B, then the element c_ij of C is defined as the inner product a_i^T b_j. This is the conventional small-block representation of matrix multiplication.

Column Representation

This is the next bigger-block view of matrix multiplication. Here we look at forming the product one column at a time. The j-th column c_j of C may be expressed as a linear combination of the columns a_i of A, with coefficients which are the elements of the j-th column of B. Thus,

c_j = Σ_{i=1}^{k} b_ij a_i,  j = 1, 2, …, n.    (1.1.2)

Outer-Product Representation

This is the largest-block representation. Let us define a column vector a ∈ R^m and a row vector b^T ∈ R^n. Then the outer product of a and b is an m×n matrix of rank one, defined as ab^T. Now let a_i and b_i^T be the i-th column of A and the i-th row of B, respectively. Then the product C may also be expressed as

C = Σ_{i=1}^{k} a_i b_i^T.
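Since this is a Python course, the three views can be checked numerically. The NumPy sketch below (sizes and seed are illustrative choices, not from the slides) verifies that all three representations reproduce A @ B:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # A in R^{m x k}, m=4, k=3
B = rng.standard_normal((3, 5))   # B in R^{k x n}, n=5
C = A @ B                         # reference product

# Inner-product view: c_ij is the inner product of row i of A and column j of B.
C_inner = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
                    for i in range(A.shape[0])])

# Column view: column j of C is a linear combination of the columns of A,
# with coefficients taken from column j of B.
C_col = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

# Outer-product view: C is the sum of k rank-one matrices.
C_outer = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))

assert np.allclose(C, C_inner) and np.allclose(C, C_col) and np.allclose(C, C_outer)
```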

Matrix Pre- and Post-Multiplication

Let us now look at some fundamental ideas distinguishing matrix pre- and post-multiplication. In this respect, consider a matrix A pre-multiplied by B to give Y = BA (all matrices are assumed to have conformable dimensions). Then we can interpret this multiplication as B operating on the columns of A to give the columns of the product. This follows because each column y_i of the product is a transformed version of the corresponding column of A; i.e., y_i = B a_i, i = 1, 2, …, n. Likewise, let us consider A post-multiplied by a matrix C to give X = AC. Then, we interpret this multiplication as C operating on the rows of A, because each row x_j^T of the product is a transformed version of the corresponding row of A; i.e., x_j^T = a_j^T C, j = 1, 2, …, m, where we define a_j^T as the j-th row of A.

Example 1:

Consider an orthonormal matrix Q of appropriate dimension. We know that multiplication by an orthonormal matrix results in a rotation operation. The operation QA rotates each column of A. The operation AQ rotates each row.
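A minimal numerical illustration (the rotation angle and matrix are our own illustrative choices): pre-multiplying by a 2×2 rotation matrix Q rotates each column of A while preserving its length.

```python
import numpy as np

theta = np.pi / 6
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthonormal (rotation) matrix

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 2.0]])

# Pre-multiplication rotates each column; column norms are preserved.
print(np.linalg.norm(A, axis=0))       # original column lengths
print(np.linalg.norm(Q @ A, axis=0))   # identical lengths after rotation
```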

There is another way to interpret pre-multiplication and post-multiplication. Again consider the matrix A pre-multiplied by B to give Y = BA. Then, according to equation (1.1.2), the j-th column y_j of Y is a linear combination of the columns of B, whose coefficients are the j-th column of A. Likewise, for X = AB, we can say that the i-th row x_i^T of X is a linear combination of the rows of B, whose coefficients are the i-th row of A.

Either of these interpretations is equally valid. Being comfortable with the representations of this section is a big step in mastering the field of linear algebra.

1.1.3 Fundamental Linear Algebra

Linear Independence

Suppose we have a set of n m-dimensional vectors {a_1, a_2, …, a_n}, where a_i ∈ R^m, i = 1, 2, …, n. This set is linearly independent under the condition

Σ_{j=1}^{n} c_j a_j = 0  if and only if  c_1 = c_2 = … = c_n = 0.    (1.1.4)

A set of n vectors is linearly independent if an n-dimensional space may be formed by taking all possible linear combinations of the vectors. If the dimension of the space is less than n, then the vectors are linearly dependent. The concepts of a vector space and of the dimension of a vector space are made more precise later.

Note that a set of vectors {a_1, a_2, …, a_n}, where n > m, cannot be linearly independent.

Example 2:

This set is linearly independent. On the other hand, the set is not. This follows because the third column is a linear combination of the first two: −1 times the first column plus −1 times the second equals the third column. Thus, the coefficients c_j in equation (1.1.4) resulting in zero are any scalar multiple of [1, 1, 1]^T.
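Numerically, linear independence of columns can be checked by comparing the matrix rank with the number of columns. The sketch below uses illustrative matrices built so that the third column equals −1 times the first plus −1 times the second, echoing this example:

```python
import numpy as np

# Columns of A1 are linearly independent; A2's third column is a combination
# of its first two, so its rank drops below the number of columns.
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
A2 = np.array([[1.0, 0.0, -1.0],
               [0.0, 1.0, -1.0],
               [1.0, 1.0, -2.0]])

print(np.linalg.matrix_rank(A1))  # 2 -> columns independent
print(np.linalg.matrix_rank(A2))  # 2 < 3 -> columns dependent
```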

Span, Range and Subspaces

Span

The span of a vector set [a_1, a_2, …, a_n], written as span[a_1, a_2, …, a_n], where a_i ∈ R^m, is the set of points mapped by

span[a_1, a_2, …, a_n] = { y ∈ R^m | y = Σ_{j=1}^{n} c_j a_j, c_j ∈ R }.

The set of vectors in a span is referred to as a vector space. The dimension of a vector space is the number of linearly independent vectors in the linear combination which forms the space. Note that the vector space dimension is not the dimension (length) of the vectors forming the linear combinations.

Example 3:

Consider the following two vectors in Fig. 1.1.

Fig. 1.1  The span of these vectors is the (infinite extension of the) plane of the paper

Subspaces

Given a set (space) of vectors [a_1, a_2, …, a_n] ∈ R^m, m ≥ n, a subspace S is a vector subset that satisfies two requirements:

1. If x and y are in the subspace, then x + y is still in the subspace.

2. If we multiply any vector x in the subspace by a scalar c, then cx is still in the subspace.

Range

The range of a matrix A ∈ R^{m×n}, denoted R(A), is a subspace (set of vectors) satisfying

R(A) = { y ∈ R^m | y = Ax, x ∈ R^n }.

Example 4:

R(A) is the set of all linear combinations of any two columns of A. In the case when n < m (i.e., A is a tall matrix), it is important to note that R(A) is indeed a subspace of the m-dimensional "universe" R^m. In this case, the dimension of R(A) is less than or equal to n. Thus, R(A) does not span the whole universe, and therefore is a subspace of it.

Maximally Independent Set

This is a vector set which cannot be made larger without losing independence, and smaller without remaining maximal; i.e., it is a set containing the maximum number of independent vectors spanning the space.

A Basis

A basis for a subspace is any maximally independent set within the subspace. It is not unique.

Example 5:

A basis for the subspace S spanning the first 2 columns of

is

Orthogonal Complement Subspace

If we have a subspace S of dimension n consisting of vectors [a_1, a_2, …, a_n], a_i ∈ R^m, i = 1, 2, …, n, for n ≤ m, the orthogonal complement subspace S⊥ of S, of dimension m−n, is defined as

S⊥ = { y ∈ R^m | y^T x = 0 for all x ∈ S },

i.e., any vector in S⊥ is orthogonal to any vector in S. The quantity S⊥ is pronounced "S-perp".

Example 6:

Take the vector set defining S from Example 5:

then, a basis for S⊥ is
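Numerically, a basis for S⊥ can be obtained as the null space of A^T. A small sketch (the matrix is an illustrative choice):

```python
import numpy as np
from scipy.linalg import null_space

# Columns of A span a subspace S of R^3 (an illustrative choice).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# S-perp is the set of vectors orthogonal to every column of A,
# i.e. the null space of A^T.
S_perp = null_space(A.T)
print(S_perp)        # here: a basis for span{e3}
print(A.T @ S_perp)  # ~0: orthogonal to all of S
```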

Rank

Rank is an important concept which we will use frequently throughout this course. We briefly describe only a few basic features of rank here. The idea is expanded more fully in the following sections.

1. The rank of a matrix is the maximum number of linearly independent rows or columns. Thus, it is the dimension of a basis for the columns (rows) of a matrix.

2. The rank of A (denoted rank(A)) is the dimension of R(A).

3. If A = BC, and r1 = rank(B), r2 = rank(C), then rank(A) ≤ min(r1, r2).

4. A matrix A ∈ R^{m×n} is said to be rank deficient if its rank is less than min(m, n). Otherwise, it is said to be full rank.

5. If A is square and rank deficient, then det(A) = 0.

6. It can be shown that rank(A) = rank(A^T). More is said on this point later.

Example 7:

The rank of A in Example 6 is 3, whereas the rank of A in Example 4 is 2.

Null Space of A

The null space N(A) of A is defined as

N(A) = { x ∈ R^n | Ax = 0 }.

Example 8:

Let A be as before in Example 4. Then N(A) = c(1, 1, −2)^T, where c is a real constant.

A further example is as follows. Take 3 vectors [a_1, a_2, a_3], where a_i ∈ R^3, i = 1, 2, 3, that are constrained to lie in a 2-dimensional plane. Then there exists a zero linear combination of these vectors. The coefficients of this linear combination define a vector x which is in the null space of A = [a_1, a_2, a_3]. In this case, we see that A is rank deficient.

Another important characterization of a matrix is its nullity. The nullity of A is the dimension of the null space of A. In Example 8 above, the nullity of A is one. We then have the following interesting property: rank(A) + nullity(A) = n.
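The null space, nullity, and the rank-plus-nullity property can be checked directly; the construction below (three vectors confined to a plane) is an illustrative choice mirroring the example above:

```python
import numpy as np
from scipy.linalg import null_space

# Three vectors in R^3 confined to a plane (third = first + second),
# so A is rank deficient.
a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + a2
A = np.column_stack([a1, a2, a3])

r = np.linalg.matrix_rank(A)
N = null_space(A)                    # basis vectors of N(A)
print(r, N.shape[1])                 # rank 2, nullity 1
print(r + N.shape[1] == A.shape[1])  # rank + nullity = n -> True
```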

1.1.4 Four Fundamental Subspaces of a Matrix

The four matrix subspaces of concern are: the column space, the row space, and their respective orthogonal complements. The development of these four subspaces is closely linked to N(A) and R(A). We assume for this section that A ∈ R^{m×n}, r ≤ min(m, n), where r = rank(A).

The Column Space

This is simply R(A). Its dimension is r. It is the set of all linear combinations of the columns of A.

The Orthogonal Complement of the Column Space

This may be expressed as R(A)⊥, with dimension m−r. It may be shown to be equivalent to N(A^T), as follows. By definition, N(A^T) is the set of x satisfying

A^T x = 0,

and any such x is orthogonal to every column of A, hence to all of R(A).

The Row Space

The row space is defined simply as R(A^T), with dimension r. The row space is the range of the rows of A, or the subspace spanned by the rows, or the set of all possible linear combinations of the rows of A.

The Orthogonal Complement of the Row Space

This may be denoted as R(A^T)⊥. Its dimension is n−r. This set must be that which is orthogonal to all rows of A; i.e., for x to be in this space, x must satisfy

Ax = 0.    (1.1.16)

Thus, the set x, which is the orthogonal complement of the row space satisfying equation (1.1.16), is simply N(A).

We have noted before that rank(A) = rank(A^T). Thus, the dimensions of the row and column subspaces are equal. This is surprising, because it implies the number of linearly independent rows of a matrix is the same as the number of linearly independent columns. This holds regardless of the size or rank of the matrix. It is not an intuitively obvious fact, and there is no immediately obvious reason why this should be so. Nevertheless, the rank of a matrix is the number of independent rows or columns.
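The four subspaces and their dimensions can be computed numerically; in the sketch below (an illustrative rank-2 matrix), scipy's orth and null_space return orthonormal bases for each:

```python
import numpy as np
from scipy.linalg import null_space, orth

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0],
              [1.0, 1.0, -2.0],
              [2.0, 1.0, -3.0]])   # m=4, n=3, rank r=2

col  = orth(A)           # basis of R(A),    dim r
row  = orth(A.T)         # basis of R(A^T),  dim r
n_A  = null_space(A)     # basis of N(A),    dim n - r
n_AT = null_space(A.T)   # basis of N(A^T),  dim m - r
print(col.shape[1], row.shape[1], n_A.shape[1], n_AT.shape[1])  # 2 2 1 2
```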

1.1.5 Vector Norms

A vector norm is a means of expressing the length or distance associated with a vector. A norm on a vector space R^n is a function f which maps a point in R^n into a point in R. Formally, this is stated mathematically as f: R^n → R. We denote the function f(x) as ‖x‖.

The p-norms: This is a useful class of norms, generalizing on the idea of the Euclidean norm. They are defined by

‖x‖_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p},  p ≥ 1.    (1.1.17)

If p = 1:

‖x‖_1 = Σ_{i=1}^{n} |x_i|,

which is simply the sum of absolute values of the elements.

If p = 2:

‖x‖_2 = ( Σ_{i=1}^{n} |x_i|² )^{1/2},

which is the familiar Euclidean norm.

If p = ∞:

‖x‖_∞ = max_i |x_i|,

which is the largest element of x. This may be shown in the following way. As p → ∞, the largest term within the round brackets in equation (1.1.17) dominates all the others. Therefore equation (1.1.17) may be written as

‖x‖_∞ = lim_{p→∞} ( Σ_{i=1}^{n} |x_i|^p )^{1/p} = max_i |x_i|.
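A quick NumPy check of the three common p-norms, using an illustrative vector; the loop also shows the p-norm approaching the ∞-norm as p grows:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])   # illustrative vector

print(np.linalg.norm(x, 1))       # 8.0   -> sum of absolute values
print(np.linalg.norm(x, 2))       # ~5.1  -> Euclidean length
print(np.linalg.norm(x, np.inf))  # 4.0   -> largest absolute element

# As p grows, the p-norm approaches the infinity-norm.
for p in (1, 2, 10, 100):
    print(p, np.linalg.norm(x, p))
```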

1.1.6 Determinants

Consider a square matrix A ∈ R^{m×m}. We can define the matrix A_ij as the submatrix obtained from A by deleting the i-th row and j-th column of A. The scalar number det(A_ij) (where det(·) denotes determinant) is called the minor associated with the element a_ij of A. The signed minor c_ij = (−1)^{i+j} det(A_ij) is called the cofactor of a_ij.

The determinant of A is the m-dimensional volume contained within the columns (rows) of A. This interpretation of determinant is very useful, as we see shortly. The determinant of a matrix may be evaluated by the expression

det(A) = Σ_{j=1}^{m} a_ij c_ij,  for any row i (expansion along any column is equivalent).
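As an illustration of the cofactor expansion (for exposition only; its cost grows factorially, so np.linalg.det should be used in practice), a minimal recursive sketch:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (O(m!) --
    illustration only)."""
    m = A.shape[0]
    if m == 1:
        return A[0, 0]
    total = 0.0
    for j in range(m):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # submatrix A_0j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_cofactor(A), np.linalg.det(A))  # both 8.0
```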

1.1.7 Properties of Determinants

Before we begin this discussion, let us define the volume of a parallelepiped defined by the set of column vectors comprising a matrix as the principal volume of that matrix. We have the following properties of determinants, which are stated without proof:

1. det(AB) = det(A) det(B), A, B ∈ R^{m×m}.

2. det(A) = det(A^T).

3. det(cA) = c^m det(A), c ∈ R, A ∈ R^{m×m}.

4. det(A) = 0 ⇔ A is singular.

5. det(A) = Π_{i=1}^{m} λ_i, where λ_i are the eigenvalues of A.

6. The determinant of an orthonormal matrix is ±1.

7. If A is nonsingular, then det(A^{-1}) = [det(A)]^{-1}.

8. If B is nonsingular, then det(B^{-1}AB) = det(A).

9. If B is obtained from A by interchanging any two rows (or columns), then det(B) = −det(A).

10. If B is obtained from A by adding a scalar multiple of one row to another (or a scalar multiple of one column to another), then det(B) = det(A).
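A couple of these properties can be spot-checked numerically; the matrices below are random illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Property 1: det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# Property 5: det(A) equals the product of the eigenvalues
lam = np.linalg.eigvals(A)
print(np.isclose(np.linalg.det(A), np.prod(lam).real))
```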

1.2 The Most Basic Matrix Decomposition

1.2.1 Gaussian Elimination

In this section we discuss the concept of Gaussian elimination in some detail, but we present here a very quick review, by example, of the elementary approach to Gaussian elimination.

Given the system of equations

Ax = b,

where A ∈ R^{3×3} is nonsingular, the above system can be expanded into the form

a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 = b_2
a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3.

To solve the system, we transform it into the following upper triangular system by Gaussian elimination:

Ux = b,    (1.2.3)

using a sequence of elementary row operations as follows. Once A has been triangularized, the solution x is obtained by applying backward substitution to the system Ux = b. With this procedure, x_n is first determined from the last equation of (1.2.3). Then x_{n−1} may be determined from the second-last row, etc. The algorithm may be summarized by the following schema:

x_i = ( b_i − Σ_{j=i+1}^{n} u_ij x_j ) / u_ii,  i = n, n−1, …, 1.
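A minimal back-substitution sketch following this schema (the triangular system shown is an illustrative choice):

```python
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for upper triangular, nonsingular U."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # last equation first
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([5.0, 8.0, 8.0])
x = back_substitution(U, b)
print(x, np.allclose(U @ x, b))   # solution checks out
```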

What about the Accuracy of Back Substitution?

With operations on floating point numbers, we must be concerned about the accuracy of the result, since the floating point numbers themselves contain error. We want to know if it is possible that the small errors in the floating point representation of real numbers can lead to large errors in the computed result. In this vein, we can show that the computed solution x̂ obtained by back substitution satisfies the expression

(U + F) x̂ = b,

where F is a perturbation matrix whose entries are small relative to those of U, on the order of the unit roundoff.

1.2.2 The LU Decomposition

Suppose we can find a lower triangular matrix L ∈ R^{n×n} with ones along the main diagonal and an upper triangular matrix U ∈ R^{n×n} such that

A = LU.

This decomposition of A is referred to as the LU decomposition. To solve the system Ax = b, or LUx = b, we define the variable z as z = Ux, and then solve Lz = b for z by forward substitution, followed by Ux = z for x by backward substitution.
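In practice, the factorization and the two triangular solves can be done with scipy (which adds row pivoting for stability, so it returns A = PLU rather than plain LU); an illustrative sketch:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 3.0, 2.0],
              [2.0, 4.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

P, L, U = lu(A)                               # A = P L U (with pivoting)
z = solve_triangular(L, P.T @ b, lower=True)  # forward substitution: Lz = P^T b
x = solve_triangular(U, z)                    # backward substitution: Ux = z
print(np.allclose(A @ x, b))                  # True
```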

1.2.3 The LDM Factorization

If no zero pivots are encountered during the Gaussian elimination process, then there exist unit lower triangular matrices L and M and a diagonal matrix D such that

A = LDM^T.

Justification

Since A = LU exists, let U = DM^T, where d_i = u_ii; hence, A = LDM^T, which was to be shown. Each row of M^T is the corresponding row of U divided by its diagonal element, so M^T is unit upper triangular.

We then solve the system Ax = b, which is equivalent to LDM^T x = b, in three steps:

1. Let y = DM^T x and solve Ly = b for y (n² flops).

2. Let z = M^T x and solve Dz = y for z (n flops).

3. Solve M^T x = z for x (n² flops).

1.2.4 The LDL Decomposition for Symmetric Matrices

For a symmetric non-singular matrix A ∈ R^{n×n}, the factors L and M are identical, so that A = LDL^T.

Proof

Let A = LDM^T. The matrix M^{-1}AM^{-T} = M^{-1}LD is symmetric (from the left-hand side) and lower triangular (from the right-hand side). Hence, it is diagonal. But D is nonsingular, so M^{-1}L is also diagonal. The matrices M and L are both unit lower triangular (ULT). It can be easily shown that the inverse of a ULT matrix is also ULT, and furthermore, the product of ULTs is ULT. Therefore M^{-1} is ULT, and so is M^{-1}L. Thus M^{-1}L = I; M = L.
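scipy provides this factorization directly; a small sketch with an illustrative symmetric matrix (scipy may permute rows for stability):

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])    # symmetric, illustrative

L, D, perm = ldl(A)                # A = L D L^T
print(np.allclose(L @ D @ L.T, A)) # True
print(np.diag(D))                  # the diagonal factor
```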

1.2.5 Cholesky Decomposition

We now consider several modifications to the LU decomposition, which ultimately lead up to the Cholesky decomposition. These modifications are 1) the LDM decomposition, 2) the LDL decomposition on symmetric matrices, and 3) the LDL decomposition on positive definite symmetric matrices. The Cholesky decomposition is relevant only for square symmetric positive definite matrices and is an important concept in signal processing. Several examples of the use of the Cholesky decomposition are provided at the end of the section.

For A ∈ R^{n×n} symmetric and positive definite, there exists a lower triangular matrix G ∈ R^{n×n} with positive diagonal entries, such that A = GG^T.
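A quick numerical illustration (the positive definite matrix is constructed, as one common illustrative device, from B^T B plus a small ridge):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B.T @ B + 0.1 * np.eye(4)      # symmetric positive definite

G = np.linalg.cholesky(A)          # lower triangular, positive diagonal
print(np.allclose(G @ G.T, A))     # True
print(np.diag(G) > 0)              # all True
```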

1.2.6 Applications and Examples of the Cholesky Decomposition

Generating Vector Processes with Desired Covariance

We may use the Cholesky decomposition to generate a random vector process with a desired covariance matrix Σ ∈ R^{n×n}. Since Σ must be symmetric and positive definite, let Σ = GG^T be its Cholesky decomposition. If w is a white random vector with E(ww^T) = I, then the process x = Gw has covariance E(xx^T) = G E(ww^T) G^T = GG^T = Σ, as desired.
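A sketch of this recipe (the target covariance Σ is an illustrative choice): coloring white samples with the Cholesky factor G reproduces Σ in the sample covariance.

```python
import numpy as np

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])          # desired covariance (illustrative)
G = np.linalg.cholesky(Sigma)           # Sigma = G G^T

rng = np.random.default_rng(3)
W = rng.standard_normal((2, 100_000))   # white samples, E[w w^T] = I
X = G @ W                               # colored samples

print(np.cov(X))   # sample covariance approaches Sigma
```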

Whitening a Process

This example is essentially the inverse of the one just discussed. Suppose we have a stationary vector process x_i ∈ R^n, i = 1, 2, …, n. This process could be the signals received from the elements of an array of n sensors, it could be sets of n sequential samples of any time-varying signal, or sets of data in a tapped-delay-line equalizer of length n, at time instants t1, t2, …, etc. Let the process x consist of a signal part s_i and a noise part v_i:

x_i = s_i + v_i.

Since the received signal x = s + v, the joint probability density function p(x|s) of the received signal vector x, given the noiseless signal s, in the presence of Gaussian noise samples v with covariance matrix Σ, is simply the pdf of the noise itself, and is given by the multi-dimensional Gaussian probability density function discussed in Section 1.2.1:

p(x|s) = (2π)^{-n/2} det(Σ)^{-1/2} exp( −(1/2)(x − s)^T Σ^{-1} (x − s) ).
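The whitening direction is the inverse operation: multiplying by G^{-1} (applied as a triangular solve) restores an approximately identity covariance. An illustrative sketch:

```python
import numpy as np
from scipy.linalg import solve_triangular

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])        # noise covariance (illustrative)
G = np.linalg.cholesky(Sigma)         # Sigma = G G^T

rng = np.random.default_rng(4)
X = G @ rng.standard_normal((2, 100_000))   # correlated (colored) process

# Whitening: w = G^{-1} x has covariance ~ I.
W = solve_triangular(G, X, lower=True)
print(np.cov(W))    # approximately the identity matrix
```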

1.2.7 Eigendecomposition

Eigenvalue decomposition is to decompose a matrix into the following form:

A = QΣQ^{-1},

where Q is the invertible matrix whose columns are the eigenvectors of A (orthogonal when A is symmetric), and Σ = diag(λ1, λ2, …, λn) is a diagonal matrix with each diagonal element being an eigenvalue.

Eigenvalues and Eigenvectors

Suppose we have a matrix A. We investigate its eigenvalues and eigenvectors. Suppose we take the product Ax1, where x1 = [0, 1]^T, as shown in Fig. 1.2.

Example 1:

Consider the matrix given by

It may be easily verified that any vector in span[e2, e3] is an eigenvector associated with the zero repeated eigenvalue.

Property 1

If the eigenvalues of a Hermitian (symmetric) matrix are distinct, then the eigenvectors are orthogonal.

Property 5

If v is an eigenvector of a matrix A, then cv is also an eigenvector, where c is any real or complex constant.

The proof follows directly by substituting cv for v in Av = λv. This means that only the direction of an eigenvector can be unique; its norm is not unique.

Orthonormal Matrices

Before proceeding with the eigendecomposition of a matrix, we must develop the concept of an orthonormal matrix. This form of matrix has mutually orthogonal columns, each of unit norm. This implies that

q_i^T q_j = δ_ij,

where δ_ij is the Kronecker delta, and q_i and q_j are columns of the orthonormal matrix Q. With this in mind, we now consider the product Q^T Q. Since element (i, j) of Q^T Q is exactly q_i^T q_j, we have

Q^T Q = I.    (1.2.32)

Equation (1.2.32) follows directly from the fact that Q has orthonormal columns. It is not so clear that the quantity QQ^T should also equal the identity. We can resolve this question in the following way. Suppose that A and B are any two square invertible matrices such that AB = I. Then, BAB = B. By parsing this last expression, we have (BA)B = B, so BA = I. Taking A = Q^T and B = Q then shows that QQ^T = I as well.

Property 6

The vector 2-norm is invariant under an orthonormal transformation. If Q is orthonormal, then

‖Qx‖₂² = x^T Q^T Q x = x^T x = ‖x‖₂².

The Eigendecomposition (ED) of a Square Symmetric Matrix

Almost all matrices on which EDs are performed (at least in signal processing) are symmetric. A good example is covariance matrices, which are discussed in some detail in the next section.

Let A ∈ R^{n×n} be symmetric. Then, for eigenvalues λ_i and eigenvectors v_i, we have

Av_i = λ_i v_i,  i = 1, 2, …, n.

Let the eigenvectors be normalized to unit 2-norm. Then these n equations can be combined, or stacked side-by-side together, and represented in the following compact form:

AV = VΛ,

where V = [v1, v2, …, vn] (i.e., each column of V is an eigenvector), and Λ = diag(λ1, λ2, …, λn). Corresponding columns from each side of this equation represent one specific value of the index i. Because we have assumed A is symmetric, from Property 1, the v_i are orthogonal. Furthermore, since we have assumed ‖v_i‖₂ = 1, V is an orthonormal matrix. Thus, post-multiplying both sides by V^T and using VV^T = I, we get

A = VΛV^T.
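Numerically, np.linalg.eigh is the routine for symmetric matrices; the sketch below (illustrative matrix) confirms A = VΛV^T and the orthonormality of V:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])               # symmetric, illustrative

lam, V = np.linalg.eigh(A)                    # eigh handles symmetric matrices
print(np.allclose(V @ np.diag(lam) @ V.T, A)) # A = V Lambda V^T -> True
print(np.allclose(V.T @ V, np.eye(3)))        # V is orthonormal  -> True
```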

Matrix p-Norms

A matrix p-norm is defined in terms of a vector p-norm. The matrix p-norm of an arbitrary matrix A, denoted ‖A‖_p, is defined as

‖A‖_p = sup_{x≠0} ‖Ax‖_p / ‖x‖_p,    (1.2.40)

where "sup" means supremum, i.e., the largest value of the argument over all values of x ≠ 0. Since a property of a vector norm is ‖cx‖_p = |c| ‖x‖_p for any scalar c, we can choose c in equation (1.2.40) so that ‖x‖_p = 1. Then, an equivalent statement to equation (1.2.40) is

‖A‖_p = max_{‖x‖_p = 1} ‖Ax‖_p.

Frobenius Norm

The Frobenius norm is the 2-norm of the vector consisting of the 2-norms of the rows (or columns) of the matrix A:

‖A‖_F = ( Σ_{i=1}^{m} Σ_{j=1}^{n} |a_ij|² )^{1/2}.

Properties of Matrix Norms

1. Consider the matrix A ∈ R^{m×n} and the vector x ∈ R^n. Then

‖Ax‖_p ≤ ‖A‖_p ‖x‖_p.

This property follows by dividing both sides of the above by ‖x‖_p, and applying equation (1.2.40).

2. If Q and Z are orthonormal matrices of appropriate size, then

‖QAZ‖₂ = ‖A‖₂

and

‖QAZ‖_F = ‖A‖_F.

Thus, we see that the matrix 2-norm and Frobenius norm are invariant to pre- and post-multiplication by an orthonormal matrix.

3. Further,

‖A‖_F² = tr(A^T A),

where tr(·) denotes the trace of a matrix, which is the sum of its diagonal elements.
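A few of these norm identities can be spot-checked with random illustrative matrices (Q and Z are obtained here via QR factorization):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

fro = np.linalg.norm(A, 'fro')
print(np.isclose(fro**2, np.trace(A.T @ A)))   # ||A||_F^2 = tr(A^T A)
print(np.linalg.norm(A @ x, 2)
      <= np.linalg.norm(A, 2) * np.linalg.norm(x, 2))

# Orthonormal invariance: build Q, Z from QR factorizations.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Z, _ = np.linalg.qr(rng.standard_normal((4, 4)))
print(np.isclose(np.linalg.norm(Q @ A @ Z, 2), np.linalg.norm(A, 2)))
print(np.isclose(np.linalg.norm(Q @ A @ Z, 'fro'), fro))
```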

1.2.9 Covariance Matrices

Here, we investigate the concepts and properties of the covariance matrix Rxx corresponding to a stationary, discrete-time random process x[n]. We break the infinite sequence x[n] into windows of length m, as shown in Fig. 1.3. The windows generally overlap; in fact, they are typically displaced from one another by only one sample. The samples within the i-th window become an m-length vector x_i, i = 1, 2, …, n. Hence, the vector corresponding to each window is a vector sample from the random process x[n]. Processing random signals in this way is the fundamental first step in many forms of electronic system which deal with real signals, such as process identification, control, or any form of communication system including telephones, radio, radar, sonar, etc.

The covariance matrix Rxx ∈ R^{m×m} corresponding to a stationary or WSS process x[n] is defined as

Rxx = E[ (x_i − μ)(x_i − μ)^T ],

where μ is the vector mean of the process and E(·) denotes the expectation operator over all possible windows of index i of length m in Fig. 1.3. Often we deal with zero-mean processes, in which case we have

Rxx = E[ x_i x_i^T ].
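A sketch of this windowing estimate for a zero-mean process (white noise here, so the estimate should approach the identity; window length and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
N, m = 10_000, 4
x = rng.standard_normal(N)          # a zero-mean white process

# Break x[n] into overlapping windows of length m, displaced by one sample.
X = np.lib.stride_tricks.sliding_window_view(x, m)   # shape (N-m+1, m)

# Sample estimate of Rxx = E[x_i x_i^T] for a zero-mean process.
Rxx = X.T @ X / X.shape[0]
print(np.round(Rxx, 2))   # approximately the identity for white noise
```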

