
Lesson 2: Data Preprocessing Techniques
Associate Professor

Outline
- Why preprocess the data?
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization and concept hierarchy generation
- Summary

Why Data Preprocessing?
Data in the real world is dirty:
- Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  e.g., occupation = ""
- Noisy: containing errors or outliers
  e.g., Salary = "-10"
- Inconsistent: containing discrepancies in codes or names
  e.g., Age = "42" but Birthday = "03/07/1997"
  e.g., ratings were "1, 2, 3", now "A, B, C"
  e.g., discrepancies between duplicate records

Why Is Data Dirty?
- Incomplete data comes from:
  - "n/a" data values at collection time
  - different considerations between the time the data was collected and the time it is analyzed
  - human/hardware/software problems
- Noisy data comes from the process of data:
  - collection
  - entry
  - transmission
- Inconsistent data comes from:
  - different data sources
  - functional dependency violations

Why Is Data Preprocessing Important?
- No quality data, no quality mining results!
- Quality decisions must be based on quality data
  e.g., duplicate or missing data may cause incorrect or even misleading statistics.
- A data warehouse needs consistent integration of quality data
- "Data extraction, cleaning, and transformation comprises the majority of the work of building a data warehouse." — Bill Inmon

Multi-Dimensional Measure of Data Quality
A well-accepted multidimensional view:
- Accuracy
- Completeness
- Consistency
- Timeliness
- Believability
- Value added
- Interpretability
- Accessibility
Broad categories: intrinsic, contextual, representational, and accessibility.

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration: integrate multiple databases, data cubes, or files
- Data transformation: normalization and aggregation
- Data reduction: obtain a reduced representation that is much smaller in volume yet produces the same or similar analytical results
- Data discretization: part of data reduction, of particular importance for numerical data

Forms of Data Preprocessing
(figure: the forms of preprocessing — cleaning, integration, transformation, and reduction)
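To make the major tasks concrete, here is a minimal sketch of such a pipeline in Python with pandas. The column names and toy values are made-up illustrations, not data from the slides; it is one plausible rendering of the tasks, not a prescribed implementation.

```python
# A minimal sketch of the major preprocessing tasks, assuming pandas.
# Column names and values are hypothetical, for illustration only.
import pandas as pd

df = pd.DataFrame({
    "age":    [23, None, 45, 45],
    "income": [48000, 52000, 61000, 61000],
})

# Data cleaning: fill in a missing value with the attribute mean.
df["age"] = df["age"].fillna(df["age"].mean())

# Data cleaning: remove duplicate records.
df = df.drop_duplicates()

# Data transformation: min-max normalization to [0, 1].
df["income_norm"] = (df["income"] - df["income"].min()) / (
    df["income"].max() - df["income"].min())

# Data discretization: replace numeric ages with interval labels.
df["age_group"] = pd.cut(df["age"], bins=[0, 30, 60, 120],
                         labels=["young", "middle-aged", "senior"])
print(df)
```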

Data Cleaning
Importance:
- "Data cleaning is one of the three biggest problems in data warehousing" — Ralph Kimball
- "Data cleaning is the number one problem in data warehousing" — DCI survey
Data cleaning tasks:
- Fill in missing values
- Identify outliers and smooth out noisy data
- Correct inconsistent data
- Resolve redundancy caused by data integration

Missing Data
- Data is not always available
  E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
- Missing data may be due to:
  - equipment malfunction
  - data inconsistent with other recorded data, and thus deleted
  - data not entered due to misunderstanding
  - certain data not considered important at the time of entry
  - no record kept of the history or changes of the data
- Missing data may need to be inferred.

How to Handle Missing Data?
- Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably.
- Fill in the missing value manually: tedious + infeasible?
- Fill it in automatically with:
  - a global constant: e.g., "unknown", a new class?!
  - the attribute mean
  - the attribute mean for all samples belonging to the same class: smarter
  - the most probable value: inference-based, e.g., a Bayesian formula or a decision tree

Noisy Data
- Noise: random error or variance in a measured variable
- Incorrect attribute values may be due to:
  - faulty data collection instruments
  - data entry problems
  - data transmission problems
  - technology limitations
  - inconsistency in naming conventions
- Other data problems that require data cleaning:
  - duplicate records
  - incomplete data
  - inconsistent data

How to Handle Noisy Data?
- Binning: first sort the data and partition it into (equi-depth) bins, then smooth by bin means, bin medians, bin boundaries, etc.
- Clustering: detect and remove outliers
- Combined computer and human inspection: detect suspicious values and check them by hand (e.g., deal with possible outliers)
- Regression: smooth by fitting the data to regression functions

Simple Discretization Methods: Binning
- Equal-width (distance) partitioning:
  - Divides the range into N intervals of equal size: a uniform grid
  - If A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B - A) / N.
  - The most straightforward method, but outliers may dominate the presentation; skewed data is not handled well.
- Equal-depth (frequency) partitioning:
  - Divides the range into N intervals, each containing approximately the same number of samples
  - Good data scaling
  - Managing categorical attributes can be tricky.
(A code sketch and a worked example follow.)
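As referenced above, a minimal sketch of equi-depth binning with two of the smoothing variants, in plain Python. The prices are the ones from the worked example that follows; rounding bin means to whole dollars mirrors that example.

```python
# Equi-depth binning with smoothing by bin means and by bin boundaries.
# A sketch; the prices match the worked example below.
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted

n_bins = 3
depth = len(prices) // n_bins
bins = [prices[i * depth:(i + 1) * depth] for i in range(n_bins)]

# Smoothing by bin means (rounded to whole dollars, as on the slide).
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: each value moves to the nearer of the
# bin's minimum and maximum.
by_bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
             for b in bins]

print(bins)       # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```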

Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34

Cluster Analysis
(figure: data points grouped into clusters, with outliers falling outside the clusters)

Regression
(figure: points in the x-y plane fitted by the regression line y = x + 1; a point X1 is smoothed to its fitted value Y1')

Data Integration
- Data integration: combines data from multiple sources into a coherent store
- Schema integration:
  - integrate metadata from different sources
  - entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
- Detecting and resolving data value conflicts:
  - for the same real-world entity, attribute values from different sources differ
  - possible reasons: different representations, different scales, e.g., metric vs. British units

Handling Redundancy in Data Integration
- Redundant data occur often when integrating multiple databases
  - The same attribute may have different names in different databases
  - One attribute may be a "derived" attribute in another table, e.g., annual revenue
- Redundant data may be detected by correlation analysis
- Careful integration of data from multiple sources may help reduce or avoid redundancies and inconsistencies, and improve mining speed and quality

Data Transformation
- Smoothing: remove noise from the data
- Aggregation: summarization, data cube construction
- Generalization: concept hierarchy climbing
- Normalization: scale values to fall within a small, specified range
  - min-max normalization
  - z-score normalization
  - normalization by decimal scaling
- Attribute/feature construction: new attributes constructed from the given ones

Data Transformation: Normalization
- Min-max normalization:
  v' = (v - min_A) / (max_A - min_A) * (new_max_A - new_min_A) + new_min_A
- Z-score normalization:
  v' = (v - mean_A) / stand_dev_A
- Normalization by decimal scaling:
  v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1

Data Reduction Strategies
- A data warehouse may store terabytes of data; complex data analysis/mining may take a very long time to run on the complete data set
- Data reduction: obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
- Data reduction strategies:
  - Data cube aggregation
  - Dimensionality reduction — remove unimportant attributes
  - Data compression
  - Numerosity reduction — fit data into models
  - Discretization and concept hierarchy generation

Data Cube Aggregation
- The lowest level of a data cube holds the aggregated data for an individual entity of interest, e.g., a customer in a phone-calling data warehouse.
- Multiple levels of aggregation in data cubes further reduce the size of the data to deal with
- Reference appropriate levels: use the smallest representation that is enough to solve the task
- Queries regarding aggregated information should be answered using the data cube, when possible

Dimensionality Reduction
- Feature selection (i.e., attribute subset selection):
  - Select a minimum set of features such that the probability distribution of the different classes, given the values of those features, is as close as possible to the original distribution given the values of all features
  - Reduces the number of attributes appearing in the discovered patterns, making the patterns easier to understand
- Heuristic methods (due to the exponential number of choices; a sketch of the first follows the decision-tree example below):
  - step-wise forward selection
  - step-wise backward elimination
  - combining forward selection and backward elimination
  - decision-tree induction

Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
(figure: a decision tree with A4 at the root, A1 and A6 at the next level, and Class 1 / Class 2 at the leaves)
=> Reduced attribute set: {A1, A4, A6}
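As referenced above, a minimal sketch of step-wise forward selection in plain Python. The scoring function is a hypothetical stand-in for whatever criterion one would actually use (a significance test, or cross-validated model accuracy); the sketch shows only the greedy loop.

```python
# Step-wise forward selection: greedily add the feature that most
# improves a user-supplied scoring function. A sketch; `score` is a
# stand-in for a significance test or model-based criterion.
from typing import Callable, Set

def forward_selection(n_features: int,
                      score: Callable[[Set[int]], float],
                      k: int) -> Set[int]:
    """Pick k of n_features features, one greedy step at a time."""
    selected: Set[int] = set()
    while len(selected) < k:
        candidates = [f for f in range(n_features) if f not in selected]
        # Evaluate each remaining feature together with those already chosen.
        best = max(candidates, key=lambda f: score(selected | {f}))
        selected.add(best)
    return selected

# Toy usage with a made-up criterion that prefers features 1, 4, and 6.
useful = {1, 4, 6}
chosen = forward_selection(8, lambda s: len(s & useful), k=3)
print(sorted(chosen))  # [1, 4, 6]
```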

Heuristic Feature Selection Methods
- There are 2^d possible sub-feature sets of d features
- Several heuristic feature selection methods:
  - Best single features under the feature-independence assumption: choose by significance tests
  - Best step-wise feature selection: the best single feature is picked first, then the next best feature conditioned on the first, ...
  - Step-wise feature elimination: repeatedly eliminate the worst feature
  - Best combined feature selection and elimination
  - Optimal branch and bound: use feature elimination and backtracking

Data Compression
- String compression:
  - There are extensive theories and well-tuned algorithms
  - Typically lossless
  - But only limited manipulation is possible without expansion
- Audio/video compression:
  - Typically lossy compression, with progressive refinement
  - Sometimes small fragments of the signal can be reconstructed without reconstructing the whole
- Time sequences are not audio: typically short, and they vary slowly with time

Data Compression
(figure: original data reduced to compressed data losslessly, or to an approximation of the original data lossily)

Wavelet Transformation
- Discrete wavelet transform (DWT): linear signal processing, multiresolution analysis
- Compressed approximation: store only a small fraction of the strongest wavelet coefficients
- Similar to the discrete Fourier transform (DFT), but better lossy compression, localized in space
- Method:
  - The length, L, must be an integer power of 2 (pad with 0s when necessary)
  - Each transform has 2 functions: smoothing and difference
  - They apply to pairs of data, resulting in two sets of data of length L/2
  - The two functions are applied recursively until the desired length is reached
(figure: the Haar-2 and Daubechies-4 wavelets)

Principal Component Analysis
- Given N data vectors from k dimensions, find c <= k orthogonal vectors that can best be used to represent the data
- The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)
- Each data vector is a linear combination of the c principal-component vectors
- Works for numeric data only
- Used when the number of dimensions is large
(a code sketch follows the next figure)

Principal Component Analysis
(figure: data in the X1-X2 plane with the principal axes Y1 and Y2 overlaid)

Numerosity Reduction
- Parametric methods:
  - Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
  - Log-linear models: obtain the value at a point in m-D space as a product over the appropriate marginal subspaces
- Non-parametric methods:
  - Do not assume models
  - Major families: histograms, clustering, sampling
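As referenced above, a minimal PCA sketch using numpy's singular value decomposition. The random data and the choice of c = 2 components are made up for illustration; this shows the projection idea, not a particular library's PCA API.

```python
# PCA via SVD: project N k-dimensional vectors onto the c strongest
# orthogonal directions. A sketch with made-up data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # N = 100 data vectors, k = 5 dimensions
Xc = X - X.mean(axis=0)         # center each attribute

# Rows of Vt are orthogonal principal directions, strongest first.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

c = 2                           # keep c <= k components
reduced = Xc @ Vt[:c].T         # N vectors on c principal components
print(reduced.shape)            # (100, 2)
```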

Regression and Log-Linear Models
- Linear regression: data are modeled to fit a straight line; often uses the least-squares method to fit the line
- Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
- Log-linear model: approximates discrete multidimensional probability distributions

Regression Analysis and Log-Linear Models
- Linear regression: Y = α + β X
  - The two parameters, α and β, specify the line and are estimated from the data at hand, using the least-squares criterion on the known values Y1, Y2, ..., X1, X2, ....
- Multiple regression: Y = b0 + b1 X1 + b2 X2
  - Many nonlinear functions can be transformed into this form.
- Log-linear models: the multi-way table of joint probabilities is approximated by a product of lower-order tables, e.g.,
  p(a, b, c, d) = α_ab β_ac γ_ad δ_bcd

Histograms
- A popular data reduction technique
- Divide the data into buckets and store the average (or sum) for each bucket
- Can be constructed optimally in one dimension using dynamic programming
- Related to quantization problems

Clustering
- Partition the data set into clusters, and store only the cluster representations
- Can be very effective if the data is clustered, but not if the data is "smeared"
- Can use hierarchical clustering, stored in multi-dimensional index tree structures
- There are many choices of clustering definitions and clustering algorithms; detailed further later

Sampling
- Allows a mining algorithm to run with complexity that is potentially sub-linear in the size of the data
- Choose a representative subset of the data: simple random sampling may perform very poorly in the presence of skew
- Develop adaptive sampling methods, e.g., stratified sampling:
  - Approximate the percentage of each class (or subpopulation of interest) in the overall database
  - Used in conjunction with skewed data
- Note: sampling may not reduce database I/Os (one page is read at a time).
(A code sketch follows the next two figures.)

Sampling
(figure: raw data sampled by SRSWOR, simple random sampling without replacement, and by SRSWR, simple random sampling with replacement)

Sampling
(figure: raw data versus a cluster/stratified sample)
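As referenced above, a minimal sketch of SRSWOR, SRSWR, and proportional stratified sampling using only Python's standard library; the records and their class labels are made up for illustration.

```python
# SRSWOR, SRSWR, and stratified sampling with the standard library.
# A sketch; the records and class labels are made up.
import random
from collections import defaultdict

random.seed(0)
data = [(i, "A" if i % 4 else "B") for i in range(100)]  # (record, class)

srswor = random.sample(data, 10)                   # without replacement
srswr = [random.choice(data) for _ in range(10)]   # with replacement

# Stratified: sample each class in proportion to its share of the data,
# so skewed class distributions are preserved in the sample.
strata = defaultdict(list)
for rec in data:
    strata[rec[1]].append(rec)
stratified = []
for label, group in strata.items():
    k = round(10 * len(group) / len(data))         # proportional allocation
    stratified += random.sample(group, k)

print(len(srswor), len(srswr), len(stratified))    # 10 10 10
```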

Hierarchical Reduction
- Use multi-resolution structures with different degrees of reduction
- Hierarchical clustering is often performed, but tends to define partitions of data sets rather than "clusters"
- Parametric methods are usually not amenable to hierarchical representation
- Hierarchical aggregation:
  - An index tree hierarchically divides a data set into partitions by the value range of some attributes
  - Each partition can be considered a bucket
  - Thus an index tree with aggregates stored at each node is a hierarchical histogram

Discretization
- Three types of attributes:
  - Nominal — values from an unordered set
  - Ordinal — values from an ordered set
  - Continuous — real numbers
- Discretization: divide the range of a continuous attribute into intervals
  - Some classification algorithms only accept categorical attributes
  - Reduce data size by discretization
  - Prepare for further analysis

Discretization and Concept Hierarchy
- Discretization: reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values.
- Concept hierarchies: reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior). See the sketch below.
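As referenced above, a minimal sketch of both replacements in plain Python. The age cut-points are illustrative assumptions; the city grouping echoes the Urbana/Champaign/Chicago example from a later slide.

```python
# Replacing low-level values with higher-level concepts. A sketch;
# the age cut-points and the city-to-state table are made up.
ages = [13, 22, 45, 67, 38, 81]

def age_concept(age: int) -> str:
    """Map a numeric age to an interval label of the hierarchy."""
    if age < 30:
        return "young"
    elif age < 60:
        return "middle-aged"
    return "senior"

print([age_concept(a) for a in ages])
# ['young', 'young', 'middle-aged', 'senior', 'middle-aged', 'senior']

# A categorical hierarchy (city < state) as a simple lookup table.
state_of_city = {"Urbana": "Illinois", "Champaign": "Illinois",
                 "Chicago": "Illinois"}
print(state_of_city["Urbana"])   # roll a city up to its state
```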

Discretization and Concept Hierarchy Generation for Numeric Data
- Binning (see earlier sections)
- Histogram analysis (see earlier sections)
- Clustering analysis (see earlier sections)
- Entropy-based discretization
- Segmentation by natural partitioning

Entropy-Based Discretization
- Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
  E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2)
- The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.
- The process is recursively applied to the partitions obtained until some stopping criterion is met.
- Experiments show that it may reduce data size and improve classification accuracy.
(A code sketch appears at the end of this section.)

Segmentation by Natural Partitioning
A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals:
- If an interval covers 3, 6, 7, or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
- If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
- If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals

Example of the 3-4-5 Rule
- Step 1: profile the profit data: Min = -$351, Low (the 5th percentile) = -$159, High (the 95th percentile) = $1,838, Max = $4,700
- Step 2: msd = 1,000, so round Low down to -$1,000 and High up to $2,000
- Step 3: the range (-$1,000 - $2,000) covers 3 distinct values at the most significant digit, so partition it into 3 equi-width intervals: (-$1,000 - 0), (0 - $1,000), ($1,000 - $2,000)
- Step 4: adjust to the actual Min and Max: the first interval shrinks to (-$400 - 0) since Min = -$351, and a new interval ($2,000 - $5,000) is added since Max = $4,700, giving the top level (-$400 - $5,000); then recurse on each interval:
  - (-$400 - 0): (-$400 - -$300), (-$300 - -$200), (-$200 - -$100), (-$100 - 0)
  - (0 - $1,000): (0 - $200), ($200 - $400), ($400 - $600), ($600 - $800), ($800 - $1,000)
  - ($1,000 - $2,000): ($1,000 - $1,200), ($1,200 - $1,400), ($1,400 - $1,600), ($1,600 - $1,800), ($1,800 - $2,000)
  - ($2,000 - $5,000): ($2,000 - $3,000), ($3,000 - $4,000), ($4,000 - $5,000)

Concept Hierarchy Generation for Categorical Data
- Specification of a partial ordering of attributes explicitly at the schema level by users or experts
  street < city < state < country
- Specification of a portion of a hierarchy by explicit data grouping
  {Urbana, Champaign, Chicago} < Illinois
- Specification of a set of attributes: the system automatically generates a partial ordering by analysis of the number of distinct values
  E.g., street < city < state < country
- Specification of only a partial set of attributes
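As referenced in the entropy-based discretization slide, a minimal sketch of choosing the single best binary boundary T by weighted class entropy. The (value, class) pairs are made up, and the recursion and stopping criterion are omitted for brevity.

```python
# Entropy-based binary discretization: pick the boundary T that
# minimizes E(S, T). A sketch with made-up (value, class) samples;
# the recursive application and stopping criterion are omitted.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(samples):
    """samples: sorted list of (value, class). Returns (T, E(S, T))."""
    n = len(samples)
    best = None
    for i in range(1, n):
        T = (samples[i - 1][0] + samples[i][0]) / 2   # candidate boundary
        s1 = [c for v, c in samples if v <= T]
        s2 = [c for v, c in samples if v > T]
        e = len(s1) / n * entropy(s1) + len(s2) / n * entropy(s2)
        if best is None or e < best[1]:
            best = (T, e)
    return best

data = sorted([(4, "lo"), (8, "lo"), (15, "lo"),
               (21, "hi"), (25, "hi"), (34, "hi")])
print(best_split(data))  # -> (18.0, -0.0): T = 18.0 separates the classes
```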
