机器学习题库 (Machine Learning Question Bank)

I. Maximum Likelihood

1. ML estimation of an exponential model (10 points)

A Gaussian distribution is often used to model data on the real line, but it is sometimes inappropriate when the data are often close to zero but constrained to be nonnegative. In such cases one can fit an exponential distribution, whose probability density function is given by

    p(x) = (1/b) exp(-x/b).

Given N observations x_i drawn from such a distribution:

(a) Write down the likelihood as a function of the scale parameter b.
(b) Write down the derivative of the log likelihood.
(c) Give a simple expression for the ML estimate of b.

Solution:

(a) L(b) = ∏_{i=1..N} (1/b) exp(-x_i/b).

(b) log L(b) = -N log b - (1/b) ∑_i x_i, so

    d log L / db = -N/b + (1/b²) ∑_i x_i.

(c) Setting the derivative to zero gives b̂ = (1/N) ∑_i x_i, the sample mean.
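As a quick sanity check (not part of the original exam), a minimal NumPy/SciPy sketch confirming that the closed-form estimate b̂ is the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
b_true = 2.0
x = rng.exponential(scale=b_true, size=100_000)   # N observations, x_i >= 0

# Closed-form ML estimate from part (c): the sample mean
print(x.mean())                                    # ~2.0

# Numerical check: minimize the negative log-likelihood N*log(b) + sum(x)/b
nll = lambda b: len(x) * np.log(b) + x.sum() / b
print(minimize_scalar(nll, bounds=(0.1, 10), method="bounded").x)  # ~2.0
```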
2. Repeat for a Poisson distribution:

    p(x|λ) = exp(-λ) λ^x / x!,   x = 0, 1, 2, …

The log likelihood of N observations is

    ∑_{i=1..N} log p(x_i|λ) = (∑_i x_i) log λ - Nλ - ∑_i log x_i!,

and setting its derivative to zero gives λ̂ = (1/N) ∑_i x_i.

II. Bayesian Methods

1. Applying Bayes' rule

Suppose that on a multiple-choice exam a student knows the correct answer with probability p and guesses with probability 1 - p. A student who knows the answer is correct with probability 1; a student who guesses is correct with probability 1/m, where m is the number of choices. Given that the student answered a question correctly, find the probability that he actually knew the answer:

    p(known | correct) = p · 1 / (p · 1 + (1 - p) · (1/m)) = p / (p + (1 - p)/m).
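A one-line function makes the formula concrete; the numbers used (p = 0.6, m = 4) are illustrative, not from the original problem:

```python
def p_known_given_correct(p: float, m: int) -> float:
    """Bayes' rule: P(known | correct) = p / (p + (1 - p) / m)."""
    return p / (p + (1.0 - p) / m)

# e.g. p = 0.6 and m = 4 choices:
print(p_known_given_correct(0.6, 4))   # 0.857..., guessing is down-weighted
```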
2. Conjugate priors

Given a likelihood p(x|θ) for a class of models with parameters θ, a conjugate prior is a distribution p(θ|γ) with hyperparameters γ such that the posterior distribution

    p(θ|X, γ) ∝ p(X|θ) p(θ|γ)

belongs to the same family as the prior.

(a) Suppose that the likelihood is given by the exponential distribution with rate parameter λ:

    p(x|λ) = λ exp(-λx).

Show that the gamma distribution

    Gamma(λ|α, β) = (β^α / Γ(α)) λ^(α-1) exp(-βλ)

is a conjugate prior for the exponential. Derive the parameter update given observations x_1, …, x_N and the prediction distribution p(x_{N+1}|x_1, …, x_N).

Solution: The likelihood is

    p(X|λ) = ∏_{i=1..N} λ exp(-λx_i) = λ^N exp(-λ ∑_i x_i),

so the posterior is

    p(λ|X) ∝ λ^(N+α-1) exp(-λ(β + ∑_i x_i)),

which is again a gamma distribution. Therefore the parameter updates are

    α_N = α + N,   β_N = β + ∑_{i=1..N} x_i.

For the prediction distribution we compute the following integral:

    p(x_{N+1}|X) = ∫ p(x_{N+1}|λ) p(λ|X) dλ
                 = (β_N^{α_N} / Γ(α_N)) ∫ λ^{α_N} exp(-λ(β_N + x_{N+1})) dλ
                 = α_N β_N^{α_N} / (β_N + x_{N+1})^{α_N + 1}.

(b) Suppose that the likelihood is Bernoulli, p(x|θ) = θ^x (1-θ)^(1-x), and the prior is a beta distribution Beta(θ|a, b) ∝ θ^(a-1) (1-θ)^(b-1). Given k ones and l zeros among the N observations, the posterior is proportional to θ^(a+k-1) (1-θ)^(b+l-1), i.e., Beta(θ|a+k, b+l). For the prediction distribution we compute the following integral:

    p(x_{N+1} = 1|X) = ∫ θ Beta(θ|a+k, b+l) dθ = (a + k) / (a + b + k + l),

where the last step uses the identity Γ(z + 1) = z Γ(z).
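A short sketch (assuming SciPy is available) checking the Gamma-exponential update α' = α + N, β' = β + ∑_i x_i against a brute-force numerical posterior:

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
alpha, beta = 2.0, 1.0                      # Gamma(alpha, beta) prior on the rate
x = rng.exponential(scale=1 / 0.5, size=50) # data with true rate lambda = 0.5

# Conjugate update derived above
a_post, b_post = alpha + len(x), beta + x.sum()

# Numerical posterior on a grid, for comparison
lam = np.linspace(1e-3, 2.0, 4000)
loglik = len(x) * np.log(lam) - lam * x.sum()
post = stats.gamma.pdf(lam, a=alpha, scale=1/beta) * np.exp(loglik - loglik.max())
post /= trapezoid(post, lam)                # normalize the grid posterior

print(np.allclose(post, stats.gamma.pdf(lam, a=a_post, scale=1/b_post),
                  atol=1e-3))               # True: the update is exact
```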
(j) (For extra credit) A statistic T(X) is said to be sufficient for θ if the conditional distribution of X given T(X) does not depend on θ, in other words, it is independent of θ. Show that a factorization

    p(x; θ) = g(T(x); θ) h(x)

is necessary and sufficient for T(x) to be a sufficient statistic for θ, and use it to show that for a random variable X drawn from an exponential-family density p(x|θ), the natural statistic is sufficient for θ.

(k) (For extra credit) Suppose X_1, …, X_n are drawn i.i.d. from an exponential-family density. What is now the sufficient statistic T(x_1, …, x_n) for θ?

III. True/False

(1) Given n data points, if half are used for training and the other half for testing, the gap between the training error and the test error shrinks as n increases.

(2) The maximum-likelihood estimator is unbiased and has the smallest variance among all unbiased estimators, so it has the smallest risk.

(3) For regression functions A and B, if A is simpler than B, then A will almost certainly perform better than B on the test set.

(4) Global linear regression uses all the training points to predict the output for a new input, while locally weighted linear regression uses only the samples near the query point, so global linear regression is computationally more expensive than local linear regression.
(5) Boosting and Bagging both combine multiple classifiers by voting, and both assign a weight to each individual classifier according to its accuracy.

(6) In the boosting iterations, the training error of each new decision stump and the training error of the combined classifier vary roughly in concert. (F)
While the training error of the combined classifier typically decreases as a function of boosting iterations, the error of the individual decision stumps typically increases, since the example weights become concentrated at the most difficult examples.

(7) One advantage of Boosting is that it does not overfit. (F)

(8) Support vector machines are resistant to outliers, i.e., very noisy examples drawn from a different distribution. (F)

(9) In regression analysis, best-subset selection can perform feature selection but is computationally expensive when the number of features is large; ridge regression and the Lasso are computationally cheaper, and the Lasso can also perform feature selection.

(10) Overfitting is more likely when the training set is small.

(12) In kernel regression, the parameter that most controls the trade-off between overfitting and underfitting is the width of the kernel.

(13) In the AdaBoost algorithm, the weights on all the misclassified points go up by the same multiplicative factor, exp(α_t). (T)

(7.2 points, true/false) In AdaBoost, the weighted training error ε_t of the t-th weak classifier on the training data with weights D_t tends to increase as a function of t.
SOLUTION: True. In the course of the boosting iterations the weak classifiers are forced to try to classify more difficult examples. The weights will increase for examples that are repeatedly misclassified by the weak classifiers. The weighted training error ε_t of the t-th weak classifier on the training data therefore tends to increase.
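A small illustration of both AdaBoost facts above, using scikit-learn's AdaBoostClassifier on a synthetic data set chosen only for the demo:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=1000, n_features=10, flip_y=0.1,
                           random_state=0)
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# Weighted error eps_t of each stump under the boosted weights D_t:
# it drifts upward as the weights concentrate on the hard examples.
print(np.round(ada.estimator_errors_[:5], 3),
      np.round(ada.estimator_errors_[-5:], 3))

# The combined classifier's training error nevertheless keeps falling:
print([float(np.mean(pred != y)) for pred in ada.staged_predict(X)][::10])
```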
(9. 2 points) Consider a point that is correctly classified and distant from the decision boundary. Why would the SVM's decision boundary be unaffected by this point, but the one learned by logistic regression be affected?
SOLUTION: The hinge loss used by SVMs gives zero weight to these points, while the log-loss used by logistic regression gives a little bit of weight to these points.
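The asymmetry between the two losses is easy to see numerically; a minimal sketch comparing hinge and log loss, and their gradients, as a function of the margin m = y·f(x):

```python
import numpy as np

m = np.linspace(-2, 6, 9)            # margin y * f(x) of a training point
hinge = np.maximum(0, 1 - m)         # SVM loss
logloss = np.log1p(np.exp(-m))       # logistic regression loss

# Gradients with respect to the margin:
dhinge = np.where(m < 1, -1.0, 0.0)  # exactly zero once m > 1
dlog = -1 / (1 + np.exp(m))          # small but never exactly zero
for row in zip(m, hinge, logloss, dhinge, dlog):
    print("m=%+.1f  hinge=%.3f  log=%.3f  dhinge=%+.2f  dlog=%+.4f" % row)
# A correctly classified, distant point (large m) contributes nothing to the
# SVM solution but still pulls slightly on the logistic regression weights.
```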
(14) True/False: In a least-squares linear regression problem, adding an L2 regularization penalty cannot decrease the L2 error of the solution ŵ on the training data. (F)

(15) True/False: In a least-squares linear regression problem, adding an L2 regularization penalty always decreases the expected L2 error of the solution ŵ on unseen test data. (F)

(16) Besides EM, gradient descent can also be used to estimate the parameters of a Gaussian mixture model. (T)

(20) Any decision boundary that we get from a generative model with class-conditional Gaussian distributions could in principle be reproduced with an SVM and a polynomial kernel.
True! In fact, since class-conditional Gaussians always yield quadratic decision boundaries, they can be reproduced with an SVM with a kernel of degree less than or equal to two.

(21) AdaBoost will eventually reach zero training error, regardless of the type of weak classifier it uses, provided enough weak classifiers have been combined.
False! If the data is not separable by a linear combination of the weak classifiers, AdaBoost cannot achieve zero training error.

(22) The L2 penalty in ridge regression is equivalent to a Laplace prior on the weights. (F)

(23) The log-likelihood of the data will always increase through successive iterations of the expectation-maximization algorithm. (F)

(24) In training a logistic regression model by maximizing the likelihood of the labels given the inputs, we have multiple locally optimal solutions. (F)
IV. Regression

1. Consider a regularized regression problem. The figure shows the mean log-probability on the training and test sets when the penalty is a quadratic regularizer and the regularization parameter C takes different values. (10 points)
(1) Is the statement "as C increases, the training-set log-likelihood in Figure 2 never increases" correct? Explain why.
(2) Explain why the test-set log-likelihood in Figure 2 decreases when C takes large values.

2. Consider the linear regression model y ~ N(w0 + w1·x, σ²), with the training data shown in the figure. (10 points)
(1) Estimate the parameters by maximum likelihood and sketch the resulting model in figure (a). (3 points)
(2) Estimate the parameters by regularized maximum likelihood, i.e., add a regularization penalty to the log-likelihood objective, and sketch in figure (b) the model obtained when the parameter C is very large. (3 points)
(3) After regularization, does the variance σ² of the Gaussian become larger, smaller, or stay the same? (4 points)

3. Consider a regression problem on two-dimensional inputs x = (x1, x2)ᵀ with x_j ∈ [-1, 1], j = 1, 2. The training and test samples are distributed uniformly in the unit square, the outputs are generated by

    y ~ N(3·x1⁵ + 10·x1·x2 + 7·x1² + 5·x2, σ²),

and the loss is the squared error. We use polynomial features of orders 1-10 in a linear regression model to learn the relationship between x and y (a higher-order feature model contains all lower-order features).

(1) We train models with features of order 1, 2, 8, and 10 on n = 20 samples and then test them on a large independent test set. Mark the appropriate models in the three columns below (multiple choices are possible), and explain why the model you chose in the third column has the smallest test error. (10 points)

    Model              | Smallest training error | Largest training error | Smallest test error
    order-1 features   |                         | X                      |
    order-2 features   |                         |                        | X
    order-8 features   | X                       |                        |
    order-10 features  | X                       |                        |

(2) Now we train models with features of order 1, 2, 8, and 10 on n = 10⁶ samples and test them on a large independent test set. Mark the appropriate models in the three columns below (multiple choices are possible), and explain why the model you chose in the third column has the smallest test error. (10 points)

    Model              | Smallest training error | Largest training error | Smallest test error
    order-1 features   |                         | X                      |
    order-2 features   |                         |                        |
    order-8 features   | X                       |                        | X
    order-10 features  | X                       |                        |

(3) The approximation error of a polynomial regression model depends on the number of training points. (T)

(4) The structural error of a polynomial regression model depends on the number of training points. (F)
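A sketch of the experiment in problem 3, assuming the generating function reconstructed above (the formula is partially garbled in the source, so treat the constants as stand-ins) and scikit-learn's PolynomialFeatures:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
f = lambda X: 3*X[:, 0]**5 + 10*X[:, 0]*X[:, 1] + 7*X[:, 0]**2 + 5*X[:, 1]

def data(n):
    X = rng.uniform(-1, 1, (n, 2))       # uniform in the unit square
    return X, f(X) + rng.normal(0, 1, n)

Xtr, ytr = data(20)                      # small training set, n = 20
Xte, yte = data(50_000)                  # large independent test set
for d in (1, 2, 8, 10):
    m = make_pipeline(PolynomialFeatures(d), LinearRegression()).fit(Xtr, ytr)
    print(d, round(float(np.mean((m.predict(Xtr) - ytr)**2)), 3),
             round(float(np.mean((m.predict(Xte) - yte)**2)), 3))
# Degrees 8 and 10 have more parameters than the 20 training points, so they
# interpolate (training MSE ~ 0) but test badly; degree 2 usually wins on
# test error at n = 20. With very large n the higher-degree models win,
# since they can represent the degree-5 truth.
```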
4. We are trying to learn regression parameters for a data set which we know was generated from a polynomial of a certain degree, but we do not know what this degree is. Assume the data was actually generated from a polynomial of degree 5 with some added Gaussian noise, that is,

    y = w0 + w1·x + w2·x² + w3·x³ + w4·x⁴ + w5·x⁵ + ε,   ε ~ N(0, σ²).

For training we have 100 {x, y} pairs and for testing we are using an additional set of 100 {x, y} pairs. Since we do not know the degree of the polynomial, we learn two models from the data. Model A learns parameters for a polynomial of degree 4 and model B learns parameters for a polynomial of degree 6. Which of these two models is likely to fit the test data better?

Answer: The degree-6 polynomial. Since the true model is a degree-5 polynomial and we have enough training data, the model we learn for a degree-6 polynomial will likely fit a very small coefficient for x⁶. Thus, even though it is a degree-6 polynomial, it will actually behave in a very similar way to a degree-5 polynomial, which is the correct model, leading to a better fit to the data.
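A minimal NumPy sketch of this answer, with hypothetical coefficients w0, …, w5 (the original problem does not specify them):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])       # hypothetical w0..w5
poly5 = lambda x: sum(w[k] * x**k for k in range(6))  # true degree-5 model

x_tr, x_te = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
y_tr = poly5(x_tr) + rng.normal(0, 0.1, 100)
y_te = poly5(x_te) + rng.normal(0, 0.1, 100)

for d in (4, 6):
    c = np.polyfit(x_tr, y_tr, d)                     # least-squares fit
    test_mse = np.mean((np.polyval(c, x_te) - y_te) ** 2)
    print(f"degree {d}: test MSE {test_mse:.4f}, x^6 coeff "
          f"{c[0] if d == 6 else float('nan'):.4f}")
# Degree 6 fits about as well as the true degree 5; its x^6 coefficient is
# tiny. Degree 4 cannot represent the x^5 term and underfits.
```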
5. Input-dependent noise in regression

Ordinary least-squares regression is equivalent to assuming that each data point is generated according to a linear function of the input plus zero-mean, constant-variance Gaussian noise. In many systems, however, the noise variance is itself a positive linear function of the input (which is assumed to be non-negative, i.e., x ≥ 0).

a) Which of the following families of probability models correctly describes this situation in the univariate case? (Hint: only one of them does.)

(iii) is correct. In a Gaussian distribution over y, the variance is determined by the coefficient of y², so by replacing σ² with σ²x we get a variance that increases linearly with x. (Note also the change to the normalization "constant".) (i) has a quadratic dependence on x, and (ii) does not change the variance at all, it just renames w1.

b) Circle the plots in Figure 1 that could plausibly have been generated by some instance of the model family (or families) you chose.

(ii) and (iii). (Note that (ii) arises in the limit σ² → 0.) (i) exhibits a large variance at x = 0, and its variance appears independent of x.

c) True/False: Regression with input-dependent noise gives the same solution as ordinary regression for an infinite data set generated according to the corresponding model.

True. In both cases the algorithm will recover the true underlying model.

d) For the model you chose in part (a), write down the derivative of the negative log likelihood with respect to w1.

The negative log likelihood is

    L(w) = ∑_i (y_i - w0 - w1·x_i)² / (2σ²x_i) + (1/2) ∑_i log(2πσ²x_i),

and the derivative with respect to w1 is

    ∂L/∂w1 = -(1/σ²) ∑_i (y_i - w0 - w1·x_i).

Note that for lines through the origin (w0 = 0), the optimal solution has the particularly simple form

    w1 = ∑_i y_i / ∑_i x_i.

It is permissible to take the derivative of the log-likelihood rather than the likelihood itself: the log is monotonic, so we may use the log-likelihood as our objective function, and it also gives simpler handling of multiple data points, because the product of probabilities becomes a sum of log-probabilities.
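A sketch of part (d)'s estimator, assuming the chosen model y ~ N(w0 + w1·x, σ²x): minimizing the NLL above is a weighted least-squares problem with weights 1/x_i:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0.1, 5.0, n)                    # x >= 0, as assumed
w0, w1, s2 = 1.0, 2.0, 0.25
y = w0 + w1*x + rng.normal(0, np.sqrt(s2 * x))  # Var(y|x) = sigma^2 * x

# Minimizing the NLL above = weighted least squares with weights 1/x_i
A = np.column_stack([np.ones(n), x])
W = 1.0 / x
wls = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * y))
print(wls)                       # ~ [1.0, 2.0]

# Through-the-origin special case derived above: w1 = sum(y) / sum(x)
y0 = w1*x + rng.normal(0, np.sqrt(s2 * x))
print(y0.sum() / x.sum())        # ~ 2.0
```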
V. Classification

1. Generative vs. discriminative models

(a) Your billionaire friend needs your help. She needs to classify job applications into good/bad categories, and also to detect job applicants who lie in their applications, using density estimation to detect outliers. To meet these needs, do you recommend using a discriminative or generative classifier? Why?

A generative model, because we need to estimate the density p(x|y).

(b) Your billionaire friend also wants to classify software applications to detect bug-prone applications using features of the source code. This pilot project only has a few applications to be used as training data, though. To create the most accurate classifier, do you recommend using a discriminative or generative classifier? Why?

A discriminative model. When the number of training samples is small, a discriminative model that directly learns the classification usually performs better.
(d) Finally, your billionaire friend also wants to classify companies to decide which one to acquire. This project has lots of training data based on several decades of research. To create the most accurate classifier, do you recommend using a discriminative or generative classifier? Why?

A generative model. When the number of training samples is large, the correct generative model can be learned.

2. Logistic regression

[Figure 2: Log-probability of labels as a function of the regularization parameter C. Vertical axis: average log-probability of the training and test labels; horizontal axis: C from 0 to 4.]

Here we use a logistic regression model to solve a classification problem. In Figure 2, we have plotted the mean log-probability of the labels in the training and test sets after having trained the classifier with a quadratic regularization penalty and different values of the regularization parameter C.

(1) In training a logistic regression model by maximizing the likelihood of the labels given the inputs, we have multiple locally optimal solutions. (F)

Answer: The log-probability of labels given examples implied by the logistic regression model is a concave (convex down) function with respect to the weights. The (only) locally optimal solution is also globally optimal.
(2) A stochastic gradient algorithm for training logistic regression models with a fixed learning rate will find the optimal setting of the weights exactly. (F)

Answer: A fixed learning rate means that we are always taking a finite step towards improving the log-probability of some single training example in the update equation. Unless the examples are somehow "aligned", we will continue jumping from side to side of the optimal solution, and will not be able to get arbitrarily close to it. The learning rate has to approach zero in the course of the updates for the weights to converge.
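A small simulation of this effect on a one-dimensional logistic regression (synthetic data and illustrative step sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-2.0 * x))).astype(float)  # w ~ 2

def sgd(lr_fn, steps=50_000):
    w, tail = 0.0, []
    for t in range(steps):
        i = rng.integers(n)
        p = 1 / (1 + np.exp(-w * x[i]))
        w += lr_fn(t) * (y[i] - p) * x[i]   # stochastic log-likelihood gradient
        if t >= steps - 1000:
            tail.append(w)                  # record the last 1000 iterates
    return np.mean(tail), np.std(tail)

print(sgd(lambda t: 0.5))                   # fixed rate: iterates keep oscillating
print(sgd(lambda t: 0.5 / (1 + t / 1000)))  # decaying rate: oscillation shrinks
```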
(3) The average log-probability of the training labels as in Figure 2 can never increase as we increase C. (T)

Stronger regularization means more constraints on the solution, and thus the (average) log-probability of the training examples can only get worse.

(4) Explain why in Figure 2 the test log-probability of the labels decreases for large values of C.

As C increases, we give more weight to constraining the predictor, and thus give less flexibility to fitting the training set. The increased regularization guarantees that the test performance gets closer to the training performance, but as we over-constrain our allowed predictors, we are not able to fit the training set at all, and although the test performance is now very close to the training performance, both are low.

(5) The log-probability of the labels in the test set would decrease for large values of C even if we had a large number of training examples. (T)

The above argument still holds, but the value of C for which we will observe such a decrease will scale up with the number of examples.

(6) Adding a quadratic regularization penalty for the parameters when estimating a logistic regression model ensures that some of the parameters (weights associated with the components of the input vectors) vanish. (F)

A regularization penalty suitable for feature selection must have a non-zero derivative at zero. Otherwise, the regularization has no effect at zero, and the weights will tend to be slightly non-zero, even when this does not improve the log-probability by much.
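A quick scikit-learn comparison illustrating the point: the quadratic (L2) penalty shrinks weights without zeroing them, while the L1 penalty, whose derivative at zero is non-zero, produces exact zeros. The data set is synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, n_informative=3,
                           random_state=0)
for pen, solver in (("l2", "lbfgs"), ("l1", "liblinear")):
    clf = LogisticRegression(penalty=pen, C=0.1, solver=solver).fit(X, y)
    print(pen, "exact zeros:", int(np.sum(clf.coef_ == 0)))
# L2 leaves essentially no weight exactly at zero; L1 zeroes out many of
# the 17 uninformative features.
```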
3. Regularized logistic regression

In this problem we will refer to the binary classification task depicted in Figure 1(a), which we attempt to solve with the simple linear logistic regression model

    P(y = 1 | x, w1, w2) = g(w1·x1 + w2·x2) = 1 / (1 + exp(-w1·x1 - w2·x2))

(for simplicity we do not use the bias parameter w0). The training data can be separated with zero training error; see line L1 in Figure 1(b) for instance.

[Figure 1: (a) The two-dimensional data set used in this problem. (b) The points can be separated by L1 (solid line); possible other decision boundaries are shown by L2, L3, L4.]

(1) Consider a regularization approach where we try to maximize

    ∑_{i=1..n} log P(y_i | x_i, w1, w2) - (C/2)·w2²

for large C. Note that only w2 is penalized. We would like to know which of the lines in Figure 1(b) could arise as a result of such regularization. For each potential line L2, L3 or L4, determine whether it can result from regularizing w2. If not, explain very briefly why not.

L2: No. When we regularize w2, the resulting boundary can rely less on the value of x2 and therefore becomes more vertical. L2 here seems to be more horizontal than the unregularized solution, so it cannot come as a result of penalizing w2.

L3: Yes. Here w2² is small relative to w1² (as evidenced by the high slope), and even though it would assign a rather low log-probability to the observed labels, it could be forced by a large regularization parameter C.

L4: No. For very large C, we get a boundary that is entirely vertical (the line x1 = 0, i.e., the x2 axis). L4 here is reflected across the x2 axis and represents a poorer solution than its counterpart on the other side. For moderate regularization we have to get the best solution that we can construct while keeping w2 small. L4 is not the best and thus cannot come as a result of regularizing w2.
(2) If we change the form of regularization to one-norm (absolute value) and also regularize w1, we get the following penalized log-likelihood:

    ∑_{i=1..n} log P(y_i | x_i, w1, w2) - C·(|w1| + |w2|).

Consider again the problem in Figure 1(a) and the same linear logistic regression model. As we increase the regularization parameter C, which of the following scenarios do you expect to observe? (Choose only one.)

(x) First w1 will become 0, then w2.
( ) w1 and w2 will become zero simultaneously.
( ) First w2 will become 0, then w1.
( ) None of the weights will become exactly zero, only smaller as C increases.

The data can be classified with zero training error, and therefore also with high log-probability, by looking at the value of x2 alone, i.e., by making w1 = 0. Initially we might prefer to have a non-zero value for w1, but it will go to zero rather quickly as we increase the regularization. Note that we pay a regularization penalty for a non-zero value of w1, and if it does not help classification, why would we pay the penalty? The absolute-value regularization ensures that w1 will indeed go to exactly zero. As C increases further, even w2 will eventually become zero: we pay a higher and higher cost for setting w2 to a non-zero value, and eventually this cost overwhelms the gain from the log-probability of labels that we can achieve with a non-zero w2. Note that when w1 = w2 = 0, the log-probability of the labels is the finite value n·log(0.5).
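A sketch of this regularization path with scikit-learn, on synthetic data in which x2 alone separates the classes (note that scikit-learn's C is the inverse of the penalty strength used in this problem, so a small C means strong regularization):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 1] > 0).astype(int)       # x2 alone separates the classes

for C in (10.0, 1.0, 0.3, 0.1, 0.03):
    w = LogisticRegression(penalty="l1", C=C, solver="liblinear",
                           fit_intercept=False).fit(X, y).coef_[0]
    print(f"C={C:5.2f}  w1={w[0]:+.3f}  w2={w[1]:+.3f}")
# w1 hits exactly zero first; with a strong enough penalty w2 goes to zero too.
```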
1. SVM

[Figure 4: Training set, maximum margin linear separator, and the support vectors (in bold).]

(1) What is the leave-one-out cross-validation error estimate for maximum margin separation in Figure 4? (We are asking for a number.)

0. Based on the figure we can see that removing any single point would not change the resulting maximum margin separator. Since all the points are initially classified correctly, the leave-one-out error is zero.

(2) We would expect the support vectors to remain the same in general as we move from a linear kernel to higher-order polynomial kernels. (F)

There are no guarantees that the support vectors remain the same. The feature vectors corresponding to polynomial kernels are non-linear functions of the original input vectors, and thus the support points for maximum margin separation in the feature space can be quite different.

(3) Structural risk minimization is guaranteed to find the model (among those considered) with the lowest expected loss. (F)

We are guaranteed to find only the model with the lowest upper bound on the expected loss.

(4) What is the VC-dimension of a mixture of two Gaussians model in the plane with equal covariance matrices? Why?

A mixture of two Gaussians with equal covariance matrices has a linear decision boundary. Linear separators in the plane have VC-dimension exactly 3.
4. SVM

Classify the following data points:

(a) Plot these six training points. Are the classes {+, -} linearly separable?

Yes.

(b) Construct the weight vector of the maximum margin hyperplane by inspection and identify the support vectors.

The maximum margin hyperplane should have a slope of -1 and should pass through (3/2, 0). Therefore its equation is x1 + x2 = 3/2, and the weight vector is (1, 1)ᵀ.
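The six points themselves did not survive in this copy of the problem; the sketch below uses hypothetical coordinates that are consistent with the stated solution (separator x1 + x2 = 3/2, weight direction (1, 1)ᵀ):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical coordinates consistent with the stated solution:
X = np.array([[1, 1], [2, 2], [2, 0],    # class +1
              [0, 0], [1, 0], [0, 1]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin
print(clf.coef_, clf.intercept_)   # ~ [[2, 2]], [-3]  =>  x1 + x2 = 3/2
print(clf.support_vectors_)        # support vectors lie on x1+x2=1 and x1+x2=2
```

The canonical weight vector (2, 2)ᵀ with b = -3 is the same separator as (1, 1)ᵀ with threshold 3/2, just rescaled so that the support vectors satisfy y·(w·x + b) = 1.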
(c) If you remove one of the support vectors, does the size of the optimal margin decrease, stay the same, or increase?

In this specific data set, the optimal margin increases when we remove the support vector (1, 0) or (1, 1), and stays the same when we remove the other two.

(d) (Extra credit) Is your answer to (c) also true for any data set? Provide a counterexample or give a short proof.

When we drop some constraints in a constrained maximization problem, we get an optimal value which is at least as good as the previous one. This is because the set of candidates satisfying the original (larger, stronger) set of constraints is a subset of the candidates satisfying the new (smaller, weaker) set of constraints. So, under the weaker constraints, the old optimal solution is still available, and there may be additional solutions that are even better. In mathematical form: removing a constraint from max f(x) subject to a set of constraints can only leave the optimum unchanged or increase it, so the margin never decreases when a point is removed.
5. Map each one-dimensional input x to the feature vector φ(x) = (1, √2·x, x²)ᵀ. Are the classes linearly separable in the original input space? No. Using the method of Lagrange multipliers, show that the solution of the maximum margin problem in the feature space is w = (0, 0, 2)ᵀ, b = -1, and compute the margin.

For optimization problems with inequality constraints such as the above, we should apply the KKT conditions, which are a generalization of Lagrange multipliers. However, this problem can be solved more easily by noting that we have three vectors in the 3-dimensional feature space and all of them are support vectors. Hence all 3 constraints hold with equality, and we can apply the method of Lagrange multipliers to

    min (1/2)||w||²   s.t.   y_i (w·φ(x_i) + b) = 1,   i = 1, 2, 3.

We have 3 constraints, and so 3 Lagrange multipliers λ = (λ1, λ2, λ3). We first form the Lagrangian

    L(w, b, λ) = (1/2)||w||² + ∑_{i=1..3} λ_i (y_i (w·φ(x_i) + b) - 1)

and set its derivatives with respect to the optimization variables w and b to zero:

    w + ∑_{i=1..3} λ_i y_i φ(x_i) = 0,    ∑_{i=1..3} λ_i y_i = 0.

Substituting the data points φ(x_i) and labels y_i into these equations, together with the three equality constraints, gives a linear system in (w, b, λ). From the first-component equation and ∑_i λ_i y_i = 0 we get w1 = 0; plugging this into the equality constraints gives w2 = 0 and w3 = 2. Therefore the optimal weights are w = (0, 0, 2)ᵀ with b = -1, and the margin is 1/||w|| = 1/2.

(e) Show that the solution remains the same if the constraints are changed to
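A numerical check of the stationarity conditions above, using hypothetical inputs x = (-1, 0, 1) with labels (+1, -1, +1), an assumption consistent with the stated solution, since the original points were lost:

```python
import numpy as np

# Hypothetical 1-D training set consistent with the stated solution:
x = np.array([-1.0, 0.0, 1.0])
y = np.array([1.0, -1.0, 1.0])
phi = np.column_stack([np.ones(3), np.sqrt(2) * x, x**2])  # phi(x)

# All three points are support vectors, so all constraints are active.
# Unknowns (w1, w2, w3, b, l1, l2, l3) satisfy:
#   w = sum_i l_i y_i phi(x_i),  sum_i l_i y_i = 0,  y_i (w.phi(x_i) + b) = 1
A = np.zeros((7, 7)); rhs = np.zeros(7)
A[:3, :3] = np.eye(3)
A[:3, 4:] = -(y[:, None] * phi).T   # w - sum_i l_i y_i phi_i = 0
A[3, 4:] = y                        # sum_i l_i y_i = 0
A[4:, :3] = y[:, None] * phi        # y_i (w.phi_i + b) = 1
A[4:, 3] = y
rhs[4:] = 1.0

sol = np.linalg.solve(A, rhs)
w, b = sol[:3], sol[3]
print(w, b, 1 / np.linalg.norm(w))  # -> [0, 0, 2], -1, margin 1/2
```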