Wooldridge Econometrics Lecture Notes 8 (slides): Multiple Regression Analysis — Estimation
Slide 1: Multiple Regression Analysis
y = b0 + b1x1 + b2x2 + … + bkxk + u
1. Estimation

Slide 2: Parallels with Simple Regression
- b0 is still the intercept
- b1 through bk are all called slope parameters
- u is still the error term (or disturbance)
- Still need to make a zero conditional mean assumption, so now assume that E(u|x1, x2, …, xk) = 0
- Still minimizing the sum of squared residuals, so we have k + 1 first order conditions

Slide 3: Interpreting Multiple Regression (equations not extracted)

Slide 4: A "Partialling Out" Interpretation (equations not extracted)

Slide 5: "Partialling Out" continued
- The previous equation implies that regressing y on x1 and x2 gives the same effect of x1 as regressing y on the residuals from a regression of x1 on x2
- This means only the part of xi1 that is uncorrelated with xi2 is being related to yi, so we're estimating the effect of x1 on y after x2 has been "partialled out"

Slide 6: Simple vs. Multiple Regression Estimate (equations not extracted)

Slide 7: Goodness-of-Fit (equations not extracted)

Slide 8: Goodness-of-Fit (continued)
- How do we think about how well our sample regression line fits our sample data?
- We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression
- R2 = SSE/SST = 1 − SSR/SST

Slide 9: Goodness-of-Fit (continued) (equations not extracted)

Slide 10: More about R-squared
- R2 can never decrease when another independent variable is added to a regression, and usually will increase
- Because R2 will usually increase with the number of independent variables, it is not a good way to compare models

Slide 11: Assumptions for Unbiasedness
- Population model is linear in parameters: y = b0 + b1x1 + b2x2 + … + bkxk + u
- We can use a random sample of size n, {(xi1, xi2, …, xik, yi): i = 1, 2, …, n}, from the population model, so that the sample model is yi = b0 + b1xi1 + b2xi2 + … + bkxik + ui
- E(u|x1, x2, …, xk) = 0, implying that all of the explanatory variables are exogenous
- None of the x's is constant, and there are no exact linear relationships among them

Slide 12: Too Many or Too Few Variables
- What happens if we include variables in our specification that don't belong? There is no effect on our parameter estimates, and OLS remains unbiased
- What if we exclude a variable from our specification that does belong? OLS will usually be biased

Slide 13: Omitted Variable Bias (equations not extracted)

Slide 14: Omitted Variable Bias (cont) (equations not extracted)

Slide 15: Omitted Variable Bias (cont) (equations not extracted)

Slide 16: Omitted Variable Bias (cont) (equations not extracted)

Slide 17: Summary of Direction of Bias (equations not extracted)

Slide 18: Omitted Variable Bias Summary
- Two cases where the bias is equal to zero: b2 = 0 (that is, x2 doesn't really belong in the model), or x1 and x2 are uncorrelated in the sample
- If the correlations between x2, x1 and x2, y are in the same direction, the bias will be positive
- If the correlations between x2, x1 and x2, y are in opposite directions, the bias will be negative

Slide 19: The More General Case
- Technically, we can only sign the bias for the more general case if all of the included x's are uncorrelated
- Typically, then, we work through the bias assuming the x's are uncorrelated, as a useful guide even if this assumption is not strictly true

Slide 20: Variance of the OLS Estimators
- Now we know that the sampling distribution of our estimate is centered around the true parameter
- Want to think about how spread out this distribution is
- Much easier to think about this variance under an additional assumption, so assume Var(u|x1, x2, …, xk) = s2 (homoskedasticity)

Slide 21: Variance of OLS (cont)
- Let x stand for (x1, x2, …, xk)
- Assuming that Var(u|x) = s2 also implies that Var(y|x) = s2
- The 4 assumptions for unbiasedness, plus this homoskedasticity assumption, are known as the Gauss-Markov assumptions

Slide 22: Variance of OLS (cont) (equations not extracted)

Slide 23: Components of OLS Variances
- The error variance: a larger s2 implies a larger variance for the OLS estimators
- The total sample variation: a larger SSTj implies a smaller variance for the estimators
- Linear relationships among the independent variables: a larger Rj2 implies a larger variance for the estimators

Slide 24: Misspecified Models (equations not extracted)

Slide 25: Misspecified Models (cont)
- While the variance of the estimator is smaller for the misspecified model, unless b2 = 0 the misspecified model is biased
- As the sample size grows, the variance of each estimator shrinks to zero, making the variance difference less important

Slide 26: Estimating the Error Variance
- We don't know what the error variance, s2, is, because we don't observe the errors, ui
- What we observe are the residuals, ûi
- We can use the residuals to form an estimate of the error variance

Slide 27: Error Variance Estimate (cont)
- df = n − (k + 1), or df = n − k − 1
- df (i.e. degrees of freedom) is the number of observations minus the number of estimated parameters
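The "partialling out" interpretation above can be checked numerically. The following is a minimal sketch, not from the slides: simulated data with illustrative coefficient values, using only numpy. It regresses x1 on x2, keeps the residuals, and regresses y on those residuals; the resulting slope matches the coefficient on x1 from the full multiple regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)            # x1 is correlated with x2
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

# Multiple regression of y on (1, x1, x2) by least squares
X = np.column_stack([np.ones(n), x1, x2])
b_multiple = np.linalg.lstsq(X, y, rcond=None)[0]

# Partial out x2: residuals from regressing x1 on (1, x2)
Z = np.column_stack([np.ones(n), x2])
g = np.linalg.lstsq(Z, x1, rcond=None)[0]
r1 = x1 - Z @ g

# Simple regression of y on those residuals (they have mean zero,
# so no intercept is needed) recovers the same slope on x1
b1_partialled = (r1 @ y) / (r1 @ r1)

print(b_multiple[1], b1_partialled)  # equal up to floating-point rounding
```

This is exactly the sense in which only the part of x1 uncorrelated with x2 is being related to y.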

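The variance components and the error-variance estimate can be tied together in a short simulation. This is a sketch under assumed data (coefficients and sample size are illustrative, not from the slides): it computes s2-hat = SSR/(n − k − 1) and verifies that s2-hat / [SSTj (1 − Rj2)] reproduces the corresponding diagonal element of s2-hat (X'X)^(-1), the usual OLS variance matrix under homoskedasticity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 400, 2
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)            # correlated regressors
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 2.0 * rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b

# Error variance estimate: SSR divided by df = n - (k + 1)
sigma2_hat = (resid @ resid) / (n - k - 1)

# Var(b1-hat) = s2 / [SST_1 * (1 - R_1^2)], where R_1^2 comes from
# regressing x1 on the other regressors (here just a constant and x2)
sst1 = np.sum((x1 - x1.mean()) ** 2)
Z = np.column_stack([np.ones(n), x2])
g = np.linalg.lstsq(Z, x1, rcond=None)[0]
r1sq = 1.0 - np.sum((x1 - Z @ g) ** 2) / sst1
var_b1 = sigma2_hat / (sst1 * (1.0 - r1sq))

# Same number as the (x1, x1) entry of sigma2_hat * (X'X)^{-1}
var_matrix = sigma2_hat * np.linalg.inv(X.T @ X)
print(var_b1, var_matrix[1, 1])
```

The formula makes the three components visible directly: a larger sigma2_hat or a larger r1sq inflates var_b1, while a larger sst1 shrinks it.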