Created by Max Schnidman
almost 5 years ago
Question | Answer |
Loss functions | A measure of the distance between observed values and their estimates: \[L(y-\theta (x))\] |
Squared loss function | Minimizes the squared distance between observed and estimated values. The optimal estimator is the Conditional Expectation Function \[E[Y|X]\] \[E[(y-\theta)^2|X] = E[((y-\mu_x) - (\theta - \mu_x))^2|X]\] \[=V(y|x) + (\theta - \mu_x)^2\] |
Properties of the CEF | \[\theta (x) = \operatorname{argmin}_c E[(y-c)^2|X]\] \[\epsilon = Y - E[Y|X] \implies E[\epsilon |X] = 0\] \[\implies E[X' \epsilon |X] = 0\] \[\implies E[h(X) \epsilon |X] = 0\] \[V(\epsilon) = E[V(Y|X)]\] \[C(X,\epsilon) = 0\] |
Best Linear Predictor (BLP) | \[X\beta\] \[\beta = \operatorname{argmin}_\beta E[(Y-X\beta)^2]\] \[E[X'(Y-X\beta)]=0\] \[\beta = E[X'X]^{-1} E[X'Y]\] \[V(U) = E[V(Y|X)] + E[\omega^2]\] where \(\omega\) is the difference between the CEF and the BLP |
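A minimal sketch of the card's sample analog, \(\hat{\beta} = (X'X)^{-1}(X'Y)\), computed on simulated data. The data-generating process (intercept 1, slope 2) is invented for the example and is not from the cards.

```python
import numpy as np

# Hypothetical illustration: the sample analog of beta = E[X'X]^{-1} E[X'Y]
# is OLS on the stacked data matrices.
rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # regressors, including an intercept
y = 1.0 + 2.0 * x + rng.normal(size=n)    # true beta = (1, 2), assumed for the demo

# beta_hat = (X'X)^{-1} (X'Y), solved without forming the explicit inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # close to [1, 2]
```

Using `np.linalg.solve` instead of inverting \(X'X\) directly is numerically safer and gives the same estimate.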
Properties of i.i.d. sampling | \[E[Y_i] = \mu\] \[V(Y_i) = \sigma^2\] \[C(Y_i, Y_j) = 0, \; i \neq j\] The sample average converges to the population average \[V(\bar{Y}) = \frac{\sigma^2}{n}\] |
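A small Monte Carlo check of the card's claim \(V(\bar{Y}) = \sigma^2 / n\); the sample size, replication count, and \(\sigma^2 = 4\) are chosen only for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 20_000
sigma2 = 4.0

# Draw many independent i.i.d. samples of size n and compute each sample mean.
samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(reps, n))
ybar = samples.mean(axis=1)

# The empirical variance of the sample means should be near sigma^2 / n = 0.08.
print(ybar.var(), sigma2 / n)
```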
Mean Squared Error | Sum of squared bias and variance: \[MSE(\hat{\theta}) = (E[\hat{\theta}] - \theta)^2 + V(\hat{\theta})\] |
Asymptotic properties of samples | \[\operatorname{plim} \bar{Y} = \mu\] \[\operatorname{plim} V(\bar{Y}) = 0\] Central Limit Theorem: \[\sqrt{n}(\bar{Y} - \mu) \xrightarrow{d} N(0, \sigma^2)\] |
Uniform Kernel Estimate | \[\frac{\frac{1}{n} \sum y_i \boldsymbol{1} (|x_i - x_0| \le \delta_n)}{\frac{1}{n} \sum \boldsymbol{1} (|x_i - x_0| \le \delta_n)}\] Limiting Distribution: \[N(\alpha, \beta)\] |
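A minimal sketch of the card's estimator: the local average of \(y_i\) over observations with \(|x_i - x_0| \le \delta_n\). The regression function \(\sin(\pi x)\), the evaluation point, and the bandwidth are all invented for the example.

```python
import numpy as np

def uniform_kernel_estimate(x, y, x0, delta):
    """Average y over observations within delta of x0 (uniform kernel)."""
    mask = np.abs(x - x0) <= delta
    return y[mask].mean()

rng = np.random.default_rng(2)
n = 50_000
x = rng.uniform(-1, 1, size=n)
y = np.sin(np.pi * x) + rng.normal(scale=0.1, size=n)  # assumed CEF: sin(pi x)

est = uniform_kernel_estimate(x, y, x0=0.5, delta=0.05)
print(est)  # close to sin(pi * 0.5) = 1
```

Shrinking `delta` reduces bias but raises variance, the usual bandwidth trade-off behind the \(\delta_n\) subscript on the card.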
Matrix Algebra of Regressions | \[b_n = (X'X)^{-1}(X'Y) = Q^{-1}X'Y = AY\] \[\hat{Y} = X(X'X)^{-1}X'Y = NY\] \[e = Y - \hat{Y} = Y - NY = (I-N)Y = MY\] |
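A minimal numerical check of the card's matrix identities: \(N\) projects \(Y\) onto the column space of \(X\), \(M = I - N\) produces the residuals, and both are idempotent. The dimensions and random data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 8, 3
X = rng.normal(size=(n, k))
Y = rng.normal(size=n)

N = X @ np.linalg.inv(X.T @ X) @ X.T   # projection ("hat") matrix
M = np.eye(n) - N                      # residual-maker (annihilator) matrix
Y_hat = N @ Y                          # fitted values
e = M @ Y                              # residuals

# Idempotence, orthogonality of X and e, and the fitted/residual decomposition:
assert np.allclose(N @ N, N)
assert np.allclose(M @ M, M)
assert np.allclose(X.T @ e, 0)
assert np.allclose(Y_hat + e, Y)
print("all identities hold")
```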
Limiting distribution of beta | \[N(0, E[X'X]^{-1} E[X'XU^2]E[X'X]^{-1})\] Sandwich form, robust to heteroskedasticity. If the model is homoskedastic, this reduces to \[\sigma^2 E[X'X]^{-1}\] |
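A sketch of the sample analog of the sandwich variance (the HC0-style estimator), under an assumed heteroskedastic design invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
# Assumed heteroskedastic errors: the error scale depends on x.
u = rng.normal(size=n) * (0.5 + np.abs(x))
y = X @ np.array([1.0, 2.0]) + u

b = np.linalg.solve(X.T @ X, X.T @ y)
uhat = y - X @ b

# Sandwich: (X'X)^{-1} (sum_i x_i x_i' uhat_i^2) (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * uhat[:, None] ** 2).T @ X
V_robust = XtX_inv @ meat @ XtX_inv
print(np.sqrt(np.diag(V_robust)))  # heteroskedasticity-robust standard errors
```

Under homoskedasticity the middle term collapses to \(\sigma^2 X'X\) and the sandwich reduces to \(\hat{\sigma}^2 (X'X)^{-1}\), matching the card.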
CRM assumptions | 1. \[E[Y|X] = X\beta\] 2. \[V(Y|X) = \sigma^2 I\] 3. \[Rank(X) = k\] 4. X is non-stochastic. |