Created by Max Schnidman
about 5 years ago
Question | Answers |
Convergence in Probability | \[\lim_{n \to \infty} P(|X_n - X| > \epsilon) = 0\] |
Almost Sure Convergence | \[P(\{\omega: X_n(\omega) \nrightarrow X(\omega)\}) = 0\] \[Y_n = \sup_{k\ge n} |X_k - X| \overset{p}{\to} 0\] \[\lim_{n\to\infty} P(\sup_{k \ge n} |X_k - X| > \epsilon) = 0\] |
Directions of Convergence | \[A.S. \implies P \implies D\] \[L^r \implies P \implies D\] |
Additional A.S. Convergences | \[\sum_{n=1}^{\infty}P(|X_n - X|>\epsilon) < \infty \implies X_n \overset{a.s.}{\to} X\] \[X_n \overset{p}{\to} X \implies \exists \ a \ subsequence \ \{X_{n_k}\} \ s.t. \ X_{n_k} \overset{a.s.}{\to} X\] |
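A minimal Monte Carlo sketch of the convergence-in-probability definition above: the sample mean of n coin flips converges in probability to p = 0.5, so the estimated exceedance probability shrinks with n (the Bernoulli distribution, epsilon, and seed are illustrative choices, not part of the cards):

```python
import numpy as np

# Estimate P(|X_n - p| > eps) for X_n = mean of n Bernoulli(p) draws;
# the probability should shrink toward 0 as n grows.
rng = np.random.default_rng(0)
p, eps, reps = 0.5, 0.05, 10_000

for n in (10, 100, 1_000, 10_000):
    means = rng.binomial(n, p, size=reps) / n    # reps realizations of X_n
    print(n, np.mean(np.abs(means - p) > eps))   # -> 0 as n -> infinity
```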
Convergence in mean of order r | \[\lim_{n\to\infty}E[|X_n - X|^r] = 0\] |
Cauchy Sequence in Probability | \[\lim_{n,m\to\infty} P(|X_n - X_m| > \epsilon) = 0\] |
Borel-Cantelli Lemma | \[A = \cap_{n=1}^{\infty} \cup_{k\ge n} A_k\] \[\sum_{n=1}^{\infty} P(A_n) < \infty \implies P(A) = 0\] |
Continuous Mapping Theorem | \[X_n \to X \implies g(X_n) \to g(X)\] for continuous \[g\], in probability or A.S. |
Convergence in Distribution | \[\int f(x)dF_n(x) \to \int f(x)dF(x)\] for all bounded, continuous \[f\] |
Class of Generalized Distributions | \[\lim_{x_n \to +\infty} G_{X_1...X_n}(x_1,...,x_n) = G_{X_1...X_{n-1}}(x_1,...,x_{n-1})\] \[\lim_{x_n \to -\infty} G_{X_1...X_n}(x_1,...,x_n) = 0\] \[G(-\infty)\ge 0\] \[G(\infty)\le 1\] |
Helly-Bray Theorem | The class of generalized distributions is compact w.r.t. weak (distributional) convergence. |
Asymptotic Tightness | \[\forall \epsilon > 0 \ \exists N \ s.t. \ \inf_n(F_n(N) - F_n(-N)) > 1-\epsilon\] |
Khinchin's Law of Large Numbers | Suppose \[\{X_n\}_{n=1}^{\infty}\] is an i.i.d. sequence of random variables with \[E(X_n) = a\], and let \[S_n = \sum_{k=1}^n X_k\]. Then \[\frac{S_n}{n} \overset{p}{\to} a\] |
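A single-path sketch of Khinchin's LLN: running means of i.i.d. Exponential(1) draws approach a = 1 (the distribution and seed are arbitrary illustrations):

```python
import numpy as np

# Running means S_n / n along one sample path of i.i.d. Exponential(1)
# draws, whose common mean is a = 1.
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
for n in (10, 1_000, 100_000):
    print(n, running_mean[n - 1])   # approaches a = 1
```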
Central Limit Theorem | If \[0 < \sigma^2 < \infty\], then \[\lim_{n\to\infty} \sup_{x} |P(Z_n < x) - \Phi(x)| = 0\], where \[Z_n = \frac{\sqrt{n}(S_n/n - a)}{\sigma}\] |
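A simulation sketch of the CLT card: for i.i.d. Uniform(0,1) draws (a = 1/2, sigma^2 = 1/12), the empirical CDF of Z_n is compared with Phi at a few points (sample sizes and seed are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Z_n = sqrt(n) * (S_n / n - a) / sigma for i.i.d. Uniform(0, 1) draws;
# its empirical CDF should be close to the standard normal CDF Phi.
rng = np.random.default_rng(2)
n, reps = 400, 10_000
a, sigma = 0.5, (1 / 12) ** 0.5
z = np.sqrt(n) * (rng.random((reps, n)).mean(axis=1) - a) / sigma
for x in (-1.0, 0.0, 1.5):
    print(x, np.mean(z < x), norm.cdf(x))   # empirical CDF vs Phi(x)
```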
Convergence Properties | \[X_n \overset{p}{\to} c \iff X_n \overset{d}{\to} c\] \[X_n \overset{d}{\to} X, \ |X_n - Y_n| \overset{p}{\to} 0 \implies Y_n \overset{d}{\to} X\] \[X_n \overset{d}{\to} X, \ Y_n \overset{p}{\to} c \implies (X_n,Y_n) \overset{d}{\to} (X,c)\] |
Slutsky's Theorem | \[X_n \overset{d}{\to} X, \ Y_n \overset{d}{\to} c \implies\] \[1. \ X_n + Y_n \overset{d}{\to} X + c\] \[2. \ X_n Y_n \overset{d}{\to} cX\] \[3. \ X_n / Y_n \overset{d}{\to} X/c \ (c \ne 0)\] |
Lindeberg-Feller CLT Condition | \[\sum_{i=1}^{k_n} E[||Y_{n,i}||^2\boldsymbol{1}\{||Y_{n,i}||>\epsilon\}] \to 0\] \[\lim_{n \to \infty} \frac{1}{s_n^2}\sum_{k = 1}^n \mathbb{E} \left[(X_k - \mu_k)^2 \cdot \mathbf{1}_{\{ | X_k - \mu_k | > \varepsilon s_n \}} \right] = 0\] |
Delta Method | \[X_n \overset{d}{\to} X, \ b_n \to 0\] and \[g\] differentiable at \[a\] \[\implies \frac{g(a + b_nX_n) - g(a)}{b_n} \overset{d}{\to} X g^{\prime}(a)\] |
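A numeric sketch of the delta method with the illustrative choices g(x) = x^2, a = 2, b_n = 1/sqrt(n), and X_n drawn exactly N(0,1): the scaled difference quotient should be close in distribution to g'(a)X = 4X:

```python
import numpy as np

# (g(a + b_n * X_n) - g(a)) / b_n with g(x) = x**2 equals
# 4 * X_n + b_n * X_n**2, so for small b_n it is approximately 4 * X_n.
rng = np.random.default_rng(3)
n, reps = 10_000, 50_000
a, b_n = 2.0, 1 / np.sqrt(n)
x = rng.standard_normal(reps)                 # X_n ~ N(0, 1) exactly here
lhs = ((a + b_n * x) ** 2 - a ** 2) / b_n
print(lhs.std(), abs(2 * a))                  # both approximately g'(a) = 4
```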
Extremum Estimator | \[\theta_0 = \arg\max_{\theta \in \Theta} Q(\theta)\] \[Q(\theta) = E_{\theta_0}[g(Y, \theta)] = \int g(y, \theta)F(dy, \theta_0)\] |
Uniform Convergence | \[Pr(\lim_{T\to\infty} \sup_{\theta\in\Theta} Q_T(\theta) = 0) = 1 \implies Q_T(\theta) \overset{a.s.}{\to} 0 \] \[\lim_{T\to\infty}Pr(\sup_{\theta\in\Theta} Q_T(\theta) < \epsilon) = 1 \implies Q_T(\theta) \overset{p}{\to} 0 \] |
Assumptions for Extremum Estimation (Convergence in Probability) | 1. \[\Theta\] is compact 2. \[\hat{Q}_T(\theta)\] continuous in \[\theta\] 3. \[\hat{Q}_T(\theta) \overset{p}{\to} Q(\theta)\] uniformly 4. Identification (unique global maximum) |
Asymptotic Normality | 1. \[\frac{\partial^2\hat{Q}}{\partial\theta\partial\theta^{\prime}}\] exists 2. \[\frac{\partial^2\hat{Q}(\theta_T)}{\partial\theta\partial\theta^{\prime}} \overset{p}{\to} A(\theta_0)\] 3. \[\sqrt{T}\frac{\partial\hat{Q}(\theta_0)}{\partial\theta} \overset{d}{\to} N(0,B(\theta_0))\] \[\implies \sqrt{T}(\hat{\theta} - \theta_0) \overset{d}{\to} N(0, A(\theta_0)^{-1}B(\theta_0)A(\theta_0)^{-1\prime})\] |
Assumptions for MLE | 1. \[Y \sim F(\cdot, \theta_0)\] 2. \[y_t\] i.i.d. 3. \[\theta \in \Theta \subset \boldsymbol{R}^p\] 4. Distribution is dictated by the model |
MLE Objective Function | \[L(\theta) = E_{\theta_0}[\log f(Y, \theta)]\] |
Identification | \[Pr(\ln f(Y, \theta_0) \ne \ln f(Y, \theta^*)) > 0 \ \forall \ \theta^* \ne \theta_0\] |
Score | \[s(\theta, Y) = \frac{\partial \ln f(Y, \theta)}{\partial \theta}\] Gradient of the log-likelihood; under typical regularity assumptions it has expectation 0 |
Information | \[I(\theta) = Var(s(\theta, Y))\] Equals \[-E[\frac{\partial^2 \ln f(Y,\theta)}{\partial\theta\partial\theta^{\prime}}]\] if regularity conditions are satisfied. Unidentified models have a singular information matrix |
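A Monte Carlo check of the information equality for a Poisson(lambda) model, where the per-observation score is x/lambda - 1 and the second derivative of log f is -x/lambda^2 (lambda = 3 is an arbitrary illustration):

```python
import numpy as np

# Under regularity, Var(score) = -E[d^2 log f / d lambda^2] = 1 / lambda.
rng = np.random.default_rng(4)
lam = 3.0
x = rng.poisson(lam, size=200_000)
score = x / lam - 1
print(np.var(score), np.mean(x / lam**2), 1 / lam)   # all approximately 1/3
```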
Cramer-Rao Lower Bound | \[Var(\sqrt{T}(\hat{\theta} - \theta_0)) \ge I_{\theta}^{-1}\] |
Asymptotic Efficiency | \[\lim_{T \to\infty} Var(\sqrt{T}(\hat{\theta}_T - \theta_0)) = I_{\theta}^{-1}\] |
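An efficiency sketch for the Poisson MLE (the sample mean): Var(sqrt(T)(theta_hat - theta_0)) should match I^{-1} = lambda (parameters and seed are illustrative):

```python
import numpy as np

# The Poisson MLE is the sample mean; its scaled variance attains the
# Cramer-Rao bound I^{-1} = lambda.
rng = np.random.default_rng(5)
lam, T, reps = 3.0, 500, 5_000
mle = rng.poisson(lam, size=(reps, T)).mean(axis=1)
print(np.var(np.sqrt(T) * (mle - lam)), lam)   # both approximately 3.0
```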
Type I Error | Rejecting the null when it is true |
Type II Error | Not rejecting the null when it is false |
Significance Level | \[P_{\theta}(\delta(X) = d_1) = P_{\theta}(X \in S_1) \le \alpha \ \forall \ \theta \in \Theta_H\] \[0<\alpha<1\] |
Size of the Test | \[\sup_{\theta\in\Theta_H} P_{\theta}(X\in S_1)\] with fixed \[\alpha\] |
Power Function | \[\beta(\theta) = P_{\theta}(\delta(X) = d_1)\] |
Test Optimization | \[\max_{\phi(\cdot)} \beta_{\phi}(\theta) = E_{\theta}[\phi(X)]\] for \[\theta\] in the alternative, \[s.t. \ E_{\theta}[\phi(X)] \le \alpha \ \forall \ \theta \in \Theta_H\] |
Simple Distributions | Class of distributions with a single distribution |
Composite Distributions | Class of distributions with multiple distributions |
Likelihood Ratio Test | \[\frac{P_1(x)}{P_0(x)}\] |
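A Neyman-Pearson sketch for the simple hypotheses P_0 = N(0,1) vs P_1 = N(1,1): the ratio p_1(x)/p_0(x) = exp(x - 1/2) is increasing in x, so the level-alpha likelihood ratio test rejects when x exceeds the normal quantile z_{1-alpha} (alpha and seed are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Check the size and power of the LRT for N(0,1) vs N(1,1) by simulation.
rng = np.random.default_rng(6)
alpha = 0.05
crit = norm.ppf(1 - alpha)                   # rejection threshold z_{1-alpha}
x0 = rng.standard_normal(100_000)            # draws under the null
x1 = rng.standard_normal(100_000) + 1.0      # draws under the alternative
print(np.mean(x0 > crit), alpha)             # size is approximately alpha
print(np.mean(x1 > crit), 1 - norm.cdf(crit - 1))   # power matches beta
```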
P-Value | Smallest significance level at which the hypothesis would be rejected given the observation \[\hat{p} = \hat{p}(x) = \inf\{\alpha: x \in S_{\alpha}\}\] |
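A one-line sketch of the p-value definition for a one-sided z-test of H_0: mu = 0 with an illustrative observed statistic z = 1.8: the smallest alpha whose rejection region {z > z_{1-alpha}} contains the observation:

```python
from scipy.stats import norm

# Smallest alpha with z_obs > z_{1-alpha}, i.e. 1 - Phi(z_obs).
z_obs = 1.8
print(1 - norm.cdf(z_obs))   # approximately 0.036
```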
Normal PDF | \[\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}\] |
Bernoulli PMF | \[q = 1-p \ if \ x = 0\] \[p \ if \ x = 1\] |
Binomial PMF | \[\binom{n}{k} p^kq^{n-k}\] \[\binom{n}{k} = \frac{n!}{k!(n-k)!}\] |
Uniform PDF | \[\frac{1}{b-a}\] on \[[a,b]\], 0 otherwise |
Poisson PMF | \[\frac{\lambda^k e^{-\lambda}}{k!}\] |
Cauchy PDF | \[\frac{1}{\pi \gamma[1 + (\frac{x - x_0}{\gamma})^2]}\] |
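A sanity-check sketch evaluating the density/mass formulas above against scipy.stats (evaluation points and parameters are arbitrary):

```python
import numpy as np
from math import comb, exp, factorial
from scipy import stats

# Normal PDF vs scipy
x, mu, sigma = 1.3, 0.0, 2.0
print(np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2),
      stats.norm.pdf(x, mu, sigma))

# Binomial PMF vs scipy
n, k, p = 10, 4, 0.3
print(comb(n, k) * p**k * (1 - p)**(n - k), stats.binom.pmf(k, n, p))

# Poisson PMF vs scipy
lam, j = 2.5, 3
print(lam**j * exp(-lam) / factorial(j), stats.poisson.pmf(j, lam))

# Cauchy PDF vs scipy (loc = x0, scale = gamma)
x0, gamma, x = 0.0, 1.5, 0.7
print(1 / (np.pi * gamma * (1 + ((x - x0) / gamma)**2)),
      stats.cauchy.pdf(x, x0, gamma))
```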
Chebyshev's/Markov Inequality | \[P(g(X) \ge r) \le \frac{E[g(X)]}{r}\] for non-negative \[g\] and \[r > 0\] |
Hölder's Inequality | \[|E[XY]| \le E|XY| \le (E|X|^p)^{\frac{1}{p}}(E|Y|^q)^{\frac{1}{q}}\] with \[\frac{1}{p} + \frac{1}{q} = 1\] |
Jensen's Inequality | \[g(E[X]) \le E[g(X)]\] for convex \[g\] (reversed for concave \[g\]) |
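Quick numeric checks of the three inequality cards on Exponential(1) draws (an illustrative sketch; any distribution with the required moments works):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(1.0, size=200_000)
y = rng.exponential(1.0, size=200_000)

# Markov with g(x) = x, r = 3: P(X >= 3) <= E[X] / 3
print(np.mean(x >= 3), np.mean(x) / 3)

# Hoelder with p = q = 2 (Cauchy-Schwarz): E|XY| <= (E X^2)^(1/2) (E Y^2)^(1/2)
print(np.mean(np.abs(x * y)), np.sqrt(np.mean(x**2) * np.mean(y**2)))

# Jensen with convex g(x) = x^2: g(E[X]) <= E[g(X)]
print(np.mean(x)**2, np.mean(x**2))
```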