
Commit ec13919

corrected some pages
Several small mistakes/errors were corrected in several proofs/definitions.
1 parent: 136fe56 · commit: ec13919

5 files changed: 12 additions & 6 deletions


P/mlr-mle.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ the [maximum likelihood estimates](/D/mle) of $\beta$ and $\sigma^2$ are given
 $$ \label{eq:MLE-MLE}
 \begin{split}
 \hat{\beta} &= (X^\mathrm{T} V^{-1} X)^{-1} X^\mathrm{T} V^{-1} y \\
-\hat{\sigma}^2 &= \frac{1}{n} (y-X\hat{\beta})^\mathrm{T} (y-X\hat{\beta}) \; .
+\hat{\sigma}^2 &= \frac{1}{n} (y-X\hat{\beta})^\mathrm{T} V^{-1} (y-X\hat{\beta}) \; .
 \end{split}
 $$
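The added $V^{-1}$ matters: without it, $\hat{\sigma}^2$ is only consistent when $V = I_n$. A minimal numerical sanity check (a sketch assuming numpy; the simulation setup is illustrative, not part of the proof) showing that the weighted residual form recovers the true noise variance while the unweighted form from the old line does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 1000, 2.0

# design matrix and true coefficients (illustrative values)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, -0.5, 2.0])

# known covariance structure V (here diagonal, i.e. heteroscedastic noise)
v = rng.uniform(0.5, 2.0, size=n)
V_inv = np.diag(1.0 / v)
y = X @ beta + rng.normal(size=n) * np.sqrt(sigma2 * v)

# MLEs from the proof: weighted least squares for beta ...
beta_hat = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
r = y - X @ beta_hat

# ... and the corrected, V^{-1}-weighted residual form for sigma^2
sigma2_hat = (r @ V_inv @ r) / n   # close to 2.0 (corrected line)
sigma2_old = (r @ r) / n           # biased unless V = I (old line)
print(beta_hat, sigma2_hat, sigma2_old)
```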

P/mlr-wls.md

Lines changed: 6 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -20,6 +20,12 @@ sources:
2020
in: "Methods and models for fMRI data analysis in neuroeconomics"
2121
pages: "Lecture 3, Slides 20/23"
2222
url: "http://www.socialbehavior.uzh.ch/teaching/methodsspring10.html"
23+
- authors: "Wikipedia"
24+
year: 2021
25+
title: "Weighted least squares"
26+
in: "Wikipedia, the free encyclopedia"
27+
pages: "retrieved on 2021-11-17"
28+
url: "https://en.wikipedia.org/wiki/Weighted_least_squares#Motivation"
2329

2430
proof_id: "P77"
2531
shortcut: "mlr-wls"

P/slr-mle.md

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ username: "JoramSoch"
 **Theorem:** Given a [simple linear regression model](/D/mlr) with independent observations
 
 $$ \label{eq:slr}
-y = \beta_0 + \beta_1 x + \varepsilon, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \; i = 1,\ldots,n \; ,
+y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \; i = 1,\ldots,n \; ,
 $$
 
 the [maximum likelihood estimates](/D/mle) of $\beta_0$, $\beta_1$ and $\sigma^2$ are given by
@@ -90,7 +90,7 @@ and setting this derivative to zero gives the MLE for $\beta_1$:
 
 $$ \label{eq:beta1-mle}
 \begin{split}
-\frac{\mathrm{d}\mathrm{LL}(\hat{\beta}_0,\hat{\beta}_1,\hat{\sigma}^2)}{\mathrm{d}\beta_0} &= 0 \\
+\frac{\mathrm{d}\mathrm{LL}(\hat{\beta}_0,\hat{\beta}_1,\hat{\sigma}^2)}{\mathrm{d}\beta_1} &= 0 \\
 0 &= \frac{1}{\hat{\sigma}^2} \sum_{i=1}^n (x_i y_i - \hat{\beta}_0 x_i - \hat{\beta}_1 x_i^2) \\
 0 &= \sum_{i=1}^n x_i y_i - \hat{\beta}_0 \sum_{i=1}^n x_i - \hat{\beta}_1 \sum_{i=1}^n x_i^2 \\
 0 &\overset{\eqref{eq:beta0-mle}}{=} \sum_{i=1}^n x_i y_i - (\bar{y} - \hat{\beta}_1 \bar{x}) \sum_{i=1}^n x_i - \hat{\beta}_1 \sum_{i=1}^n x_i^2 \\
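Continuing this algebra yields the familiar slope estimate $\hat{\beta}_1 = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}$. As a quick check that the corrected line is the right stationarity condition, here is a short sketch (assuming numpy; the data are simulated purely for illustration) evaluating $\mathrm{d}\mathrm{LL}/\mathrm{d}\beta_1$ at the closed-form MLEs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)

# closed-form MLEs from the proof
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s2 = np.mean((y - b0 - b1 * x) ** 2)

# dLL/dbeta1 at the MLEs (the corrected line of the proof):
# (1/s2) * sum(x_i y_i - b0 x_i - b1 x_i^2) should vanish
dLL_db1 = np.sum(x * y - b0 * x - b1 * x ** 2) / s2
print(b0, b1, s2, dLL_db1)  # dLL_db1 is ~0 up to floating-point error
```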

P/slr-mle2.md

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ username: "JoramSoch"
 **Theorem:** Given a [simple linear regression model](/D/mlr) with independent observations
 
 $$ \label{eq:slr}
-y = \beta_0 + \beta_1 x + \varepsilon, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \; i = 1,\ldots,n \; ,
+y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \; i = 1,\ldots,n \; ,
 $$
 
 the [maximum likelihood estimates](/D/mle) of $\beta_0$, $\beta_1$ and $\sigma^2$ are given by
@@ -70,7 +70,7 @@ $$ \label{eq:slr-mle-b}
 \end{split}
 $$
 
-which [is equal to the ordinary least squares solution for simple linear regression](/P/slr-ols):
+which [is equal to the ordinary least squares solution for simple linear regression](/P/slr-ols2):
 
 $$ \label{eq:slr-mle-b-qed}
 \begin{split}
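As a numerical illustration of the equality being cited in the retargeted link (a sketch assuming numpy and scipy; the model and data are made up for the check), direct maximization of the log-likelihood lands on the OLS solution:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.5 - 0.8 * x + rng.normal(scale=0.3, size=200)

# negative log-likelihood of the simple linear regression model,
# parametrized with log(sigma^2) to keep the variance positive
def nll(theta):
    b0, b1, log_s2 = theta
    r = y - b0 - b1 * x
    return 0.5 * (len(y) * (np.log(2 * np.pi) + log_s2)
                  + np.exp(-log_s2) * np.sum(r ** 2))

mle = minimize(nll, x0=np.zeros(3)).x

# OLS solution for comparison (see /P/slr-ols2)
b1_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0_ols = y.mean() - b1_ols * x.mean()
print(mle[:2], (b0_ols, b1_ols))  # agree up to optimizer tolerance
```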

P/slr-ols2.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ username: "JoramSoch"
 **Theorem:** Given a [simple linear regression model](/D/slr) with independent observations
 
 $$ \label{eq:slr}
-y = \beta_0 + \beta_1 x + \varepsilon, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \; i = 1,\ldots,n \; ,
+y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \; i = 1,\ldots,n \; ,
 $$
 
 the parameters minimizing the [residual sum of squares](/D/rss) are given by
