
Commit 1267a4f

added 5 proofs
1 parent 24d0856 commit 1267a4f

5 files changed

Lines changed: 481 additions & 0 deletions


P/slr-mat.md

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
---
layout: proof
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2021-11-09 15:19:00

title: "Transformation matrices for simple linear regression"
chapter: "Statistical Models"
section: "Univariate normal data"
topic: "Simple linear regression"
theorem: "Transformation matrices"

sources:

proof_id: "P285"
shortcut: "slr-mat"
username: "JoramSoch"
---


**Theorem:** Under [ordinary least squares](/P/slr-ols) for [simple linear regression](/D/slr), the [estimation](/D/emat), [projection](/D/pmat) and [residual-forming](/D/rfmat) matrices are given by

$$ \label{eq:slr-mat}
\begin{split}
E &= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (x^\mathrm{T} x/n) \, 1_n^\mathrm{T} - \bar{x} \, x^\mathrm{T} \\ - \bar{x} \, 1_n^\mathrm{T} + x^\mathrm{T} \end{matrix} \right] \\
P &= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (x^\mathrm{T} x/n) - 2 \bar{x} x_1 + x_1^2 & \cdots & (x^\mathrm{T} x/n) - \bar{x} (x_1 + x_n) + x_1 x_n \\ \vdots & \ddots & \vdots \\ (x^\mathrm{T} x/n) - \bar{x} (x_1 + x_n) + x_1 x_n & \cdots & (x^\mathrm{T} x/n) - 2 \bar{x} x_n + x_n^2 \end{matrix} \right] \\
R &= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (n-1) (x^\mathrm{T} x/n) + \bar{x} (2 x_1 - n\bar{x}) - x_1^2 & \cdots & -(x^\mathrm{T} x/n) + \bar{x} (x_1 + x_n) - x_1 x_n \\ \vdots & \ddots & \vdots \\ -(x^\mathrm{T} x/n) + \bar{x} (x_1 + x_n) - x_1 x_n & \cdots & (n-1) (x^\mathrm{T} x/n) + \bar{x} (2 x_n - n\bar{x}) - x_n^2 \end{matrix} \right]
\end{split}
$$

where $1_n$ is an $n \times 1$ vector of ones, $x$ is the $n \times 1$ single predictor variable, $\bar{x}$ is the [sample mean](/D/mean-samp) of $x$ and $s_x^2$ is the [sample variance](/D/var-samp) of $x$.


**Proof:** [Simple linear regression is a special case of multiple linear regression](/P/slr-mlr) with

$$ \label{eq:slr-mlr}
X = \left[ 1_n, \, x \right] \quad \text{and} \quad \beta = \left[ \begin{matrix} \beta_0 \\ \beta_1 \end{matrix} \right] \; ,
$$

such that the simple linear regression model can also be written as

$$ \label{eq:mlr}
y = X\beta + \varepsilon, \; \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n) \; .
$$

Moreover, we [note the following equality](/P/slr-olsdist):

$$ \label{eq:b-est-cov-den}
x^\mathrm{T} x - n\bar{x}^2 = (n-1) \, s_x^2 \; .
$$

<br>
1) The [estimation matrix is given by](/P/mlr-mat)

$$ \label{eq:E}
E = (X^\mathrm{T} X)^{-1} X^\mathrm{T}
$$

which is a $2 \times n$ matrix and can be reformulated as follows:

$$ \label{eq:E-qed}
\begin{split}
E &= (X^\mathrm{T} X)^{-1} X^\mathrm{T} \\
&= \left( \left[ \begin{matrix} 1_n^\mathrm{T} \\ x^\mathrm{T} \end{matrix} \right] \left[ 1_n, \, x \right] \right)^{-1} \left[ \begin{matrix} 1_n^\mathrm{T} \\ x^\mathrm{T} \end{matrix} \right] \\
&= \left( \left[ \begin{matrix} n & n\bar{x} \\ n\bar{x} & x^\mathrm{T} x \end{matrix} \right] \right)^{-1} \left[ \begin{matrix} 1_n^\mathrm{T} \\ x^\mathrm{T} \end{matrix} \right] \\
&= \frac{1}{n x^\mathrm{T} x - (n\bar{x})^2} \left[ \begin{matrix} x^\mathrm{T} x & -n\bar{x} \\ -n\bar{x} & n \end{matrix} \right] \left[ \begin{matrix} 1_n^\mathrm{T} \\ x^\mathrm{T} \end{matrix} \right] \\
&= \frac{1}{x^\mathrm{T} x - n\bar{x}^2} \left[ \begin{matrix} x^\mathrm{T} x/n & -\bar{x} \\ -\bar{x} & 1 \end{matrix} \right] \left[ \begin{matrix} 1_n^\mathrm{T} \\ x^\mathrm{T} \end{matrix} \right] \\
&= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (x^\mathrm{T} x/n) \, 1_n^\mathrm{T} - \bar{x} \, x^\mathrm{T} \\ - \bar{x} \, 1_n^\mathrm{T} + x^\mathrm{T} \end{matrix} \right] \; .
\end{split}
$$
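As a quick numerical sanity check (with made-up example data; not part of the original derivation), the closed form in \eqref{eq:E-qed} can be compared against a direct evaluation of $(X^\mathrm{T} X)^{-1} X^\mathrm{T}$, e.g. in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = rng.normal(size=n)                     # single predictor variable
X = np.column_stack([np.ones(n), x])       # design matrix X = [1_n, x]

# direct computation: E = (X'X)^{-1} X'
E_direct = np.linalg.inv(X.T @ X) @ X.T

# closed form from eq. (E-qed), built from x'x/n, sample mean and sample variance
x_bar = x.mean()
s2_x = x.var(ddof=1)                       # sample variance of x
E_closed = np.vstack([
    (x @ x / n) * np.ones(n) - x_bar * x,  # first row
    -x_bar * np.ones(n) + x,               # second row
]) / ((n - 1) * s2_x)

assert np.allclose(E_direct, E_closed)     # both expressions agree
```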
<br>
2) The [projection matrix is given by](/P/mlr-mat)

$$ \label{eq:P}
P = X (X^\mathrm{T} X)^{-1} X^\mathrm{T} = X \, E
$$

which is an $n \times n$ matrix and can be reformulated as follows:

$$ \label{eq:P-qed}
\begin{split}
P &= X \, E = \left[ \begin{matrix} 1_n & x \end{matrix} \right] \left[ \begin{matrix} e_1 \\ e_2 \end{matrix} \right] \\
&= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{matrix} \right] \left[ \begin{matrix} (x^\mathrm{T} x/n) - \bar{x} x_1 & \cdots & (x^\mathrm{T} x/n) - \bar{x} x_n \\ -\bar{x} + x_1 & \cdots & -\bar{x} + x_n \end{matrix} \right] \\
&= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (x^\mathrm{T} x/n) - 2 \bar{x} x_1 + x_1^2 & \cdots & (x^\mathrm{T} x/n) - \bar{x} (x_1 + x_n) + x_1 x_n \\ \vdots & \ddots & \vdots \\ (x^\mathrm{T} x/n) - \bar{x} (x_1 + x_n) + x_1 x_n & \cdots & (x^\mathrm{T} x/n) - 2 \bar{x} x_n + x_n^2 \end{matrix} \right] \; .
\end{split}
$$
<br>
3) The [residual-forming matrix is given by](/P/mlr-mat)

$$ \label{eq:R}
R = I_n - X (X^\mathrm{T} X)^{-1} X^\mathrm{T} = I_n - P
$$

which is also an $n \times n$ matrix and can be reformulated as follows:

$$ \label{eq:R-qed}
\begin{split}
R &= I_n - P = \left[ \begin{matrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{matrix} \right] - \left[ \begin{matrix} p_{11} & \cdots & p_{1n} \\ \vdots & \ddots & \vdots \\ p_{n1} & \cdots & p_{nn} \end{matrix} \right] \\
&= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} x^\mathrm{T} x - n\bar{x}^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x^\mathrm{T} x - n\bar{x}^2 \end{matrix} \right] \\
&- \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (x^\mathrm{T} x/n) - 2 \bar{x} x_1 + x_1^2 & \cdots & (x^\mathrm{T} x/n) - \bar{x} (x_1 + x_n) + x_1 x_n \\ \vdots & \ddots & \vdots \\ (x^\mathrm{T} x/n) - \bar{x} (x_1 + x_n) + x_1 x_n & \cdots & (x^\mathrm{T} x/n) - 2 \bar{x} x_n + x_n^2 \end{matrix} \right] \\
&= \frac{1}{(n-1)\,s_x^2} \left[ \begin{matrix} (n-1) (x^\mathrm{T} x/n) + \bar{x} (2 x_1 - n\bar{x}) - x_1^2 & \cdots & -(x^\mathrm{T} x/n) + \bar{x} (x_1 + x_n) - x_1 x_n \\ \vdots & \ddots & \vdots \\ -(x^\mathrm{T} x/n) + \bar{x} (x_1 + x_n) - x_1 x_n & \cdots & (n-1) (x^\mathrm{T} x/n) + \bar{x} (2 x_n - n\bar{x}) - x_n^2 \end{matrix} \right] \; .
\end{split}
$$
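The general matrix entries implied by \eqref{eq:P-qed} and \eqref{eq:R-qed}, i.e. $p_{ij} = \left[ (x^\mathrm{T} x/n) - \bar{x} (x_i + x_j) + x_i x_j \right] / \left[ (n-1)\,s_x^2 \right]$ and $r_{ij} = \delta_{ij} - p_{ij}$, can likewise be checked numerically; a minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
x_bar, s2_x = x.mean(), x.var(ddof=1)

# direct computation of projection and residual-forming matrices
P_direct = X @ np.linalg.inv(X.T @ X) @ X.T
R_direct = np.eye(n) - P_direct

# elementwise closed form: p_ij = [(x'x/n) - x_bar (x_i + x_j) + x_i x_j] / [(n-1) s_x^2]
xi, xj = np.meshgrid(x, x, indexing="ij")
P_closed = (x @ x / n - x_bar * (xi + xj) + xi * xj) / ((n - 1) * s2_x)
R_closed = np.eye(n) - P_closed

assert np.allclose(P_direct, P_closed)
assert np.allclose(R_direct, R_closed)
assert np.allclose(P_closed @ P_closed, P_closed)  # P is idempotent
```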

P/slr-mlr.md

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
---
layout: proof
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2021-11-09 07:57:00

title: "Simple linear regression is a special case of multiple linear regression"
chapter: "Statistical Models"
section: "Univariate normal data"
topic: "Simple linear regression"
theorem: "Special case of multiple linear regression"

sources:

proof_id: "P281"
shortcut: "slr-mlr"
username: "JoramSoch"
---


**Theorem:** [Simple linear regression](/D/slr) is a special case of [multiple linear regression](/D/mlr) with design matrix $X$ and regression coefficients $\beta$

$$ \label{eq:slr-mlr}
X = \left[ 1_n, \, x \right] \quad \text{and} \quad \beta = \left[ \begin{matrix} \beta_0 \\ \beta_1 \end{matrix} \right]
$$

where $1_n$ is an $n \times 1$ vector of ones, $x$ is the $n \times 1$ single predictor variable, $\beta_0$ is the intercept and $\beta_1$ is the slope of the [regression line](/D/regline).


**Proof:** Without loss of generality, consider the [simple linear regression case with uncorrelated errors](/D/slr):

$$ \label{eq:slr}
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2) \; .
$$

In matrix notation and using the [multivariate normal distribution](/D/mvn), this can also be written as

$$ \label{eq:slr-mlr-s1}
\begin{split}
y &= \beta_0 1_n + \beta_1 x + \varepsilon, \; \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n) \\
y &= \left[ 1_n, \, x \right] \left[ \begin{matrix} \beta_0 \\ \beta_1 \end{matrix} \right] + \varepsilon, \; \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n) \; .
\end{split}
$$

Comparing with the [multiple linear regression equations for uncorrelated errors](/D/mlr), we finally note:

$$ \label{eq:slr-mlr-s3}
y = X\beta + \varepsilon \quad \text{with} \quad X = \left[ 1_n, \, x \right] \quad \text{and} \quad \beta = \left[ \begin{matrix} \beta_0 \\ \beta_1 \end{matrix} \right] \; .
$$

In the [case of correlated observations](/D/slr), the [error distribution changes to](/D/mlr):

$$ \label{eq:mlr-noise}
\varepsilon \sim \mathcal{N}(0, \sigma^2 V) \; .
$$
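To illustrate the equivalence with a concrete (simulated) example: fitting via the matrix form $y = X\beta + \varepsilon$ recovers exactly the familiar intercept and slope formulas of simple linear regression. A minimal Python/NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.normal(size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=0.3, size=n)  # simulated SLR data

# multiple linear regression form: X = [1_n, x], beta = (beta_0, beta_1)'
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# classical simple-regression estimates
b1 = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)    # slope
b0 = y.mean() - b1 * x.mean()                      # intercept

assert np.allclose(beta_hat, [b0, b1])             # identical estimates
```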

P/slr-olsdist.md

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
---
layout: proof
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2021-11-09 09:09:00

title: "Distribution of parameter estimates for simple linear regression"
chapter: "Statistical Models"
section: "Univariate normal data"
topic: "Simple linear regression"
theorem: "Distribution of estimates"

sources:
- authors: "Wikipedia"
  year: 2021
  title: "Proofs involving ordinary least squares"
  in: "Wikipedia, the free encyclopedia"
  pages: "retrieved on 2021-11-09"
  url: "https://en.wikipedia.org/wiki/Proofs_involving_ordinary_least_squares#Unbiasedness_and_variance_of_%7F'%22%60UNIQ--postMath-00000037-QINU%60%22'%7F"

proof_id: "P282"
shortcut: "slr-olsdist"
username: "JoramSoch"
---


**Theorem:** Assume a [simple linear regression model](/D/slr) with independent observations

$$ \label{eq:slr}
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2)
$$

and consider estimation using [ordinary least squares](/P/slr-ols). Then, the estimated parameters are [normally distributed](/D/mvn) as

$$ \label{eq:slr-olsdist}
\left[ \begin{matrix} \hat{\beta}_0 \\ \hat{\beta}_1 \end{matrix} \right] \sim \mathcal{N}\left( \left[ \begin{matrix} \beta_0 \\ \beta_1 \end{matrix} \right], \, \frac{\sigma^2}{(n-1) \, s_x^2} \cdot \left[ \begin{matrix} x^\mathrm{T}x/n & -\bar{x} \\ -\bar{x} & 1 \end{matrix} \right] \right)
$$

where $s_x^2$ is the [sample variance](/D/var-samp) of $x$.


**Proof:** [Simple linear regression is a special case of multiple linear regression](/P/slr-mlr) with

$$ \label{eq:slr-mlr}
X = \left[ 1_n, \, x \right] \quad \text{and} \quad \beta = \left[ \begin{matrix} \beta_0 \\ \beta_1 \end{matrix} \right] \; ,
$$

such that \eqref{eq:slr} can also be written as

$$ \label{eq:mlr}
y = X\beta + \varepsilon, \; \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n)
$$

and [ordinary least squares estimates](/P/mlr-ols) are given by

$$ \label{eq:mlr-ols}
\hat{\beta} = (X^\mathrm{T} X)^{-1} X^\mathrm{T} y \; .
$$

From \eqref{eq:mlr} and the [linear transformation theorem for the multivariate normal distribution](/P/mvn-ltt), it follows that

$$ \label{eq:y-dist}
y \sim \mathcal{N}\left( X\beta, \, \sigma^2 I_n \right) \; .
$$

From \eqref{eq:mlr-ols}, in combination with \eqref{eq:y-dist} and the [transformation theorem](/P/mvn-ltt), it follows that

$$ \label{eq:b-est-dist}
\begin{split}
\hat{\beta} &\sim \mathcal{N}\left( (X^\mathrm{T} X)^{-1} X^\mathrm{T} X\beta, \, \sigma^2 (X^\mathrm{T} X)^{-1} X^\mathrm{T} I_n X (X^\mathrm{T} X)^{-1} \right) \\
&\sim \mathcal{N}\left( \beta, \, \sigma^2 (X^\mathrm{T} X)^{-1} \right) \; .
\end{split}
$$

Applying \eqref{eq:slr-mlr}, the [covariance matrix](/D/mvn) can be further developed as follows:

$$ \label{eq:b-est-cov}
\begin{split}
\sigma^2 (X^\mathrm{T} X)^{-1} &= \sigma^2 \left( \left[ \begin{matrix} 1_n^\mathrm{T} \\ x^\mathrm{T} \end{matrix} \right] \left[ 1_n, \, x \right] \right)^{-1} \\
&= \sigma^2 \left( \left[ \begin{matrix} n & n\bar{x} \\ n\bar{x} & x^\mathrm{T} x \end{matrix} \right] \right)^{-1} \\
&= \frac{\sigma^2}{n x^\mathrm{T} x - (n\bar{x})^2} \left[ \begin{matrix} x^\mathrm{T} x & -n\bar{x} \\ -n\bar{x} & n \end{matrix} \right] \\
&= \frac{\sigma^2}{x^\mathrm{T} x - n\bar{x}^2} \left[ \begin{matrix} x^\mathrm{T} x/n & -\bar{x} \\ -\bar{x} & 1 \end{matrix} \right] \; .
\end{split}
$$

Note that the denominator in the first factor is equal to

$$ \label{eq:b-est-cov-den}
\begin{split}
x^\mathrm{T} x - n\bar{x}^2 &= x^\mathrm{T} x - 2 n\bar{x}^2 + n\bar{x}^2 \\
&= \sum_{i=1}^{n} x_i^2 - 2 n \bar{x} \frac{1}{n} \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} \bar{x}^2 \\
&= \sum_{i=1}^{n} x_i^2 - 2 \sum_{i=1}^{n} x_i \bar{x} + \sum_{i=1}^{n} \bar{x}^2 \\
&= \sum_{i=1}^{n} \left( x_i^2 - 2 x_i \bar{x} + \bar{x}^2 \right) \\
&= \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 \\
&= (n-1) \, s_x^2 \; .
\end{split}
$$

Thus, combining \eqref{eq:b-est-dist}, \eqref{eq:b-est-cov} and \eqref{eq:b-est-cov-den}, we have

$$ \label{eq:slr-olsdist-qed}
\hat{\beta} \sim \mathcal{N}\left( \beta, \, \frac{\sigma^2}{(n-1) \, s_x^2} \cdot \left[ \begin{matrix} x^\mathrm{T}x/n & -\bar{x} \\ -\bar{x} & 1 \end{matrix} \right] \right)
$$

which is equivalent to equation \eqref{eq:slr-olsdist}.
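As an informal Monte Carlo check of \eqref{eq:slr-olsdist-qed} (an added illustration with simulated data, not part of the proof), the empirical covariance of $\hat{\beta}$ across many simulated datasets should match the theoretical covariance matrix; a sketch in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma, n_sim = 20, 0.5, 100_000
x = rng.normal(size=n)                         # fixed predictor
X = np.column_stack([np.ones(n), x])
beta = np.array([2.0, -1.0])                   # true (beta_0, beta_1)

# simulate many datasets y = X beta + eps and re-estimate beta each time
E = np.linalg.inv(X.T @ X) @ X.T               # estimation matrix
eps = rng.normal(scale=sigma, size=(n_sim, n))
beta_hat = (X @ beta + eps) @ E.T              # shape (n_sim, 2)

# theoretical covariance from the theorem
x_bar, s2_x = x.mean(), x.var(ddof=1)
cov_theory = sigma**2 / ((n - 1) * s2_x) * np.array(
    [[x @ x / n, -x_bar], [-x_bar, 1.0]]
)

print(np.cov(beta_hat.T, ddof=1))              # empirical covariance ...
print(cov_theory)                              # ... should be close to this
```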

P/slr-proj.md

Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
---
layout: proof
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2021-11-09 10:16:00

title: "Projection of a data point to the regression line"
chapter: "Statistical Models"
section: "Univariate normal data"
topic: "Simple linear regression"
theorem: "Projection of data point to regression line"

sources:
- authors: "Penny, William"
  year: 2006
  title: "Projects"
  in: "Mathematics for Brain Imaging"
  pages: "ch. 1.4.10, pp. 34-35, eqs. 1.87/1.88"
  url: "https://ueapsylabs.co.uk/sites/wpenny/mbi/mbi_course.pdf"

proof_id: "P283"
shortcut: "slr-proj"
username: "JoramSoch"
---


**Theorem:** Consider [simple linear regression](/D/slr) and an [estimated regression line](/D/regline) specified by

$$ \label{eq:slr-regline}
y = \hat{\beta}_0 + \hat{\beta}_1 x \quad \text{where} \quad x,y \in \mathbb{R} \; .
$$

For any given data point $O(x_o \vert y_o)$, the point on the regression line $P(x_p \vert y_p)$ that is closest to this data point is given by:

$$ \label{eq:slr-proj}
P\left(w \mid \hat{\beta}_0 + \hat{\beta}_1 w\right) \quad \text{with} \quad w = \frac{x_o + (y_o - \hat{\beta}_0) \hat{\beta}_1}{1 + \hat{\beta}_1^2} \; .
$$


**Proof:** The intersection point of the regression line with the y-axis is

$$ \label{eq:S}
S(0 \vert \hat{\beta}_0) \; .
$$

Let $a$ be a vector describing the direction of the regression line, let $b$ be the vector pointing from $S$ to $O$ and let $p$ be the vector pointing from $S$ to $P$.

Because $\hat{\beta}_1$ is the slope of the regression line, we have

$$ \label{eq:a}
a = \left( \begin{matrix} 1 \\ \hat{\beta}_1 \end{matrix} \right) \; .
$$

Moreover, with the points $O$ and $S$, we have

$$ \label{eq:b}
b = \left( \begin{matrix} x_o \\ y_o \end{matrix} \right) - \left( \begin{matrix} 0 \\ \hat{\beta}_0 \end{matrix} \right) = \left( \begin{matrix} x_o \\ y_o - \hat{\beta}_0 \end{matrix} \right) \; .
$$

Because $P$ is located on the regression line, $p$ is collinear with $a$ and thus a scalar multiple of this vector:

$$ \label{eq:p}
p = w \cdot a \; .
$$

Moreover, as $P$ is the point on the regression line which is closest to $O$, the vector $b-p$ is orthogonal to $a$, such that the inner product of these two vectors is equal to zero:

$$ \label{eq:a-b-p-orth}
a^\mathrm{T} (b-p) = 0 \; .
$$

Rearranging this equation gives

$$ \label{eq:w}
\begin{split}
a^\mathrm{T} (b-p) &= 0 \\
a^\mathrm{T} (b - w \cdot a) &= 0 \\
a^\mathrm{T} b - w \cdot a^\mathrm{T} a &= 0 \\
w \cdot a^\mathrm{T} a &= a^\mathrm{T} b \\
w &= \frac{a^\mathrm{T} b}{a^\mathrm{T} a} \; .
\end{split}
$$

With \eqref{eq:a} and \eqref{eq:b}, $w$ can be calculated as

$$ \label{eq:w-qed}
\begin{split}
w &= \frac{a^\mathrm{T} b}{a^\mathrm{T} a} \\
&= \frac{\left( \begin{matrix} 1 \\ \hat{\beta}_1 \end{matrix} \right)^\mathrm{T} \left( \begin{matrix} x_o \\ y_o - \hat{\beta}_0 \end{matrix} \right)}{\left( \begin{matrix} 1 \\ \hat{\beta}_1 \end{matrix} \right)^\mathrm{T} \left( \begin{matrix} 1 \\ \hat{\beta}_1 \end{matrix} \right)} \\
&= \frac{x_o + (y_o - \hat{\beta}_0) \hat{\beta}_1}{1 + \hat{\beta}_1^2} \; .
\end{split}
$$

Finally, with the point $S$ \eqref{eq:S} and the vector $p$ \eqref{eq:p}, the coordinates of $P$ are obtained as

$$ \label{eq:P-qed}
\left( \begin{matrix} x_p \\ y_p \end{matrix} \right) = \left( \begin{matrix} 0 \\ \hat{\beta}_0 \end{matrix} \right) + w \cdot \left( \begin{matrix} 1 \\ \hat{\beta}_1 \end{matrix} \right) = \left( \begin{matrix} w \\ \hat{\beta}_0 + \hat{\beta}_1 w \end{matrix} \right) \; .
$$

Together, \eqref{eq:P-qed} and \eqref{eq:w-qed} constitute the proof of \eqref{eq:slr-proj}.
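The geometry can be confirmed numerically (an added illustration with arbitrary example values, not from the source): the following Python sketch computes $w$ and $P$ and checks both the orthogonality condition \eqref{eq:a-b-p-orth} and that no other point on the line is closer to $O$:

```python
import numpy as np

b0_hat, b1_hat = 1.0, 2.0          # example regression line y = 1 + 2x
xo, yo = 3.0, 4.0                  # example data point O

# projection scalar from the proof: w = a'b / a'a
w = (xo + (yo - b0_hat) * b1_hat) / (1 + b1_hat**2)
xp, yp = w, b0_hat + b1_hat * w    # coordinates of the projected point P

# orthogonality: a'(b - p) = 0
a = np.array([1.0, b1_hat])
b = np.array([xo, yo - b0_hat])
assert np.isclose(a @ (b - w * a), 0.0)

# minimality: squared distance to O over points on the line is smallest at t = w
t = np.linspace(w - 1.0, w + 1.0, 1001)
d2 = (t - xo)**2 + (b0_hat + b1_hat * t - yo)**2
assert np.isclose(t[np.argmin(d2)], w)
```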
