Commit 74c5c7b

authored
added 2 definitions
1 parent 56121a0 commit 74c5c7b

2 files changed: 122 additions & 0 deletions


D/regline.md

Lines changed: 48 additions & 0 deletions
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2021-10-27 07:30:00

title: "Regression line"
chapter: "Statistical Models"
section: "Univariate normal data"
topic: "Simple linear regression"
definition: "Regression line"

sources:
- authors: "Wikipedia"
  year: 2021
  title: "Simple linear regression"
  in: "Wikipedia, the free encyclopedia"
  pages: "retrieved on 2021-10-27"
  url: "https://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line"

def_id: "D164"
shortcut: "regline"
username: "JoramSoch"
---


**Definition:** Let there be a [simple linear regression with independent observations](/D/slr) using dependent variable $y$ and independent variable $x$:

$$ \label{eq:slr}
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2) \; .
$$

Then, given some parameters $\beta_0, \beta_1 \in \mathbb{R}$, the set

$$ \label{eq:regline}
L(\beta_0, \beta_1) = \left\lbrace (x,y) \in \mathbb{R}^2 \mid y = \beta_0 + \beta_1 x \right\rbrace
$$

is called a "regression line" and the set

$$ \label{eq:regline-ols}
L(\hat{\beta}_0, \hat{\beta}_1) = \left\lbrace (x,y) \in \mathbb{R}^2 \mid y = \hat{\beta}_0 + \hat{\beta}_1 x \right\rbrace
$$

is called the "fitted regression line", with estimated regression coefficients $\hat{\beta}_0, \hat{\beta}_1$, e.g. obtained via [ordinary least squares](/P/slr-ols).
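The fitted regression line can be illustrated numerically. The following is a minimal NumPy sketch (not part of the definition file): it simulates data from the model with assumed true parameters $\beta_0 = 2$, $\beta_1 = 0.5$, computes the ordinary least squares estimates via the standard closed-form solution for simple linear regression, and defines the fitted line $y = \hat{\beta}_0 + \hat{\beta}_1 x$.

```python
import numpy as np

# Simulated data (assumed for illustration): true b0 = 2.0, true b1 = 0.5
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=x.size)

# Closed-form OLS estimates for simple linear regression:
# b1_hat = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
# b0_hat = y_bar - b1_hat * x_bar
b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0_hat = y.mean() - b1_hat * x.mean()

def fitted_line(x0):
    """A point (x0, y0) lies on the fitted regression line iff y0 = b0_hat + b1_hat * x0."""
    return b0_hat + b1_hat * x0
```

A useful property visible here: the fitted regression line always passes through the point of means $(\bar{x}, \bar{y})$, since $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$.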

D/slr.md

Lines changed: 74 additions & 0 deletions
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2021-10-27 07:07:00

title: "Simple linear regression"
chapter: "Statistical Models"
section: "Univariate normal data"
topic: "Simple linear regression"
definition: "Definition"

sources:
- authors: "Wikipedia"
  year: 2021
  title: "Simple linear regression"
  in: "Wikipedia, the free encyclopedia"
  pages: "retrieved on 2021-10-27"
  url: "https://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line"

def_id: "D163"
shortcut: "slr"
username: "JoramSoch"
---


**Definition:** Let $y$ and $x$ be two $n \times 1$ vectors.

Then, a statement asserting a linear relationship between $x$ and $y$

$$ \label{eq:slr-model}
y = \beta_0 + \beta_1 x + \varepsilon \; ,
$$

together with a statement asserting a [normal distribution](/D/mvn) for $\varepsilon$

$$ \label{eq:slr-noise}
\varepsilon \sim \mathcal{N}(0, \sigma^2 V)
$$

is called a univariate simple regression model or simply "simple linear regression".

* $y$ is called "dependent variable", "measured data" or "signal";

* $x$ is called "independent variable", "predictor" or "covariate";

* $V$ is called "covariance matrix" or "covariance structure";

* $\beta_1$ is called "slope of the [regression line](/D/regline)";

* $\beta_0$ is called "intercept of the [regression line](/D/regline)";

* $\varepsilon$ is called "noise", "errors" or "error terms";

* $\sigma^2$ is called "noise variance" or "error variance";

* $n$ is the number of observations.

When the covariance structure $V$ is equal to the $n \times n$ identity matrix, this is called simple linear regression with independent and identically distributed (i.i.d.) observations:

$$ \label{eq:slr-noise-iid}
V = I_n \quad \Rightarrow \quad \varepsilon \sim \mathcal{N}(0, \sigma^2 I_n) \quad \Rightarrow \quad \varepsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2) \; .
$$

In this case, the linear regression model can also be written as

$$ \label{eq:slr-model-sum}
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \; \varepsilon_i \sim \mathcal{N}(0, \sigma^2) \; .
$$

Otherwise, it is called simple linear regression with correlated observations.
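The distinction between i.i.d. and correlated observations amounts to the choice of $V$. The following NumPy sketch (illustrative only; the parameter values and the AR(1)-style covariance structure are assumptions, not taken from the definition) generates data from the model once with $V = I_n$ and once with a correlated $V$:

```python
import numpy as np

# Assumed model parameters for illustration: n, b0, b1, sigma^2
rng = np.random.default_rng(42)
n, b0, b1, sigma2 = 100, 1.0, 2.0, 0.5
x = rng.uniform(0, 5, size=n)

# i.i.d. case: V = I_n, so eps ~ N(0, sigma^2 I_n), i.e. eps_i iid N(0, sigma^2)
V_iid = np.eye(n)
eps_iid = rng.multivariate_normal(np.zeros(n), sigma2 * V_iid)
y_iid = b0 + b1 * x + eps_iid

# correlated case: an assumed AR(1)-style covariance structure V_ij = rho^|i-j|
rho = 0.8
idx = np.arange(n)
V_corr = rho ** np.abs(idx[:, None] - idx[None, :])
eps_corr = rng.multivariate_normal(np.zeros(n), sigma2 * V_corr)
y_corr = b0 + b1 * x + eps_corr
```

In both cases the mean structure $\beta_0 + \beta_1 x$ is identical; only the error covariance $\sigma^2 V$ differs.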
