
Commit 4f96182

added 3 definitions
1 parent 3fb0f45 commit 4f96182

3 files changed

Lines changed: 132 additions & 0 deletions


D/eblme.md

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2020-11-25 07:43:00

title: "Empirical Bayesian log model evidence"
chapter: "Model Selection"
section: "Bayesian model selection"
topic: "Log model evidence"
definition: "Empirical Bayesian log model evidence"

sources:
- authors: "Wikipedia"
  year: 2020
  title: "Empirical Bayes method"
  in: "Wikipedia, the free encyclopedia"
  pages: "retrieved on 2020-11-25"
  url: "https://en.wikipedia.org/wiki/Empirical_Bayes_method#Introduction"

def_id: "D114"
shortcut: "eblme"
username: "JoramSoch"
---


**Definition:** Let $m$ be a [generative model](/D/gm) with model parameters $\theta$ and hyper-parameters $\lambda$ implying the [likelihood function](/D/lf) $p(y \vert \theta, \lambda, m)$ and [prior distribution](/D/prior) $p(\theta \vert \lambda, m)$. Then, the [Empirical Bayesian](/D/eb) [log model evidence](/D/lme) is the logarithm of the [marginal likelihood](/D/ml), maximized with respect to the hyper-parameters:

$$ \label{eq:ebLME}
\mathrm{ebLME}(m) = \log p(y \vert \hat{\lambda}, m)
$$

where

$$ \label{eq:ML}
p(y \vert \lambda, m) = \int p(y \vert \theta, \lambda, m) \, p(\theta \vert \lambda, m) \, \mathrm{d}\theta
$$

and

$$ \label{eq:EB}
\hat{\lambda} = \operatorname*{arg\,max}_{\lambda} \log p(y \vert \lambda, m) \; .
$$
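*Editorial note (not part of the commit):* the definition above can be illustrated with a hypothetical numerical sketch. For the toy model $y_i \sim \mathcal{N}(\theta, 1)$ with prior $\theta \sim \mathcal{N}(0, \lambda)$, the marginal likelihood from eq. (ML) has a closed form, and eq. (EB) amounts to maximizing it over the prior variance $\lambda$; all names below are illustrative.

```python
# Hypothetical sketch: empirical Bayes for y_i ~ N(theta, 1), theta ~ N(0, lam),
# where the prior variance lam is the hyper-parameter maximized in eq. (EB).
import numpy as np

def log_marginal(y, lam):
    """Closed-form log p(y | lambda, m) after integrating out theta."""
    n = y.size
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.log(n * lam + 1)
            - 0.5 * (np.sum(y**2)
                     - n**2 * np.mean(y)**2 * lam / (n * lam + 1)))

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=50)

# eq. (EB): maximize the log marginal likelihood over a grid of lambda values
lams = np.linspace(1e-3, 10.0, 2000)
lmls = np.array([log_marginal(y, lam) for lam in lams])
lam_hat = lams[np.argmax(lmls)]

# eq. (ebLME): plug the maximizing hyper-parameter back in
ebLME = lmls.max()
```

A grid search stands in here for any univariate optimizer; in practice one would maximize $\log p(y \vert \lambda, m)$ with a gradient-based or bounded scalar method.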

D/uplme.md

Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2020-11-25 07:28:00

title: "Uniform-prior log model evidence"
chapter: "Model Selection"
section: "Bayesian model selection"
topic: "Log model evidence"
definition: "Uniform-prior log model evidence"

sources:
- authors: "Wikipedia"
  year: 2020
  title: "Lindley's paradox"
  in: "Wikipedia, the free encyclopedia"
  pages: "retrieved on 2020-11-25"
  url: "https://en.wikipedia.org/wiki/Lindley%27s_paradox#Bayesian_approach"

def_id: "D113"
shortcut: "uplme"
username: "JoramSoch"
---


**Definition:** Assume a [generative model](/D/gm) $m$ with [likelihood function](/D/lf) $p(y \vert \theta, m)$ and a [uniform](/D/prior-uni) [prior distribution](/D/prior) $p_{\mathrm{uni}}(\theta \vert m)$. Then, the [log model evidence](/D/lme) of this model is called "log model evidence with uniform prior" or "uniform-prior log model evidence" (upLME):

$$ \label{eq:upLME}
\mathrm{upLME}(m) = \log \int p(y \vert \theta, m) \, p_{\mathrm{uni}}(\theta \vert m) \, \mathrm{d}\theta \; .
$$
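*Editorial note (not part of the commit):* eq. (upLME) can be evaluated numerically for a hypothetical one-parameter model, here $y_i \sim \mathcal{N}(\theta, 1)$ with a flat prior on $[a, b]$, by integrating the likelihood against the constant prior density $1/(b-a)$; all names below are illustrative.

```python
# Hypothetical sketch: upLME for y_i ~ N(theta, 1) with a uniform prior on
# [a, b], evaluated by trapezoidal integration over a theta grid.
import numpy as np

def upLME(y, a, b, grid=10001):
    theta = np.linspace(a, b, grid)
    # log-likelihood of the whole sample at each grid point
    ll = np.sum(-0.5 * np.log(2 * np.pi)
                - 0.5 * (y[:, None] - theta)**2, axis=0)
    p_uni = 1.0 / (b - a)                  # flat prior density on [a, b]
    integrand = np.exp(ll) * p_uni
    dx = theta[1] - theta[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) / 2) * dx  # trapezoid
    return np.log(integral)

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, size=20)
val = upLME(y, -5.0, 5.0)
```

Note the behavior underlying Lindley's paradox (the cited source): widening the support $[a, b]$ dilutes the prior density, so the upLME decreases by roughly the log of the widening factor once the interval covers the likelihood mass.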

D/vblme.md

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2020-11-25 08:10:00

title: "Variational Bayesian log model evidence"
chapter: "Model Selection"
section: "Bayesian model selection"
topic: "Log model evidence"
definition: "Variational Bayesian log model evidence"

sources:
- authors: "Wikipedia"
  year: 2020
  title: "Variational Bayesian methods"
  in: "Wikipedia, the free encyclopedia"
  pages: "retrieved on 2020-11-25"
  url: "https://en.wikipedia.org/wiki/Variational_Bayesian_methods#Evidence_lower_bound"
- authors: "Bishop CM"
  year: 2006
  title: "Variational Inference"
  in: "Pattern Recognition and Machine Learning"
  pages: "pp. 462-474, eqs. 10.2-10.4"
  url: "https://www.springer.com/gp/book/9780387310732"

def_id: "D115"
shortcut: "vblme"
username: "JoramSoch"
---


**Definition:** Let $m$ be a [generative model](/D/gm) with model parameters $\theta$ implying the [likelihood function](/D/lf) $p(y \vert \theta, m)$. Moreover, assume a [prior distribution](/D/prior) $p(\theta \vert m)$, a resulting [posterior distribution](/D/post) $p(\theta \vert y, m)$ and an [approximate](/D/vb) [posterior distribution](/D/post) $q(\theta)$. Then, the [Variational Bayesian](/D/vb) [log model evidence](/D/lme) is the expectation of the [log-likelihood function](/D/llf) with respect to the approximate posterior, minus the [Kullback-Leibler divergence](/D/kl) between approximate posterior and true posterior distribution:

$$ \label{eq:vbLME}
\mathrm{vbLME}(m) = \mathcal{L}\left[q(\theta)\right] - \mathrm{KL}\left[q(\theta) || p(\theta \vert y)\right]
$$

where

$$ \label{eq:ELL}
\mathcal{L}\left[q(\theta)\right] = \int q(\theta) \log \frac{p(y,\theta|m)}{q(\theta)} \, \mathrm{d}\theta
$$

and

$$ \label{eq:KL}
\mathrm{KL}\left[q(\theta) || p(\theta \vert y)\right] = \int q(\theta) \log \frac{q(\theta)}{p(\theta|y,m)} \, \mathrm{d}\theta \; .
$$
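*Editorial note (not part of the commit):* the decomposition in eqs. (vbLME)/(ELL)/(KL) can be checked numerically on a hypothetical conjugate model, $y_i \sim \mathcal{N}(\theta, 1)$ with $\theta \sim \mathcal{N}(0, 1)$, where a Gaussian $q(\theta) = \mathcal{N}(m, s^2)$ gives the free energy $\mathcal{L}[q]$ in closed form. Setting $q$ equal to the exact posterior makes the KL term vanish, so $\mathcal{L}[q]$ attains the exact log model evidence; all names below are illustrative.

```python
# Hypothetical sketch: free-energy decomposition for y_i ~ N(theta, 1),
# theta ~ N(0, 1), with Gaussian approximate posterior q(theta) = N(m, s2).
import numpy as np

def elbo(y, m, s2):
    """L[q] = E_q[log p(y, theta | m)] + H[q], eq. (ELL) in closed form."""
    n = y.size
    e_joint = (-(n + 1) / 2 * np.log(2 * np.pi)
               - 0.5 * (np.sum(y**2) - 2 * n * np.mean(y) * m
                        + (n + 1) * (m**2 + s2)))
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)  # H[N(m, s2)]
    return e_joint + entropy

def lme(y):
    """Exact log model evidence under the N(0, 1) prior on theta."""
    n = y.size
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
            - 0.5 * (np.sum(y**2) - n**2 * np.mean(y)**2 / (n + 1)))

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=30)

# exact posterior: N(n*ybar/(n+1), 1/(n+1)); q at the posterior gives KL = 0
n = y.size
m_star, s2_star = n * np.mean(y) / (n + 1), 1.0 / (n + 1)
```

For any other $(m, s^2)$ the KL term in eq. (vbLME) is strictly positive, so $\mathcal{L}[q]$ falls strictly below the log model evidence, which is why it is called the evidence lower bound.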

0 commit comments
