---
layout: definition
mathjax: true

author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2020-11-19 04:55:00

title: "Cross-validated log model evidence"
chapter: "Model Selection"
section: "Bayesian model selection"
topic: "Log model evidence"
definition: "Cross-validated log model evidence"

sources:
  - authors: "Soch J, Allefeld C, Haynes JD"
    year: 2016
    title: "How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection"
    in: "NeuroImage"
    pages: "vol. 141, pp. 469-489, eqs. 13-15"
    url: "https://www.sciencedirect.com/science/article/pii/S1053811916303615"
    doi: "10.1016/j.neuroimage.2016.07.047"
  - authors: "Soch J, Meyer AP, Allefeld C, Haynes JD"
    year: 2017
    title: "How to improve parameter estimates in GLM-based fMRI data analysis: cross-validated Bayesian model averaging"
    in: "NeuroImage"
    pages: "vol. 158, pp. 186-195, eq. 6"
    url: "https://www.sciencedirect.com/science/article/pii/S105381191730527X"
    doi: "10.1016/j.neuroimage.2017.06.056"
  - authors: "Soch J, Allefeld C"
    year: 2018
    title: "MACS – a new SPM toolbox for model assessment, comparison and selection"
    in: "Journal of Neuroscience Methods"
    pages: "vol. 306, pp. 19-31, eqs. 14-15"
    url: "https://www.sciencedirect.com/science/article/pii/S0165027018301468"
    doi: "10.1016/j.jneumeth.2018.05.017"
  - authors: "Soch J"
    year: 2018
    title: "cvBMS and cvBMA: filling in the gaps"
    in: "arXiv stat.ME"
    pages: "arXiv:1807.01585"
    url: "https://arxiv.org/abs/1807.01585"

def_id: "D111"
shortcut: "cvlme"
username: "JoramSoch"
---


**Definition:** Let there be a [data set](/D/data) $y$ with mutually exclusive and collectively exhaustive subsets $y_1, \ldots, y_S$. Assume a [generative model](/D/gm) $m$ with model parameters $\theta$, implying a [likelihood function](/D/lf) $p(y \vert \theta, m)$ and a [non-informative](/D/prior-inf) [prior density](/D/prior) $p(\theta \vert m)$.

Then, the cross-validated log model evidence of $m$ is given by

$$ \label{eq:cvLME}
\mathrm{cvLME}(m) = \sum_{i=1}^{S} \log \int p( y_i \vert \theta, m ) \, p( \theta \vert y_{\neg i}, m ) \, \mathrm{d}\theta
$$

where $y_{\neg i} = \bigcup_{j \neq i} y_j$ is the union of all data subsets except $y_i$ and $p( \theta \vert y_{\neg i}, m )$ is the [posterior distribution](/D/post) obtained from $y_{\neg i}$ when using the [prior distribution](/D/prior) $p(\theta \vert m)$:

$$ \label{eq:post}
p( \theta \vert y_{\neg i}, m ) = \frac{p( y_{\neg i} \vert \theta, m ) \, p(\theta \vert m)}{p( y_{\neg i} \vert m )} \; .
$$
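As a numerical sketch of the definition, the following Python snippet evaluates the cvLME for a toy model not taken from the sources above: normally distributed data with known variance and a flat prior over the mean (standing in for a non-informative prior), with both the posterior in equation \eqref{eq:post} and the out-of-sample integral in equation \eqref{eq:cvLME} computed by quadrature on a grid. All variable names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy model m: y_j ~ N(mu, sigma^2) with known sigma; flat prior over mu
# stands in for a non-informative prior (illustrative assumption).
rng = np.random.default_rng(1)
y = rng.normal(1.0, 2.0, size=60)    # complete data set
sigma = 2.0                          # known standard deviation
S = 3                                # number of mutually exclusive subsets
folds = np.array_split(y, S)

# grid over mu for numerical integration
mu = np.linspace(-10.0, 10.0, 2001)
dmu = mu[1] - mu[0]

def log_lik(data, mu):
    """log p(data | mu, m) evaluated at every grid point of mu."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (data[:, None] - mu[None, :])**2 / (2 * sigma**2),
                  axis=0)

cvLME = 0.0
for i in range(S):
    y_i = folds[i]                                         # held-out subset
    y_rest = np.concatenate([folds[j] for j in range(S) if j != i])
    # posterior p(mu | y_rest, m) under the flat prior, normalized on the grid
    lp = log_lik(y_rest, mu)
    post = np.exp(lp - lp.max())
    post /= post.sum() * dmu
    # out-of-sample evidence: integral of p(y_i | mu, m) * p(mu | y_rest, m)
    oos = np.sum(np.exp(log_lik(y_i, mu)) * post) * dmu
    cvLME += np.log(oos)

print(cvLME)   # sum of the S log out-of-sample model evidences
```

Because each term uses a posterior trained only on the other subsets, the sum penalizes models that do not generalize across folds, which is what makes the cvLME usable with an improper prior.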