
Commit de25013: merge pull request #150 from StatProofBook/master, "update to master" (2 parents: f79277c + d2facb4)

21 files changed: 151 additions & 65 deletions

D/cvlme.md

Lines changed: 7 additions & 1 deletion

@@ -50,7 +50,7 @@ username: "JoramSoch"
 
 **Definition:** Let there be a [data set](/D/data) $y$ with mutually exclusive and collectively exhaustive subsets $y_1, \ldots, y_S$. Assume a [generative model](/D/gm) $m$ with model parameters $\theta$ implying a [likelihood function](/D/lf) $p(y \vert \theta, m)$ and a [non-informative](/D/prior-inf) [prior density](/D/prior) $p_{\mathrm{ni}}(\theta \vert m)$.
 
-Then, the cross-validated log model evidence of $m$ is given by
+Then, the cross-validated log model evidence (cvLME) of $m$ is given by
 
 $$ \label{eq:cvLME}
 \mathrm{cvLME}(m) = \sum_{i=1}^{S} \log \int p( y_i \vert \theta, m ) \, p( \theta \vert y_{\neg i}, m ) \, \mathrm{d}\theta
@@ -60,4 +60,10 @@ where $y_{\neg i} = \bigcup_{j \neq i} y_j$ is the union of all data subsets exc
 
 $$ \label{eq:post}
 p( \theta \vert y_{\neg i}, m ) = \frac{p( y_{\neg i} \vert \theta, m ) \, p_{\mathrm{ni}}(\theta \vert m)}{p( y_{\neg i} \vert m )} \; .
+$$
+
+One addend of the cvLME is referred to as the out-of-sample log model evidence (oosLME) of $m$ for the $i$-th data subset:
+
+$$ \label{eq:oosLME}
+\mathrm{oosLME}_i(m) = \log \int p( y_i \vert \theta, m ) \, p( \theta \vert y_{\neg i}, m ) \, \mathrm{d}\theta \; .
 $$
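The integrals above have closed forms only for conjugate models; to make the definition concrete, here is a minimal sketch (not part of the StatProofBook entry, all names hypothetical) that evaluates each oosLME by 1-D quadrature for normally distributed data with known variance and a flat prior on the mean, then sums the addends into the cvLME.

```python
import math
import random

def log_norm_pdf(x, mu, var):
    """Log density of N(mu, var) at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def cvlme_normal_known_var(folds, sigma2=1.0):
    """cvLME for y ~ N(mu, sigma2) with known sigma2 and a flat
    (non-informative) prior on mu. Each oosLME_i is the log of
    int p(y_i | mu) p(mu | y_{-i}) dmu, computed by midpoint quadrature."""
    total = 0.0
    for i, test_fold in enumerate(folds):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        m = sum(train) / len(train)
        post_var = sigma2 / len(train)  # p(mu | y_{-i}) = N(m, sigma2 / n_{-i})
        sd = math.sqrt(post_var)
        lo, hi, K = m - 10 * sd, m + 10 * sd, 4000
        dmu = (hi - lo) / K
        integral = 0.0
        for k in range(K):
            mu = lo + (k + 0.5) * dmu   # midpoint of the k-th subinterval
            log_lik = sum(log_norm_pdf(x, mu, sigma2) for x in test_fold)
            integral += math.exp(log_lik + log_norm_pdf(mu, m, post_var)) * dmu
        total += math.log(integral)     # oosLME_i, summed into the cvLME
    return total

random.seed(1)
folds = [[random.gauss(0.5, 1.0) for _ in range(20)] for _ in range(4)]
print(cvlme_normal_known_var(folds))
```

For this conjugate model the quadrature result can be checked against the analytic marginal likelihood of a normal prior times a normal likelihood; the brute-force version is shown only because it mirrors the definition term by term.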

I/PbA.md

Lines changed: 11 additions & 2 deletions

@@ -8,7 +8,11 @@ title: "Proof by Author"
 
 - [Covariance matrix of the multinomial distribution](/P/mult-cov)
 
-### JoramSoch (433 proofs)
+### aloctavodia (1 proof)
+
+- [Posterior predictive distribution is a marginal distribution of the joint likelihood](/P/postpred-jl)
+
+### JoramSoch (434 proofs)
 
 - [Accuracy and complexity for Bayesian linear regression](/P/blr-anc)
 - [Accuracy and complexity for Bayesian linear regression with known covariance](/P/blrkc-anc)
@@ -196,7 +200,7 @@ title: "Proof by Author"
 - [Marginal distributions for the matrix-normal distribution](/P/matn-marg)
 - [Marginal distributions of the multivariate normal distribution](/P/mvn-marg)
 - [Marginal distributions of the normal-gamma distribution](/P/ng-marg)
-- [Marginal likelihood is a definite integral of joint likelihood](/P/ml-jl)
+- [Marginal likelihood is a definite integral of the joint likelihood](/P/ml-jl)
 - [Maximum likelihood estimation can result in biased estimates](/P/mle-bias)
 - [Maximum likelihood estimation for binomial observations](/P/bin-mle)
 - [Maximum likelihood estimation for Dirichlet-distributed data](/P/dir-mle)
@@ -438,6 +442,7 @@ title: "Proof by Author"
 - [Variance of the normal distribution](/P/norm-var)
 - [Variance of the Poisson distribution](/P/poiss-var)
 - [Variance of the sum of two random variables](/P/var-sum)
+- [Weak law of large numbers](/P/mean-wlln)
 - [Weighted least squares for multiple linear regression](/P/mlr-wls)
 - [Weighted least squares for multiple linear regression](/P/mlr-wls2)
 - [Weighted least squares for simple linear regression](/P/slr-wls)
@@ -469,6 +474,10 @@ title: "Proof by Author"
 - [Variance of the exponential distribution](/P/exp-var)
 - [Variance of the log-normal distribution](/P/lognorm-var)
 
+### salbalkus (1 proof)
+
+- [The expected value minimizes the mean squared error](/P/mean-mse)
+
 ### StatProofBook (1 proof)
 
 - [Proof Template](/P/-temp-)

I/PbN.md

Lines changed: 4 additions & 1 deletion

@@ -97,7 +97,7 @@ title: "Proof by Number"
 | P88 | mvn-cond | [Conditional distributions of the multivariate normal distribution](/P/mvn-cond) | JoramSoch | 2020-03-20 |
 | P89 | jl-lfnprior | [Joint likelihood is the product of likelihood function and prior density](/P/jl-lfnprior) | JoramSoch | 2020-05-05 |
 | P90 | post-jl | [Posterior density is proportional to joint likelihood](/P/post-jl) | JoramSoch | 2020-05-05 |
-| P91 | ml-jl | [Marginal likelihood is a definite integral of joint likelihood](/P/ml-jl) | JoramSoch | 2020-05-05 |
+| P91 | ml-jl | [Marginal likelihood is a definite integral of the joint likelihood](/P/ml-jl) | JoramSoch | 2020-05-05 |
 | P92 | mvn-kl | [Kullback-Leibler divergence for the multivariate normal distribution](/P/mvn-kl) | JoramSoch | 2020-05-05 |
 | P93 | gam-kl | [Kullback-Leibler divergence for the gamma distribution](/P/gam-kl) | JoramSoch | 2020-05-05 |
 | P94 | beta-pdf | [Probability density function of the beta distribution](/P/beta-pdf) | JoramSoch | 2020-05-05 |
@@ -472,3 +472,6 @@ title: "Proof by Number"
 | P464 | prob-emp2 | [Probability of the empty set](/P/prob-emp2) | JoramSoch | 2024-08-08 |
 | P465 | prob-mon2 | [Monotonicity of probability](/P/prob-mon2) | JoramSoch | 2024-08-08 |
 | P466 | cdf-probexc | [Exceedance probability for a random variable in terms of cumulative distribution function](/P/cdf-probexc) | JoramSoch | 2024-09-06 |
+| P467 | postpred-jl | [Posterior predictive distribution is a marginal distribution of the joint likelihood](/P/postpred-jl) | aloctavodia | 2024-09-11 |
+| P468 | mean-wlln | [Weak law of large numbers](/P/mean-wlln) | JoramSoch | 2024-09-13 |
+| P469 | mean-mse | [The expected value minimizes the mean squared error](/P/mean-mse) | salbalkus | 2024-09-13 |

I/PbT.md

Lines changed: 4 additions & 1 deletion

@@ -230,7 +230,7 @@ title: "Proof by Topic"
 - [Marginal distributions for the matrix-normal distribution](/P/matn-marg)
 - [Marginal distributions of the multivariate normal distribution](/P/mvn-marg)
 - [Marginal distributions of the normal-gamma distribution](/P/ng-marg)
-- [Marginal likelihood is a definite integral of joint likelihood](/P/ml-jl)
+- [Marginal likelihood is a definite integral of the joint likelihood](/P/ml-jl)
 - [Maximum likelihood estimation can result in biased estimates](/P/mle-bias)
 - [Maximum likelihood estimation for binomial observations](/P/bin-mle)
 - [Maximum likelihood estimation for Dirichlet-distributed data](/P/dir-mle)
@@ -361,6 +361,7 @@ title: "Proof by Topic"
 - [Posterior model probabilities in terms of Bayes factors](/P/pmp-bf)
 - [Posterior model probabilities in terms of log model evidences](/P/pmp-lme)
 - [Posterior model probability in terms of log Bayes factor](/P/pmp-lbf)
+- [Posterior predictive distribution is a marginal distribution of the joint likelihood](/P/postpred-jl)
 - [Posterior probability of the alternative hypothesis for Bayesian linear regression](/P/blr-pp)
 - [Posterior probability of the alternative model for binomial observations](/P/bin-pp)
 - [Posterior probability of the alternative model for multinomial observations](/P/mult-pp)
@@ -493,6 +494,7 @@ title: "Proof by Topic"
 
 - [t-distribution is a special case of multivariate t-distribution](/P/t-mvt)
 - [t-test for multiple linear regression using contrast-based inference](/P/mlr-t)
+- [The expected value minimizes the mean squared error](/P/mean-mse)
 - [The log probability scoring rule is a strictly proper scoring rule](/P/lpsr-spsr)
 - [The p-value follows a uniform distribution under the null hypothesis](/P/pval-h0)
 - [The regression line goes through the center of mass point](/P/slr-comp)
@@ -527,6 +529,7 @@ title: "Proof by Topic"
 
 ### W
 
+- [Weak law of large numbers](/P/mean-wlln)
 - [Weighted least squares for multiple linear regression](/P/mlr-wls)
 - [Weighted least squares for multiple linear regression](/P/mlr-wls2)
 - [Weighted least squares for simple linear regression](/P/slr-wls)

I/PwS.md

Lines changed: 2 additions & 1 deletion

@@ -70,7 +70,7 @@ title: "Proofs without Source"
 - [Marginal distributions for the matrix-normal distribution](/P/matn-marg)
 - [Marginal distributions of the multivariate normal distribution](/P/mvn-marg)
 - [Marginal distributions of the normal-gamma distribution](/P/ng-marg)
-- [Marginal likelihood is a definite integral of joint likelihood](/P/ml-jl)
+- [Marginal likelihood is a definite integral of the joint likelihood](/P/ml-jl)
 - [Maximum likelihood estimation can result in biased estimates](/P/mle-bias)
 - [Maximum likelihood estimation for multinomial observations](/P/mult-mle)
 - [Maximum likelihood estimation for multiple linear regression](/P/mlr-mle)
@@ -113,6 +113,7 @@ title: "Proofs without Source"
 - [Ordinary least squares for the general linear model](/P/glm-ols)
 - [Parameter estimates for simple linear regression are uncorrelated after mean-centering](/P/slr-olscorr)
 - [Posterior density is proportional to joint likelihood](/P/post-jl)
+- [Posterior predictive distribution is a marginal distribution of the joint likelihood](/P/postpred-jl)
 - [Posterior probability of the alternative model for binomial observations](/P/bin-pp)
 - [Posterior probability of the alternative model for multinomial observations](/P/mult-pp)
 - [Probability density function of the beta distribution](/P/beta-pdf)

I/ToC.md

Lines changed: 30 additions & 37 deletions

@@ -10,11 +10,9 @@ title: "Table of Contents"
 
 
 <br>
-<section class="chapter" id="General Theorems">
-<h3>Chapter I: General Theorems</h3>
-</section>
+<h3 id="General Theorems">Chapter I: General Theorems</h3>
 
-1. Probability theory
+1. <p id="Probability theory">Probability theory</p>
 
 1.1. Random experiments <br>
 &emsp;&ensp; 1.1.1. *[Random experiment](/D/rexp)* <br>
@@ -119,11 +117,12 @@ title: "Table of Contents"
 &emsp;&ensp; 1.10.8. **[Expectation of a trace](/P/mean-tr)** <br>
 &emsp;&ensp; 1.10.9. **[Expectation of a quadratic form](/P/mean-qf)** <br>
 &emsp;&ensp; 1.10.10. **[Squared expectation of a product](/P/mean-prodsqr)** <br>
-&emsp;&ensp; 1.10.11. **[Law of total expectation](/P/mean-tot)** <br>
-&emsp;&ensp; 1.10.12. **[Law of the unconscious statistician](/P/mean-lotus)** <br>
-&emsp;&ensp; 1.10.13. **[Weak law of large numbers](/P/mean-wlln)** <br>
-&emsp;&ensp; 1.10.14. *[Expected value of a random vector](/D/mean-rvec)* <br>
-&emsp;&ensp; 1.10.15. *[Expected value of a random matrix](/D/mean-rmat)* <br>
+&emsp;&ensp; 1.10.11. **[Expected value minimizes squared error](/P/mean-mse)** <br>
+&emsp;&ensp; 1.10.12. **[Law of total expectation](/P/mean-tot)** <br>
+&emsp;&ensp; 1.10.13. **[Law of the unconscious statistician](/P/mean-lotus)** <br>
+&emsp;&ensp; 1.10.14. **[Weak law of large numbers](/P/mean-wlln)** <br>
+&emsp;&ensp; 1.10.15. *[Expected value of a random vector](/D/mean-rvec)* <br>
+&emsp;&ensp; 1.10.16. *[Expected value of a random matrix](/D/mean-rmat)* <br>
 
 1.11. Variance <br>
 &emsp;&ensp; 1.11.1. *[Definition](/D/var)* <br>
@@ -197,7 +196,7 @@ title: "Table of Contents"
 &emsp;&ensp; 1.18.8. **[Second central moment is variance](/P/momcent-2nd)** <br>
 &emsp;&ensp; 1.18.9. *[Standardized moment](/D/mom-stand)* <br>
 
-2. Information theory
+2. <p id="Information theory">Information theory</p>
 
 2.1. Shannon entropy <br>
 &emsp;&ensp; 2.1.1. *[Definition](/D/ent)* <br>
@@ -244,7 +243,7 @@ title: "Table of Contents"
 &emsp;&ensp; 2.5.8. **[Relation to discrete entropy](/P/kl-ent)** <br>
 &emsp;&ensp; 2.5.9. **[Relation to differential entropy](/P/kl-dent)** <br>
 
-3. Estimation theory
+3. <p id="Estimation theory">Estimation theory</p>
 
 3.1. Point estimates <br>
 &emsp;&ensp; 3.1.1. *[Mean squared error](/D/mse)* <br>
@@ -254,7 +253,7 @@ title: "Table of Contents"
 &emsp;&ensp; 3.2.1. *[Confidence interval](/D/ci)* <br>
 &emsp;&ensp; 3.2.2. **[Construction of confidence intervals using Wilks' theorem](/P/ci-wilks)** <br>
 
-4. Frequentist statistics
+4. <p id="Frequentist statistics">Frequentist statistics</p>
 
 4.1. Likelihood theory <br>
 &emsp;&ensp; 4.1.1. *[Likelihood function](/D/lf)* <br>
@@ -285,7 +284,7 @@ title: "Table of Contents"
 &emsp;&ensp; 4.3.10. *[p-value](/D/pval)* <br>
 &emsp;&ensp; 4.3.11. **[Distribution of p-value under null hypothesis](/P/pval-h0)** <br>
 
-5. Bayesian statistics
+5. <p id="Bayesian statistics">Bayesian statistics</p>
 
 5.1. Probabilistic modeling <br>
 &emsp;&ensp; 5.1.1. *[Generative model](/D/gm)* <br>
@@ -319,8 +318,8 @@ title: "Table of Contents"
 &emsp;&ensp; 5.3.2. **[Bayes' rule](/P/bayes-rule)** <br>
 &emsp;&ensp; 5.3.3. *[Empirical Bayes](/D/eb)* <br>
 &emsp;&ensp; 5.3.4. *[Variational Bayes](/D/vb)* <br>
-
-6. Machine learning
+
+6. <p id="Machine learning">Machine learning</p>
 
 6.1. Scoring rules <br>
 &emsp;&ensp; 6.1.1. *[Scoring rule](/D/sr)* <br>
@@ -333,11 +332,9 @@ title: "Table of Contents"
 
 
 <br>
-<section class="chapter" id="Probability Distributions">
-<h3>Chapter II: Probability Distributions</h3>
-</section>
+<h3 id="Probability Distributions">Chapter II: Probability Distributions</h3>
 
-1. Univariate discrete distributions
+1. <p id="Univariate discrete distributions">Univariate discrete distributions</p>
 
 1.1. Discrete uniform distribution <br>
 &emsp;&ensp; 1.1.1. *[Definition](/D/duni)* <br>
@@ -380,7 +377,7 @@ title: "Table of Contents"
 &emsp;&ensp; 1.5.3. **[Mean](/P/poiss-mean)** <br>
 &emsp;&ensp; 1.5.4. **[Variance](/P/poiss-var)** <br>
 
-2. Multivariate discrete distributions
+2. <p id="Multivariate discrete distributions">Multivariate discrete distributions</p>
 
 2.1. Categorical distribution <br>
 &emsp;&ensp; 2.1.1. *[Definition](/D/cat)* <br>
@@ -396,7 +393,7 @@ title: "Table of Contents"
 &emsp;&ensp; 2.2.4. **[Covariance](/P/mult-cov)** <br>
 &emsp;&ensp; 2.2.5. **[Shannon entropy](/P/mult-ent)** <br>
 
-3. Univariate continuous distributions
+3. <p id="Univariate continuous distributions">Univariate continuous distributions</p>
 
 3.1. Continuous uniform distribution <br>
 &emsp;&ensp; 3.1.1. *[Definition](/D/cuni)* <br>
@@ -525,7 +522,7 @@ title: "Table of Contents"
 &emsp;&ensp; 3.11.6. **[Skewness](/P/exg-skew)** <br>
 &emsp;&ensp; 3.11.7. **[Method of moments](/P/exg-mome)** <br>
 
-4. Multivariate continuous distributions
+4. <p id="Multivariate continuous distributions">Multivariate continuous distributions</p>
 
 4.1. Multivariate normal distribution <br>
 &emsp;&ensp; 4.1.1. *[Definition](/D/mvn)* <br>
@@ -569,7 +566,7 @@ title: "Table of Contents"
 &emsp;&ensp; 4.4.3. **[Kullback-Leibler divergence](/P/dir-kl)** <br>
 &emsp;&ensp; 4.4.4. **[Exceedance probabilities](/P/dir-ep)** <br>
 
-5. Matrix-variate continuous distributions
+5. <p id="Matrix-variate continuous distributions">Matrix-variate continuous distributions</p>
 
 5.1. Matrix-normal distribution <br>
 &emsp;&ensp; 5.1.1. *[Definition](/D/matn)* <br>
@@ -595,11 +592,9 @@ title: "Table of Contents"
 
 
 <br>
-<section class="chapter" id="Statistical Models">
-<h3>Chapter III: Statistical Models</h3>
-</section>
+<h3 id="Statistical Models">Chapter III: Statistical Models</h3>
 
-1. Univariate normal data
+1. <p id="Univariate normal data">Univariate normal data</p>
 
 1.1. Univariate Gaussian <br>
 &emsp;&ensp; 1.1.1. *[Definition](/D/ug)* <br>
@@ -732,7 +727,7 @@ title: "Table of Contents"
 &emsp;&ensp; 1.7.3. **[Log model evidence](/P/blrkc-lme)** <br>
 &emsp;&ensp; 1.7.4. **[Accuracy and complexity](/P/blrkc-anc)** <br>
 
-2. Multivariate normal data
+2. <p id="Multivariate normal data">Multivariate normal data</p>
 
 2.1. General linear model <br>
 &emsp;&ensp; 2.1.1. *[Definition](/D/glm)* <br>
@@ -763,7 +758,7 @@ title: "Table of Contents"
 &emsp;&ensp; 2.4.2. **[Posterior distribution](/P/mblr-post)** <br>
 &emsp;&ensp; 2.4.3. **[Log model evidence](/P/mblr-lme)** <br>
 
-3. Count data
+3. <p id="Count data">Count data</p>
 
 3.1. Binomial observations <br>
 &emsp;&ensp; 3.1.1. *[Definition](/D/bin-data)* <br>
@@ -803,7 +798,7 @@ title: "Table of Contents"
 &emsp;&ensp; 3.4.4. **[Posterior distribution](/P/poissexp-post)** <br>
 &emsp;&ensp; 3.4.5. **[Log model evidence](/P/poissexp-lme)** <br>
 
-4. Frequency data
+4. <p id="Frequency data">Frequency data</p>
 
 4.1. Beta-distributed data <br>
 &emsp;&ensp; 4.1.1. *[Definition](/D/beta-data)* <br>
@@ -817,7 +812,7 @@ title: "Table of Contents"
 &emsp;&ensp; 4.3.1. *[Definition](/D/betabin-data)* <br>
 &emsp;&ensp; 4.3.2. **[Method of moments](/P/betabin-mome)** <br>
 
-5. Categorical data
+5. <p id="Categorical data">Categorical data</p>
 
 5.1. Logistic regression <br>
 &emsp;&ensp; 5.1.1. *[Definition](/D/logreg)* <br>
@@ -826,11 +821,9 @@ title: "Table of Contents"
 
 
 <br>
-<section class="chapter" id="Model Selection">
-<h3>Chapter IV: Model Selection</h3>
-</section>
+<h3 id="Model Selection">Chapter IV: Model Selection</h3>
 
-1. Goodness-of-fit measures
+1. <p id="Goodness-of-fit measures">Goodness-of-fit measures</p>
 
 1.1. Residual variance <br>
 &emsp;&ensp; 1.1.1. *[Definition](/D/resvar)* <br>
@@ -856,7 +849,7 @@ title: "Table of Contents"
 &emsp;&ensp; 1.4.2. **[Relationship to coefficient of determination](/P/snr-rsq)** <br>
 &emsp;&ensp; 1.4.3. **[Relationship to maximum log-likelihood](/P/snr-mll)** <br>
 
-1. Classical information criteria
+2. <p id="Classical information criteria">Classical information criteria</p>
 
 2.1. Akaike information criterion <br>
 &emsp;&ensp; 2.1.1. *[Definition](/D/aic)* <br>
@@ -872,7 +865,7 @@ title: "Table of Contents"
 &emsp;&ensp; 2.3.1. *[Definition](/D/dic)* <br>
 &emsp;&ensp; 2.3.2. *[Deviance](/D/dev)* <br>
 
-2. Bayesian model selection
+3. <p id="Bayesian model selection">Bayesian model selection</p>
 
 3.1. Model evidence <br>
 &emsp;&ensp; 3.1.1. *[Definition](/D/me)* <br>

P/beta-cdf.md

Lines changed: 4 additions & 4 deletions

@@ -42,10 +42,10 @@ $$
 Then, the [cumulative distribution function](/D/cdf) of $X$ is
 
 $$ \label{eq:beta-cdf}
-F_X(x) = \frac{B(x; \alpha, \beta)}{B(\alpha, \beta)}
+F_X(x) = \frac{\mathrm{B}(x; \alpha, \beta)}{\mathrm{B}(\alpha, \beta)}
 $$
 
-where $B(a,b)$ is the beta function and $B(x;a,b)$ is the incomplete gamma function.
+where $\mathrm{B}(a,b)$ is the beta function and $\mathrm{B}(x;a,b)$ is the incomplete beta function.
 
 
 **Proof:** The [probability density function of the beta distribution](/P/beta-pdf) is:
@@ -67,11 +67,11 @@ $$
 With the definition of the incomplete beta function
 
 $$ \label{eq:inc-beta-fct}
-B(x;a,b) = \int_{0}^{x} t^{a-1} \, (1-t)^{b-1} \, \mathrm{d}t \; ,
+\mathrm{B}(x;a,b) = \int_{0}^{x} t^{a-1} \, (1-t)^{b-1} \, \mathrm{d}t \; ,
 $$
 
 we arrive at the final result given by equation \eqref{eq:beta-cdf}:
 
 $$ \label{eq:beta-cdf-qed}
-F_X(x) = \frac{\mathrm{B}(x; \alpha, \beta)}{\mathrm{B}(\alpha, \beta)} \; .
+F_X(x) = \frac{\mathrm{B}(x; \alpha, \beta)}{\mathrm{B}(\alpha, \beta)} \; .
 $$
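As a numerical illustration (not part of the proof above), the incomplete beta function can be evaluated directly from its integral definition and the resulting ratio checked against parameter choices with known closed-form CDFs; the helper below is a hypothetical sketch assuming $a, b \geq 1$ so the integrand is bounded.

```python
import math

def beta_cdf(x, a, b, K=20000):
    """F_X(x) = B(x; a, b) / B(a, b): incomplete beta function by
    midpoint-rule quadrature, complete beta function via the gamma function."""
    h = x / K
    inc_beta = sum(((k + 0.5) * h) ** (a - 1) * (1 - (k + 0.5) * h) ** (b - 1)
                   for k in range(K)) * h
    complete_beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # B(a, b)
    return inc_beta / complete_beta

# Closed-form checks: Beta(1,1) is uniform, so F(x) = x;
# Beta(2,1) has pdf 2t on [0,1], so F(x) = x^2.
print(beta_cdf(0.3, 1, 1), beta_cdf(0.3, 2, 1))
```

The midpoint rule is exact for the constant and linear integrands of these two test cases, so any discrepancy there would indicate a coding error rather than quadrature error.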

P/blr-lbf.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ username: "JoramSoch"
 ---
 
 
-**Theorem:** Let $y = \left[ y_1, \ldots, y_n \right]^\mathrm{T}$ be an $n \times 1$ vector of a [measured univariate signal](/D/data) and consider two [linear regression models](/D/mlr) with [design matrices](/D/mlr) $X_1, X_2$ and [precision matrices](/P/blr-prior) $P_1, P_2$, entailing potentially different [regression coefficients](/D/mlr) $\beta_1, \beta_2$ and [noise precisions](/D/blr-prior) $\tau_1, \tau_2$:
+**Theorem:** Let $y = \left[ y_1, \ldots, y_n \right]^\mathrm{T}$ be an $n \times 1$ vector of a [measured univariate signal](/D/data) and consider two [linear regression models](/D/mlr) with [design matrices](/D/mlr) $X_1, X_2$ and [precision matrices](/P/blr-prior) $P_1, P_2$, entailing potentially different [regression coefficients](/D/mlr) $\beta_1, \beta_2$ and [noise precisions](/P/blr-prior) $\tau_1, \tau_2$:
 
 $$ \label{eq:GLM-NG-12}
 \begin{split}

P/blr-postind.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ username: "JoramSoch"
 ---
 
 
-**Theorem:** Let $y = \left\lbrace y_1, \ldots, y_S \right\rbrace$ be a set of $S$ [conditionally independent data sets](/D/ind-cond) assumed to follow [linear regression models](/D/mlr) with [design matrices](/D/mlr) $X_1, \ldots, X_S$, [number of data points](/D/mlr) $n_1, \ldots, n_S$ and [precision matrices](/P/blr-prior) $P_1, \ldots, P_n$, governed by identical [regression coefficients](/D/mlr) $\beta$ and identical [noise precision](/D/blr-prior) $\tau$:
+**Theorem:** Let $y = \left\lbrace y_1, \ldots, y_S \right\rbrace$ be a set of $S$ [conditionally independent data sets](/D/ind-cond) assumed to follow [linear regression models](/D/mlr) with [design matrices](/D/mlr) $X_1, \ldots, X_S$, [number of data points](/D/mlr) $n_1, \ldots, n_S$ and [precision matrices](/P/blr-prior) $P_1, \ldots, P_S$, governed by identical [regression coefficients](/D/mlr) $\beta$ and identical [noise precision](/P/blr-prior) $\tau$:
 
 $$ \label{eq:GLM-NG-S}
 \begin{split}
