Thus, $f''_X(x)$ is negative at $x = \frac{\alpha-1}{\alpha+\beta-2}$, demonstrating that this is a maximum. To summarize:
* If $\alpha < 1$ and $\beta < 1$, then $f_X(x)$ diverges at both ends and both values from the set $\left\lbrace 0, 1 \right\rbrace$ can be seen as the mode of $X$.
* If $\alpha < 1$ or $\beta < 1$ (but not $\alpha < 1$ and $\beta < 1$), then the mode of $X$ is 0 (when $\alpha < 1$) or 1 (when $\beta < 1$), because $f_X(x)$ tends towards infinity at the corresponding end of the interval.
* If $\alpha = 1$ and $\beta = 1$, then $f_X(x)$ is constant and any value in the interval $\left[ 0,1 \right]$ can be seen as the mode of $X$.
* If $\alpha \geq 1$ and $\beta \geq 1$ (but not $\alpha = 1$ and $\beta = 1$), then there is an $x$ with $0 < x < 1$ at which $f'_X(x) = 0$ and $f''_X(x) < 0$, such that $f_X(x)$ reaches its maximum at $\mathrm{mode}(X) = \frac{\alpha-1}{\alpha+\beta-2}$.
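The interior-maximum case can be checked numerically. The following sketch (an illustration, not part of the proof; the shape parameters are arbitrary example values) evaluates the Beta density on a grid and compares the location of its peak to the closed-form mode:

```python
# Numerical check of the Beta mode formula for alpha, beta >= 1 (not both 1).
# The shape parameters a, b below are arbitrary example values.
import math
import numpy as np

a, b = 3.0, 5.0
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)   # beta function B(a, b)

x = np.linspace(1e-3, 1 - 1e-3, 100001)
pdf = x**(a - 1) * (1 - x)**(b - 1) / B                 # Beta(a, b) density

mode_numeric = x[np.argmax(pdf)]                        # peak on the grid
mode_formula = (a - 1) / (a + b - 2)                    # = 1/3 for a = 3, b = 5
print(mode_numeric, mode_formula)
```

For $\alpha = 3$, $\beta = 5$, both values agree at $1/3$ up to the grid resolution.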
P/blr-map.md
where $n$ is the [number of data points](/D/mlr).
**Proof:** Given the [generative model](/D/gm) in \eqref{eq:GLM} and the [prior distribution](/D/prior) in \eqref{eq:GLM-NG-prior}, the [posterior distribution](/D/post) for [multiple linear regression](/D/mlr) [is also a normal-gamma distribution](/P/blr-post)
**Proof:** Combining the [definition of the raw moment](/D/mom-raw) with the [probability density function of the chi-squared distribution](/P/chi2-pdf), we have:
this leads to the desired result when $m > -k/2$. Observe that, if $m$ is a non-negative integer, then $m > -k/2$ is always true. Therefore, all [moments](/D/mom) of a [chi-squared distribution](/D/chi2) exist and the $m$-th raw moment is given by the equation above.
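As a numerical aside (not part of the proof), the resulting moment formula $\mathrm{E}(X^m) = \frac{2^m \, \Gamma(k/2+m)}{\Gamma(k/2)}$ can be checked against the well-known mean $k$ and variance $2k$ of the chi-squared distribution:

```python
# Sanity check of the m-th raw moment of the chi-squared distribution:
# E(X^m) = 2^m * Gamma(k/2 + m) / Gamma(k/2), valid for m > -k/2.
import math

def chi2_raw_moment(m, k):
    return 2**m * math.gamma(k / 2 + m) / math.gamma(k / 2)

k = 5
m1 = chi2_raw_moment(1, k)        # first raw moment: the mean, k = 5
m2 = chi2_raw_moment(2, k)        # second raw moment: k * (k + 2) = 35
var = m2 - m1**2                  # variance: 2 * k = 10
print(m1, m2, var)
```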
P/dir-kl.md
**Theorem:** Let $X$ be a $1 \times k$ [random vector](/D/rvec). Assume two [Dirichlet distributions](/D/dir) $P$ and $Q$ specifying the probability distribution of $X$ as
$$ \label{eq:dirs}
\begin{split}
P: \; X &\sim \mathrm{Dir}(\alpha_1) \\
Q: \; X &\sim \mathrm{Dir}(\alpha_2) \; .
\end{split}
$$
Using the [expected value of a logarithmized Dirichlet variate](/P/dir-logmean)
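The closed form that this derivation arrives at can be sanity-checked numerically. The sketch below (an illustration, not part of the proof) implements the standard KL divergence between two Dirichlet distributions; the finite-difference digamma is an assumption made to stay within the standard library:

```python
# KL divergence between Dirichlet distributions P = Dir(alpha1), Q = Dir(alpha2):
#   KL(P||Q) = ln G(a1.) - sum_i ln G(a1i) - ln G(a2.) + sum_i ln G(a2i)
#              + sum_i (a1i - a2i) * (psi(a1i) - psi(a1.)),  a. = sum_i a_i
import math

def digamma(x, h=1e-6):
    # central-difference approximation of psi(x) = d/dx ln Gamma(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def dir_kl(alpha1, alpha2):
    a1, a2 = sum(alpha1), sum(alpha2)
    kl = math.lgamma(a1) - math.lgamma(a2)
    kl += sum(math.lgamma(y) - math.lgamma(x) for x, y in zip(alpha1, alpha2))
    kl += sum((x - y) * (digamma(x) - digamma(a1))
              for x, y in zip(alpha1, alpha2))
    return kl

print(dir_kl([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))   # identical distributions: 0
print(dir_kl([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))   # distinct distributions: > 0
```

In practice one would use `scipy.special.digamma` instead of the finite-difference helper.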
P/dir-mle.md
In the following, we will use a fixed-point iteration to maximize $\mathrm{LL}(\alpha)$. Given an initial guess for $\alpha$, we construct a lower bound on the likelihood function \eqref{eq:Dir-LL-der} which is tight at $\alpha$. The maximum of this bound is computed and it becomes the new guess. Because the [Dirichlet distribution](/D/dir) belongs to the [exponential family](/D/dist-expfam), the log-likelihood function is concave in $\alpha$ and the maximum is the only stationary point, such that the procedure is guaranteed to converge to the maximum.
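A minimal sketch of such a fixed-point scheme (in the spirit of Minka's method; the helper functions, their numerical tolerances, and the iteration counts are illustrative assumptions, not part of the proof) alternates between evaluating $\psi\left(\sum_k \alpha_k\right) + \overline{\ln x_j}$ and inverting the digamma function:

```python
# Fixed-point iteration for the Dirichlet MLE (sketch):
#   psi(alpha_j_new) = psi(sum_k alpha_k_old) + mean_i ln(x_ij)
import math

def digamma(x, h=1e-6):
    # central-difference approximation of psi(x) = d/dx ln Gamma(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def trigamma(x, h=1e-4):
    # central-difference approximation of psi'(x)
    return (digamma(x + h) - digamma(x - h)) / (2 * h)

def inv_digamma(y, iters=20):
    # Newton inversion of psi, with Minka's initialization
    x = math.exp(y) + 0.5 if y >= -2.22 else -1.0 / (y - digamma(1.0))
    for _ in range(iters):
        x -= (digamma(x) - y) / trigamma(x)
    return x

def dirichlet_mle(X, iters=200):
    # X: list of probability vectors (each row sums to 1)
    N, k = len(X), len(X[0])
    logbar = [sum(math.log(row[j]) for row in X) / N for j in range(k)]
    alpha = [1.0] * k                       # initial guess
    for _ in range(iters):
        psi_sum = digamma(sum(alpha))
        alpha = [inv_digamma(psi_sum + logbar[j]) for j in range(k)]
    return alpha
```

At convergence, $\psi(\hat{\alpha}_j) = \psi\left(\sum_k \hat{\alpha}_k\right) + \overline{\ln x_j}$ holds for each $j$, which is exactly the stationarity condition of the log-likelihood.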