Commit b524cad

ENH: Update to support latex2e modern recommendations
Search for and replace obsolete font commands in all .tex:
- Replace `{\bf ...}` with `\textbf{...}`
- Replace `{\it ...}` with `\textit{...}`
- Replace `{\tt ...}` with `\texttt{...}`
- Replace `{\rm ...}` with `\textrm{...}`
- Replace `{\sc ...}` with `\textsc{...}`
- Replace `{\sf ...}` with `\textsf{...}`
- Replace `{\sl ...}` with `\textsl{...}`
- Replace `{\cal ...}` with `\mathcal{...}`
- Replace `{\em ...}` with `\emph{...}`

Search for and replace other obsolete commands:
- Replace `\over` with `\frac{...}{...}` or `\overline{...}`:
1 parent c9e9bec commit b524cad
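The mechanical search-and-replace described in the commit message could be scripted in a few lines. This is a hypothetical reconstruction (the commit itself does not include a script), and the simple regex below only handles the common non-nested case `{\bf text}`; groups containing further braces would need a real brace-matching pass:

```python
import re

# Map obsolete two-letter font switches to their LaTeX2e replacements.
# \cal is math-mode only, so it maps to \mathcal rather than a \text... form.
REPLACEMENTS = {
    "bf": r"\textbf", "it": r"\textit", "tt": r"\texttt",
    "rm": r"\textrm", "sc": r"\textsc", "sf": r"\textsf",
    "sl": r"\textsl", "em": r"\emph", "cal": r"\mathcal",
}

# Matches {\bf some text} with no nested braces inside the group.
PATTERN = re.compile(r"\{\\(bf|it|tt|rm|sc|sf|sl|em|cal)\s+([^{}]*)\}")

def modernize(tex: str) -> str:
    """Rewrite obsolete font-switch groups into LaTeX2e commands."""
    return PATTERN.sub(lambda m: f"{REPLACEMENTS[m.group(1)]}{{{m.group(2)}}}", tex)

print(modernize(r"{\bf Luis} wrote {\em most} of the {\tt Examples} code."))
# -> \textbf{Luis} wrote \emph{most} of the \texttt{Examples} code.
```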

6 files changed

Lines changed: 35 additions & 39 deletions


SoftwareGuide/Latex/04-Contributors.tex

Lines changed: 24 additions & 25 deletions
@@ -16,65 +16,64 @@ \chapter*{Contributors}
 guide and their contributions.
 
-{\bf Luis Ib\'{a}\~{n}ez} is principal author of this text.
+\textbf{Luis Ib\'{a}\~{n}ez} is principal author of this text.
 He assisted in the design and layout of the text, implemented the bulk of
 the \LaTeX{} and CMake build process, and was responsible for the bulk of
 the content. He also developed most of the example code found in the
 \code{Insight/Examples} directory.
 
-{\bf Will Schroeder} helped design and establish the organization
+\textbf{Will Schroeder} helped design and establish the organization
 of this text and the \code{Insight/Examples} directory. He is principal
 content editor, and has authored several chapters.
 
-{\bf Lydia Ng} authored the description for the registration framework
+\textbf{Lydia Ng} authored the description for the registration framework
 and its components, the section on the multiresolution framework, and
 the section on deformable registration methods. She also edited the
 section on the resampling image filter and the sections on various
 level set segmentation algorithms.
 
-{\bf Joshua Cates} authored the iterators chapter and the text and examples
+\textbf{Joshua Cates} authored the iterators chapter and the text and examples
 describing watershed segmentation. He also co-authored the level-set
 segmentation material.
 
-{\bf Jisung Kim} authored the chapter on the statistics framework.
+\textbf{Jisung Kim} authored the chapter on the statistics framework.
 
-{\bf Julien Jomier} contributed the chapter on spatial objects and examples on
+\textbf{Julien Jomier} contributed the chapter on spatial objects and examples on
 model-based registration using spatial objects.
 
-{\bf Karthik Krishnan} reconfigured the process for automatically generating
+\textbf{Karthik Krishnan} reconfigured the process for automatically generating
 images from all the examples. Added a large number of new examples and updated
 the Filtering and Segmentation chapters for the second edition.
 
-{\bf Stephen Aylward} contributed material describing spatial objects and
+\textbf{Stephen Aylward} contributed material describing spatial objects and
 their application.
 
-{\bf Tessa Sundaram} contributed the section on deformable registration using
+\textbf{Tessa Sundaram} contributed the section on deformable registration using
 the finite element method.
 
-{\bf Mark Foskey} contributed the examples on the
+\textbf{Mark Foskey} contributed the examples on the
 \doxygen{AutomaticTopologyMeshSource} class.
 
-{\bf Mathieu Malaterre} contributed the entire section on the description and
+\textbf{Mathieu Malaterre} contributed the entire section on the description and
 use of DICOM readers and writers based on the GDCM library. He also contributed
 an example on the use of the VTKImageIO class.
 
-{\bf Gavin Baker} contributed the section on how to write composite filters.
+\textbf{Gavin Baker} contributed the section on how to write composite filters.
 Also known as minipipeline filters.
 
 Since the software guide is generated in part from the ITK source code
 itself, many ITK developers have been involved in updating and
-extending the ITK documentation. These include {\bf David Doria},
-{\bf Bradley Lowekamp}, {\bf Mark Foskey}, {\bf Ga\"{e}tan Lehmann},
-{\bf Andreas Schuh}, {\bf Tom Vercauteren}, {\bf Cory Quammen}, {\bf Daniel Blezek},
-{\bf Paul Hughett}, {\bf Matthew McCormick}, {\bf Josh Cates}, {\bf Arnaud Gelas},
-{\bf Jim Miller}, {\bf Brad King}, {\bf Gabe Hart}, {\bf Hans Johnson}.
-
-{\bf Hans Johnson}, {\bf Kent Williams}, {\bf Constantine Zakkaroff}, {\bf
-Xiaoxiao Liu}, {\bf Ali Ghayoor}, and {\bf Matthew McCormick} updated
+extending the ITK documentation. These include \textbf{David Doria},
+\textbf{Bradley Lowekamp}, \textbf{Mark Foskey}, \textbf{Ga\"{e}tan Lehmann},
+\textbf{Andreas Schuh}, \textbf{Tom Vercauteren}, \textbf{Cory Quammen}, \textbf{Daniel Blezek},
+\textbf{Paul Hughett}, \textbf{Matthew McCormick}, \textbf{Josh Cates}, \textbf{Arnaud Gelas},
+\textbf{Jim Miller}, \textbf{Brad King}, \textbf{Gabe Hart}, \textbf{Hans Johnson}.
+
+\textbf{Hans Johnson}, \textbf{Kent Williams}, \textbf{Constantine Zakkaroff}, \textbf{Xiaoxiao Liu}, \textbf{Ali Ghayoor}, and \textbf{Matthew McCormick} updated
 the documentation for the initial ITK Version 4 release.
 
-{\bf Luis Ib\'{a}\~{n}ez} and {\bf S\'{e}bastien Barr\'{e}} designed the
-original Book 1 cover. {\bf Xiaoxiao Liu}, {\bf Bill Lorensen},
-{\bf Luis Ib\'{a}\~{n}ez}, and {\bf Matthew McCormick} created the 3D printed anatomical
-objects that were photographed by {\bf S\'{e}bastien Barr\'{e}} for the Book 2 cover.
-{\bf Steve Jordan} designed the layout of the covers.
+\textbf{Luis Ib\'{a}\~{n}ez} and \textbf{S\'{e}bastien Barr\'{e}} designed the
+original Book 1 cover. \textbf{Xiaoxiao Liu}, \textbf{Bill Lorensen},
+\textbf{Luis Ib\'{a}\~{n}ez}, and \textbf{Matthew McCormick} created the 3D printed anatomical
+objects that were photographed by \textbf{S\'{e}bastien Barr\'{e}} for the Book 2 cover.
+\textbf{Steve Jordan} designed the layout of the covers.

SoftwareGuide/Latex/CellularAggregates.tex

Lines changed: 2 additions & 2 deletions
@@ -47,10 +47,10 @@ \section{Gene Network Modeling}
 expression
 
 \begin{equation}
-\frac{\partial{G}}{\partial t} = \left[ ABCDE + A\over{B} \right]
+\frac{\partial{G}}{\partial t} = \left[ ABCDE + A\overline{B} \right]
 \end{equation}
 
-Where the $\over{B}$ represents the repressors and the normal letters represents
+Where the $\overline{B}$ represents the repressors and the normal letters represents
 the enhancers. It is known from boolean algebra that any boolean polynomial can
 be expressed as a sum of products composed by the polynomial terms and their
 negations.
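A note on why this particular fix matters: `\over` is not an overbar at all but a plain-TeX fraction primitive, so everything to its left becomes a numerator and everything to its right a denominator, silently mangling the formula; `\overline{B}` draws the bar that the repressor notation intends. A minimal sketch of the distinction:

```latex
% Obsolete: \over turns its surroundings into a fraction.
% $A \over B$   ==>  the fraction A/B, NOT "A times B-bar".
% Modern LaTeX2e equivalents:
\begin{equation}
  \frac{A}{B}       % explicit fraction, replaces ...\over...
\end{equation}
\begin{equation}
  A\overline{B}     % bar over B: the repressor notation intended here
\end{equation}
```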

SoftwareGuide/Latex/DesignAndFunctionality/AnisotropicDiffusionFiltering.tex

Lines changed: 3 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -36,8 +36,7 @@
 conductance at areas of large $|\nabla g|$, and can be any one of a number of
 functions. The literature has shown \begin{equation} c(|\nabla g|) =
 e^{-\frac{|\nabla g|^{2}}{2k^{2}}} \end{equation} to be quite effective.
-Notice that conductance term introduces a free parameter $k$, the {\em
-conductance parameter}, that controls the sensitivity of the process to edge
+Notice that conductance term introduces a free parameter $k$, the \emph{conductance parameter}, that controls the sensitivity of the process to edge
 contrast. Thus, anisotropic diffusion entails two free parameters: the
 conductance parameter, $k$, and the time parameter, $t$, that is analogous to
 $\sigma$, the effective width of the filter when using Gaussian kernels.
@@ -63,9 +62,8 @@
 data (such as the color cryosection data of the Visible Human Project).
 
 For a vector-valued input $\vec{F}:U \mapsto \Re^{m}$ the process takes the
-form \begin{equation} \vec{F}_{t} = \nabla \cdot c({\cal D}\vec{F}) \vec{F},
-\label{eq:vector_diff} \end{equation} where ${\cal D}\vec{F}$ is a {\em
-dissimilarity} measure of $\vec{F}$, a generalization of the gradient magnitude
+form \begin{equation} \vec{F}_{t} = \nabla \cdot c(\mathcal{D}\vec{F}) \vec{F},
+\label{eq:vector_diff} \end{equation} where $\mathcal{D}\vec{F}$ is a \emph{dissimilarity} measure of $\vec{F}$, a generalization of the gradient magnitude
 to vector-valued images, that can incorporate linear and nonlinear coordinate
 transformations on the range of $\vec{F}$. In this way, the smoothing of the
 multiple images associated with vector-valued data is coupled through the
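The scalar diffusion process with the exponential conductance $c(|\nabla g|) = e^{-|\nabla g|^2/2k^2}$ shown in the hunk above can be sketched in NumPy. This is an illustrative toy, not ITK's anisotropic diffusion filter; it uses periodic boundaries (via `np.roll`) and a fixed explicit time step for brevity:

```python
import numpy as np

def perona_malik(img, k=0.1, dt=0.2, steps=10):
    """Explicit scalar anisotropic diffusion with exponential conductance
    c(d) = exp(-d^2 / (2 k^2)); dt <= 0.25 keeps the 4-neighbor scheme stable."""
    u = img.astype(float).copy()

    def c(d):
        # Conductance attenuates flow across strong edges (large |d|).
        return np.exp(-(d ** 2) / (2.0 * k ** 2))

    for _ in range(steps):
        # Differences toward each 4-neighbor (periodic boundaries for brevity).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Flow toward each neighbor, weighted by its conductance.
        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```

Small `k` preserves edges (conductance collapses at strong gradients) while flat-region noise is smoothed away over the `steps` iterations, which play the role of the time parameter $t$.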

SoftwareGuide/Latex/DesignAndFunctionality/ConfidenceConnectedOnBrainWeb.tex

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1,5 +1,5 @@
 \subsubsection{Application of the Confidence Connected filter on the Brain Web Data}
-This section shows some results obtained by applying the Confidence Connected filter on the BrainWeb database. The filter was applied on a 181 $\times$ 217 $\times$ 181 crosssection of the {\it brainweb165a10f17} dataset. The data is a MR T1 acquisition, with an intensity non-uniformity of 20\% and a slice thickness 1mm. The dataset may be obtained from
+This section shows some results obtained by applying the Confidence Connected filter on the BrainWeb database. The filter was applied on a 181 $\times$ 217 $\times$ 181 crosssection of the \textit{brainweb165a10f17} dataset. The data is a MR T1 acquisition, with an intensity non-uniformity of 20\% and a slice thickness 1mm. The dataset may be obtained from
 \code{https://www.bic.mni.mcgill.ca/brainweb/} or
 \code{https://data.kitware.com/\#folder/5882712d8d777f4f3f3072df}
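The confidence-connected strategy this section applies (grow a region from a seed, keep pixels whose intensity falls within mean ± multiplier·stddev of the current region, then re-estimate the statistics and repeat) can be sketched with a plain flood fill. This is a toy NumPy illustration of the idea, not ITK's ConfidenceConnected filter:

```python
import numpy as np
from collections import deque

def confidence_connected(img, seed, multiplier=2.5, iterations=2, radius=1):
    """Grow a region from `seed`: keep 4-connected pixels whose intensity
    lies in [mean - m*std, mean + m*std] of the region, then re-estimate."""
    img = np.asarray(img, dtype=float)
    r0, c0 = seed
    # Initial statistics from a small neighborhood around the seed.
    patch = img[max(r0 - radius, 0):r0 + radius + 1,
                max(c0 - radius, 0):c0 + radius + 1]
    mean, std = patch.mean(), patch.std()
    mask = np.zeros(img.shape, dtype=bool)
    for _ in range(iterations):
        lo, hi = mean - multiplier * std, mean + multiplier * std
        mask[:] = False
        mask[seed] = True
        queue = deque([seed])
        while queue:  # 4-connected flood fill within [lo, hi]
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and not mask[nr, nc] and lo <= img[nr, nc] <= hi):
                    mask[nr, nc] = True
                    queue.append((nr, nc))
        # Re-estimate statistics from the grown region and repeat.
        mean, std = img[mask].mean(), img[mask].std()
    return mask
```

On MR data such as the BrainWeb volume, the multiplier trades off leakage into neighboring tissue against under-segmentation of the seeded tissue class.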

SoftwareGuide/Latex/DesignAndFunctionality/VisualizingDeformationFieldsUsingParaview.tex

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -16,9 +16,9 @@ \subsection{Visualizing 2D deformation fields}
 
 Load the Deformation field in Paraview. (The deformation field must be capable of handling vector data, such as MetaImages). Paraview shows a color map of the magnitudes of the deformation fields as shown in \ref{fig:ParaviewScreenshot1}.
 
-Convert the deformation field to 3D vector data using a {\it Calculator}. The Calculator may be found in the {\it Filter} pull down menu. A screenshot of the calculator tab is shown in Figure \ref{fig:ParaviewScreenshot2}. Although the deformation field is a 2D vector, we will generate a 3D vector with the third component set to 0 since Paraview generates glyphs only for 3D vectors. You may now apply a glyph of arrows to the resulting 3D vector field by using {\it Glyph} on the menu bar. The glyphs obtained will be very dense since a glyph is generated for each point in the data set. To better visualize the deformation field, you may adopt one of the following approaches.
+Convert the deformation field to 3D vector data using a \textit{Calculator}. The Calculator may be found in the \textit{Filter} pull down menu. A screenshot of the calculator tab is shown in Figure \ref{fig:ParaviewScreenshot2}. Although the deformation field is a 2D vector, we will generate a 3D vector with the third component set to 0 since Paraview generates glyphs only for 3D vectors. You may now apply a glyph of arrows to the resulting 3D vector field by using \textit{Glyph} on the menu bar. The glyphs obtained will be very dense since a glyph is generated for each point in the data set. To better visualize the deformation field, you may adopt one of the following approaches.
 
-Reduce the number of glyphs by reducing the number in {\it Max. Number of Glyphs} to a reasonable amount. This uniformly downsamples the number of glyphs. Alternatively, you may apply a {\it Threshold} filter to the {\it Magnitude} of the vector dataset and then glyph the vector data that lie above the threshold. This eliminates the smaller deformation fields that clutter the display. You may now reduce the number of glyphs to a reasonable value.
+Reduce the number of glyphs by reducing the number in \textit{Max. Number of Glyphs} to a reasonable amount. This uniformly downsamples the number of glyphs. Alternatively, you may apply a \textit{Threshold} filter to the \textit{Magnitude} of the vector dataset and then glyph the vector data that lie above the threshold. This eliminates the smaller deformation fields that clutter the display. You may now reduce the number of glyphs to a reasonable value.
 
 Figure \ref{fig:ParaviewScreenshot3} shows the vector field visualized using Paraview by thresholding the vector magnitudes by 2.1 and restricting the number of glyphs to 100.
 
@@ -46,7 +46,7 @@ \subsection{Visualizing 2D deformation fields}
 
 
 \subsection{Visualizing 3D deformation fields}
-Let us create a 3D deformation field. We will use Thin Plate Splines to warp a 3D dataset and create a deformation field. We will pick a set of point landmarks and translate them to provide a specification of correspondences at point landmarks. Note that the landmarks have been picked randomly for purposes of illustration and are not intended to portray a true deformation. The landmarks may be used to produce a deformation field in several ways. Most techniques minimize some regularizing functional representing the irregularity of the deformation field, which is usually some function of the spatial derivatives of the field. Here will we use {\it thin plate splines}. Thin plate splines minimize the regularizing functional
+Let us create a 3D deformation field. We will use Thin Plate Splines to warp a 3D dataset and create a deformation field. We will pick a set of point landmarks and translate them to provide a specification of correspondences at point landmarks. Note that the landmarks have been picked randomly for purposes of illustration and are not intended to portray a true deformation. The landmarks may be used to produce a deformation field in several ways. Most techniques minimize some regularizing functional representing the irregularity of the deformation field, which is usually some function of the spatial derivatives of the field. Here will we use \textit{thin plate splines}. Thin plate splines minimize the regularizing functional
 
 \begin{equation}
 I[f(x,y)] = \iint (f^2_{xx} + 2 f^2_{xy} + f^2_{yy}) dx dy
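Among all interpolants of the landmarks, the minimizer of the functional $I[f]$ above is a thin plate spline built from the kernel $U(r) = r^2 \log r$ plus an affine part. A minimal NumPy sketch of 2D TPS interpolation (assuming non-collinear landmarks so the linear system is solvable; not the ITK transform class):

```python
import numpy as np

def tps_fit(points, values, eps=1e-12):
    """Solve for TPS coefficients interpolating `values` at 2D `points`.
    Returns n kernel weights followed by 3 affine coefficients."""
    p = np.asarray(points, float)
    n = len(p)
    r2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    K = 0.5 * r2 * np.log(r2 + eps)        # U(r) = r^2 log r = (1/2) r^2 log r^2
    P = np.hstack([np.ones((n, 1)), p])    # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.concatenate([np.asarray(values, float), np.zeros(3)])
    return np.linalg.solve(A, b)

def tps_eval(points, coef, x):
    """Evaluate the fitted spline at query points `x`."""
    p = np.asarray(points, float)
    x = np.asarray(x, float)
    r2 = ((x[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    U = 0.5 * r2 * np.log(r2 + 1e-12)
    w, a = coef[:len(p)], coef[len(p):]
    return U @ w + a[0] + x @ a[1:]
```

Fitting one spline per displacement component at the picked landmarks yields a smooth dense deformation field of the kind visualized in this subsection (the 3D case uses $U(r) = r$ instead, but the structure of the solve is the same).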

SoftwareGuide/Latex/DesignAndFunctionality/WatershedSegmentation.tex

Lines changed: 2 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -87,8 +87,7 @@ \subsection{Overview}
 operators, and the $f$ in question will therefore have floating point
 values. The bottom-up strategy starts with seeds at the local minima in the
 image and grows regions outward and upward at discrete intensity levels
-(equivalent to a sequence of morphological operations and sometimes called {\em
-morphological watersheds} \cite{Serra1982}.) This limits the accuracy by
+(equivalent to a sequence of morphological operations and sometimes called \emph{morphological watersheds} \cite{Serra1982}.) This limits the accuracy by
 enforcing a set of discrete gray levels on the image.
 
 \begin{figure}
@@ -109,7 +108,7 @@ \subsection{Overview}
 initial segmentation is passed to a second sub-filter that generates a
 hierarchy of basins to a user-specified maximum watershed depth. The
 relabeler object at the end of the mini-pipeline uses the hierarchy and the
-initial segmentation to produce an output image at any scale {\em below} the
+initial segmentation to produce an output image at any scale \emph{below} the
 user-specified maximum. Data objects are cached in the mini-pipeline so that
 changing watershed depths only requires a (fast) relabeling of the basic
 segmentation. The three parameters that control the filter are shown in
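The bottom-up flooding described in the first hunk above can be sketched in 1D: seed a label at each local minimum, then admit pixels basin by basin at discrete gray levels. This is a toy illustration of the morphological-watershed idea, not ITK's watershed filter; plateau minima are ignored and a ridge pixel simply joins whichever basin reaches it first:

```python
import numpy as np
from collections import deque

def morphological_watershed_1d(f):
    """Label each pixel of a 1D signal with the basin of a local minimum,
    flooding upward one discrete intensity level at a time."""
    f = np.asarray(f)
    n = len(f)
    labels = np.zeros(n, dtype=int)
    next_label = 1
    # Seed a distinct label at each strict local minimum.
    for i in range(n):
        left = f[i - 1] if i > 0 else np.inf
        right = f[i + 1] if i < n - 1 else np.inf
        if f[i] < left and f[i] < right:
            labels[i] = next_label
            next_label += 1
    # Flood level by level: an unlabeled pixel at or below the current
    # level joins the basin of an already-labeled neighbor.
    for level in np.unique(f):
        frontier = deque(i for i in range(n) if labels[i] and f[i] <= level)
        while frontier:
            i = frontier.popleft()
            for j in (i - 1, i + 1):
                if 0 <= j < n and labels[j] == 0 and f[j] <= level:
                    labels[j] = labels[i]
                    frontier.append(j)
    return labels
```

The discrete levels are exactly what limits accuracy in the text's sense: two basins separated by a ridge lower than one quantization step would merge.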
