## Usage
To demonstrate the application of the toolbox, two Jupyter notebooks have been provided in the `Examples/` directory:
1. <a href="https://nbviewer.org/github/naplab/PyTCI/blob/main/Examples/Example-Toy.ipynb"><strong>Example-Toy</strong></a>: Shows how to apply the TCI method to a toy model that integrates sound energy within a gamma-distributed window. Covers most of the functionality of the toolbox.
2. <a href="https://nbviewer.org/github/naplab/PyTCI/blob/main/Examples/Example-DeepSpeech.ipynb"><strong>Example-DeepSpeech</strong></a>: Shows how to use the TCI method to estimate integration windows from the DeepSpeech2 model described in the paper, implemented in PyTorch. This notebook requires the pretrained model and speech audio clips in `Examples/resources.tar` to be extracted and placed in a directory named `Examples/resources`. It also has extra dependencies that need to be installed.
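The extraction step above can be sketched as follows. This is a minimal sketch assuming `resources.tar` unpacks into a top-level `resources/` directory; adjust the paths if the archive layout differs.

```shell
# From the repository root: extract the pretrained model and speech clips
# so the DeepSpeech notebook can find them under Examples/resources.
cd Examples
tar -xf resources.tar   # assumed to unpack a resources/ directory here
```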
<strong>NOTE</strong>: These notebooks might not render as intended on GitHub. For correct rendering, open them locally in Jupyter or view them with nbviewer.
## Installation
To install or update this package through pip, run the following command: