
Commit 14b8c1f

updated table of contents

1 parent 8af7225 commit 14b8c1f

1 file changed: README.md (6 additions & 4 deletions)
@@ -24,7 +24,7 @@
 - **`2024/10/22`**: LoRA support for Hyper SDXL.
 - **`2024/8/1`**: Orbax is the new default checkpointer. You can still use `pipeline.save_pretrained` after training to save in diffusers format.
 - **`2024/7/20`**: Dreambooth training for Stable Diffusion 1.x,2.x is now supported.
-- **`2025/7/29`**: LTX-Video text2video generation is now supported.
+- **`2025/7/29`**: LTX-Video text2vid generation is now supported.
 
 # Overview
 
@@ -42,7 +42,7 @@ MaxDiffusion supports
 * Load Multiple LoRA (SDXL inference).
 * ControlNet inference (Stable Diffusion 1.4 & SDXL).
 * Dreambooth training support for Stable Diffusion 1.x,2.x.
-* LTX-Video (inference).
+* LTX-Video text2vid (inference).
 
 
 # Table of Contents
@@ -55,6 +55,7 @@ MaxDiffusion supports
 - [Training](#training)
 - [Dreambooth](#dreambooth)
 - [Inference](#inference)
+- [LTX-Video](#ltx-video)
 - [Flux](#flux)
 - [Fused Attention for GPU:](#fused-attention-for-gpu)
 - [Hyper SDXL LoRA](#hyper-sdxl-lora)
@@ -173,7 +174,7 @@ To generate images, run the following command:
 ```bash
 python -m src.maxdiffusion.generate src/maxdiffusion/configs/base21.yml run_name="my_run"
 ```
-- **LTX Video**
+## LTX-Video
 - In the folder src/maxdiffusion/models/ltx_video/utils, run:
 ```bash
 python convert_torch_weights_to_jax.py --ckpt_path [LOCAL DIRECTORY FOR WEIGHTS] --transformer_config_path ../xora_v1.2-13B-balanced-128.json
@@ -216,7 +217,6 @@ To generate images, run the following command:
 ```bash
 python src/maxdiffusion/generate_flux.py src/maxdiffusion/configs/base_flux_schnell.yml jax_cache_dir=/tmp/cache_dir run_name=flux_test output_dir=/tmp/ prompt="photograph of an electronics chip in the shape of a race car with trillium written on its side" per_device_batch_size=1 ici_data_parallelism=1 ici_fsdp_parallelism=-1 offload_encoders=False
 ```
-
 ## Fused Attention for GPU:
 Fused Attention for GPU is supported via TransformerEngine. Installation instructions:
 
@@ -333,3 +333,5 @@ This script will automatically format your code with `pyink` and help you identi
 
 
 The full suite of end-to-end tests is in `tests` and `src/maxdiffusion/tests`. We run them with a nightly cadence.
+
+
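The new table-of-contents entry `[LTX-Video](#ltx-video)` resolves against the `## LTX-Video` heading introduced later in the same diff. As a rough sketch (simplified; GitHub's real anchor algorithm handles more edge cases such as duplicate headings), the anchor is derived by lowercasing the heading, dropping punctuation other than hyphens, and turning spaces into hyphens:

```python
import re

def github_anchor(heading: str) -> str:
    """Simplified sketch of GitHub's heading-to-anchor rule:
    strip '#' markers, lowercase, drop punctuation except hyphens,
    and replace spaces with hyphens."""
    text = heading.lstrip("#").strip().lower()
    text = re.sub(r"[^\w\- ]", "", text)  # drop punctuation such as ':'
    return text.replace(" ", "-")

print(github_anchor("## LTX-Video"))                 # ltx-video
print(github_anchor("## Fused Attention for GPU:"))  # fused-attention-for-gpu
```

This also explains why the TOC link `#fused-attention-for-gpu` carries no trailing colon even though the heading `## Fused Attention for GPU:` does.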

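The LTX-Video setup added in this commit boils down to one weight-conversion command. A minimal sketch of assembling it programmatically (the weights directory below is a hypothetical placeholder; the script name and flags are copied from the diff):

```python
import shlex

def ltx_convert_cmd(ckpt_dir: str) -> str:
    """Build the LTX-Video weight-conversion command shown in the README.
    ckpt_dir is a hypothetical local directory holding the torch weights."""
    return shlex.join([
        "python", "convert_torch_weights_to_jax.py",
        "--ckpt_path", ckpt_dir,
        "--transformer_config_path", "../xora_v1.2-13B-balanced-128.json",
    ])

print(ltx_convert_cmd("/tmp/ltx_weights"))
```

Per the README, this is run from `src/maxdiffusion/models/ltx_video/utils`, which is why the config path is relative (`../xora_v1.2-13B-balanced-128.json`).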