Commit 011729d

fix
1 parent 627ef88 commit 011729d

3 files changed: 33 additions & 2 deletions

README.md

Lines changed: 31 additions & 0 deletions
@@ -17,6 +17,8 @@
 [![Unit Tests](https://github.com/AI-Hypercomputer/maxdiffusion/actions/workflows/UnitTests.yml/badge.svg)](https://github.com/AI-Hypercomputer/maxdiffusion/actions/workflows/UnitTests.yml)

 # What's new?
+- **`2026/03/25`**: Wan2.1 and Wan2.2 Magcache inference is now supported
+- **`2026/03/25`**: LTX-2 Video inference is now supported
 - **`2026/01/29`**: Wan LoRA for inference is now supported
 - **`2026/01/15`**: Wan2.1 and Wan2.2 Img2vid generation is now supported
 - **`2025/11/11`**: Wan2.2 txt2vid generation is now supported
@@ -49,6 +51,7 @@ MaxDiffusion supports
 * ControlNet inference (Stable Diffusion 1.4 & SDXL).
 * Dreambooth training support for Stable Diffusion 1.x,2.x.
 * LTX-Video text2vid, img2vid (inference).
+* LTX-2 Video text2vid (inference).
 * Wan2.1 text2vid (training and inference).
 * Wan2.2 text2vid (inference).

@@ -73,6 +76,7 @@ MaxDiffusion supports
 - [Inference](#inference)
   - [Wan](#wan-models)
   - [LTX-Video](#ltx-video)
+  - [LTX-2 Video](#ltx-2-video)
   - [Flux](#flux)
   - [Fused Attention for GPU](#fused-attention-for-gpu)
   - [SDXL](#stable-diffusion-xl)
@@ -497,6 +501,33 @@ To generate images, run the following command:

 Add conditioning image path as conditioning_media_paths in the form of ["IMAGE_PATH"] along with other generation parameters in the ltx_video.yml file. Then follow same instruction as above.

+## LTX-2 Video
+
+Although not required, attaching an external disk is recommended as weights take up a lot of disk space. [Follow these instructions if you would like to attach an external disk](https://cloud.google.com/tpu/docs/attach-durable-block-storage).
+
+The following command will run LTX-2 T2V:
+
+```bash
+HF_HUB_CACHE=/mnt/disks/external_disk/maxdiffusion_hf_cache/ \
+LIBTPU_INIT_ARGS="--xla_tpu_enable_async_collective_fusion=true \
+--xla_tpu_enable_async_collective_fusion_fuse_all_reduce=true \
+--xla_tpu_enable_async_collective_fusion_multiple_steps=true \
+--xla_tpu_overlap_compute_collective_tc=true \
+--xla_enable_async_all_reduce=true" \
+HF_HUB_ENABLE_HF_TRANSFER=1 \
+python src/maxdiffusion/generate_ltx2.py \
+src/maxdiffusion/configs/ltx2_video.yml \
+attention="flash" \
+num_inference_steps=40 \
+num_frames=121 \
+width=768 \
+height=512 \
+per_device_batch_size=.125 \
+ici_data_parallelism=2 \
+ici_context_parallelism=4 \
+run_name=ltx2-inference
+```
+
 ## Wan Models

 Although not required, attaching an external disk is recommended as weights take up a lot of disk space. [Follow these instructions if you would like to attach an external disk](https://cloud.google.com/tpu/docs/attach-durable-block-storage).
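A sanity check on the sharding arguments in the added command (a reading of the flags, not stated in the commit): ici_data_parallelism=2 and ici_context_parallelism=4 must multiply to the device count, so this command targets an 8-chip slice, and per_device_batch_size=.125 then works out to a global batch of 0.125 × 8 = 1 video per generation pass. Running on a different slice size presumably means adjusting these values so the product of the two parallelism factors still matches the number of chips.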

src/maxdiffusion/pipelines/wan/wan_pipeline_2_2.py

Lines changed: 1 addition & 1 deletion
@@ -470,7 +470,7 @@ def run_inference_2_2(
         mag_ratios,
         split_step,
         model_type,
-    ) = init_magcache(num_inference_steps, retention_ratio, mag_ratios_base, split_step=split_step, model_type=self.config.model_type)
+    ) = init_magcache(num_inference_steps, retention_ratio, mag_ratios_base, split_step=split_step, model_type="T2V")

     prompt_embeds_combined = jnp.concatenate([prompt_embeds, negative_prompt_embeds], axis=0)
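The commit message does not explain this one-liner; a plausible reading of the diff is that the T2V pipeline previously derived the Magcache mode from self.config.model_type, a value the user can leave set to the wrong task, while the pipeline itself only ever performs text-to-video. Pinning the literal "T2V" at the call site removes that mismatch. A minimal sketch of the failure mode, using a hypothetical Config stand-in (init_magcache's internals are not shown in the diff and are not reproduced here):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the pipeline config; not MaxDiffusion's real class.
@dataclass
class Config:
    model_type: str = "I2V"  # e.g. a value left over from an img2vid run

config = Config()

# Before the fix: the T2V pipeline trusted the config, so Magcache could be
# initialized with ratios calibrated for the wrong task.
model_type = config.model_type  # -> "I2V", wrong for a text2vid pipeline
assert model_type != "T2V"

# After the fix: the task is pinned where the pipeline knows it, as in the diff.
model_type = "T2V"
```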

src/maxdiffusion/pipelines/wan/wan_pipeline_i2v_2p2.py

Lines changed: 1 addition & 1 deletion
@@ -463,7 +463,7 @@ def run_inference_2_2_i2v(
         mag_ratios,
         split_step,
         model_type,
-    ) = init_magcache(num_inference_steps, retention_ratio, mag_ratios_base, split_step=split_step, model_type=self.config.model_type)
+    ) = init_magcache(num_inference_steps, retention_ratio, mag_ratios_base, split_step=split_step, model_type="I2V")


     prompt_embeds_combined = jnp.concatenate([prompt_embeds, negative_prompt_embeds], axis=0)
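The same reasoning presumably applies here: the I2V pipeline only ever runs image-to-video, so the commit pins model_type="I2V" rather than reading it from config, making this the exact mirror of the T2V change above.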
