README.md
4 lines changed: 4 additions & 0 deletions
```diff
@@ -53,6 +53,7 @@ MaxDiffusion supports
 -[Dreambooth](#dreambooth)
 -[Inference](#inference)
 -[Flux](#flux)
+-[Flash Attention for GPU:](#flash-attention-for-gpu)
 -[Hyper SDXL LoRA](#hyper-sdxl-lora)
 -[Load Multiple LoRA](#load-multiple-lora)
 -[SDXL Lightning](#sdxl-lightning)
```
````diff
@@ -175,6 +176,9 @@ To generate images, run the following command:
 python src/maxdiffusion/generate_flux.py src/maxdiffusion/configs/base_flux_schnell.yml jax_cache_dir=/tmp/cache_dir run_name=flux_test output_dir=/tmp/ prompt="photograph of an electronics chip in the shape of a race car with trillium written on its side" per_device_batch_size=1 ici_data_parallelism=1 ici_fsdp_parallelism=-1 offload_encoders=False
 ```

+### Flash Attention for GPU:
+Flash Attention for GPU is supported via TransformerEngine. Make sure it is installed, then specify `attention=cudnn_flash_te` when running the above commands.
+
 ## Flux LoRA

 Disclaimer: not all LoRA formats have been tested. If there is a specific LoRA that doesn't load, please let us know.
````
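To make the new flag concrete, here is a sketch of the Flux generation command from the diff above with Flash Attention enabled. It assumes TransformerEngine is already installed; the only change relative to the original command is the appended `attention=cudnn_flash_te`.

```bash
# Sketch: the Flux generation command from above, with cuDNN flash attention
# enabled via TransformerEngine (assumes TransformerEngine is installed).
python src/maxdiffusion/generate_flux.py src/maxdiffusion/configs/base_flux_schnell.yml \
  jax_cache_dir=/tmp/cache_dir \
  run_name=flux_test \
  output_dir=/tmp/ \
  prompt="photograph of an electronics chip in the shape of a race car with trillium written on its side" \
  per_device_batch_size=1 \
  ici_data_parallelism=1 \
  ici_fsdp_parallelism=-1 \
  offload_encoders=False \
  attention=cudnn_flash_te
```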