
Commit a71eb74 (1 parent: 8d8d1a0)

Test on attention type and automatically modify flash block sizes object when 'tokamax_flash' requested

Signed-off-by: Kunjan Patel <kunjanp@google.com>

1 file changed: src/maxdiffusion/tests/wan_transformer_test.py
0 additions, 1 deletion
@@ -234,7 +234,6 @@ def test_wan_attention(self):
     )
     config = pyconfig.config
     with mesh, nn_partitioning.axis_rules(config.logical_axis_rules):
-      config.attention = attention_kernel
       flash_block_sizes = get_flash_block_sizes(config)
       attention = FlaxWanAttention(
           rngs=rngs,
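The commit message says the flash block sizes object is adjusted automatically when the 'tokamax_flash' attention kernel is requested, so the test no longer needs to set `config.attention` by hand. A minimal sketch of that idea follows; the class and the numeric block sizes are assumptions for illustration, not the actual maxdiffusion implementation of `get_flash_block_sizes`.

```python
from dataclasses import dataclass


@dataclass
class FlashBlockSizes:
    """Illustrative container for flash-attention tile sizes."""
    block_q: int
    block_kv: int


def get_flash_block_sizes(config) -> FlashBlockSizes:
    # Default tile sizes for a generic flash kernel (values are placeholders).
    sizes = FlashBlockSizes(block_q=512, block_kv=512)
    if getattr(config, "attention", None) == "tokamax_flash":
        # Per the commit message, the block-sizes object is modified
        # automatically when 'tokamax_flash' is requested; the concrete
        # numbers here are hypothetical.
        sizes = FlashBlockSizes(block_q=256, block_kv=256)
    return sizes
```

With this shape, a test only needs to set the attention kernel on the config and call the helper; the branching on attention type lives in one place.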
