
[WIP] Add Support for LTX-2.3 Models #13217

Draft
dg845 wants to merge 2 commits into main from ltx2-3-pipeline

Conversation


dg845 (Collaborator) commented Mar 6, 2026

What does this PR do?

This PR adds support for LTX-2.3 (official code, model weights), a new model in the LTX-2.X family of audio-video models. LTX-2.3 has improved audio and visual quality and stronger prompt adherence compared to LTX-2.0.

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@yiyixuxu
@sayakpaul

return hidden_states


class LTX2PerturbedAttnProcessor:
dg845 (Collaborator Author)

Thanks! Looking at the code, it's unclear to me whether SkipLayerGuidance currently works for LTX-2.3 for the following reasons:

  1. Not attention backend agnostic: if I understand correctly, STG is implemented through AttentionProcessorSkipHook, which uses AttentionScoreSkipFunctionMode to intercept calls to torch.nn.functional.scaled_dot_product_attention and simply return the value tensor:
    if func is torch.nn.functional.scaled_dot_product_attention:
    But I think other attention backends like flash-attn won't call that function and thus will not work with SkipLayerGuidance.
  2. LTX-2.3 does additional computation on the values: LTX-2.3 additionally processes the values using learned per-head gates before sending them to the attention output projection to_out. This is not supported by the current SkipLayerGuidance implementation.
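For illustration, point 1 can be sketched with a minimal, hypothetical TorchFunctionMode that short-circuits scaled_dot_product_attention to return the value tensor. The class name here is illustrative, not the actual diffusers hook; the key point is that only calls routed through this exact torch function are intercepted, so backends that dispatch elsewhere (e.g. flash-attn) bypass the skip entirely:

```python
import torch
from torch.overrides import TorchFunctionMode

class SkipAttentionScores(TorchFunctionMode):
    # Hypothetical sketch of the STG skip: while the mode is active,
    # SDPA calls return the value tensor unchanged instead of
    # computing attention. Not the actual diffusers implementation.
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.nn.functional.scaled_dot_product_attention:
            return args[2]  # the value tensor, passed positionally
        return func(*args, **kwargs)

q, k, v = (torch.randn(1, 4, 8, 16) for _ in range(3))
with SkipAttentionScores():
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
print(torch.equal(out, v))  # True: attention was skipped
```

A flash-attn backend would call its own kernel rather than torch.nn.functional.scaled_dot_product_attention, so the `func is ...` check above never matches and no skipping happens.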

I'm not sure whether these issues can be resolved with changes to the SkipLayerGuidance implementation or whether something like a new attention processor would make more sense here.
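To make point 2 concrete, the per-head value gating might look roughly like the following. This is a hypothetical sketch with illustrative parameter names, not LTX-2.3's actual module; it only shows why returning the raw value tensor from a skip hook would bypass a learned transformation:

```python
import torch
import torch.nn as nn

class GatedAttentionOutput(nn.Module):
    # Hypothetical sketch: learned per-head gates applied to the
    # attention output before the output projection (to_out).
    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        # Per-head gate logits (zeros -> sigmoid gives 0.5 initially).
        self.gate = nn.Parameter(torch.zeros(num_heads))
        self.to_out = nn.Linear(num_heads * head_dim, num_heads * head_dim)

    def forward(self, attn_out: torch.Tensor) -> torch.Tensor:
        # attn_out: (batch, heads, seq_len, head_dim)
        b, h, s, d = attn_out.shape
        gates = torch.sigmoid(self.gate).view(1, h, 1, 1)
        gated = attn_out * gates  # per-head scaling
        gated = gated.transpose(1, 2).reshape(b, s, h * d)
        return self.to_out(gated)
```

If the skip hook returns the value tensor directly, this gating (and any similar per-head computation) is silently dropped from the perturbed branch, which may not match what the model intends for the skipped layers.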

dg845 (Collaborator Author)

I have opened a PR with a possible modification to SkipLayerGuidance to allow it to better support LTX-2.3 at #13220.

Member

This is a good callout! From my understanding, the guider as a component doesn't change much; LTX-2 is probably an exception. If more models start to do their own form of SLG, we could think of giving them their own guider classes / attention processors. But for now, I think modifications to the existing SLG class make more sense.



3 participants