Attention fallback chain: SageAttention (INT8-quantized, roughly 2-5x faster) > xformers > PyTorch native SDPA. SageAttention is optional; install it with `pip install sageattention` to get the speed boost.
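Below is a minimal sketch of how such a backend dispatcher might look, assuming query/key/value tensors in `(batch, heads, seq_len, head_dim)` layout. The `attention` function name and the layout-handling details are illustrative assumptions, not this project's actual code.

```python
import torch
import torch.nn.functional as F

# Probe optional backends once at import time; missing packages are not an error.
try:
    from sageattention import sageattn  # INT8-quantized attention kernel
    HAS_SAGE = True
except ImportError:
    HAS_SAGE = False

try:
    import xformers.ops as xops
    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False


def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
              is_causal: bool = False) -> torch.Tensor:
    """Dispatch to the fastest available backend: SageAttention > xformers > SDPA."""
    if HAS_SAGE and q.is_cuda:
        # SageAttention accepts (B, H, N, D) tensors when tensor_layout="HND".
        return sageattn(q, k, v, tensor_layout="HND", is_causal=is_causal)
    if HAS_XFORMERS and q.is_cuda and not is_causal:
        # xformers expects (B, N, H, D), so transpose in and back out.
        # (Causal masking is skipped here for brevity in this sketch.)
        out = xops.memory_efficient_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    # PyTorch native scaled dot-product attention as the final fallback.
    return F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
```

Because the probes happen at import time, the per-call overhead is just two boolean checks, and the same call site works whether or not the optional packages are installed.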