Comfyui-STAR/inference.py
Ethanfel cf74b587ec Add SageAttention as preferred attention backend when available
Attention fallback chain: SageAttention (INT8-quantized, reportedly
2-5x faster) > xformers > PyTorch native SDPA. SageAttention is
optional — install it with `pip install sageattention` for a speed
boost.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 00:00:55 +01:00
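The fallback chain described in the commit message can be sketched as follows. This is a minimal illustration, not the actual code in `inference.py`; the helper name `pick_attention_backend` is hypothetical. It probes for each package in preference order and falls back to PyTorch's built-in scaled dot-product attention, which is always present on torch >= 2.0.

```python
import importlib.util

def pick_attention_backend() -> str:
    """Return the name of the best available attention backend.

    Hypothetical helper illustrating the commit's fallback chain:
    SageAttention > xformers > PyTorch native SDPA.
    """
    # SageAttention: INT8-quantized attention kernels (optional install).
    if importlib.util.find_spec("sageattention") is not None:
        return "sageattention"
    # xformers: memory-efficient attention kernels.
    if importlib.util.find_spec("xformers") is not None:
        return "xformers"
    # PyTorch's scaled_dot_product_attention needs no extra package.
    return "sdpa"

print(pick_attention_backend())
```

Probing with `importlib.util.find_spec` rather than a bare `import` keeps startup cheap and avoids raising when an optional backend is absent.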
