Commit Graph

4 Commits

82d7f4997a Add configurable attention backend with SageAttention variant support
Replace the auto-detect xformers shim with a runtime dispatcher that
always intercepts xformers.ops.memory_efficient_attention. A new
dropdown on STARModelLoader (and --attention CLI arg) lets users
explicitly select: sdpa (default), xformers, sageattn, or specific
SageAttention kernels (fp16 triton/cuda, fp8 cuda). Only backends
that successfully import appear as options.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 00:12:26 +01:00
cf74b587ec Add SageAttention as preferred attention backend when available
Attention fallback chain: SageAttention (2-5x faster, INT8
quantized) > xformers > PyTorch native SDPA. SageAttention is
optional — install with `pip install sageattention` for a speed
boost.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 00:00:55 +01:00
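The fallback chain described above amounts to a sequence of guarded imports, preferring the fastest backend that is actually installed. A minimal sketch (function name is illustrative, not from the repository):

```python
def pick_attention_backend():
    """Prefer SageAttention, then xformers, then native PyTorch SDPA."""
    try:
        import sageattention  # noqa: F401  # optional: pip install sageattention
        return "sageattn"
    except ImportError:
        pass
    try:
        import xformers.ops  # noqa: F401
        return "xformers"
    except ImportError:
        return "sdpa"  # scaled_dot_product_attention is built into torch
```

Because each `import` is wrapped in `try/except ImportError`, a missing optional dependency degrades performance but never breaks the load.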
5de26d8ead Add xformers compatibility shim using PyTorch native SDPA
Avoids requiring xformers installation by shimming
xformers.ops.memory_efficient_attention with
torch.nn.functional.scaled_dot_product_attention when
xformers is not available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 23:58:55 +01:00
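A shim like this is typically installed by registering stub modules in `sys.modules` before the model code runs, so that `import xformers.ops` resolves to the stub. A hedged sketch (the function name and exact structure are assumptions; the real shim must also reconcile tensor-layout differences, since xformers expects `[B, M, H, K]` while SDPA expects `[B, H, M, K]`):

```python
import sys
import types

def install_xformers_shim(attention_fn):
    """If xformers is absent, register stub modules so that
    xformers.ops.memory_efficient_attention resolves to attention_fn
    (in practice, a wrapper around
    torch.nn.functional.scaled_dot_product_attention)."""
    try:
        import xformers.ops  # noqa: F401
        return  # real xformers is installed; leave it alone
    except ImportError:
        pass
    xformers = types.ModuleType("xformers")
    ops = types.ModuleType("xformers.ops")
    ops.memory_efficient_attention = attention_fn
    xformers.ops = ops
    # Later `import xformers.ops` statements hit sys.modules first,
    # so downstream code never notices the package is missing.
    sys.modules["xformers"] = xformers
    sys.modules["xformers.ops"] = ops
```

The early return keeps the shim safe to call unconditionally: a genuine xformers installation always wins.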
6cf314baf4 Add standalone CLI for memory-efficient video upscaling
Standalone inference script that works outside ComfyUI — just activate
the same Python venv. Streams output frames to ffmpeg so peak RAM stays
bounded regardless of video length. Supports video files, image
sequences, and single images. Audio is automatically preserved from
input videos.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 23:46:58 +01:00
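The bounded-RAM streaming works by piping raw frames to ffmpeg's stdin one at a time, so only a single frame needs to be in memory at once. A sketch under assumed defaults (the actual script's encoder flags and function names may differ):

```python
import subprocess

def build_ffmpeg_cmd(width, height, fps, out_path):
    """ffmpeg invocation that reads raw RGB24 frames from stdin."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                       # frames arrive on stdin
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out_path,
    ]

def stream_frames(frames, width, height, fps, out_path):
    """Write upscaled frames as they are produced; peak RAM stays
    bounded by one frame regardless of video length."""
    proc = subprocess.Popen(
        build_ffmpeg_cmd(width, height, fps, out_path),
        stdin=subprocess.PIPE,
    )
    for frame in frames:                 # iterable of raw RGB byte strings
        proc.stdin.write(frame)
    proc.stdin.close()
    return proc.wait()
```

Because `frames` is consumed lazily, the upscaler can be a generator yielding one frame at a time; nothing accumulates on the Python side.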