Fix FlashVSR attention mask and output quality
- Use generate_draft_block_mask_refined for the sparse attention mask (matches naxci1's generate_draft_block_mask_sage with proper half-block key scoring)
- Remove the spurious repeat_interleave(2, dim=-1) from generate_draft_block_mask, which incorrectly doubled the key dimension
- Add torch.clamp(0, 1) to the _to_frames output (matches naxci1's tensor2video)
- Add .to(self.device) on LQ video slices in the streaming loop for all pipelines

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
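The repeat_interleave removal can be illustrated in isolation. A minimal sketch (the mask shape and names here are assumptions, not the repository's actual tensors) of why the extra call breaks the sparse-attention mask:

```python
import torch

# Hypothetical draft block mask: (heads, query_blocks, key_blocks).
mask = torch.ones(1, 4, 8, dtype=torch.bool)

# The removed call duplicated every entry along the last axis,
# doubling the key-block dimension from 8 to 16 so the mask no
# longer lined up with the attention kernel's key blocks.
doubled = mask.repeat_interleave(2, dim=-1)

print(mask.shape, doubled.shape)  # torch.Size([1, 4, 8]) torch.Size([1, 4, 16])
```

Dropping the call keeps the mask's key dimension equal to the number of key blocks the kernel actually scores.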
@@ -805,7 +805,7 @@ class FlashVSRModel:
         from einops import rearrange
         v = video.squeeze(0) if video.dim() == 5 else video
         v = rearrange(v, "C F H W -> F H W C")
-        return (v.float() + 1.0) / 2.0
+        return torch.clamp((v.float() + 1.0) / 2.0, 0.0, 1.0)

     # ------------------------------------------------------------------
     # Main upscale method
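The `.to(self.device)` fix from the message is not shown in the hunk above; a minimal sketch of the pattern, assuming a (batch, frames, ...) layout and a chunked streaming loop (the helper name and slicing axis are illustrative, not the repository's actual code):

```python
import torch

def stream_lq_slices(lq_video: torch.Tensor, chunk: int, device: torch.device):
    # Hypothetical streaming loop: slice the low-quality (LQ) video along
    # the frame axis and move each slice to the pipeline's device before
    # it is consumed, so a CPU-resident input no longer mismatches
    # CUDA-resident model weights.
    for start in range(0, lq_video.shape[1], chunk):
        yield lq_video[:, start:start + chunk].to(device)
```

Moving only the active slice (rather than the whole video) keeps peak device memory bounded by the chunk size.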