Add EMA-VFI (CVPR 2023) frame interpolation support

Integrate EMA-VFI alongside existing BIM-VFI with three new ComfyUI nodes:
Load EMA-VFI Model, EMA-VFI Interpolate, and EMA-VFI Segment Interpolate.

Architecture files vendored from MCG-NJU/EMA-VFI with device-awareness
fixes (removed hardcoded .cuda() calls), warp cache management, and
relative imports. InputPadder extended to support EMA-VFI's replicate
center-symmetric padding. Auto-installs timm dependency on first load.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Date: 2026-02-12 22:30:06 +01:00
Parent: 0133f61d47
Commit: 1de086569c
11 changed files with 1334 additions and 18 deletions
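For reference, a minimal sketch of the center-symmetric replicate padding described in the commit message, as the extended InputPadder applies it for EMA-VFI (the 540x960 shape is illustrative; EMA-VFI pads to multiples of 32):

```python
import torch
import torch.nn.functional as F

# EMA-VFI needs H and W divisible by 32; the padder splits the required
# padding evenly between both sides and replicates edge pixels.
h, w, divisor = 540, 960, 32
pad_h = (-h) % divisor                      # rows needed: 4 for h=540
pad_w = (-w) % divisor                      # cols needed: 0 for w=960
pad = [pad_w // 2, pad_w - pad_w // 2,      # left, right
       pad_h // 2, pad_h - pad_h // 2]      # top, bottom
x = torch.rand(1, 3, h, w)
x_padded = F.pad(x, pad, mode='replicate')  # edge replication, not zeros
assert x_padded.shape[-2] % divisor == 0 and x_padded.shape[-1] % divisor == 0
```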

.gitignore vendored Normal file

@@ -0,0 +1,3 @@
__pycache__/
*.pyc
*.pyo

README.md

@@ -1,10 +1,12 @@
-# ComfyUI BIM-VFI
+# ComfyUI BIM-VFI + EMA-VFI
-ComfyUI custom nodes for video frame interpolation using [BiM-VFI](https://github.com/KAIST-VICLab/BiM-VFI) (CVPR 2025). Designed for long videos with thousands of frames — processes them without running out of VRAM.
+ComfyUI custom nodes for video frame interpolation using [BiM-VFI](https://github.com/KAIST-VICLab/BiM-VFI) (CVPR 2025) and [EMA-VFI](https://github.com/MCG-NJU/EMA-VFI) (CVPR 2023). Designed for long videos with thousands of frames — processes them without running out of VRAM.
 ## Nodes
-### Load BIM-VFI Model
+### BIM-VFI
+#### Load BIM-VFI Model
 Loads the BiM-VFI checkpoint. Auto-downloads from Google Drive on first use to `ComfyUI/models/bim-vfi/`.
@@ -14,7 +16,7 @@ Loads the BiM-VFI checkpoint. Auto-downloads from Google Drive on first use to `
 | **auto_pyr_level** | Auto-select pyramid level by resolution (<540p=3, 540p=5, 1080p=6, 4K=7) |
 | **pyr_level** | Manual pyramid level (3-7), only used when auto is off |
-### BIM-VFI Interpolate
+#### BIM-VFI Interpolate
 Interpolates frames from an image batch.
@@ -24,12 +26,47 @@ Interpolates frames from an image batch.
 | **model** | Model from the loader node |
 | **multiplier** | 2x, 4x, or 8x frame rate (recursive 2x passes) |
 | **batch_size** | Frame pairs processed simultaneously (higher = faster, more VRAM) |
-| **chunk_size** | Process in segments of N input frames (0 = disabled). Bounds memory for very long videos. Result is identical to processing all at once |
+| **chunk_size** | Process in segments of N input frames (0 = disabled). Bounds VRAM for very long videos. Result is identical to processing all at once |
 | **keep_device** | Keep model on GPU between pairs (faster, ~200MB constant VRAM) |
 | **all_on_gpu** | Keep all intermediate frames on GPU (fast, needs large VRAM) |
 | **clear_cache_after_n_frames** | Clear CUDA cache every N pairs to prevent VRAM buildup |
-**Output frame count:** 2x = 2N-1, 4x = 4N-3, 8x = 8N-7
+#### BIM-VFI Segment Interpolate
+Same as Interpolate but processes a single segment of the input. Chain multiple instances with Save nodes between them to bound peak RAM. The model pass-through output forces sequential execution.
+#### BIM-VFI Concat Videos
+Concatenates segment video files into a single video using ffmpeg. Connect from the last Segment Interpolate's model output to ensure it runs after all segments are saved.
+### EMA-VFI
+#### Load EMA-VFI Model
+Loads an EMA-VFI checkpoint. Auto-downloads from Google Drive on first use to `ComfyUI/models/ema-vfi/`. Variant (large/small) and timestep support are auto-detected from the filename.
+| Input | Description |
+|-------|-------------|
+| **model_path** | Checkpoint file from `models/ema-vfi/` |
+| **tta** | Test-time augmentation: flip input and average with unflipped result (~2x slower, slightly better quality) |
+Available checkpoints:
+| Checkpoint | Variant | Params | Arbitrary timestep |
+|-----------|---------|--------|-------------------|
+| `ours_t.pkl` | Large | ~65M | Yes |
+| `ours.pkl` | Large | ~65M | No (fixed 0.5) |
+| `ours_small_t.pkl` | Small | ~14M | Yes |
+| `ours_small.pkl` | Small | ~14M | No (fixed 0.5) |
+#### EMA-VFI Interpolate
+Interpolates frames from an image batch. Same controls as BIM-VFI Interpolate.
+#### EMA-VFI Segment Interpolate
+Same as EMA-VFI Interpolate but processes a single segment. Same pattern as BIM-VFI Segment Interpolate.
+**Output frame count (both models):** 2x = 2N-1, 4x = 4N-3, 8x = 8N-7
 ## Installation
## Installation ## Installation
@@ -40,7 +77,7 @@ cd ComfyUI/custom_nodes
 git clone https://github.com/your-user/Comfyui-BIM-VFI.git
 ```
-Dependencies (`gdown`, `cupy`) are auto-installed on first load. The correct `cupy` variant is detected from your PyTorch CUDA version.
+Dependencies (`gdown`, `cupy`, `timm`) are auto-installed on first load. The correct `cupy` variant is detected from your PyTorch CUDA version.
 > **Warning:** `cupy` is a large package (~800MB) and compilation/installation can take several minutes. The first ComfyUI startup after installing this node may appear to hang while `cupy` installs in the background. Check the console log for progress. If auto-install fails (e.g. missing build tools in Docker), install manually with:
 > ```bash
@@ -57,7 +94,8 @@ python install.py
 ### Requirements
 - PyTorch with CUDA
-- `cupy` (matching your CUDA version)
+- `cupy` (matching your CUDA version, for BIM-VFI)
+- `timm` (for EMA-VFI)
 - `gdown` (for model auto-download)
 ## VRAM Guide
@@ -71,9 +109,9 @@ python install.py
 ## Acknowledgments
-This project wraps the official [BiM-VFI](https://github.com/KAIST-VICLab/BiM-VFI) implementation by the [KAIST VIC Lab](https://github.com/KAIST-VICLab). The model architecture files in `bim_vfi_arch/` are vendored from their repository with minimal modifications (relative imports, inference-only paths).
+This project wraps the official [BiM-VFI](https://github.com/KAIST-VICLab/BiM-VFI) implementation by the [KAIST VIC Lab](https://github.com/KAIST-VICLab) and the official [EMA-VFI](https://github.com/MCG-NJU/EMA-VFI) implementation by MCG-NJU. Architecture files in `bim_vfi_arch/` and `ema_vfi_arch/` are vendored from their respective repositories with minimal modifications (relative imports, device-awareness fixes, inference-only paths).
-**Paper:**
+**BiM-VFI:**
 > Wonyong Seo, Jihyong Oh, and Munchurl Kim.
 > "BiM-VFI: Bidirectional Motion Field-Guided Frame Interpolation for Video with Non-uniform Motions."
 > *IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2025.
@@ -88,6 +126,23 @@ This project wraps the official [BiM-VFI](https://github.com/KAIST-VICLab/BiM-VF
 }
 ```
+**EMA-VFI:**
+> Guozhen Zhang, Yuhan Zhu, Haonan Wang, Youxin Chen, Gangshan Wu, and Limin Wang.
+> "Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation."
+> *IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, 2023.
+> [[arXiv]](https://arxiv.org/abs/2303.00440) [[GitHub]](https://github.com/MCG-NJU/EMA-VFI)
+```bibtex
+@inproceedings{zhang2023emavfi,
+  title={Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation},
+  author={Zhang, Guozhen and Zhu, Yuhan and Wang, Haonan and Chen, Youxin and Wu, Gangshan and Wang, Limin},
+  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+  year={2023}
+}
+```
 ## License
 The BiM-VFI model weights and architecture code are provided by KAIST VIC Lab for **research and education purposes only**. Commercial use requires permission from the principal investigator (Prof. Munchurl Kim, mkimee@kaist.ac.kr). See the [original repository](https://github.com/KAIST-VICLab/BiM-VFI) for details.
+The EMA-VFI model weights and architecture code are released under the [Apache 2.0 License](https://github.com/MCG-NJU/EMA-VFI/blob/main/LICENSE). See the [original repository](https://github.com/MCG-NJU/EMA-VFI) for details.
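A note on the output frame counts stated in the README above: each 2x pass inserts one new frame between every adjacent pair, mapping N frames to 2N-1, so a multiplier m yields m(N-1)+1 overall. A quick sketch of the arithmetic:

```python
def output_frames(n_input: int, multiplier: int) -> int:
    """Frames produced by recursive 2x passes: each pass maps N -> 2N - 1."""
    n = n_input
    for _ in range(multiplier.bit_length() - 1):  # 2x: 1 pass, 4x: 2, 8x: 3
        n = 2 * n - 1
    return n  # equals multiplier * (n_input - 1) + 1

assert output_frames(10, 2) == 19  # 2N-1
assert output_frames(10, 4) == 37  # 4N-3
assert output_frames(10, 8) == 73  # 8N-7
```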

__init__.py

@@ -14,6 +14,13 @@ def _auto_install_deps():
         logger.info("[BIM-VFI] Installing gdown...")
         subprocess.check_call([sys.executable, "-m", "pip", "install", "gdown"])
+    # timm (required for EMA-VFI's MotionFormer backbone)
+    try:
+        import timm  # noqa: F401
+    except ImportError:
+        logger.info("[BIM-VFI] Installing timm...")
+        subprocess.check_call([sys.executable, "-m", "pip", "install", "timm"])
     # cupy
     try:
         import cupy  # noqa: F401
@@ -30,13 +37,19 @@ def _auto_install_deps():
 _auto_install_deps()
-from .nodes import LoadBIMVFIModel, BIMVFIInterpolate, BIMVFISegmentInterpolate, BIMVFIConcatVideos
+from .nodes import (
+    LoadBIMVFIModel, BIMVFIInterpolate, BIMVFISegmentInterpolate, BIMVFIConcatVideos,
+    LoadEMAVFIModel, EMAVFIInterpolate, EMAVFISegmentInterpolate,
+)
 NODE_CLASS_MAPPINGS = {
     "LoadBIMVFIModel": LoadBIMVFIModel,
     "BIMVFIInterpolate": BIMVFIInterpolate,
     "BIMVFISegmentInterpolate": BIMVFISegmentInterpolate,
     "BIMVFIConcatVideos": BIMVFIConcatVideos,
+    "LoadEMAVFIModel": LoadEMAVFIModel,
+    "EMAVFIInterpolate": EMAVFIInterpolate,
+    "EMAVFISegmentInterpolate": EMAVFISegmentInterpolate,
 }
 NODE_DISPLAY_NAME_MAPPINGS = {
@@ -44,4 +57,7 @@ NODE_DISPLAY_NAME_MAPPINGS = {
     "BIMVFIInterpolate": "BIM-VFI Interpolate",
     "BIMVFISegmentInterpolate": "BIM-VFI Segment Interpolate",
     "BIMVFIConcatVideos": "BIM-VFI Concat Videos",
+    "LoadEMAVFIModel": "Load EMA-VFI Model",
+    "EMAVFIInterpolate": "EMA-VFI Interpolate",
+    "EMAVFISegmentInterpolate": "EMA-VFI Segment Interpolate",
 }

ema_vfi_arch/__init__.py Normal file

@@ -0,0 +1,5 @@
from .feature_extractor import feature_extractor
from .flow_estimation import MultiScaleFlow
from .warplayer import clear_warp_cache
__all__ = ['feature_extractor', 'MultiScaleFlow', 'clear_warp_cache']

ema_vfi_arch/feature_extractor.py Normal file

@@ -0,0 +1,515 @@
import torch
import torch.nn as nn
import math
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
def window_partition(x, window_size):
B, H, W, C = x.shape
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = (
x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0]*window_size[1], C)
)
return windows
def window_reverse(windows, window_size, H, W):
nwB, N, C = windows.shape
windows = windows.view(-1, window_size[0], window_size[1], C)
B = int(nwB / (H * W / window_size[0] / window_size[1]))
x = windows.view(
B, H // window_size[0], W // window_size[1], window_size[0], window_size[1], -1
)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
def pad_if_needed(x, size, window_size):
n, h, w, c = size
pad_h = math.ceil(h / window_size[0]) * window_size[0] - h
pad_w = math.ceil(w / window_size[1]) * window_size[1] - w
if pad_h > 0 or pad_w > 0: # center-pad the feature on H and W axes
img_mask = torch.zeros((1, h+pad_h, w+pad_w, 1)) # 1 H W 1
h_slices = (
slice(0, pad_h//2),
slice(pad_h//2, h+pad_h//2),
slice(h+pad_h//2, None),
)
w_slices = (
slice(0, pad_w//2),
slice(pad_w//2, w+pad_w//2),
slice(w+pad_w//2, None),
)
cnt = 0
for h in h_slices:
for w in w_slices:
img_mask[:, h, w, :] = cnt
cnt += 1
mask_windows = window_partition(
img_mask, window_size
) # nW, window_size*window_size, 1
mask_windows = mask_windows.squeeze(-1)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(
attn_mask != 0, float(-100.0)
).masked_fill(attn_mask == 0, float(0.0))
return nn.functional.pad(
x,
(0, 0, pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2),
), attn_mask
return x, None
def depad_if_needed(x, size, window_size):
n, h, w, c = size
pad_h = math.ceil(h / window_size[0]) * window_size[0] - h
pad_w = math.ceil(w / window_size[1]) * window_size[1] - w
if pad_h > 0 or pad_w > 0: # remove the center-padding on feature
return x[:, pad_h // 2 : pad_h // 2 + h, pad_w // 2 : pad_w // 2 + w, :].contiguous()
return x
class Mlp(nn.Module):
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.dwconv = DWConv(hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
self.relu = nn.ReLU(inplace=True)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x, H, W):
x = self.fc1(x)
x = self.dwconv(x, H, W)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
class InterFrameAttention(nn.Module):
def __init__(self, dim, motion_dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
super().__init__()
assert dim % num_heads == 0, f"dim {dim} should be divided by num_heads {num_heads}."
self.dim = dim
self.motion_dim = motion_dim
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
self.q = nn.Linear(dim, dim, bias=qkv_bias)
self.kv = nn.Linear(dim, dim * 2, bias=qkv_bias)
self.cor_embed = nn.Linear(2, motion_dim, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.motion_proj = nn.Linear(motion_dim, motion_dim)
self.proj_drop = nn.Dropout(proj_drop)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x1, x2, cor, H, W, mask=None):
B, N, C = x1.shape
B, N, C_c = cor.shape
q = self.q(x1).reshape(B, N, self.num_heads, C // self.num_heads).permute(0, 2, 1, 3)
kv = self.kv(x2).reshape(B, -1, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
cor_embed_ = self.cor_embed(cor)
cor_embed = cor_embed_.reshape(B, N, self.num_heads, self.motion_dim // self.num_heads).permute(0, 2, 1, 3)
k, v = kv[0], kv[1]
attn = (q @ k.transpose(-2, -1)) * self.scale
if mask is not None:
nW = mask.shape[0] # mask: nW, N, N
attn = attn.view(B // nW, nW, self.num_heads, N, N) + mask.unsqueeze(
1
).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = attn.softmax(dim=-1)
else:
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
c_reverse = (attn @ cor_embed).transpose(1, 2).reshape(B, N, -1)
motion = self.motion_proj(c_reverse-cor_embed_)
x = self.proj(x)
x = self.proj_drop(x)
return x, motion
class MotionFormerBlock(nn.Module):
def __init__(self, dim, motion_dim, num_heads, window_size=0, shift_size=0, mlp_ratio=4., bidirectional=True, qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm,):
super().__init__()
self.window_size = window_size
if not isinstance(self.window_size, (tuple, list)):
self.window_size = to_2tuple(window_size)
self.shift_size = shift_size
if not isinstance(self.shift_size, (tuple, list)):
self.shift_size = to_2tuple(shift_size)
self.bidirectional = bidirectional
self.norm1 = norm_layer(dim)
self.attn = InterFrameAttention(
dim,
motion_dim,
num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
attn_drop=attn_drop, proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x, cor, H, W, B):
x = x.view(2*B, H, W, -1)
x_pad, mask = pad_if_needed(x, x.size(), self.window_size)
cor_pad, _ = pad_if_needed(cor, cor.size(), self.window_size)
if self.shift_size[0] or self.shift_size[1]:
_, H_p, W_p, C = x_pad.shape
x_pad = torch.roll(x_pad, shifts=(-self.shift_size[0], -self.shift_size[1]), dims=(1, 2))
cor_pad = torch.roll(cor_pad, shifts=(-self.shift_size[0], -self.shift_size[1]), dims=(1, 2))
if hasattr(self, 'HW') and self.HW.item() == H_p * W_p:
shift_mask = self.attn_mask
else:
shift_mask = torch.zeros((1, H_p, W_p, 1)) # 1 H W 1
h_slices = (slice(0, -self.window_size[0]),
slice(-self.window_size[0], -self.shift_size[0]),
slice(-self.shift_size[0], None))
w_slices = (slice(0, -self.window_size[1]),
slice(-self.window_size[1], -self.shift_size[1]),
slice(-self.shift_size[1], None))
cnt = 0
for h in h_slices:
for w in w_slices:
shift_mask[:, h, w, :] = cnt
cnt += 1
mask_windows = window_partition(shift_mask, self.window_size).squeeze(-1)
shift_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
shift_mask = shift_mask.masked_fill(shift_mask != 0,
float(-100.0)).masked_fill(shift_mask == 0,
float(0.0))
if mask is not None:
shift_mask = shift_mask.masked_fill(mask != 0,
float(-100.0))
self.register_buffer("attn_mask", shift_mask)
self.register_buffer("HW", torch.Tensor([H_p*W_p]))
else:
shift_mask = mask
if shift_mask is not None:
shift_mask = shift_mask.to(x_pad.device)
_, Hw, Ww, C = x_pad.shape
x_win = window_partition(x_pad, self.window_size)
cor_win = window_partition(cor_pad, self.window_size)
nwB = x_win.shape[0]
x_norm = self.norm1(x_win)
x_reverse = torch.cat([x_norm[nwB//2:], x_norm[:nwB//2]])
x_appearence, x_motion = self.attn(x_norm, x_reverse, cor_win, H, W, shift_mask)
x_norm = x_norm + self.drop_path(x_appearence)
x_back = x_norm
x_back_win = window_reverse(x_back, self.window_size, Hw, Ww)
x_motion = window_reverse(x_motion, self.window_size, Hw, Ww)
if self.shift_size[0] or self.shift_size[1]:
x_back_win = torch.roll(x_back_win, shifts=(self.shift_size[0], self.shift_size[1]), dims=(1, 2))
x_motion = torch.roll(x_motion, shifts=(self.shift_size[0], self.shift_size[1]), dims=(1, 2))
x = depad_if_needed(x_back_win, x.size(), self.window_size).view(2*B, H * W, -1)
x_motion = depad_if_needed(x_motion, cor.size(), self.window_size).view(2*B, H * W, -1)
x = x + self.drop_path(self.mlp(self.norm2(x), H, W))
return x, x_motion
class ConvBlock(nn.Module):
def __init__(self, in_dim, out_dim, depths=2,act_layer=nn.PReLU):
super().__init__()
layers = []
for i in range(depths):
if i == 0:
layers.append(nn.Conv2d(in_dim, out_dim, 3,1,1))
else:
layers.append(nn.Conv2d(out_dim, out_dim, 3,1,1))
layers.extend([
act_layer(out_dim),
])
self.conv = nn.Sequential(*layers)
def _init_weights(self, m):
if isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
x = self.conv(x)
return x
class OverlapPatchEmbed(nn.Module):
def __init__(self, patch_size=7, stride=4, in_chans=3, embed_dim=768):
super().__init__()
patch_size = to_2tuple(patch_size)
self.patch_size = patch_size
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=stride,
padding=(patch_size[0] // 2, patch_size[1] // 2))
self.norm = nn.LayerNorm(embed_dim)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
x = self.proj(x)
_, _, H, W = x.shape
x = x.flatten(2).transpose(1, 2)
x = self.norm(x)
return x, H, W
class CrossScalePatchEmbed(nn.Module):
def __init__(self, in_dims=[16,32,64], embed_dim=768):
super().__init__()
base_dim = in_dims[0]
layers = []
for i in range(len(in_dims)):
for j in range(2 ** i):
layers.append(nn.Conv2d(in_dims[-1-i], base_dim, 3, 2**(i+1), 1+j, 1+j))
self.layers = nn.ModuleList(layers)
self.proj = nn.Conv2d(base_dim * len(layers), embed_dim, 1, 1)
self.norm = nn.LayerNorm(embed_dim)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, xs):
ys = []
k = 0
for i in range(len(xs)):
for _ in range(2 ** i):
ys.append(self.layers[k](xs[-1-i]))
k += 1
x = self.proj(torch.cat(ys,1))
_, _, H, W = x.shape
x = x.flatten(2).transpose(1, 2)
x = self.norm(x)
return x, H, W
class MotionFormer(nn.Module):
def __init__(self, in_chans=3, embed_dims=[32, 64, 128, 256, 512], motion_dims=64, num_heads=[8, 16],
mlp_ratios=[4, 4], qkv_bias=True, qk_scale=None, drop_rate=0.,
attn_drop_rate=0., drop_path_rate=0., norm_layer=nn.LayerNorm,
depths=[2, 2, 2, 6, 2], window_sizes=[11, 11],**kwarg):
super().__init__()
self.depths = depths
self.num_stages = len(embed_dims)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
cur = 0
self.conv_stages = self.num_stages - len(num_heads)
for i in range(self.num_stages):
if i == 0:
block = ConvBlock(in_chans,embed_dims[i],depths[i])
else:
if i < self.conv_stages:
patch_embed = nn.Sequential(
nn.Conv2d(embed_dims[i-1], embed_dims[i], 3,2,1),
nn.PReLU(embed_dims[i])
)
block = ConvBlock(embed_dims[i],embed_dims[i],depths[i])
else:
if i == self.conv_stages:
patch_embed = CrossScalePatchEmbed(embed_dims[:i],
embed_dim=embed_dims[i])
else:
patch_embed = OverlapPatchEmbed(patch_size=3,
stride=2,
in_chans=embed_dims[i - 1],
embed_dim=embed_dims[i])
block = nn.ModuleList([MotionFormerBlock(
dim=embed_dims[i], motion_dim=motion_dims[i], num_heads=num_heads[i-self.conv_stages], window_size=window_sizes[i-self.conv_stages],
shift_size= 0 if (j % 2) == 0 else window_sizes[i-self.conv_stages] // 2,
mlp_ratio=mlp_ratios[i-self.conv_stages], qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[cur + j], norm_layer=norm_layer)
for j in range(depths[i])])
norm = norm_layer(embed_dims[i])
setattr(self, f"norm{i + 1}", norm)
setattr(self, f"patch_embed{i + 1}", patch_embed)
cur += depths[i]
setattr(self, f"block{i + 1}", block)
self.cor = {}
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def get_cor(self, shape, device):
k = (str(shape), str(device))
if k not in self.cor:
tenHorizontal = torch.linspace(-1.0, 1.0, shape[2], device=device).view(
1, 1, 1, shape[2]).expand(shape[0], -1, shape[1], -1).permute(0, 2, 3, 1)
tenVertical = torch.linspace(-1.0, 1.0, shape[1], device=device).view(
1, 1, shape[1], 1).expand(shape[0], -1, -1, shape[2]).permute(0, 2, 3, 1)
self.cor[k] = torch.cat([tenHorizontal, tenVertical], -1).to(device)
return self.cor[k]
def forward(self, x1, x2):
B = x1.shape[0]
x = torch.cat([x1, x2], 0)
motion_features = []
appearence_features = []
xs = []
for i in range(self.num_stages):
motion_features.append([])
patch_embed = getattr(self, f"patch_embed{i + 1}",None)
block = getattr(self, f"block{i + 1}",None)
norm = getattr(self, f"norm{i + 1}",None)
if i < self.conv_stages:
if i > 0:
x = patch_embed(x)
x = block(x)
xs.append(x)
else:
if i == self.conv_stages:
x, H, W = patch_embed(xs)
else:
x, H, W = patch_embed(x)
cor = self.get_cor((x.shape[0], H, W), x.device)
for blk in block:
x, x_motion = blk(x, cor, H, W, B)
motion_features[i].append(x_motion.reshape(2*B, H, W, -1).permute(0, 3, 1, 2).contiguous())
x = norm(x)
x = x.reshape(2*B, H, W, -1).permute(0, 3, 1, 2).contiguous()
motion_features[i] = torch.cat(motion_features[i], 1)
appearence_features.append(x)
return appearence_features, motion_features
class DWConv(nn.Module):
def __init__(self, dim):
super(DWConv, self).__init__()
self.dwconv = nn.Conv2d(dim, dim, 3, 1, 1, bias=True, groups=dim)
def forward(self, x, H, W):
B, N, C = x.shape
x = x.transpose(1, 2).reshape(B, C, H, W)
x = self.dwconv(x)
x = x.reshape(B, C, -1).transpose(1, 2)
return x
def feature_extractor(**kargs):
model = MotionFormer(**kargs)
return model
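`window_partition` and `window_reverse` above are exact inverses whenever H and W are multiples of the window size, which is what `pad_if_needed` guarantees before attention runs. A minimal check, assuming both functions are imported from this module:

```python
import torch
from ema_vfi_arch.feature_extractor import window_partition, window_reverse

B, H, W, C = 2, 22, 22, 8           # H, W already multiples of the 11x11 window
ws = (11, 11)
x = torch.randn(B, H, W, C)
wins = window_partition(x, ws)      # -> (B * (H//11) * (W//11), 11*11, C)
assert wins.shape == (B * (H // 11) * (W // 11), 11 * 11, C)
assert torch.equal(window_reverse(wins, ws, H, W), x)  # pure reshape/permute
```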

ema_vfi_arch/flow_estimation.py Normal file

@@ -0,0 +1,141 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from .warplayer import warp
from .refine import *
def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1):
return nn.Sequential(
nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
padding=padding, dilation=dilation, bias=True),
nn.PReLU(out_planes)
)
class Head(nn.Module):
def __init__(self, in_planes, scale, c, in_else=17):
super(Head, self).__init__()
self.upsample = nn.Sequential(nn.PixelShuffle(2), nn.PixelShuffle(2))
self.scale = scale
self.conv = nn.Sequential(
conv(in_planes*2 // (4*4) + in_else, c),
conv(c, c),
conv(c, 5),
)
def forward(self, motion_feature, x, flow): # /16 /8 /4
motion_feature = self.upsample(motion_feature) #/4 /2 /1
if self.scale != 4:
x = F.interpolate(x, scale_factor = 4. / self.scale, mode="bilinear", align_corners=False)
if flow != None:
if self.scale != 4:
flow = F.interpolate(flow, scale_factor = 4. / self.scale, mode="bilinear", align_corners=False) * 4. / self.scale
x = torch.cat((x, flow), 1)
x = self.conv(torch.cat([motion_feature, x], 1))
if self.scale != 4:
x = F.interpolate(x, scale_factor = self.scale // 4, mode="bilinear", align_corners=False)
flow = x[:, :4] * (self.scale // 4)
else:
flow = x[:, :4]
mask = x[:, 4:5]
return flow, mask
class MultiScaleFlow(nn.Module):
def __init__(self, backbone, **kargs):
super(MultiScaleFlow, self).__init__()
self.flow_num_stage = len(kargs['hidden_dims'])
self.feature_bone = backbone
self.block = nn.ModuleList([Head( kargs['motion_dims'][-1-i] * kargs['depths'][-1-i] + kargs['embed_dims'][-1-i],
kargs['scales'][-1-i],
kargs['hidden_dims'][-1-i],
6 if i==0 else 17)
for i in range(self.flow_num_stage)])
self.unet = Unet(kargs['c'] * 2)
def warp_features(self, xs, flow):
y0 = []
y1 = []
B = xs[0].size(0) // 2
for x in xs:
y0.append(warp(x[:B], flow[:, 0:2]))
y1.append(warp(x[B:], flow[:, 2:4]))
flow = F.interpolate(flow, scale_factor=0.5, mode="bilinear", align_corners=False, recompute_scale_factor=False) * 0.5
return y0, y1
def calculate_flow(self, imgs, timestep, af=None, mf=None):
img0, img1 = imgs[:, :3], imgs[:, 3:6]
B = img0.size(0)
flow, mask = None, None
# appearence_features & motion_features
if (af is None) or (mf is None):
af, mf = self.feature_bone(img0, img1)
for i in range(self.flow_num_stage):
t = torch.full(mf[-1-i][:B].shape, timestep, dtype=torch.float, device=imgs.device)
if flow != None:
warped_img0 = warp(img0, flow[:, :2])
warped_img1 = warp(img1, flow[:, 2:4])
flow_, mask_ = self.block[i](
torch.cat([t*mf[-1-i][:B],(1-t)*mf[-1-i][B:],af[-1-i][:B],af[-1-i][B:]],1),
torch.cat((img0, img1, warped_img0, warped_img1, mask), 1),
flow
)
flow = flow + flow_
mask = mask + mask_
else:
flow, mask = self.block[i](
torch.cat([t*mf[-1-i][:B],(1-t)*mf[-1-i][B:],af[-1-i][:B],af[-1-i][B:]],1),
torch.cat((img0, img1), 1),
None
)
return flow, mask
def coraseWarp_and_Refine(self, imgs, af, flow, mask):
img0, img1 = imgs[:, :3], imgs[:, 3:6]
warped_img0 = warp(img0, flow[:, :2])
warped_img1 = warp(img1, flow[:, 2:4])
c0, c1 = self.warp_features(af, flow)
tmp = self.unet(img0, img1, warped_img0, warped_img1, mask, flow, c0, c1)
res = tmp[:, :3] * 2 - 1
mask_ = torch.sigmoid(mask)
merged = warped_img0 * mask_ + warped_img1 * (1 - mask_)
pred = torch.clamp(merged + res, 0, 1)
return pred
# Actually consist of 'calculate_flow' and 'coraseWarp_and_Refine'
def forward(self, x, timestep=0.5):
img0, img1 = x[:, :3], x[:, 3:6]
B = x.size(0)
flow_list = []
merged = []
mask_list = []
warped_img0 = img0
warped_img1 = img1
flow = None
# appearence_features & motion_features
af, mf = self.feature_bone(img0, img1)
for i in range(self.flow_num_stage):
t = torch.full(mf[-1-i][:B].shape, timestep, dtype=torch.float, device=x.device)
if flow != None:
flow_d, mask_d = self.block[i]( torch.cat([t*mf[-1-i][:B], (1-timestep)*mf[-1-i][B:],af[-1-i][:B],af[-1-i][B:]],1),
torch.cat((img0, img1, warped_img0, warped_img1, mask), 1), flow)
flow = flow + flow_d
mask = mask + mask_d
else:
flow, mask = self.block[i]( torch.cat([t*mf[-1-i][:B], (1-t)*mf[-1-i][B:],af[-1-i][:B],af[-1-i][B:]],1),
torch.cat((img0, img1), 1), None)
mask_list.append(torch.sigmoid(mask))
flow_list.append(flow)
warped_img0 = warp(img0, flow[:, :2])
warped_img1 = warp(img1, flow[:, 2:4])
merged.append(warped_img0 * mask_list[i] + warped_img1 * (1 - mask_list[i]))
c0, c1 = self.warp_features(af, flow)
tmp = self.unet(img0, img1, warped_img0, warped_img1, mask, flow, c0, c1)
res = tmp[:, :3] * 2 - 1
pred = torch.clamp(merged[-1] + res, 0, 1)
return flow_list, mask_list, merged, pred

ema_vfi_arch/refine.py Normal file

@@ -0,0 +1,70 @@
import torch
import torch.nn as nn
import math
from timm.models.layers import trunc_normal_
def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1):
return nn.Sequential(
nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
padding=padding, dilation=dilation, bias=True),
nn.PReLU(out_planes)
)
def deconv(in_planes, out_planes, kernel_size=4, stride=2, padding=1):
return nn.Sequential(
torch.nn.ConvTranspose2d(in_channels=in_planes, out_channels=out_planes, kernel_size=4, stride=2, padding=1, bias=True),
nn.PReLU(out_planes)
)
class Conv2(nn.Module):
def __init__(self, in_planes, out_planes, stride=2):
super(Conv2, self).__init__()
self.conv1 = conv(in_planes, out_planes, 3, stride, 1)
self.conv2 = conv(out_planes, out_planes, 3, 1, 1)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
return x
class Unet(nn.Module):
def __init__(self, c, out=3):
super(Unet, self).__init__()
self.down0 = Conv2(17+c, 2*c)
self.down1 = Conv2(4*c, 4*c)
self.down2 = Conv2(8*c, 8*c)
self.down3 = Conv2(16*c, 16*c)
self.up0 = deconv(32*c, 8*c)
self.up1 = deconv(16*c, 4*c)
self.up2 = deconv(8*c, 2*c)
self.up3 = deconv(4*c, c)
self.conv = nn.Conv2d(c, out, 3, 1, 1)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
fan_out //= m.groups
m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
m.bias.data.zero_()
def forward(self, img0, img1, warped_img0, warped_img1, mask, flow, c0, c1):
s0 = self.down0(torch.cat((img0, img1, warped_img0, warped_img1, mask, flow,c0[0], c1[0]), 1))
s1 = self.down1(torch.cat((s0, c0[1], c1[1]), 1))
s2 = self.down2(torch.cat((s1, c0[2], c1[2]), 1))
s3 = self.down3(torch.cat((s2, c0[3], c1[3]), 1))
x = self.up0(torch.cat((s3, c0[4], c1[4]), 1))
x = self.up1(torch.cat((x, s2), 1))
x = self.up2(torch.cat((x, s1), 1))
x = self.up3(torch.cat((x, s0), 1))
x = self.conv(x)
return torch.sigmoid(x)

ema_vfi_arch/warplayer.py Normal file

@@ -0,0 +1,25 @@
import torch
backwarp_tenGrid = {}
def clear_warp_cache():
"""Free all cached grid tensors (call between frame pairs to reclaim VRAM)."""
backwarp_tenGrid.clear()
def warp(tenInput, tenFlow):
k = (str(tenFlow.device), str(tenFlow.size()))
if k not in backwarp_tenGrid:
tenHorizontal = torch.linspace(-1.0, 1.0, tenFlow.shape[3], device=tenFlow.device).view(
1, 1, 1, tenFlow.shape[3]).expand(tenFlow.shape[0], -1, tenFlow.shape[2], -1)
tenVertical = torch.linspace(-1.0, 1.0, tenFlow.shape[2], device=tenFlow.device).view(
1, 1, tenFlow.shape[2], 1).expand(tenFlow.shape[0], -1, -1, tenFlow.shape[3])
backwarp_tenGrid[k] = torch.cat(
[tenHorizontal, tenVertical], 1).to(tenFlow.device)
tenFlow = torch.cat([tenFlow[:, 0:1, :, :] / ((tenInput.shape[3] - 1.0) / 2.0),
tenFlow[:, 1:2, :, :] / ((tenInput.shape[2] - 1.0) / 2.0)], 1)
g = (backwarp_tenGrid[k] + tenFlow).permute(0, 2, 3, 1)
return torch.nn.functional.grid_sample(input=tenInput, grid=g, mode='bilinear', padding_mode='border', align_corners=True)
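The cache above keys grids on (device, flow size), so a long run over mixed resolutions accumulates one grid per shape; `clear_warp_cache` is what the nodes call between pairs to reclaim that VRAM. A small usage sketch, assuming `warp`, `backwarp_tenGrid`, and `clear_warp_cache` are imported from this module:

```python
import torch
from ema_vfi_arch.warplayer import warp, backwarp_tenGrid, clear_warp_cache

feat = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)   # zero flow: warping is the identity
out = warp(feat, flow)
assert torch.allclose(out, feat, atol=1e-6)
assert len(backwarp_tenGrid) == 1  # one grid cached for this (device, size)
clear_warp_cache()                 # e.g. between frame pairs or resolutions
assert len(backwarp_tenGrid) == 0
```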

inference.py

@@ -1,5 +1,15 @@
+import logging
+from functools import partial
 import torch
+import torch.nn as nn
 from .bim_vfi_arch import BiMVFI
+from .ema_vfi_arch import feature_extractor as ema_feature_extractor
+from .ema_vfi_arch import MultiScaleFlow as EMAMultiScaleFlow
+from .utils.padder import InputPadder
+logger = logging.getLogger("BIM-VFI")
 class BiMVFIModel:
@@ -112,3 +122,163 @@ class BiMVFIModel:
         interp = result_dict["imgt_pred"]
         interp = torch.clamp(interp, 0, 1)
         return interp
# ---------------------------------------------------------------------------
# EMA-VFI model wrapper
# ---------------------------------------------------------------------------
def _ema_init_model_config(F=32, W=7, depth=[2, 2, 2, 4, 4]):
"""Build EMA-VFI model config dicts (backbone + multiscale)."""
return {
'embed_dims': [F, 2*F, 4*F, 8*F, 16*F],
'motion_dims': [0, 0, 0, 8*F//depth[-2], 16*F//depth[-1]],
'num_heads': [8*F//32, 16*F//32],
'mlp_ratios': [4, 4],
'qkv_bias': True,
'norm_layer': partial(nn.LayerNorm, eps=1e-6),
'depths': depth,
'window_sizes': [W, W]
}, {
'embed_dims': [F, 2*F, 4*F, 8*F, 16*F],
'motion_dims': [0, 0, 0, 8*F//depth[-2], 16*F//depth[-1]],
'depths': depth,
'num_heads': [8*F//32, 16*F//32],
'window_sizes': [W, W],
'scales': [4, 8, 16],
'hidden_dims': [4*F, 4*F],
'c': F
}
def _ema_detect_variant(filename):
"""Auto-detect model variant and timestep support from filename.
Returns (F, depth, supports_arbitrary_t).
"""
name = filename.lower()
is_small = "small" in name
supports_t = "_t." in name or "_t_" in name or name.endswith("_t")
if is_small:
return 16, [2, 2, 2, 2, 2], supports_t
else:
return 32, [2, 2, 2, 4, 4], supports_t
class EMAVFIModel:
"""Clean inference wrapper around EMA-VFI for ComfyUI integration."""
def __init__(self, checkpoint_path, variant="auto", tta=False, device="cpu"):
import os
filename = os.path.basename(checkpoint_path)
if variant == "auto":
F_dim, depth, self.supports_arbitrary_t = _ema_detect_variant(filename)
elif variant == "small":
F_dim, depth = 16, [2, 2, 2, 2, 2]
self.supports_arbitrary_t = "_t." in filename.lower() or "_t_" in filename.lower()
else: # large
F_dim, depth = 32, [2, 2, 2, 4, 4]
self.supports_arbitrary_t = "_t." in filename.lower() or "_t_" in filename.lower()
self.tta = tta
self.device = device
self.variant_name = "small" if F_dim == 16 else "large"
backbone_cfg, multiscale_cfg = _ema_init_model_config(F=F_dim, depth=depth)
backbone = ema_feature_extractor(**backbone_cfg)
self.model = EMAMultiScaleFlow(backbone, **multiscale_cfg)
self._load_checkpoint(checkpoint_path)
self.model.eval()
self.model.to(device)
def _load_checkpoint(self, checkpoint_path):
"""Load checkpoint with module prefix stripping and buffer filtering."""
state_dict = torch.load(checkpoint_path, map_location="cpu", weights_only=False)
# Handle wrapped checkpoint formats
if isinstance(state_dict, dict):
if "model" in state_dict:
state_dict = state_dict["model"]
elif "state_dict" in state_dict:
state_dict = state_dict["state_dict"]
# Strip "module." prefix and filter out attn_mask/HW buffers
cleaned = {}
for k, v in state_dict.items():
if "attn_mask" in k or k.endswith(".HW"):
continue
key = k
if key.startswith("module."):
key = key[len("module."):]
cleaned[key] = v
self.model.load_state_dict(cleaned)
def to(self, device):
"""Move model to device (returns self for chaining)."""
self.device = device
self.model.to(device)
return self
@torch.no_grad()
def _inference(self, img0, img1, timestep=0.5):
"""Run single inference pass. Inputs already padded, on device."""
B = img0.shape[0]
imgs = torch.cat((img0, img1), 1)
if self.tta:
imgs_ = imgs.flip(2).flip(3)
input_batch = torch.cat((imgs, imgs_), 0)
_, _, _, preds = self.model(input_batch, timestep=timestep)
return (preds[:B] + preds[B:].flip(2).flip(3)) / 2.
else:
_, _, _, pred = self.model(imgs, timestep=timestep)
return pred
@torch.no_grad()
def interpolate_pair(self, frame0, frame1, time_step=0.5):
"""Interpolate a single frame between two input frames.
Args:
frame0: [1, C, H, W] tensor, float32, range [0, 1]
frame1: [1, C, H, W] tensor, float32, range [0, 1]
time_step: float in (0, 1)
Returns:
Interpolated frame as [1, C, H, W] tensor, float32, clamped to [0, 1]
"""
device = next(self.model.parameters()).device
img0 = frame0.to(device)
img1 = frame1.to(device)
padder = InputPadder(img0.shape, divisor=32, mode='replicate', center=True)
img0, img1 = padder.pad(img0, img1)
pred = self._inference(img0, img1, timestep=time_step)
pred = padder.unpad(pred)
return torch.clamp(pred, 0, 1)
@torch.no_grad()
def interpolate_batch(self, frames0, frames1, time_step=0.5):
"""Interpolate multiple frame pairs at once.
Args:
frames0: [B, C, H, W] tensor, float32, range [0, 1]
frames1: [B, C, H, W] tensor, float32, range [0, 1]
time_step: float in (0, 1)
Returns:
Interpolated frames as [B, C, H, W] tensor, float32, clamped to [0, 1]
"""
device = next(self.model.parameters()).device
img0 = frames0.to(device)
img1 = frames1.to(device)
padder = InputPadder(img0.shape, divisor=32, mode='replicate', center=True)
img0, img1 = padder.pad(img0, img1)
pred = self._inference(img0, img1, timestep=time_step)
pred = padder.unpad(pred)
return torch.clamp(pred, 0, 1)
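A minimal sketch of driving the wrapper directly, outside ComfyUI (the checkpoint path here is an assumption; the Load EMA-VFI Model node normally resolves and downloads it):

```python
import torch
from inference import EMAVFIModel

# ours_t.pkl is the large, arbitrary-timestep checkpoint from the README table.
model = EMAVFIModel("ComfyUI/models/ema-vfi/ours_t.pkl", tta=False,
                    device="cuda" if torch.cuda.is_available() else "cpu")
f0 = torch.rand(1, 3, 540, 960)  # [1, C, H, W], float32 in [0, 1]
f1 = torch.rand(1, 3, 540, 960)
mid = model.interpolate_pair(f0, f1, time_step=0.5)    # -> [1, 3, 540, 960]
# Arbitrary-timestep checkpoints (ours_t / ours_small_t) accept any t in (0, 1):
quarter = model.interpolate_pair(f0, f1, time_step=0.25)
```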

nodes.py

@@ -8,20 +8,29 @@ import torch
 import folder_paths
 from comfy.utils import ProgressBar
-from .inference import BiMVFIModel
+from .inference import BiMVFIModel, EMAVFIModel
 from .bim_vfi_arch import clear_backwarp_cache
+from .ema_vfi_arch import clear_warp_cache as clear_ema_warp_cache
 logger = logging.getLogger("BIM-VFI")
-# Google Drive file ID for the pretrained model
+# Google Drive file ID for the pretrained BIM-VFI model
 GDRIVE_FILE_ID = "18Wre7XyRtu_wtFRzcsit6oNfHiFRt9vC"
 MODEL_FILENAME = "bim_vfi.pth"
+# Google Drive folder ID for EMA-VFI pretrained models
+EMA_GDRIVE_FOLDER_ID = "16jUa3HkQ85Z5lb5gce1yoaWkP-rdCd0o"
+EMA_DEFAULT_MODEL = "ours_t.pkl"
-# Register the model folder with ComfyUI
+# Register model folders with ComfyUI
 MODEL_DIR = os.path.join(folder_paths.models_dir, "bim-vfi")
 if not os.path.exists(MODEL_DIR):
     os.makedirs(MODEL_DIR, exist_ok=True)
+EMA_MODEL_DIR = os.path.join(folder_paths.models_dir, "ema-vfi")
+if not os.path.exists(EMA_MODEL_DIR):
+    os.makedirs(EMA_MODEL_DIR, exist_ok=True)
 def get_available_models():
     """List available checkpoint files in the bim-vfi model directory."""
@@ -456,3 +465,305 @@ class BIMVFIConcatVideos:
         os.remove(concat_list_path)
         return (output_path,)
# ---------------------------------------------------------------------------
# EMA-VFI nodes
# ---------------------------------------------------------------------------
def get_available_ema_models():
"""List available checkpoint files in the ema-vfi model directory."""
models = []
if os.path.isdir(EMA_MODEL_DIR):
for f in os.listdir(EMA_MODEL_DIR):
if f.endswith((".pkl", ".pth", ".pt", ".ckpt", ".safetensors")):
models.append(f)
if not models:
models.append(EMA_DEFAULT_MODEL) # Will trigger auto-download
return sorted(models)
def download_ema_model_from_gdrive(folder_id, dest_path):
"""Download EMA-VFI model from Google Drive folder using gdown."""
try:
import gdown
except ImportError:
raise RuntimeError(
"gdown is required to auto-download the EMA-VFI model. "
"Install it with: pip install gdown"
)
filename = os.path.basename(dest_path)
url = f"https://drive.google.com/drive/folders/{folder_id}"
logger.info(f"Downloading {filename} from Google Drive folder to {dest_path}...")
os.makedirs(os.path.dirname(dest_path), exist_ok=True)
gdown.download_folder(url, output=os.path.dirname(dest_path), quiet=False, remaining_ok=True)
if not os.path.exists(dest_path):
raise RuntimeError(
f"Failed to download {filename}. Please download manually from "
f"https://drive.google.com/drive/folders/{folder_id} "
f"and place it in {os.path.dirname(dest_path)}"
)
logger.info("Download complete.")
class LoadEMAVFIModel:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"model_path": (get_available_ema_models(), {
"default": EMA_DEFAULT_MODEL,
"tooltip": "Checkpoint file from models/ema-vfi/. Auto-downloads on first use if missing. "
"Variant (large/small) and timestep support are auto-detected from filename.",
}),
"tta": ("BOOLEAN", {
"default": False,
"tooltip": "Test-time augmentation: flip input and average with unflipped result. "
"~2x slower but slightly better quality. Recommended for large model only.",
}),
}
}
RETURN_TYPES = ("EMA_VFI_MODEL",)
RETURN_NAMES = ("model",)
FUNCTION = "load_model"
CATEGORY = "video/EMA-VFI"
def load_model(self, model_path, tta):
full_path = os.path.join(EMA_MODEL_DIR, model_path)
if not os.path.exists(full_path):
logger.info(f"Model not found at {full_path}, attempting download...")
download_ema_model_from_gdrive(EMA_GDRIVE_FOLDER_ID, full_path)
wrapper = EMAVFIModel(
checkpoint_path=full_path,
variant="auto",
tta=tta,
device="cpu",
)
t_mode = "arbitrary" if wrapper.supports_arbitrary_t else "fixed (0.5)"
logger.info(f"EMA-VFI model loaded (variant={wrapper.variant_name}, timestep={t_mode}, tta={tta})")
return (wrapper,)
class EMAVFIInterpolate:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"images": ("IMAGE", {
"tooltip": "Input image batch. Output frame count: 2x=(2N-1), 4x=(4N-3), 8x=(8N-7).",
}),
"model": ("EMA_VFI_MODEL", {
"tooltip": "EMA-VFI model from the Load EMA-VFI Model node.",
}),
"multiplier": ([2, 4, 8], {
"default": 2,
"tooltip": "Frame rate multiplier. 2x=one interpolation pass, 4x=two recursive passes, 8x=three. Higher = more frames but longer processing.",
}),
"clear_cache_after_n_frames": ("INT", {
"default": 10, "min": 1, "max": 100, "step": 1,
"tooltip": "Clear CUDA cache every N frame pairs to prevent VRAM buildup. Lower = less VRAM but slower. Ignored when all_on_gpu is enabled.",
}),
"keep_device": ("BOOLEAN", {
"default": True,
"tooltip": "Keep model on GPU between frame pairs. Faster but uses more VRAM constantly. Disable to free VRAM between pairs (slower due to CPU-GPU transfers).",
}),
"all_on_gpu": ("BOOLEAN", {
"default": False,
"tooltip": "Store all intermediate frames on GPU instead of CPU. Much faster (no transfers) but requires enough VRAM for all frames. Recommended for 48GB+ cards.",
}),
"batch_size": ("INT", {
"default": 1, "min": 1, "max": 64, "step": 1,
"tooltip": "Number of frame pairs to process simultaneously. Higher = faster but uses more VRAM. Start with 1, increase until VRAM is full.",
}),
"chunk_size": ("INT", {
"default": 0, "min": 0, "max": 10000, "step": 1,
"tooltip": "Process input frames in chunks of this size (0=disabled). Bounds VRAM usage during processing but the full output is still assembled in RAM. To bound RAM, use the Segment Interpolate node instead.",
}),
}
}
RETURN_TYPES = ("IMAGE",)
RETURN_NAMES = ("images",)
FUNCTION = "interpolate"
CATEGORY = "video/EMA-VFI"
def _interpolate_frames(self, frames, model, num_passes, batch_size,
device, storage_device, keep_device, all_on_gpu,
clear_cache_after_n_frames, pbar, step_ref):
"""Run all interpolation passes on a chunk of frames."""
for pass_idx in range(num_passes):
new_frames = []
num_pairs = frames.shape[0] - 1
pairs_since_clear = 0
for i in range(0, num_pairs, batch_size):
batch_end = min(i + batch_size, num_pairs)
actual_batch = batch_end - i
frames0 = frames[i:batch_end]
frames1 = frames[i + 1:batch_end + 1]
if not keep_device:
model.to(device)
mids = model.interpolate_batch(frames0, frames1, time_step=0.5)
mids = mids.to(storage_device)
if not keep_device:
model.to("cpu")
for j in range(actual_batch):
new_frames.append(frames[i + j:i + j + 1])
new_frames.append(mids[j:j+1])
step_ref[0] += actual_batch
pbar.update_absolute(step_ref[0])
pairs_since_clear += actual_batch
if not all_on_gpu and pairs_since_clear >= clear_cache_after_n_frames and torch.cuda.is_available():
clear_ema_warp_cache()
torch.cuda.empty_cache()
pairs_since_clear = 0
new_frames.append(frames[-1:])
frames = torch.cat(new_frames, dim=0)
if not all_on_gpu and torch.cuda.is_available():
clear_ema_warp_cache()
torch.cuda.empty_cache()
return frames
@staticmethod
def _count_steps(num_frames, num_passes):
"""Count total interpolation steps for a given input frame count."""
n = num_frames
total = 0
for _ in range(num_passes):
total += n - 1
n = 2 * n - 1
return total
def interpolate(self, images, model, multiplier, clear_cache_after_n_frames,
keep_device, all_on_gpu, batch_size, chunk_size):
if images.shape[0] < 2:
return (images,)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
num_passes = {2: 1, 4: 2, 8: 3}[multiplier]
if all_on_gpu:
keep_device = True
storage_device = device if all_on_gpu else torch.device("cpu")
# Convert from ComfyUI [B, H, W, C] to model [B, C, H, W]
all_frames = images.permute(0, 3, 1, 2).to(storage_device)
total_input = all_frames.shape[0]
# Build chunk boundaries (1-frame overlap between consecutive chunks)
if chunk_size < 2 or chunk_size >= total_input:
chunks = [(0, total_input)]
else:
chunks = []
start = 0
while start < total_input - 1:
end = min(start + chunk_size, total_input)
chunks.append((start, end))
start = end - 1 # overlap by 1 frame
if end == total_input:
break
# Calculate total progress steps across all chunks
total_steps = sum(self._count_steps(ce - cs, num_passes) for cs, ce in chunks)
pbar = ProgressBar(total_steps)
step_ref = [0]
if keep_device:
model.to(device)
result_chunks = []
for chunk_idx, (chunk_start, chunk_end) in enumerate(chunks):
chunk_frames = all_frames[chunk_start:chunk_end].clone()
chunk_result = self._interpolate_frames(
chunk_frames, model, num_passes, batch_size,
device, storage_device, keep_device, all_on_gpu,
clear_cache_after_n_frames, pbar, step_ref,
)
# Skip first frame of subsequent chunks (duplicate of previous chunk's last frame)
if chunk_idx > 0:
chunk_result = chunk_result[1:]
# Move completed chunk to CPU to bound memory when chunking
if len(chunks) > 1:
chunk_result = chunk_result.cpu()
result_chunks.append(chunk_result)
result = torch.cat(result_chunks, dim=0)
# Convert back to ComfyUI [B, H, W, C], on CPU
result = result.cpu().permute(0, 2, 3, 1)
return (result,)
class EMAVFISegmentInterpolate(EMAVFIInterpolate):
"""Process a numbered segment of the input batch for EMA-VFI.
Chain multiple instances with Save nodes between them to bound peak RAM.
The model pass-through output forces sequential execution so each segment
saves and frees from RAM before the next starts.
"""
@classmethod
def INPUT_TYPES(cls):
base = EMAVFIInterpolate.INPUT_TYPES()
base["required"]["segment_index"] = ("INT", {
"default": 0, "min": 0, "max": 10000, "step": 1,
"tooltip": "Which segment to process (0-based). Bounds RAM by only producing this segment's output frames, "
"unlike chunk_size which bounds VRAM but still assembles the full output in RAM. "
"Chain the model output to the next Segment Interpolate to force sequential execution.",
})
base["required"]["segment_size"] = ("INT", {
"default": 500, "min": 2, "max": 10000, "step": 1,
"tooltip": "Number of input frames per segment. Adjacent segments overlap by 1 frame for seamless stitching. "
"Smaller = less peak RAM per segment. Save each segment's output to disk before the next runs.",
})
return base
RETURN_TYPES = ("IMAGE", "EMA_VFI_MODEL")
RETURN_NAMES = ("images", "model")
FUNCTION = "interpolate"
CATEGORY = "video/EMA-VFI"
def interpolate(self, images, model, multiplier, clear_cache_after_n_frames,
keep_device, all_on_gpu, batch_size, chunk_size,
segment_index, segment_size):
total_input = images.shape[0]
# Compute segment boundaries (1-frame overlap)
start = segment_index * (segment_size - 1)
end = min(start + segment_size, total_input)
if start >= total_input - 1:
# Past the end — return empty single frame + model
return (images[:1], model)
segment_images = images[start:end]
is_continuation = segment_index > 0
# Delegate to the parent interpolation logic
(result,) = super().interpolate(
segment_images, model, multiplier, clear_cache_after_n_frames,
keep_device, all_on_gpu, batch_size, chunk_size,
)
if is_continuation:
result = result[1:] # skip duplicate boundary frame
return (result, model)
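A worked example of the segment arithmetic above, assuming a hypothetical 1200-frame input and the default segment_size of 500; consecutive segments share one boundary frame, which the node drops when segment_index > 0 so the saved pieces concatenate seamlessly:

```python
# Each segment starts segment_size - 1 frames after the previous one,
# reusing the prior segment's last frame as its first input frame.
total_input, segment_size = 1200, 500
segments, idx = [], 0
while True:
    start = idx * (segment_size - 1)
    if start >= total_input - 1:
        break
    segments.append((start, min(start + segment_size, total_input)))
    idx += 1
assert segments == [(0, 500), (499, 999), (998, 1200)]
```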

utils/padder.py

@@ -4,17 +4,22 @@ import torch.nn.functional as F
 class InputPadder:
     """ Pads images such that dimensions are divisible by divisor """
-    def __init__(self, dims, divisor=16):
+    def __init__(self, dims, divisor=16, mode='constant', center=False):
         self.ht, self.wd = dims[-2:]
+        self.mode = mode
         pad_ht = (((self.ht // divisor) + 1) * divisor - self.ht) % divisor
         pad_wd = (((self.wd // divisor) + 1) * divisor - self.wd) % divisor
-        self._pad = [0, pad_wd, 0, pad_ht]
+        if center:
+            self._pad = [pad_wd // 2, pad_wd - pad_wd // 2,
+                         pad_ht // 2, pad_ht - pad_ht // 2]
+        else:
+            self._pad = [0, pad_wd, 0, pad_ht]
     def pad(self, *inputs):
         if len(inputs) == 1:
-            return F.pad(inputs[0], self._pad, mode='constant')
+            return F.pad(inputs[0], self._pad, mode=self.mode)
         else:
-            return [F.pad(x, self._pad, mode='constant') for x in inputs]
+            return [F.pad(x, self._pad, mode=self.mode) for x in inputs]
     def unpad(self, *inputs):
         if len(inputs) == 1: