Compare commits: 6d6859b844...main (57 commits)
.gitignore (new file, vendored, 1 line)
@@ -0,0 +1 @@
ffmpeg_bin/
README.md (new file, 142 lines)
@@ -0,0 +1,142 @@
# ComfyUI Sharpness Helper Nodes

A high-performance custom node suite for **ComfyUI** designed to detect blur, calculate sharpness scores (Laplacian Variance), and efficiently extract or filter the best frames from videos and image batches.

This pack was built for a personal project with two goals:

1. **Dataset Creation:** Extracting only the sharpest frames from massive movie files without exhausting RAM.

2. **Generation Filtering:** Automatically discarding blurry frames from Wan or img2img outputs.

---

![Nodes Overview](assets/nodes.png)

---
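Throughout this pack, "sharpness" means the variance of the Laplacian. The nodes compute it as `cv2.Laplacian(gray, cv2.CV_64F).var()`; the following NumPy-only sketch applies the same default 3x3 kernel via shifted sums, so it needs no OpenCV install:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of the Laplacian response.

    Matches cv2.Laplacian(gray, cv2.CV_64F).var() on interior pixels
    (OpenCV's default kernel is [[0,1,0],[1,-4,1],[0,1,0]]).
    """
    g = gray.astype(np.float64)
    # Valid-mode 2D convolution with the 4-neighbour Laplacian kernel
    resp = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
            - 4.0 * g[1:-1, 1:-1])
    return float(resp.var())

# A high-frequency pattern scores far above a flat (blurry) one
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0  # checkerboard
flat = np.full((32, 32), 128.0)
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

Blurrier images have weaker second derivatives everywhere, so the Laplacian response clusters near zero and its variance drops.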
## 🚀 Key Features

### 1. Parallel Video Loader (Path-Based)

* **Zero-RAM Scanning:** Scans video files directly from disk without decoding every frame to memory.

* **Multi-Threaded:** Uses all CPU cores to calculate sharpness scores at high speed (thousands of frames per minute).

* **Smart Batching:** Includes an auto-incrementing "Page" system to process long movies in chunks (e.g., minute-by-minute) without restarting ComfyUI.

* **Lazy Loading:** Only decodes and loads the final "Best N" frames into ComfyUI tensors.

### 2. Fast Absolute Saver (Metadata)

* **Multi-Threaded Saving:** Spawns parallel workers to saturate SSD write speeds (bypassing standard PIL bottlenecks).

* **No UI Lag:** Saves images in the background without trying to render Base64 previews in the browser, preventing interface freezes.

* **Metadata Embedding:** Automatically embeds the sharpness score into the PNG/WebP metadata for dataset curation.

* **Smart Naming:** Uses original video frame numbers in filenames (e.g., `frame_001450.png`) instead of arbitrary counters.

### 3. Standard Sharpness Duo (Tensor-Based)

* **Workflow Integration:** Works with any node that outputs an `IMAGE` batch (e.g., AnimateDiff, VideoHelperSuite).

* **Precision Filtering:** Sorts and filters generated frames before saving or passing them to a second pass (img2img).

---
## 📦 Installation

1. Clone this repository into your `custom_nodes` folder:

```bash
cd ComfyUI/custom_nodes/
git clone https://github.com/YOUR_USERNAME/ComfyUI-Sharpness-Helper.git
```

2. Install dependencies (if needed):

```bash
pip install opencv-python numpy
```

3. Restart ComfyUI.

---
## 🛠️ Node Documentation

### 1. Parallel Video Loader (Sharpness)

**Category:** `BetaHelper/Video`

This is the recommended node for **Dataset Creation** or finding good frames in **Long Movies**. It takes a file path, scans the video in parallel, and only loads the final "Best N" frames into memory.

| Input | Description |
| :--- | :--- |
| **video_path** | Absolute path to your video file (e.g., `D:\Movies\input.mp4`). |
| **batch_index** | **Critical.** Connect a **Primitive Node** here set to `increment`. This controls which "chunk" of the video you are viewing. |
| **scan_limit** | How many frames to process per batch (e.g., `1440`). |
| **frame_scan_step** | Speed up scanning by checking every Nth frame (e.g., `5` checks frames 0, 5, 10...). |
| **return_count** | How many of the sharpest frames to load per batch (default `4`). |
| **min_distance** | Minimum spacing (in frames) between selected frames, to avoid near-duplicates (default `24`). |
| **manual_skip_start** | Global offset (e.g., set to `2000` to always skip the opening credits). |
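The paging arithmetic behind `batch_index` reduces to a pure function (mirroring the offset math in `load_video` inside `parallel_loader.py`):

```python
def batch_window(batch_index: int, scan_limit: int, manual_skip_start: int = 0) -> tuple:
    """Frame range [start, end) scanned by a given batch."""
    start = batch_index * scan_limit + manual_skip_start
    return (start, start + scan_limit)

# With scan_limit=1440 and a 2000-frame credits offset:
assert batch_window(0, 1440, 2000) == (2000, 3440)
assert batch_window(1, 1440, 2000) == (3440, 4880)
```

This matches the node's status output, e.g. *"Batch 1: Skipped 3440 frames. Scanning range 3440 -> 4880."*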
**Outputs:**

* `images`: The batch of the sharpest frames found.

* `scores_info`: String containing frame indices and scores (connect to the Saver).

* `batch_int`: The current batch number.

* `batch_status`: Human-readable status (e.g., *"Batch 2: Skipped 2880 frames..."*).

> **💡 Pro Tip:** To scan a movie continuously, connect a **Primitive Node** to `batch_index`, set it to **increment**, and enable "Auto Queue" in ComfyUI.

---
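The `scores_info` string is formatted as `F:<frame> (Score:<int>)` entries joined by `", "` (see `info_log` in `parallel_loader.py`). A hypothetical downstream parser, if you want the mapping in your own scripts:

```python
def parse_scores_info(info: str) -> dict:
    """Parse 'F:1450 (Score:1500), F:2900 (Score:1320)' into {frame: score}."""
    result = {}
    for part in info.split(", "):
        frame_part, score_part = part.split(" ")
        frame = int(frame_part[2:])                         # strip 'F:'
        score = int(score_part.strip("()").split(":")[1])   # strip '(Score:' and ')'
        result[frame] = score
    return result

assert parse_scores_info("F:1450 (Score:1500), F:2900 (Score:1320)") == {1450: 1500, 2900: 1320}
```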
### 2. Fast Absolute Saver (Metadata)

**Category:** `BetaHelper/IO`

A "pro-grade" saver designed for speed. It bypasses relative paths and UI previews.

| Input | Description |
| :--- | :--- |
| **output_path** | Absolute path to the save folder (e.g., `D:\Datasets\Sharp_Output`). |
| **filename_prefix** | Base name for files (e.g., `matrix_movie`). |
| **max_threads** | `0` = Auto (uses all CPU cores). Set manually to limit CPU usage. |
| **save_format** | `png` (fastest) or `webp` (smaller files). |
| **filename_with_score** | If True, appends the score to the filename: `frame_001450_1500.png`. |
| **scores_info** | Connect this to the `scores_info` output of the Parallel Loader to enable smart naming. |

**Performance Notes:**

* **PNG:** Uses `compress_level=1` for maximum speed.

* **WebP:** Avoid `webp_method=6` unless you need maximum compression; it is very CPU-intensive. `4` is the recommended balance.
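The saver's concurrency is plain `concurrent.futures`. A stdlib-only sketch of the pattern (the real node encodes PNG/WebP via PIL with `compress_level=1` inside the worker; `save_frame` and `save_all` are illustrative names, and the encode step is stubbed out here):

```python
import concurrent.futures
import os
import tempfile

def save_frame(path: str, data: bytes) -> str:
    # Stand-in for the encode + write step (PIL PNG/WebP in the real node)
    with open(path, "wb") as f:
        f.write(data)
    return path

def save_all(out_dir: str, frames: dict, max_threads: int = 0) -> list:
    workers = max_threads or os.cpu_count() or 1  # 0 = auto, as in the node
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(save_frame, os.path.join(out_dir, name), data)
                   for name, data in frames.items()]
        return [f.result() for f in futures]

out = tempfile.mkdtemp()
written = save_all(out, {"frame_001450.png": b"...", "frame_002900.png": b"..."})
assert all(os.path.exists(p) for p in written)
```

Threads (rather than processes) are enough here because image encoding and disk I/O release the GIL for most of their runtime.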
---

### 3. Sharpness Analyzer & Selector (The Duo)

**Category:** `BetaHelper/Image`

Use these when you already have images inside your workflow (e.g., from a generation or a standard Load Video node).

#### Node A: Sharpness Analyzer

* **Input:** `IMAGE` batch.

* **Action:** Calculates the Laplacian Variance for every image in the batch.

* **Output:** Passes the images through, plus a generic score list.

#### Node B: SharpFrame Selector

* **Input:** `IMAGE` batch (from Analyzer).

* **Action:** Sorts the batch based on the scores and picks the top N frames.

* **Output:** A reduced batch containing only the sharpest images.
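The Selector's `best_n` strategy, including the `min_sharpness` floor, can be sketched over a plain score list (the node applies the same index selection to a torch `IMAGE` batch; `select_best_n` is an illustrative name):

```python
import numpy as np

def select_best_n(scores, num_frames, min_sharpness=0.0):
    """Indices of the top-N sharpest frames, returned in original (time) order."""
    valid = [i for i, s in enumerate(scores) if s >= min_sharpness]
    if not valid:
        return []
    valid_scores = np.array([scores[i] for i in valid])
    n = min(num_frames, len(valid))
    top_local = np.argsort(valid_scores)[-n:]  # argsort is low-to-high; take the last N
    return sorted(valid[i] for i in top_local)

scores = [120.0, 950.0, 40.0, 610.0, 880.0]
assert select_best_n(scores, 2) == [1, 4]                      # two sharpest, in video order
assert select_best_n(scores, 3, min_sharpness=500.0) == [1, 3, 4]
```

Sorting the winning indices back into chronological order keeps the output batch playable as a sequence.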
---

## ⚖️ Which Node Should I Use?

| Feature | **Parallel Video Loader** | **Standard Duo** |
| :--- | :--- | :--- |
| **Input Type** | File Path (`String`) | Image Tensor (`IMAGE`) |
| **Best For** | **Long Videos / Movies** | **Generations / Short Clips** |
| **Memory Usage** | Very Low (only loads final frames) | High (loads all frames to RAM first) |
| **Speed** | ⚡ **Ultra Fast** (Multi-core) | 🐢 Standard (Single-core) |
| **Workflow Stage** | Start of Workflow | Middle/End of Workflow |

---
## 📝 Example Workflows

### Batch Processing a Movie for Training Data

1. Add **Parallel Video Loader**.

2. Connect a **Primitive Node** to `batch_index` (Control: `increment`).

3. Set `scan_limit` to `1000` and `frame_scan_step` to `5`.

4. Connect `images` and `scores_info` to **Fast Absolute Saver**.

5. Enable **Auto Queue** in the ComfyUI extra options.

* *Result: ComfyUI loops through your movie, automatically extracting the 4 sharpest frames from every ~1000-frame chunk.*

### Filtering AnimateDiff Output

1. AnimateDiff Generation -> **Sharpness Analyzer**.

2. Analyzer Output -> **SharpFrame Selector** (Select Best 1).

3. Selector Output -> **Face Detailer** or **Upscaler**.

* *Result: Only the clearest frame from your animation is sent to the upscaler, saving time on blurry frames.*

---

## Credits

* Built using `opencv-python` for the Laplacian Variance calculation.

* Parallel processing logic for efficient large-file handling.
__init__.py (modified, 13 -> 16 lines)
@@ -1,13 +1,16 @@
-from .sharp_node import SharpFrameSelector
+from .sharp_node import SharpnessAnalyzer, SharpFrameSelector
+from .parallel_loader import ParallelSharpnessLoader
 
 # Map the class to a name ComfyUI recognizes
 NODE_CLASS_MAPPINGS = {
-    "SharpFrameSelector": SharpFrameSelector
+    "SharpnessAnalyzer": SharpnessAnalyzer,
+    "SharpFrameSelector": SharpFrameSelector,
+    "ParallelSharpnessLoader": ParallelSharpnessLoader,
 }
 
 # Map the internal name to a human-readable label in the menu
 NODE_DISPLAY_NAME_MAPPINGS = {
-    "SharpFrameSelector": "Sharp Frame Selector (Video)"
+    "SharpnessAnalyzer": "1. Sharpness Analyzer",
+    "SharpFrameSelector": "2. Sharp Frame Selector",
+    "ParallelSharpnessLoader": "3. Parallel Video Loader (Sharpness)",
 }
 
 __all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS"]
assets/nodes.png (new binary file, 347 KiB — not shown)
example_workflows/comfyui-sharp-example.json (new file, 378 lines; shown reformatted)
@@ -0,0 +1,378 @@
{
  "id": "4fbf6f31-0f7b-4465-8ec8-25df4862e076",
  "revision": 0,
  "last_node_id": 35,
  "last_link_id": 44,
  "nodes": [
    {
      "id": 31,
      "type": "PrimitiveNode",
      "pos": [4672, -928],
      "size": [210, 82],
      "flags": {},
      "order": 0,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "connect to widget input", "type": "*", "links": []}
      ],
      "properties": {
        "Run widget replace on values": false,
        "ue_properties": {"widget_ue_connectable": {}, "version": "7.5.2", "input_ue_unconnectable": {}}
      }
    },
    {
      "id": 1,
      "type": "SharpFrameSelector",
      "pos": [4992, -704],
      "size": [288, 174],
      "flags": {},
      "order": 4,
      "mode": 0,
      "inputs": [
        {"name": "images", "type": "IMAGE", "link": null},
        {"name": "scores", "type": "SHARPNESS_SCORES", "link": 3}
      ],
      "outputs": [
        {"name": "selected_images", "type": "IMAGE", "links": [32]},
        {"name": "count", "type": "INT", "links": null}
      ],
      "properties": {
        "aux_id": "ComfyUI-Sharp-Selector.git",
        "ver": "f30f948c9fa8acf9b7fe09559f172d8a63468c8d",
        "Node name for S&R": "SharpFrameSelector",
        "ue_properties": {"widget_ue_connectable": {}, "input_ue_unconnectable": {}, "version": "7.5.2"}
      },
      "widgets_values": ["best_n", 144, 24, 3, 0]
    },
    {
      "id": 32,
      "type": "easy showAnything",
      "pos": [5344, -1024],
      "size": [448, 96],
      "flags": {},
      "order": 6,
      "mode": 0,
      "inputs": [
        {"name": "anything", "shape": 7, "type": "*", "link": 39}
      ],
      "outputs": [
        {"name": "output", "type": "*", "links": null}
      ],
      "properties": {
        "cnr_id": "comfyui-easy-use",
        "ver": "5dfcbcf51d8a6efed947bc7bdd6797827fecab55",
        "Node name for S&R": "easy showAnything",
        "ue_properties": {"widget_ue_connectable": {}, "input_ue_unconnectable": {}, "version": "7.5.2"}
      },
      "widgets_values": ["Batch 1: Skipped 3440 frames. Scanning range 3440 -> 4880."]
    },
    {
      "id": 5,
      "type": "SharpnessAnalyzer",
      "pos": [4672, -672],
      "size": [185.9771484375, 26],
      "flags": {},
      "order": 1,
      "mode": 0,
      "inputs": [
        {"name": "images", "type": "IMAGE", "link": null}
      ],
      "outputs": [
        {"name": "scores", "type": "SHARPNESS_SCORES", "links": [3, 21]}
      ],
      "properties": {
        "aux_id": "ComfyUI-Sharp-Selector.git",
        "ver": "0df11447abb7f41bf7f12a2906aa868a5d2027b4",
        "Node name for S&R": "SharpnessAnalyzer",
        "ue_properties": {"widget_ue_connectable": {}, "input_ue_unconnectable": {}, "version": "7.5.2"}
      },
      "widgets_values": []
    },
    {
      "id": 35,
      "type": "FastAbsoluteSaver",
      "pos": [5856, -1024],
      "size": [306.3776153564453, 270],
      "flags": {},
      "order": 5,
      "mode": 0,
      "inputs": [
        {"name": "images", "type": "IMAGE", "link": 43},
        {"name": "scores_info", "shape": 7, "type": "STRING", "link": 44}
      ],
      "outputs": [],
      "properties": {
        "aux_id": "ComfyUI-Sharp-Selector.git",
        "ver": "162699a4a23219ac5ac75f398a17e67c3767da46",
        "ue_properties": {"widget_ue_connectable": {}, "input_ue_unconnectable": {}},
        "Node name for S&R": "FastAbsoluteSaver"
      },
      "widgets_values": ["D:\\Datasets\\Sharp_Output", "frame", "png", 0, false, "sharpness_score", true, 100, 4]
    },
    {
      "id": 29,
      "type": "ParallelSharpnessLoader",
      "pos": [4992, -1024],
      "size": [320, 262],
      "flags": {},
      "order": 2,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {"name": "images", "type": "IMAGE", "links": [43]},
        {"name": "scores_info", "type": "STRING", "links": [44]},
        {"name": "batch_int", "type": "INT", "links": null},
        {"name": "batch_status", "type": "STRING", "links": [39]}
      ],
      "properties": {
        "aux_id": "ComfyUI-Sharp-Selector.git",
        "ver": "dab38a1fbf0077655fe568d500866fce6ecc857d",
        "Node name for S&R": "ParallelSharpnessLoader",
        "ue_properties": {"widget_ue_connectable": {}, "input_ue_unconnectable": {}, "version": "7.5.2"}
      },
      "widgets_values": ["C:\\path\\to\\video.mp4", 0, 1440, 1, 30, 24, 2000]
    },
    {
      "id": 34,
      "type": "Note",
      "pos": [4224, -1120],
      "size": [416, 736],
      "flags": {},
      "order": 3,
      "mode": 0,
      "inputs": [],
      "outputs": [],
      "properties": {
        "ue_properties": {"widget_ue_connectable": {}, "version": "7.5.2", "input_ue_unconnectable": {}}
      },
      "widgets_values": [
        "📝 Smart Dataset Extraction Workflow\n\n1. Parallel Video Loader (The Source)\n\n   What it does: Scans your video file directly from the hard drive using multi-threading. It does not load the whole video into RAM.\n\n   Batching: Uses the batch_index (Primitive Node) to \"page\" through the movie.\n\n   Example: If scan_limit is 1440, Batch 0 scans frames 0-1440, Batch 1 scans 1440-2880, etc.\n\n   Selection: It calculates sharpness (Laplacian Variance) and only decodes the \"Best N\" frames to send downstream.\n\n2. Fast Absolute Saver (The Destination)\n\n   What it does: Saves images instantly to your SSD using parallel workers, bypassing the slow ComfyUI preview window.\n\n   Smart Naming: Connect scores_info from the Loader to the Saver! This allows files to be named using the original video frame number (e.g., movie_frame_00450.png) rather than a random batch counter.\n\n   Metadata: Embeds the sharpness score into the PNG/WebP metadata for future filtering.\n\n⚠️ Usage Tips:\n\n   Automation: Set batch_index to \"Increment\" (on the Primitive Node) and enable \"Auto Queue\" in ComfyUI options to process the entire movie automatically.\n\n   Monitoring: Watch the Console Window (black command prompt) for progress logs. The saver does not preview images in the UI to prevent lag.\n\n   Safety: The saver uses absolute paths and overwrites files with the same name. Use a unique filename_prefix for each new video source."
      ],
      "color": "#432",
      "bgcolor": "#653"
    }
  ],
  "links": [
    [3, 5, 0, 1, 1, "SHARPNESS_SCORES"],
    [39, 29, 3, 32, 0, "STRING"],
    [43, 29, 0, 35, 0, "IMAGE"],
    [44, 29, 1, 35, 1, "STRING"]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "workflowRendererVersion": "LG",
    "ue_links": [],
    "links_added_by_ue": [],
    "ds": {
      "scale": 1.1,
      "offset": [-3048.698587382934, 1363.985488079904]
    },
    "frontendVersion": "1.36.14",
    "VHS_latentpreview": true,
    "VHS_latentpreviewrate": 0,
    "VHS_MetadataImage": true,
    "VHS_KeepIntermediate": true
  },
  "version": 0.4
}
(deleted file — JavaScript tooltip extension; filename not shown in this view)
@@ -1,30 +0,0 @@
import { app } from "../../scripts/app.js";

app.registerExtension({
    name: "SharpFrames.Tooltips",
    async beforeRegisterNodeDef(nodeType, nodeData, app) {
        if (nodeData.name === "SharpFrameSelector") {

            // Define your tooltips here
            const tooltips = {
                "selection_method": "Strategy:\n'batched' = 1 best frame per time slot (Good for video).\n'best_n' = Top N sharpest frames globally.",
                "batch_size": "For 'batched' mode only.\nHow many frames to analyze at once.\nExample: 24fps video + batch 24 = 1 output frame per second.",
                "num_frames": "For 'best_n' mode only.\nTotal number of frames you want to keep."
            };

            // Hook into the node creation to apply them
            const onNodeCreated = nodeType.prototype.onNodeCreated;
            nodeType.prototype.onNodeCreated = function () {
                onNodeCreated?.apply(this, arguments);

                if (this.widgets) {
                    for (const w of this.widgets) {
                        if (tooltips[w.name]) {
                            w.tooltip = tooltips[w.name];
                        }
                    }
                }
            };
        }
    },
});
parallel_loader.py (new file, 128 lines)
@@ -0,0 +1,128 @@
import cv2
import torch
import numpy as np
import concurrent.futures
import os

class ParallelSharpnessLoader:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "video_path": ("STRING", {"default": "C:\\path\\to\\video.mp4"}),

                # BATCHING CONTROLS
                "batch_index": ("INT", {"default": 0, "min": 0, "max": 10000, "step": 1, "label": "Batch Counter (Auto-Increment)"}),
                "scan_limit": ("INT", {"default": 1440, "min": 1, "max": 10000000, "step": 1, "label": "Frames per Batch"}),

                # STANDARD CONTROLS
                "frame_scan_step": ("INT", {"default": 5, "min": 1, "step": 1, "label": "Analyze Every Nth Frame"}),
                "return_count": ("INT", {"default": 4, "min": 1, "max": 1024, "step": 1, "label": "Best Frames to Return"}),
                "min_distance": ("INT", {"default": 24, "min": 0, "max": 10000, "step": 1, "label": "Min Distance (Frames)"}),
                "manual_skip_start": ("INT", {"default": 0, "min": 0, "max": 10000000, "step": 1, "label": "Global Start Offset"}),
            },
        }

    RETURN_TYPES = ("IMAGE", "STRING", "INT", "STRING")
    RETURN_NAMES = ("images", "scores_info", "batch_int", "batch_status")
    FUNCTION = "load_video"
    CATEGORY = "BetaHelper/Video"

    def calculate_sharpness(self, frame_data):
        gray = cv2.cvtColor(frame_data, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def load_video(self, video_path, batch_index, scan_limit, frame_scan_step, return_count, min_distance, manual_skip_start):

        # 1. Validation
        if not os.path.exists(video_path):
            video_path = video_path.strip('"')
            if not os.path.exists(video_path):
                raise FileNotFoundError(f"Video not found: {video_path}")

        # 2. Calculate Offsets
        current_skip = (batch_index * scan_limit) + manual_skip_start
        range_end = current_skip + scan_limit

        status_msg = f"Batch {batch_index}: Skipped {current_skip} frames. Scanning range {current_skip} -> {range_end}."
        print(f"xx- Parallel Loader | {status_msg}")

        cap = cv2.VideoCapture(video_path)
        total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

        # --- STOP CONDITION 1: REACHED END OF VIDEO ---
        # This stops the queue immediately if we try to read past the end.
        if current_skip >= total_frames:
            cap.release()
            raise ValueError(f"Processing Complete. Batch {batch_index} starts at frame {current_skip}, but video only has {total_frames} frames.")

        # 3. Scanning (Pass 1)
        if current_skip > 0:
            cap.set(cv2.CAP_PROP_POS_FRAMES, current_skip)

        frame_scores = []
        current_frame = current_skip
        scanned_count = 0

        with concurrent.futures.ThreadPoolExecutor(max_workers=16) as executor:
            futures = []

            while True:
                if scanned_count >= scan_limit:
                    break

                ret, frame = cap.read()
                if not ret:
                    break

                future = executor.submit(self.calculate_sharpness, frame)
                futures.append((current_frame, future))
                scanned_count += 1

                # Manual Stepping
                if frame_scan_step > 1:
                    for _ in range(frame_scan_step - 1):
                        if not cap.grab():
                            break
                        current_frame += 1

                current_frame += 1

            for idx, future in futures:
                frame_scores.append((idx, future.result()))

        cap.release()

        # 4. Selection
        # --- STOP CONDITION 2: NO FRAMES FOUND ---
        if not frame_scores:
            raise ValueError(f"No frames found in batch {batch_index} (Range {current_skip}-{range_end}). The video might be corrupted or blank.")

        frame_scores.sort(key=lambda x: x[1], reverse=True)
        selected = []

        for idx, score in frame_scores:
            if len(selected) >= return_count:
                break
            if all(abs(s[0] - idx) >= min_distance for s in selected):
                selected.append((idx, score))

        selected.sort(key=lambda x: x[0])

        # 5. Extraction (Pass 2)
        cap = cv2.VideoCapture(video_path)
        output_tensors = []
        info_log = []

        for idx, score in selected:
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
            ret, frame = cap.read()
            if ret:
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                frame = frame.astype(np.float32) / 255.0
                output_tensors.append(torch.from_numpy(frame))
                info_log.append(f"F:{idx} (Score:{int(score)})")

        cap.release()

        if not output_tensors:
            raise ValueError("Frames were selected but could not be loaded. This indicates a file read error.")

        return (torch.stack(output_tensors), ", ".join(info_log), batch_index, status_msg)
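The `min_distance` de-duplication in step 4 of `load_video` above can be isolated and unit-tested on its own (a sketch; `pick_spaced` is an illustrative name):

```python
def pick_spaced(frame_scores, return_count, min_distance):
    """Greedy pick of top-scoring frames at least min_distance apart.

    frame_scores: list of (frame_index, score) tuples.
    Returns the picks sorted back into chronological order.
    """
    ranked = sorted(frame_scores, key=lambda x: x[1], reverse=True)
    selected = []
    for idx, score in ranked:
        if len(selected) >= return_count:
            break
        # Reject candidates too close to an already-accepted frame
        if all(abs(s[0] - idx) >= min_distance for s in selected):
            selected.append((idx, score))
    return sorted(selected)

scores = [(100, 900.0), (105, 880.0), (400, 850.0), (700, 500.0)]
# Frame 105 is dropped: it is only 5 frames from the already-picked 100
assert pick_spaced(scores, 3, 24) == [(100, 900.0), (400, 850.0), (700, 500.0)]
```

Because the greedy pass walks frames in descending score order, a near-duplicate can only displace a frame that scored lower than it.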
readme (deleted file, 25 lines)
@@ -1,25 +0,0 @@
# ComfyUI Sharp Frame Selector

A custom node for [ComfyUI](https://github.com/comfyanonymous/ComfyUI) that automatically filters video frames to select only the sharpest ones.

This is a ComfyUI implementation of the logic found in [sharp-frames](https://github.com/Reflct/sharp-frames-python). It calculates the Laplacian variance of each frame to determine focus quality and selects the best candidates based on your chosen strategy.

## Features

- **No external CLI tools required**: Runs entirely within ComfyUI using OpenCV.
- **Batched Selection**: Perfect for videos. Divides the timeline into chunks (e.g., every 1 second) and picks the single sharpest frame from each chunk. Ensures you never miss a scene.
- **Best-N Selection**: Simply picks the top N sharpest frames from the entire batch, regardless of when they occur.
- **GPU Efficient**: Keeps image data on the GPU where possible, only moving small batches to CPU for the sharpness calculation.

## Installation

### Method 1: Manager (Recommended)
If this node is available in the ComfyUI Manager, search for "Sharp Frame Selector" and install.

### Method 2: Manual
Clone this repository into your `custom_nodes` folder:

```bash
cd ComfyUI/custom_nodes/
git clone https://github.com/YOUR_USERNAME/ComfyUI-Sharp-Selector.git
pip install -r ComfyUI-Sharp-Selector/requirements.txt
```
sharp_node.py (modified)
@@ -2,71 +2,93 @@ import torch
 import numpy as np
 import cv2
 
+# --- NODE 1: ANALYZER (Unchanged) ---
+class SharpnessAnalyzer:
+    @classmethod
+    def INPUT_TYPES(s):
+        return {"required": {"images": ("IMAGE",)}}
+
+    RETURN_TYPES = ("SHARPNESS_SCORES",)
+    RETURN_NAMES = ("scores",)
+    FUNCTION = "analyze_sharpness"
+    CATEGORY = "SharpFrames"
+
+    def analyze_sharpness(self, images):
+        print(f"[SharpAnalyzer] Calculating scores for {len(images)} frames...")
+        scores = []
+        for i in range(len(images)):
+            img_np = (images[i].cpu().numpy() * 255).astype(np.uint8)
+            gray = cv2.cvtColor(img_np, cv2.COLOR_RGB2GRAY)
+            score = cv2.Laplacian(gray, cv2.CV_64F).var()
+            scores.append(score)
+        return (scores,)
+
+# --- NODE 2: SELECTOR (Updated with Buffer) ---
 class SharpFrameSelector:
     @classmethod
     def INPUT_TYPES(s):
         return {
             "required": {
                 "images": ("IMAGE",),
+                "scores": ("SHARPNESS_SCORES",),
                 "selection_method": (["batched", "best_n"],),
                 "batch_size": ("INT", {"default": 24, "min": 1, "max": 10000, "step": 1}),
+                # NEW: Restored the buffer option
+                "batch_buffer": ("INT", {"default": 0, "min": 0, "max": 10000, "step": 1}),
                 "num_frames": ("INT", {"default": 10, "min": 1, "max": 10000, "step": 1}),
+                "min_sharpness": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 10000.0, "step": 0.1}),
             }
         }
 
     RETURN_TYPES = ("IMAGE", "INT")
     RETURN_NAMES = ("selected_images", "count")
-    FUNCTION = "process_images"
+    FUNCTION = "select_frames"
     CATEGORY = "SharpFrames"
 
-    def process_images(self, images, selection_method, batch_size, num_frames):
-        # images is a Tensor: [Batch, Height, Width, Channels] (RGB, 0.0-1.0)
-
-        total_input_frames = len(images)
-        print(f"[SharpSelector] Analyzing {total_input_frames} frames...")
-
-        scores = []
-
-        # We must iterate to calculate score per frame
-        # OpenCV runs on CPU, so we must move frame-by-frame or batch-to-cpu
-        for i in range(total_input_frames):
-            # 1. Grab single frame, move to CPU, convert to numpy
-            # 2. Scale 0.0-1.0 to 0-255
-            img_np = (images[i].cpu().numpy() * 255).astype(np.uint8)
-
-            # 3. Convert RGB to Gray for Laplacian
-            gray = cv2.cvtColor(img_np, cv2.COLOR_RGB2GRAY)
-
-            # 4. Calculate Variance of Laplacian
-            score = cv2.Laplacian(gray, cv2.CV_64F).var()
-            scores.append(score)
+    def select_frames(self, images, scores, selection_method, batch_size, batch_buffer, num_frames, min_sharpness):
+        if len(images) != len(scores):
+            min_len = min(len(images), len(scores))
+            images = images[:min_len]
+            scores = scores[:min_len]
 
         selected_indices = []
 
         # --- SELECTION LOGIC ---
         if selection_method == "batched":
-            # Best frame every N frames
-            for i in range(0, total_input_frames, batch_size):
-                chunk_end = min(i + batch_size, total_input_frames)
+            total_frames = len(scores)
+
+            # THE FIX: Step includes the buffer size
+            # If batch=24 and buffer=2, we jump 26 frames each time
+            step_size = batch_size + batch_buffer
+
+            for i in range(0, total_frames, step_size):
+                # The chunk is strictly the batch_size
+                chunk_end = min(i + batch_size, total_frames)
                 chunk_scores = scores[i : chunk_end]
 
-                # argmax gives relative index (0 to batch_size), add 'i' for absolute
-                best_in_chunk_idx = np.argmax(chunk_scores)
-                selected_indices.append(i + best_in_chunk_idx)
+                if len(chunk_scores) > 0:
+                    best_in_chunk_idx = np.argmax(chunk_scores)
+                    best_score = chunk_scores[best_in_chunk_idx]
+
+                    if best_score >= min_sharpness:
+                        selected_indices.append(i + best_in_chunk_idx)
 
         elif selection_method == "best_n":
-            # Top N sharpest frames globally, sorted by time
-            target_count = min(num_frames, total_input_frames)
-
-            # argsort sorts low to high, we take the last N (highest scores)
-            top_indices = np.argsort(scores)[-target_count:]
-
-            # Sort indices to keep original video order
-            selected_indices = sorted(top_indices)
+            # (Logic remains the same, buffer applies to Batched only)
+            valid_indices = [i for i, s in enumerate(scores) if s >= min_sharpness]
+            valid_scores = np.array([scores[i] for i in valid_indices])
+
+            if len(valid_scores) > 0:
+                target_count = min(num_frames, len(valid_scores))
+                top_local_indices = np.argsort(valid_scores)[-target_count:]
+                top_global_indices = [valid_indices[i] for i in top_local_indices]
+                selected_indices = sorted(top_global_indices)
 
         print(f"[SharpSelector] Selected {len(selected_indices)} frames.")
 
         # Filter the original GPU tensor using the selected indices
+        if len(selected_indices) == 0:
+            h, w = images[0].shape[0], images[0].shape[1]
+            empty = torch.zeros((1, h, w, 3), dtype=images.dtype, device=images.device)
+            return (empty, 0)
+
         result_images = images[selected_indices]
 
         return (result_images, len(selected_indices))
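The updated `batched` branch can be exercised in isolation; a sketch of the buffer stride over a plain score list (`batched_pick` is an illustrative name):

```python
import numpy as np

def batched_pick(scores, batch_size, batch_buffer=0, min_sharpness=0.0):
    """One best frame per chunk; the buffer widens the stride between chunks."""
    step = batch_size + batch_buffer  # e.g. batch=24, buffer=2 -> jump 26 frames
    picked = []
    for i in range(0, len(scores), step):
        chunk = scores[i : i + batch_size]  # the chunk is strictly batch_size wide
        if len(chunk) > 0:
            best = int(np.argmax(chunk))
            if chunk[best] >= min_sharpness:
                picked.append(i + best)
    return picked

scores = [10, 50, 30, 5, 80, 20]
assert batched_pick(scores, batch_size=2) == [1, 2, 4]
assert batched_pick(scores, batch_size=2, batch_buffer=1) == [1, 4]  # frames 2 and 5 fall in the buffer gap
```

The buffer trades coverage for diversity: frames inside the gap are never candidates, which spaces the picks further apart in time.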