Migrate snapshot storage from IndexedDB to server-side JSON files (v2.0.0)

Snapshots are now stored as individual JSON files on the server under
data/snapshots/, making them persistent across browsers and resilient
to browser data loss. Existing IndexedDB data is auto-migrated on
first load.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 20:13:23 +01:00
parent 81118f4610
commit d32349bfdf
7 changed files with 373 additions and 121 deletions
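The `data/snapshots/` layout named in the message maps each workflow key to a directory via percent-encoding; a minimal sketch of that mapping, mirroring the diff's `_workflow_dir` helper (the root path and the `workflow_dir` name here are illustrative):

```python
import os
import urllib.parse

DATA_DIR = os.path.join("data", "snapshots")

def workflow_dir(workflow_key):
    # safe="" also encodes "/", so a key can never escape the snapshots root
    encoded = urllib.parse.quote(workflow_key, safe="")
    return os.path.join(DATA_DIR, encoded)

print(workflow_dir("workflows/My Graph.json"))
# on POSIX: data/snapshots/workflows%2FMy%20Graph.json
```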

.gitignore vendored

@@ -1,2 +1,3 @@
 __pycache__/
 *.pyc
+data/

README.md

@@ -5,13 +5,13 @@
 <p align="center">
 <a href="https://registry.comfy.org/publishers/ethanfel/nodes/comfyui-snapshot-manager"><img src="https://img.shields.io/badge/ComfyUI-Registry-blue?logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCI+PHBhdGggZD0iTTEyIDJMMyA3djEwbDkgNSA5LTVWN2wtOS01eiIgZmlsbD0id2hpdGUiLz48L3N2Zz4=" alt="ComfyUI Registry"/></a>
 <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="MIT License"/></a>
-<img src="https://img.shields.io/badge/version-1.1.1-blue" alt="Version"/>
+<img src="https://img.shields.io/badge/version-2.0.0-blue" alt="Version"/>
 <img src="https://img.shields.io/badge/ComfyUI-Extension-purple" alt="ComfyUI Extension"/>
 </p>
 ---
-**Workflow Snapshot Manager** automatically captures your ComfyUI workflow as you edit. Browse, name, search, and restore any previous version from a sidebar panel — all stored locally in your browser's IndexedDB.
+**Workflow Snapshot Manager** automatically captures your ComfyUI workflow as you edit. Browse, name, search, and restore any previous version from a sidebar panel — stored as JSON files on the server, accessible from any browser.
 <p align="center">
 <img src="assets/sidebar-preview.png" alt="Sidebar Preview" width="300"/>
@@ -29,7 +29,8 @@
 - **Toast notifications** — Visual feedback for save, restore, and error operations
 - **Lock/pin snapshots** — Protect important snapshots from auto-pruning and "Clear All" with a single click
 - **Concurrency-safe** — Lock guard prevents double-click issues during restore
-- **Zero backend** — Pure frontend extension, no server dependencies
+- **Server-side storage** — Snapshots persist on the ComfyUI server's filesystem, accessible from any browser
+- **Automatic migration** — Existing IndexedDB snapshots are imported to the server on first load
 ## Installation
@@ -116,16 +117,19 @@ All settings are available in **ComfyUI Settings > Snapshot Manager > Capture Se
 1. **Graph edits** trigger a `graphChanged` event
 2. A **debounce timer** prevents excessive writes
 3. The workflow is serialized and **hash-checked** against the last capture (per-workflow) to avoid duplicates
-4. New snapshots are written to **IndexedDB** (browser-local, persistent)
-5. The **sidebar panel** reads from IndexedDB and renders the snapshot list
+4. New snapshots are sent to the **server** and stored as individual JSON files under `data/snapshots/`
+5. The **sidebar panel** fetches snapshots from the server and renders the snapshot list
 6. **Restore/Swap** loads graph data back into ComfyUI with a lock guard to prevent concurrent operations
-**Storage:** All data stays in your browser's IndexedDB — nothing is sent to any server. Snapshots persist across browser sessions and ComfyUI restarts.
+**Storage:** Snapshots are stored as JSON files on the server at `<extension_dir>/data/snapshots/<workflow_key>/<id>.json`. They persist across browser sessions and ComfyUI restarts, and are accessible from any browser connecting to the same server.
 ## FAQ
 **Where are snapshots stored?**
-In your browser's IndexedDB under the database `ComfySnapshotManager`. They persist across sessions but are browser-local (not synced between devices).
+On the server's filesystem under `<extension_dir>/data/snapshots/`. Each workflow gets its own directory, and each snapshot is an individual JSON file. They persist across browser sessions and are accessible from any browser connecting to the same ComfyUI server.
+**I'm upgrading from v1.x — what happens to my existing snapshots?**
+On first load after upgrading, the extension automatically migrates all snapshots from your browser's IndexedDB to the server. Once migration succeeds, the old IndexedDB database is deleted. If migration fails (e.g., server unreachable), your old data is preserved and migration will retry on the next load.
 **Will this slow down ComfyUI?**
 No. Snapshots are captured asynchronously after a debounce delay. The hash check prevents redundant writes.
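The capture pipeline described above (debounce, then a per-workflow hash check) can be sketched in Python; `should_capture` and the module-level hash map are hypothetical names for illustration, not the extension's actual code:

```python
import hashlib
import json

# per-workflow digest of the last captured graph (illustrative module state)
_last_hash = {}

def should_capture(workflow_key, graph):
    """Return True only when the serialized graph differs from the last capture."""
    digest = hashlib.sha256(
        json.dumps(graph, sort_keys=True).encode("utf-8")
    ).hexdigest()
    if _last_hash.get(workflow_key) == digest:
        return False  # identical to the last snapshot, skip the write
    _last_hash[workflow_key] = digest
    return True
```

Sorting keys before hashing makes the digest stable even if the serializer emits object keys in a different order between captures.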

__init__.py

@@ -2,9 +2,11 @@
 ComfyUI Snapshot Manager
 Automatically snapshots workflow state as you edit, with a sidebar panel
-to browse and restore any previous version. Stored in IndexedDB.
+to browse and restore any previous version. Stored in server-side JSON files.
 """
+from . import snapshot_routes
 WEB_DIRECTORY = "./js"
 NODE_CLASS_MAPPINGS = {}
 NODE_DISPLAY_NAME_MAPPINGS = {}


@@ -1,19 +1,20 @@
 /**
  * ComfyUI Snapshot Manager
  *
- * Automatically captures workflow snapshots as you edit, stores them in
- * IndexedDB, and provides a sidebar panel to browse and restore any
- * previous version.
+ * Automatically captures workflow snapshots as you edit, stores them on the
+ * server as JSON files, and provides a sidebar panel to browse and restore
+ * any previous version.
  */
 import { app } from "../../scripts/app.js";
 import { api } from "../../scripts/api.js";
 const EXTENSION_NAME = "ComfyUI.SnapshotManager";
-const DB_NAME = "ComfySnapshotManager";
-const STORE_NAME = "snapshots";
 const RESTORE_GUARD_MS = 500;
 const INITIAL_CAPTURE_DELAY_MS = 1500;
+const MIGRATE_BATCH_SIZE = 10;
+const OLD_DB_NAME = "ComfySnapshotManager";
+const OLD_STORE_NAME = "snapshots";
 // ─── Configurable Settings (updated via ComfyUI settings UI) ────────
@@ -31,48 +32,21 @@ let sidebarRefresh = null; // callback set by sidebar render
 let viewingWorkflowKey = null; // null = follow active workflow; string = override
 let pickerDirty = true; // forces workflow picker to re-fetch on next expand
-// ─── IndexedDB Layer ─────────────────────────────────────────────────
-let dbPromise = null;
-function openDB() {
-  if (dbPromise) return dbPromise;
-  dbPromise = new Promise((resolve, reject) => {
-    const req = indexedDB.open(DB_NAME, 1);
-    req.onupgradeneeded = (e) => {
-      const db = e.target.result;
-      if (!db.objectStoreNames.contains(STORE_NAME)) {
-        const store = db.createObjectStore(STORE_NAME, { keyPath: "id" });
-        store.createIndex("workflowKey", "workflowKey", { unique: false });
-        store.createIndex("timestamp", "timestamp", { unique: false });
-        store.createIndex("workflowKey_timestamp", ["workflowKey", "timestamp"], { unique: false });
-      }
-    };
-    req.onsuccess = () => {
-      const db = req.result;
-      db.onclose = () => { dbPromise = null; };
-      db.onversionchange = () => { db.close(); dbPromise = null; };
-      resolve(db);
-    };
-    req.onerror = () => {
-      dbPromise = null;
-      reject(req.error);
-    };
-  });
-  return dbPromise;
-}
+// ─── Server API Layer ───────────────────────────────────────────────
 async function db_put(record) {
   try {
-    const db = await openDB();
-    return new Promise((resolve, reject) => {
-      const tx = db.transaction(STORE_NAME, "readwrite");
-      tx.objectStore(STORE_NAME).put(record);
-      tx.oncomplete = () => resolve();
-      tx.onerror = () => reject(tx.error);
+    const resp = await api.fetchApi("/snapshot-manager/save", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ record }),
     });
+    if (!resp.ok) {
+      const err = await resp.json();
+      throw new Error(err.error || resp.statusText);
+    }
   } catch (err) {
-    console.warn(`[${EXTENSION_NAME}] IndexedDB write failed:`, err);
+    console.warn(`[${EXTENSION_NAME}] Save failed:`, err);
     showToast("Failed to save snapshot", "error");
     throw err;
   }
@@ -80,56 +54,54 @@ async function db_put(record) {
 async function db_getAllForWorkflow(workflowKey) {
   try {
-    const db = await openDB();
-    return new Promise((resolve, reject) => {
-      const tx = db.transaction(STORE_NAME, "readonly");
-      const idx = tx.objectStore(STORE_NAME).index("workflowKey_timestamp");
-      const range = IDBKeyRange.bound([workflowKey, 0], [workflowKey, Infinity]);
-      const req = idx.getAll(range);
-      req.onsuccess = () => resolve(req.result);
-      req.onerror = () => reject(req.error);
+    const resp = await api.fetchApi("/snapshot-manager/list", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ workflowKey }),
     });
+    if (!resp.ok) {
+      const err = await resp.json();
+      throw new Error(err.error || resp.statusText);
+    }
+    return await resp.json();
   } catch (err) {
-    console.warn(`[${EXTENSION_NAME}] IndexedDB read failed:`, err);
+    console.warn(`[${EXTENSION_NAME}] List failed:`, err);
     showToast("Failed to read snapshots", "error");
     return [];
   }
 }
-async function db_delete(id) {
+async function db_delete(workflowKey, id) {
   try {
-    const db = await openDB();
-    return new Promise((resolve, reject) => {
-      const tx = db.transaction(STORE_NAME, "readwrite");
-      tx.objectStore(STORE_NAME).delete(id);
-      tx.oncomplete = () => resolve();
-      tx.onerror = () => reject(tx.error);
+    const resp = await api.fetchApi("/snapshot-manager/delete", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ workflowKey, id }),
     });
+    if (!resp.ok) {
+      const err = await resp.json();
+      throw new Error(err.error || resp.statusText);
+    }
   } catch (err) {
-    console.warn(`[${EXTENSION_NAME}] IndexedDB delete failed:`, err);
+    console.warn(`[${EXTENSION_NAME}] Delete failed:`, err);
     showToast("Failed to delete snapshot", "error");
   }
 }
 async function db_deleteAllForWorkflow(workflowKey) {
   try {
-    const records = await db_getAllForWorkflow(workflowKey);
-    const toDelete = records.filter(r => !r.locked);
-    const lockedCount = records.length - toDelete.length;
-    if (toDelete.length === 0) return { lockedCount };
-    const db = await openDB();
-    await new Promise((resolve, reject) => {
-      const tx = db.transaction(STORE_NAME, "readwrite");
-      const store = tx.objectStore(STORE_NAME);
-      for (const r of toDelete) {
-        store.delete(r.id);
-      }
-      tx.oncomplete = () => resolve();
-      tx.onerror = () => reject(tx.error);
+    const resp = await api.fetchApi("/snapshot-manager/delete-all", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ workflowKey }),
     });
-    return { lockedCount };
+    if (!resp.ok) {
+      const err = await resp.json();
+      throw new Error(err.error || resp.statusText);
+    }
+    return await resp.json();
   } catch (err) {
-    console.warn(`[${EXTENSION_NAME}] IndexedDB bulk delete failed:`, err);
+    console.warn(`[${EXTENSION_NAME}] Bulk delete failed:`, err);
     showToast("Failed to clear snapshots", "error");
     throw err;
   }
@@ -137,52 +109,89 @@ async function db_deleteAllForWorkflow(workflowKey) {
 async function db_getAllWorkflowKeys() {
   try {
-    const db = await openDB();
-    return new Promise((resolve, reject) => {
-      const tx = db.transaction(STORE_NAME, "readonly");
-      const idx = tx.objectStore(STORE_NAME).index("workflowKey");
-      const req = idx.openKeyCursor();
-      const counts = new Map();
-      req.onsuccess = () => {
-        const cursor = req.result;
-        if (cursor) {
-          counts.set(cursor.key, (counts.get(cursor.key) || 0) + 1);
-          cursor.continue();
-        } else {
-          const result = Array.from(counts.entries())
-            .map(([workflowKey, count]) => ({ workflowKey, count }))
-            .sort((a, b) => a.workflowKey.localeCompare(b.workflowKey));
-          resolve(result);
+    const resp = await api.fetchApi("/snapshot-manager/workflows");
+    if (!resp.ok) {
+      const err = await resp.json();
+      throw new Error(err.error || resp.statusText);
+    }
-        }
-      };
-      req.onerror = () => reject(req.error);
-    });
+    return await resp.json();
   } catch (err) {
-    console.warn(`[${EXTENSION_NAME}] IndexedDB key scan failed:`, err);
+    console.warn(`[${EXTENSION_NAME}] Workflow key scan failed:`, err);
     return [];
   }
 }
 async function pruneSnapshots(workflowKey) {
   try {
-    const all = await db_getAllForWorkflow(workflowKey);
-    // Only prune unlocked snapshots; locked ones are protected
-    const unlocked = all.filter(r => !r.locked);
-    if (unlocked.length <= maxSnapshots) return;
-    // sorted ascending by timestamp (index order), oldest first
-    const toDelete = unlocked.slice(0, unlocked.length - maxSnapshots);
-    const db = await openDB();
-    return new Promise((resolve, reject) => {
-      const tx = db.transaction(STORE_NAME, "readwrite");
-      const store = tx.objectStore(STORE_NAME);
-      for (const r of toDelete) {
-        store.delete(r.id);
-      }
-      tx.oncomplete = () => resolve();
-      tx.onerror = () => reject(tx.error);
+    const resp = await api.fetchApi("/snapshot-manager/prune", {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      body: JSON.stringify({ workflowKey, maxSnapshots }),
     });
+    if (!resp.ok) {
+      const err = await resp.json();
+      throw new Error(err.error || resp.statusText);
+    }
   } catch (err) {
-    console.warn(`[${EXTENSION_NAME}] IndexedDB prune failed:`, err);
+    console.warn(`[${EXTENSION_NAME}] Prune failed:`, err);
   }
 }
+// ─── IndexedDB Migration ────────────────────────────────────────────
+async function migrateFromIndexedDB() {
+  try {
+    // Check if the old database exists (databases() not supported in all browsers)
+    if (typeof indexedDB.databases === "function") {
+      const databases = await indexedDB.databases();
+      if (!databases.some((db) => db.name === OLD_DB_NAME)) return;
+    }
+    const db = await new Promise((resolve, reject) => {
+      const req = indexedDB.open(OLD_DB_NAME, 1);
+      req.onupgradeneeded = (e) => {
+        // DB didn't exist before — close and clean up
+        e.target.transaction.abort();
+        reject(new Error("no-existing-db"));
+      };
+      req.onsuccess = () => resolve(req.result);
+      req.onerror = () => reject(req.error);
+    });
+    const allRecords = await new Promise((resolve, reject) => {
+      const tx = db.transaction(OLD_STORE_NAME, "readonly");
+      const req = tx.objectStore(OLD_STORE_NAME).getAll();
+      req.onsuccess = () => resolve(req.result);
+      req.onerror = () => reject(req.error);
+    });
+    db.close();
+    if (allRecords.length === 0) {
+      indexedDB.deleteDatabase(OLD_DB_NAME);
+      return;
+    }
+    // Send in batches
+    let totalImported = 0;
+    for (let i = 0; i < allRecords.length; i += MIGRATE_BATCH_SIZE) {
+      const batch = allRecords.slice(i, i + MIGRATE_BATCH_SIZE);
+      const resp = await api.fetchApi("/snapshot-manager/migrate", {
+        method: "POST",
+        headers: { "Content-Type": "application/json" },
+        body: JSON.stringify({ records: batch }),
+      });
+      if (!resp.ok) throw new Error("Migration batch failed");
+      const result = await resp.json();
+      totalImported += result.imported;
+    }
+    // Success — delete old database
+    indexedDB.deleteDatabase(OLD_DB_NAME);
+    console.log(`[${EXTENSION_NAME}] Migrated ${totalImported} snapshots from IndexedDB to server`);
+  } catch (err) {
+    if (err.message === "no-existing-db") return;
+    console.warn(`[${EXTENSION_NAME}] IndexedDB migration failed (old data preserved):`, err);
+  }
+}
@@ -1016,7 +1025,7 @@ async function buildSidebar(el) {
       const confirmed = await showConfirmDialog("This snapshot is locked. Delete anyway?");
       if (!confirmed) return;
     }
-    await db_delete(rec.id);
+    await db_delete(rec.workflowKey, rec.id);
     pickerDirty = true;
     await refresh();
   });
@@ -1113,6 +1122,9 @@ if (window.__COMFYUI_FRONTEND_VERSION__) {
   },
   async setup() {
+    // Migrate old IndexedDB data to server on first load
+    await migrateFromIndexedDB();
     // Listen for graph changes (dispatched by ChangeTracker via api)
     api.addEventListener("graphChanged", () => {
       scheduleCaptureSnapshot();

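The migration loop above slices records into groups of `MIGRATE_BATCH_SIZE` before POSTing them to `/snapshot-manager/migrate`; the same slicing pattern in Python (the `batches` helper name is illustrative):

```python
MIGRATE_BATCH_SIZE = 10

def batches(records, size=MIGRATE_BATCH_SIZE):
    """Yield successive fixed-size slices, mirroring the JS migration loop."""
    for i in range(0, len(records), size):
        yield records[i:i + size]
```

Batching keeps each request body small, so a single oversized payload cannot fail the whole migration.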
pyproject.toml

@@ -1,7 +1,7 @@
 [project]
 name = "comfyui-snapshot-manager"
 description = "Automatically snapshots workflow state with a sidebar to browse and restore previous versions."
-version = "1.1.1"
+version = "2.0.0"
 license = {text = "MIT"}
 [project.urls]

snapshot_routes.py Normal file

@@ -0,0 +1,111 @@
"""
HTTP route handlers for snapshot storage.
Registers endpoints with PromptServer.instance.routes at import time.
"""
from aiohttp import web
from server import PromptServer
from . import snapshot_storage as storage
routes = PromptServer.instance.routes
@routes.post("/snapshot-manager/save")
async def save_snapshot(request):
try:
data = await request.json()
record = data.get("record")
if not record or "id" not in record or "workflowKey" not in record:
return web.json_response({"error": "Missing record with id and workflowKey"}, status=400)
storage.put(record)
return web.json_response({"ok": True})
except ValueError as e:
return web.json_response({"error": str(e)}, status=400)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.post("/snapshot-manager/list")
async def list_snapshots(request):
try:
data = await request.json()
workflow_key = data.get("workflowKey")
if not workflow_key:
return web.json_response({"error": "Missing workflowKey"}, status=400)
records = storage.get_all_for_workflow(workflow_key)
return web.json_response(records)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.post("/snapshot-manager/delete")
async def delete_snapshot(request):
try:
data = await request.json()
workflow_key = data.get("workflowKey")
snapshot_id = data.get("id")
if not workflow_key or not snapshot_id:
return web.json_response({"error": "Missing workflowKey or id"}, status=400)
storage.delete(workflow_key, snapshot_id)
return web.json_response({"ok": True})
except ValueError as e:
return web.json_response({"error": str(e)}, status=400)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.post("/snapshot-manager/delete-all")
async def delete_all_snapshots(request):
try:
data = await request.json()
workflow_key = data.get("workflowKey")
if not workflow_key:
return web.json_response({"error": "Missing workflowKey"}, status=400)
result = storage.delete_all_for_workflow(workflow_key)
return web.json_response(result)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.get("/snapshot-manager/workflows")
async def list_workflows(request):
try:
keys = storage.get_all_workflow_keys()
return web.json_response(keys)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.post("/snapshot-manager/prune")
async def prune_snapshots(request):
try:
data = await request.json()
workflow_key = data.get("workflowKey")
max_snapshots = data.get("maxSnapshots")
if not workflow_key or max_snapshots is None:
return web.json_response({"error": "Missing workflowKey or maxSnapshots"}, status=400)
deleted = storage.prune(workflow_key, int(max_snapshots))
return web.json_response({"deleted": deleted})
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
@routes.post("/snapshot-manager/migrate")
async def migrate_snapshots(request):
try:
data = await request.json()
records = data.get("records")
if not isinstance(records, list):
return web.json_response({"error": "Missing records array"}, status=400)
imported = 0
for record in records:
if "id" in record and "workflowKey" in record:
storage.put(record)
imported += 1
return web.json_response({"imported": imported})
except ValueError as e:
return web.json_response({"error": str(e)}, status=400)
except Exception as e:
return web.json_response({"error": str(e)}, status=500)
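The `/snapshot-manager/migrate` handler imports only records that carry both an `id` and a `workflowKey`, silently skipping the rest; that filter in isolation (the `importable` helper is hypothetical, for illustration only):

```python
def importable(records):
    """Mirror of the migrate route's filter: keep only records that
    carry both an "id" and a "workflowKey"."""
    return [r for r in records if "id" in r and "workflowKey" in r]
```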

snapshot_storage.py Normal file

@@ -0,0 +1,122 @@
"""
Filesystem storage layer for workflow snapshots.
Stores each snapshot as an individual JSON file under:
<extension_dir>/data/snapshots/<encoded_workflow_key>/<id>.json
Workflow keys are percent-encoded for filesystem safety.
"""
import json
import os
import urllib.parse
_DATA_DIR = os.path.join(os.path.dirname(__file__), "data", "snapshots")
def _workflow_dir(workflow_key):
encoded = urllib.parse.quote(workflow_key, safe="")
return os.path.join(_DATA_DIR, encoded)
def _validate_id(snapshot_id):
if not snapshot_id or "/" in snapshot_id or "\\" in snapshot_id or ".." in snapshot_id:
raise ValueError(f"Invalid snapshot id: {snapshot_id!r}")
def put(record):
"""Write one snapshot record to disk."""
snapshot_id = record["id"]
workflow_key = record["workflowKey"]
_validate_id(snapshot_id)
d = _workflow_dir(workflow_key)
os.makedirs(d, exist_ok=True)
path = os.path.join(d, f"{snapshot_id}.json")
with open(path, "w", encoding="utf-8") as f:
json.dump(record, f, separators=(",", ":"))
def get_all_for_workflow(workflow_key):
"""Return all snapshots for a workflow, sorted ascending by timestamp."""
d = _workflow_dir(workflow_key)
if not os.path.isdir(d):
return []
results = []
for fname in os.listdir(d):
if not fname.endswith(".json"):
continue
path = os.path.join(d, fname)
try:
with open(path, "r", encoding="utf-8") as f:
results.append(json.load(f))
except (json.JSONDecodeError, OSError):
continue
results.sort(key=lambda r: r.get("timestamp", 0))
return results
def delete(workflow_key, snapshot_id):
"""Remove one snapshot file. Cleans up empty workflow dir."""
_validate_id(snapshot_id)
d = _workflow_dir(workflow_key)
path = os.path.join(d, f"{snapshot_id}.json")
if os.path.isfile(path):
os.remove(path)
# Clean up empty directory
if os.path.isdir(d) and not os.listdir(d):
os.rmdir(d)
def delete_all_for_workflow(workflow_key):
"""Delete all unlocked snapshots for a workflow. Returns {lockedCount}."""
records = get_all_for_workflow(workflow_key)
locked_count = 0
for rec in records:
if rec.get("locked"):
locked_count += 1
else:
_validate_id(rec["id"])
path = os.path.join(_workflow_dir(workflow_key), f"{rec['id']}.json")
if os.path.isfile(path):
os.remove(path)
# Clean up empty directory
d = _workflow_dir(workflow_key)
if os.path.isdir(d) and not os.listdir(d):
os.rmdir(d)
return {"lockedCount": locked_count}
def get_all_workflow_keys():
"""Scan subdirs and return [{workflowKey, count}]."""
if not os.path.isdir(_DATA_DIR):
return []
results = []
for encoded_name in os.listdir(_DATA_DIR):
subdir = os.path.join(_DATA_DIR, encoded_name)
if not os.path.isdir(subdir):
continue
count = sum(1 for f in os.listdir(subdir) if f.endswith(".json"))
if count == 0:
continue
workflow_key = urllib.parse.unquote(encoded_name)
results.append({"workflowKey": workflow_key, "count": count})
results.sort(key=lambda r: r["workflowKey"])
return results
def prune(workflow_key, max_snapshots):
"""Delete oldest unlocked snapshots beyond limit. Returns count deleted."""
records = get_all_for_workflow(workflow_key)
unlocked = [r for r in records if not r.get("locked")]
if len(unlocked) <= max_snapshots:
return 0
to_delete = unlocked[: len(unlocked) - max_snapshots]
d = _workflow_dir(workflow_key)
deleted = 0
for rec in to_delete:
_validate_id(rec["id"])
path = os.path.join(d, f"{rec['id']}.json")
if os.path.isfile(path):
os.remove(path)
deleted += 1
return deleted
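The storage contract above (one JSON file per snapshot, listings sorted ascending by timestamp) can be exercised against a throwaway directory; a self-contained sketch that re-implements the same layout rather than importing the module, so all names here are illustrative:

```python
import json
import os
import tempfile
import urllib.parse

def _wdir(root, key):
    # percent-encode the key so it is safe as a directory name
    return os.path.join(root, urllib.parse.quote(key, safe=""))

def put(root, record):
    d = _wdir(root, record["workflowKey"])
    os.makedirs(d, exist_ok=True)
    with open(os.path.join(d, record["id"] + ".json"), "w", encoding="utf-8") as f:
        json.dump(record, f)

def list_sorted(root, key):
    d = _wdir(root, key)
    out = []
    for name in os.listdir(d):
        with open(os.path.join(d, name), encoding="utf-8") as f:
            out.append(json.load(f))
    # same ordering contract as get_all_for_workflow: ascending timestamp
    return sorted(out, key=lambda r: r.get("timestamp", 0))

with tempfile.TemporaryDirectory() as root:
    for i in range(3):
        put(root, {"id": f"s{i}", "workflowKey": "demo.json", "timestamp": i})
    recs = list_sorted(root, "demo.json")
    assert [r["id"] for r in recs] == ["s0", "s1", "s2"]
```

Sorting at read time means file-creation order never matters, which is why the real module can write files with arbitrary ids.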