Feature/compute traces joe black #94
Conversation
Reviewer's Guide

This PR integrates performance tracing, dynamic checkpoint validation, and workflow dumping on both the txt2img and img2img backends, surfaces inference metrics in the React frontend with CSV export, and adds basic integration tests alongside minor configuration fixes.

Sequence diagram for an inference request with metrics logging and checkpoint validation:

```mermaid
sequenceDiagram
    actor User
    participant Frontend
    participant Backend
    participant ComfyUI
    participant CSVFile
    User->>Frontend: Submit txt2img/img2img request
    Frontend->>Backend: POST /api/txt2img or /api/img2img
    Backend->>Backend: Validate checkpoint (dynamic)
    alt Checkpoint invalid or missing
        Backend->>Backend: Fallback to first available checkpoint
    end
    Backend->>ComfyUI: Send workflow
    ComfyUI-->>Backend: Return generated images
    Backend->>Backend: Collect metrics (elapsed time, GPU, etc.)
    Backend->>CSVFile: Log metrics to CSV
    Backend->>Frontend: Return images and metrics
    Frontend->>User: Display images and metrics
    User->>Frontend: Download metrics CSV (optional)
    Frontend->>User: Provide CSV file
```
ER diagram for inference trace CSV logging:

```mermaid
erDiagram
    INFERENCE_TRACE_TXT2IMG {
        float timestamp
        float total_time_s
        int images_generated
        float time_per_image_s
        string gpu_name
        string driver_version
        string ckpt_name
    }
    INFERENCE_TRACE_IMG2IMG {
        float timestamp
        float total_time_s
        int images_generated
        float time_per_image_s
        string gpu_name
        string driver_version
        string ckpt_name
    }
```
Class diagram for backend workflow and checkpoint validation utilities:

```mermaid
classDiagram
    class Txt2ImgWorkflow {
        +transform_to_txt2img_workflow(data)
        +get_available_checkpoints()
        -logger
    }
    class Img2ImgWorkflow {
        +transform_to_img2img_workflow(data)
        +get_available_checkpoints()
        -logger
    }
    class BackendServer {
        +handle_txt2img()
        +handle_img2img()
        +log_inference_trace(...)
        +ensure_csv_exists()
    }
    Txt2ImgWorkflow <.. BackendServer : uses
    Img2ImgWorkflow <.. BackendServer : uses
    BackendServer o-- "1" CSVFile : logs metrics
    BackendServer o-- "1" ComfyUI : sends workflow
```
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Pins the Python version and adds dynamic checkpoint discovery/validation, NVML GPU/driver detection, timing and CSV inference tracing, and workflow dumps; the frontend displays and exports runtime metrics. Also adds test utilities and minor tooling updates.
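The NVML detection and CSV tracing described in the walkthrough can be sketched as below. This is an illustrative sketch, not the PR's implementation: `detect_gpu` and `append_trace` are hypothetical names, `pynvml` is an optional dependency that may be absent on CPU-only machines, and the column list simply mirrors the trace schema shown in the ER diagram:

```python
import csv
import time
from pathlib import Path


def detect_gpu() -> tuple[str, str]:
    """Best-effort GPU name / driver version via NVML; 'unknown' if unavailable."""
    try:
        import pynvml  # optional; raises ImportError on machines without it
        pynvml.nvmlInit()
        name = pynvml.nvmlDeviceGetName(pynvml.nvmlDeviceGetHandleByIndex(0))
        driver = pynvml.nvmlSystemGetDriverVersion()
        pynvml.nvmlShutdown()
        # older pynvml versions return bytes rather than str
        decode = lambda v: v.decode() if isinstance(v, bytes) else str(v)
        return decode(name), decode(driver)
    except Exception:
        return "unknown", "unknown"


TRACE_FIELDS = ["timestamp", "total_time_s", "images_generated",
                "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"]


def append_trace(csv_path: Path, total_time_s: float, images_generated: int,
                 gpu_name: str, driver_version: str, ckpt_name: str) -> None:
    """Append one trace row, writing the header the first time the file is used."""
    write_header = not csv_path.exists()
    with csv_path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=TRACE_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": time.time(),
            "total_time_s": total_time_s,
            "images_generated": images_generated,
            # avoid division by zero when a run produced no images
            "time_per_image_s": (total_time_s / images_generated
                                 if images_generated else None),
            "gpu_name": gpu_name,
            "driver_version": driver_version,
            "ckpt_name": ckpt_name,
        })
```

Guarding the NVML calls keeps tracing from breaking inference on machines without an NVIDIA driver, and the header-on-first-write pattern keeps the CSV self-describing for the frontend's export feature.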
Sequence diagram:

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant FE as Frontend
    participant BE as Backend (txt/img)
    participant WF as Workflow Builder
    participant CU as ComfyUI
    participant CSV as Trace CSV
    participant NV as NVML
    User->>FE: Click Generate
    FE->>BE: POST /txt2img or /img2img (payload, optional ckpt_name)
    BE->>BE: Discover available checkpoints
    BE->>BE: Validate/select ckpt_name (fallback if needed)
    BE->>NV: Init NVML, read GPU & driver
    BE->>WF: Build workflow (includes ckpt_name, image, seed, options)
    BE->>CU: Send workflow
    CU-->>BE: Comfy response (images, metadata)
    BE->>BE: Measure elapsed, compute per-image time
    BE->>CSV: Append trace row (timestamp, times, gpu, ckpt)
    BE-->>FE: Return comfy_response + metrics + generated_images
    FE->>FE: Update UI, show MetricsBadge, enable CSV export
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Hey @jrb00013 - I've reviewed your changes and found some issues that need to be addressed.
Blocking issues:
- Variable 'total_time' is undefined; should use 'elapsed'. (link)
General comments:
- Consider extracting the repeated checkpoint‐validation and CSV logging logic into a shared utility module to reduce code duplication across txt2img and img2img handlers.
- In the txt2img handler, `time_per_image` uses `total_time`, which isn't defined; update this to use the measured `elapsed` variable to avoid reference errors.
- The frontend metrics badge and CSV export bits are duplicated in both Txt2ImgPage and Img2ImgPage; consider moving them into a shared React component to keep the UI code DRY.
## Individual Comments
### Comment 1
<location> `dream_layer_backend/txt2img_server.py:168` </location>
<code_context>
+ elapsed = time.perf_counter() - start_time
+
+ # Calculate images generated
+ images_generated = len(comfy_response.get("all_images", []))
+ time_per_image = elapsed / images_generated if images_generated > 0 else None
+
</code_context>
<issue_to_address>
Variable 'total_time' is undefined; should use 'elapsed'.
Replace 'total_time' with 'elapsed' in the time_per_image calculation to prevent a NameError.
</issue_to_address>
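A minimal sketch of the corrected timing logic around the ComfyUI call; the `comfy_response` value here is a stand-in for the real response object:

```python
import time

start_time = time.perf_counter()
# ... the ComfyUI request happens here; stand-in response for illustration:
comfy_response = {"all_images": ["img_0.png", "img_1.png"]}
elapsed = time.perf_counter() - start_time

# Calculate images generated
images_generated = len(comfy_response.get("all_images", []))
# Use `elapsed` (the variable that is actually defined), not `total_time`,
# and guard against a zero-image response to avoid division by zero.
time_per_image = elapsed / images_generated if images_generated > 0 else None
```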
### Comment 2
<location> `dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx:469` </location>
<code_context>
+ >
+ Download CSV
+ </button>
+ <MetricsBadge
+ elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / (batchSize * batchCount))}
+ gpu={metrics.gpu}
+ driver={metrics.driver_version}
+ />
+ </>
</code_context>
<issue_to_address>
Fallback for time per image may divide by zero if batchSize or batchCount are zero.
Add a check to ensure batchSize * batchCount is not zero before dividing; if zero, display 'N/A' instead.
</issue_to_address>
### Comment 3
<location> `dream_layer_backend/img2img_workflow.py:249` </location>
<code_context>
logger.info("Injecting Refiner parameters...")
workflow = inject_refiner_parameters(workflow, refiner_data)
+ print(f"✅ Workflow transformation complete")
+ # Ensure dump directory exists
+ dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+ os.makedirs(dump_dir, exist_ok=True)
+
+ # Save the workflow JSON
+ output_path = os.path.join(dump_dir, "last_workflow.json")
+ with open(output_path, "w") as f:
+ json.dump(workflow, f, indent=2)
+
+ print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
+ print(f"🚀 Workflow JSON saved to {output_path}")
return workflow
</code_context>
<issue_to_address>
Console print statements may clutter logs in production.
Use the logger for these messages to ensure consistent log management and prevent unnecessary output in production.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
print(f"✅ Workflow transformation complete")
# Ensure dump directory exists
dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
os.makedirs(dump_dir, exist_ok=True)
# Save the workflow JSON
output_path = os.path.join(dump_dir, "last_workflow.json")
with open(output_path, "w") as f:
json.dump(workflow, f, indent=2)
print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
print(f"🚀 Workflow JSON saved to {output_path}")
return workflow
=======
logger.info("✅ Workflow transformation complete")
# Ensure dump directory exists
dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
os.makedirs(dump_dir, exist_ok=True)
# Save the workflow JSON
output_path = os.path.join(dump_dir, "last_workflow.json")
with open(output_path, "w") as f:
json.dump(workflow, f, indent=2)
logger.info(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
logger.info(f"🚀 Workflow JSON saved to {output_path}")
return workflow
>>>>>>> REPLACE
</suggested_fix>
### Comment 4
<location> `dream_layer_backend/test_image/test_img_endpoints.py:29` </location>
<code_context>
+ return []
+ return [f.name for f in CHECKPOINTS_DIR.iterdir() if f.suffix in {".safetensors", ".ckpt"}]
+
+def get_base64_image() -> str:
+ """Return base64 string from .txt if available, else generate from PNG."""
+ if BASE64_PATH.exists():
+ print(f"Reading existing Base64 from {BASE64_PATH}")
+ return BASE64_PATH.read_text().strip()
+ elif PNG_PATH.exists():
+ print(f"Encoding {PNG_PATH} to Base64...")
+ encoded = base64.b64encode(PNG_PATH.read_bytes()).decode("utf-8")
+ BASE64_PATH.write_text(encoded)
+ print(f"Saved Base64 to {BASE64_PATH}")
+ return encoded
+ else:
+ raise FileNotFoundError(f"Neither {PNG_PATH} nor {BASE64_PATH} found.")
+
+def pretty_print_response(title: str, resp: requests.Response):
</code_context>
<issue_to_address>
No test for handling missing image files.
Please add a test to confirm that a FileNotFoundError is raised when both files are missing.
</issue_to_address>
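A plain-assert sketch of such a test. `get_base64_image_from` is a hypothetical parameterised variant: the real `get_base64_image` reads module-level `BASE64_PATH`/`PNG_PATH` constants, so making the paths arguments (or monkeypatching them) is what lets a test point it at an empty temporary directory:

```python
import base64
import tempfile
from pathlib import Path


def get_base64_image_from(base64_path: Path, png_path: Path) -> str:
    """Parameterised variant of get_base64_image, testable against any paths."""
    if base64_path.exists():
        return base64_path.read_text().strip()
    if png_path.exists():
        encoded = base64.b64encode(png_path.read_bytes()).decode("utf-8")
        base64_path.write_text(encoded)
        return encoded
    raise FileNotFoundError(f"Neither {png_path} nor {base64_path} found.")


def test_missing_files_raise() -> None:
    # An empty temp dir means neither the .txt nor the .png exists
    tmp = Path(tempfile.mkdtemp())
    try:
        get_base64_image_from(tmp / "img.txt", tmp / "img.png")
    except FileNotFoundError:
        return  # expected
    raise AssertionError("FileNotFoundError was not raised")
```

Under pytest the same check collapses to `pytest.raises(FileNotFoundError)` with the `tmp_path` fixture.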
```python
def log_inference_trace(total_time, images_generated, gpu_name, driver_version, ckpt_name):
    """Log inference details to CSV and console."""
    # Check whether any images were generated; compute the time per image
    time_per_image = None if images_generated == 0 else total_time / images_generated
```
issue (bug_risk): Variable 'total_time' is undefined; should use 'elapsed'.
Replace 'total_time' with 'elapsed' in the time_per_image calculation to prevent a NameError.
```tsx
<MetricsBadge
  elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / (batchSize * batchCount))}
  gpu={metrics.gpu}
  driver={metrics.driver_version}
```
issue (bug_risk): Fallback for time per image may divide by zero if batchSize or batchCount are zero.
Add a check to ensure batchSize * batchCount is not zero before dividing; if zero, display 'N/A' instead.
```python
print(f"✅ Workflow transformation complete")
# Ensure dump directory exists
dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
os.makedirs(dump_dir, exist_ok=True)

# Save the workflow JSON
output_path = os.path.join(dump_dir, "last_workflow.json")
with open(output_path, "w") as f:
    json.dump(workflow, f, indent=2)

print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
print(f"🚀 Workflow JSON saved to {output_path}")
return workflow
```
suggestion: Console print statements may clutter logs in production.
Use the logger for these messages to ensure consistent log management and prevent unnecessary output in production.
Suggested change:

```python
logger.info("✅ Workflow transformation complete")
# Ensure dump directory exists
dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
os.makedirs(dump_dir, exist_ok=True)

# Save the workflow JSON
output_path = os.path.join(dump_dir, "last_workflow.json")
with open(output_path, "w") as f:
    json.dump(workflow, f, indent=2)

logger.info(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
logger.info(f"🚀 Workflow JSON saved to {output_path}")
return workflow
```
```python
def get_base64_image() -> str:
    """Return base64 string from .txt if available, else generate from PNG."""
    if BASE64_PATH.exists():
        print(f"Reading existing Base64 from {BASE64_PATH}")
        return BASE64_PATH.read_text().strip()
    elif PNG_PATH.exists():
        print(f"Encoding {PNG_PATH} to Base64...")
        encoded = base64.b64encode(PNG_PATH.read_bytes()).decode("utf-8")
        BASE64_PATH.write_text(encoded)
        print(f"Saved Base64 to {BASE64_PATH}")
```
suggestion (testing): No test for handling missing image files.
Please add a test to confirm that a FileNotFoundError is raised when both files are missing.
```js
import { defineConfig } from "file:///Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend/node_modules/vite/dist/node/index.js";
import react from "file:///Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend/node_modules/@vitejs/plugin-react-swc/index.mjs";
import path from "path";
var __vite_injected_original_dirname = "/Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend";
```
issue (code-quality): Use `const` or `let` instead of `var`. (`avoid-using-var`)

Explanation: `const` is preferred as it ensures you cannot reassign references (which can lead to buggy and confusing code). `let` may be used if you need to reassign references; it's preferred to `var` because it is block- rather than function-scoped. From the Airbnb JavaScript Style Guide.
```js
var vite_config_default = defineConfig({
  server: {
    host: "::",
    port: 8080
  },
  plugins: [
    react()
  ],
  resolve: {
    alias: {
      "@": path.resolve(__vite_injected_original_dirname, "./src")
    }
  }
});
```
issue (code-quality): Use `const` or `let` instead of `var`. (`avoid-using-var`)

Explanation: `const` is preferred as it ensures you cannot reassign references (which can lead to buggy and confusing code). `let` may be used if you need to reassign references; it's preferred to `var` because it is block- rather than function-scoped. From the Airbnb JavaScript Style Guide.
```python
    logger.info(f"Found checkpoint files: {models}")
    return models


def transform_to_img2img_workflow(data):
```
issue (code-quality): We've found these issues:
- Replace f-string with no interpolated values with string (`remove-redundant-fstring`)
- Low code quality found in transform_to_img2img_workflow - 13% (`low-code-quality`)

Explanation: The quality score for this function is below the quality threshold of 25%. This score is a combination of the method length, cognitive complexity and working memory. It might be worth refactoring this function to make it shorter and more readable:
- Reduce the function length by extracting pieces of functionality out into their own functions. This is the most important thing you can do; ideally a function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts sits together within the function rather than being scattered.
```python
checkpoints = get_checkpoints()
if checkpoints:
```
suggestion (code-quality): Use named expression to simplify assignment and conditional (`use-named-expression`)

Suggested change:

```python
if checkpoints := get_checkpoints():
```
```python
models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
return models
```
suggestion (code-quality): Inline variable that is immediately returned (`inline-immediately-returned-variable`)

Suggested change:

```python
return [
    f.name
    for f in checkpoints_dir.glob("*")
    if f.suffix in ['.safetensors', '.ckpt']
]
```
```python
    logger.error(f"Failed to list checkpoints: {e}")
    return []


def transform_to_txt2img_workflow(data):
```
issue (code-quality): We've found these issues:
- Replace f-string with no interpolated values with string [×5] (`remove-redundant-fstring`)
- Low code quality found in transform_to_txt2img_workflow - 18% (`low-code-quality`)

Explanation: The quality score for this function is below the quality threshold of 25%. This score is a combination of the method length, cognitive complexity and working memory. It might be worth refactoring this function to make it shorter and more readable:
- Reduce the function length by extracting pieces of functionality out into their own functions. This is the most important thing you can do; ideally a function should be less than 10 lines.
- Reduce nesting, perhaps by introducing guard clauses to return early.
- Ensure that variables are tightly scoped, so that code using related concepts sits together within the function rather than being scattered.
Actionable comments posted: 15
🔭 Outside diff range comments (2)
dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs (1)

Lines 1-24: Remove the timestamped Vite artifact exposing local paths and add a proper config

The generated file `dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs` contains hard-coded `file:///Users/...` imports and `__vite_injected_original_dirname`, which:
- Leaks a developer username/absolute paths
- Will break on any other machine/CI
- Bloats the repo

Action items:
- Delete the timestamped `.mjs` file
- Commit a clean `dream_layer_frontend/vite.config.ts` (not a transpiled artifact)
- Update `.gitignore` to block any future timestamped artifacts

Proposed `.gitignore` update:

```
# ignore Vite timestamped build artifacts
dream_layer_frontend/*.timestamp-*.mjs
**/*.timestamp-*.mjs
```

Proposed clean Vite config at `dream_layer_frontend/vite.config.ts`:

```ts
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react-swc";
import { fileURLToPath, URL } from "node:url";

export default defineConfig({
  server: {
    host: "::", // or host: true
    port: 8080,
  },
  plugins: [react()],
  resolve: {
    alias: {
      "@": fileURLToPath(new URL("./src", import.meta.url)),
    },
  },
});
```

dream_layer_backend/test_image/base64conversion.py (1)
Lines 1-43: Avoid top-level side effects; fix the misleading comment; make selection deterministic.

Importing this module currently executes file I/O and writes artifacts. Wrap the logic in `main()`, guard it with `if __name__ == "__main__":`, sort files for determinism, and fix the comment.

Apply:

```diff
 import base64
 from pathlib import Path
 
-# Folder containing images
-folder = Path(__file__).parent  # assuming this script is inside test_image folder
-
-# Supported image extensions
-img_extensions = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}
-
-# Find first image file in folder
-image_files = [f for f in folder.iterdir() if f.suffix.lower() in img_extensions]
-
-if not image_files:
-    raise FileNotFoundError(f"No image files found in {folder}")
-
-input_img_path = image_files[0]
-base64_txt_path = folder / "base64_txt_test_image.txt"
-output_img_path = folder / "test_image.png"
-
-print(f"Using input image: {input_img_path.name}")
-
-# Read image bytes
-with open(input_img_path, "rb") as f:
-    img_bytes = f.read()
-
-# Encode to base64 string
-base64_img = base64.b64encode(img_bytes).decode("utf-8")
-
-# Save base64 string to 4.txt
-with open(base64_txt_path, "w") as f:
-    f.write(base64_img)
-
-print(f"Saved base64 string to {base64_txt_path}")
-
-# Decode base64 string back to bytes
-decoded_bytes = base64.b64decode(base64_img)
-
-# Save decoded bytes as new PNG file
-with open(output_img_path, "wb") as f:
-    f.write(decoded_bytes)
-
-print(f"Saved decoded image as {output_img_path}")
+def main() -> None:
+    # Folder containing images
+    folder = Path(__file__).parent  # assuming this script is inside test_image folder
+
+    # Supported image extensions
+    img_extensions = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}
+
+    # Find first image file in folder (sorted for determinism)
+    image_files = sorted(
+        f for f in folder.iterdir() if f.is_file() and f.suffix.lower() in img_extensions
+    )
+    if not image_files:
+        raise FileNotFoundError(f"No image files found in {folder}")
+
+    input_img_path = image_files[0]
+    base64_txt_path = folder / "base64_txt_test_image.txt"
+    output_img_path = folder / "test_image.png"
+
+    print(f"Using input image: {input_img_path.name}")
+
+    # Read image bytes
+    img_bytes = input_img_path.read_bytes()
+
+    # Encode to base64 string
+    base64_img = base64.b64encode(img_bytes).decode("utf-8")
+
+    # Save base64 string to base64_txt_test_image.txt
+    base64_txt_path.write_text(base64_img, encoding="utf-8")
+    print(f"Saved base64 string to {base64_txt_path}")
+
+    # Decode base64 string back to bytes and save as new PNG file
+    decoded_bytes = base64.b64decode(base64_img)
+    output_img_path.write_bytes(decoded_bytes)
+    print(f"Saved decoded image as {output_img_path}")
+
+
+if __name__ == "__main__":
+    main()
```
🧹 Nitpick comments (13)
ComfyUI/user/default/comfy.settings.json (1)
2-3
: Committing default user settings may unintentionally override or conflict with local user preferences.Storing user defaults under ComfyUI/user/default can be mistaken for a vendored upstream file and may be overwritten during upgrades. Also, forcing Comfy.TutorialCompleted and a ColorPalette globally could surprise users.
Consider:
- Moving project-level defaults to an app config layer or provisioning step that writes this file at runtime.
- If keeping in-repo, add a README explaining intended behavior and acceptable values for Comfy.ColorPalette (e.g., "light"/"dark"/"system"), and how users can override locally.
dream_layer_backend/img2img_workflow.py (4)
21-21
: Remove duplicate import statementThe
pathlib
module import on line 21 appears after other imports, which disrupts the conventional import ordering where standard library imports should be grouped together at the beginning.Move this import to be grouped with other standard library imports:
import json import os import random import re import logging +from pathlib import Path from dream_layer import get_directories from extras import COMFY_INPUT_DIR -from pathlib import Path
25-39
: Consider simplifying checkpoint discovery logicThe function includes verbose logging for debugging purposes. While helpful during development, this might clutter production logs.
Consider using debug-level logging for verbose path information:
def get_available_checkpoints(): - logger.info(f"Current __file__ path: {__file__}") + logger.debug(f"Current __file__ path: {__file__}") root_dir = Path(__file__).resolve().parent.parent - logger.info(f"Resolved root_dir: {root_dir}") + logger.debug(f"Resolved root_dir: {root_dir}") checkpoints_dir = root_dir / "ComfyUI" / "models" / "checkpoints" - logger.info(f"Looking for checkpoints in: {checkpoints_dir}") + logger.debug(f"Looking for checkpoints in: {checkpoints_dir}") if not checkpoints_dir.exists(): logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}") return [] models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']] - logger.info(f"Found checkpoint files: {models}") + logger.info(f"Found {len(models)} checkpoint files") + logger.debug(f"Checkpoint files: {models}") return models
60-61
: Remove commented-out legacy codeLine 60 contains commented-out code that's no longer needed since the dynamic model resolution is now in place.
- #model_name = data.get('model_name', 'v1-6-pruned-emaonly-fp16.safetensors') # was hardcoded -
108-108
: Remove duplicate commented codeThis line duplicates the commented-out code from line 60.
- #model_name = data.get('model_name', 'v1-6-pruned-emaonly-fp16.safetensors')
dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (1)
272-288
: Consider extracting GPU truncation logic to a utility functionThe MetricsBadge component contains GPU name truncation logic that could be reused elsewhere.
Consider extracting the truncation logic to a utility function:
// In a utils file export const truncateText = (text: string, maxLength: number = 12): string => { return text.length > maxLength ? text.slice(0, maxLength) + "…" : text; }; // Then in the component const shortGpu = truncateText(gpu, 12);dream_layer_backend/txt2img_workflow.py (1)
89-90
: Remove commented legacy codeThe commented line is no longer needed with the dynamic checkpoint resolution in place.
```diff
-    #model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors')  # was hardcoded
-
```

dream_layer_backend/img2img_server.py (1)
189-190
: Remove commented alternative approachThe commented line shows an alternative API that's not being used and should be removed for clarity.
```diff
 workflow = transform_to_img2img_workflow(data)
-# workflow = transform_to_img2img_workflow(data, ckpt_name=ckpt_name)
-
```

dream_layer_backend/test_image/test_img_endpoints.py (3)
1-3
: Fix the docstring to follow standard Python conventions.The docstring should provide a clear description of the module's purpose.
```diff
 """
-Test file validation logic for testing model endpoint communication
+Test script for validating model endpoints and communication with txt2img/img2img servers.
 """
```
29-42
: Consider adding error handling for base64 encoding failures.The function should handle potential encoding errors more gracefully, especially for large files or corrupted images.
```diff
 def get_base64_image() -> str:
     """Return base64 string from .txt if available, else generate from PNG."""
     if BASE64_PATH.exists():
         print(f"Reading existing Base64 from {BASE64_PATH}")
         return BASE64_PATH.read_text().strip()
     elif PNG_PATH.exists():
-        print(f"Encoding {PNG_PATH} to Base64...")
-        encoded = base64.b64encode(PNG_PATH.read_bytes()).decode("utf-8")
-        BASE64_PATH.write_text(encoded)
-        print(f"Saved Base64 to {BASE64_PATH}")
-        return encoded
+        try:
+            print(f"Encoding {PNG_PATH} to Base64...")
+            image_bytes = PNG_PATH.read_bytes()
+            encoded = base64.b64encode(image_bytes).decode("utf-8")
+            BASE64_PATH.write_text(encoded)
+            print(f"Saved Base64 to {BASE64_PATH}")
+            return encoded
+        except Exception as e:
+            raise RuntimeError(f"Failed to encode {PNG_PATH} to Base64: {e}")
     else:
         raise FileNotFoundError(f"Neither {PNG_PATH} nor {BASE64_PATH} found.")
```
53-105
: Consider making the test script more configurable and reusable.The main block could benefit from being wrapped in a function and accepting command-line arguments for flexibility.
Consider refactoring the script to:
- Accept command-line arguments for API hosts, test image paths, and other parameters
- Wrap the main logic in a
main()
function for better structure- Return exit codes to indicate success/failure for CI/CD integration
Example structure:
```python
import argparse

def main():
    parser = argparse.ArgumentParser(description='Test DreamLayer API endpoints')
    parser.add_argument('--txt2img-host', default='http://127.0.0.1:5001')
    parser.add_argument('--img2img-host', default='http://127.0.0.1:5004')
    args = parser.parse_args()
    # ... test logic ...
    return 0  # Success

if __name__ == "__main__":
    sys.exit(main())
```
172-172
: Remove unused variabletime_per_image_str
.The variable
time_per_image_str
is computed but never used in the response or logging.- # Converting to string format - time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"
143-143
: Remove commented-out code.Dead code should be removed to maintain code clarity.
workflow = transform_to_txt2img_workflow(data) - #workflow = transform_to_txt2img_workflow(data, ckpt_name=ckpt_name)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (8)
- `dream_layer_backend/dream_layer_backend_utils/inference_traces/inference_trace_img2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/dream_layer_backend_utils/inference_traces/inference_trace_txt2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/inference_traces/inference_trace_img2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/inference_traces/inference_trace_text2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/inference_traces/inference_trace_txt2img.csv` is excluded by `!**/*.csv`
- `dream_layer_backend/test_image/examjam.png` is excluded by `!**/*.png`
- `dream_layer_backend/test_image/test_image.png` is excluded by `!**/*.png`
- `dream_layer_frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (12)
- `.python-version` (1 hunks)
- `ComfyUI/user/default/comfy.settings.json` (1 hunks)
- `dream_layer_backend/img2img_server.py` (3 hunks)
- `dream_layer_backend/img2img_workflow.py` (3 hunks)
- `dream_layer_backend/test_image/base64conversion.py` (1 hunks)
- `dream_layer_backend/test_image/test_img_endpoints.py` (1 hunks)
- `dream_layer_backend/txt2img_server.py` (3 hunks)
- `dream_layer_backend/txt2img_workflow.py` (4 hunks)
- `dream_layer_backend/workflow_dumps/last_workflow.json` (1 hunks)
- `dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx` (6 hunks)
- `dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx` (3 hunks)
- `dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs` (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
dream_layer_backend/txt2img_workflow.py (1)
  dream_layer_backend/img2img_workflow.py (1): get_available_checkpoints (25-39)
dream_layer_backend/img2img_server.py (3)
  dream_layer_backend/txt2img_server.py (1): get_allowed_checkpoints (117-125)
  dream_layer_backend/img2img_workflow.py (1): transform_to_img2img_workflow (41-261)
  dream_layer_backend/shared_utils.py (1): send_to_comfyui (160-222)
dream_layer_backend/img2img_workflow.py (1)
  dream_layer_backend/txt2img_workflow.py (1): get_available_checkpoints (24-32)
dream_layer_backend/txt2img_server.py (3)
  dream_layer_backend/shared_utils.py (1): send_to_comfyui (160-222)
  dream_layer_backend/img2img_server.py (1): get_allowed_checkpoints (164-172)
  dream_layer_backend/txt2img_workflow.py (1): transform_to_txt2img_workflow (34-261)
🪛 Ruff (0.12.2)
dream_layer_backend/txt2img_workflow.py
  5-5: Redefinition of unused json from line 1. Remove definition: json (F811)
dream_layer_backend/img2img_workflow.py
  249-249: f-string without any placeholders. Remove extraneous f prefix (F541)
dream_layer_backend/txt2img_server.py
  169-169: Undefined name total_time (F821)
  172-172: Local variable time_per_image_str is assigned to but never used. Remove assignment to unused variable time_per_image_str (F841)
🔇 Additional comments (16)
.python-version (1)
1-1: All Python version declarations are aligned with 3.11.9.
- .python-version is pinned to 3.11.9
- ComfyUI/pyproject.toml has requires-python = ">=3.9" (3.11.9 is within that range)
- No FROM python lines found in any Dockerfiles
- No actions/setup-python or python-version: entries in GitHub workflows
No mismatches detected; no changes needed.
dream_layer_backend/img2img_workflow.py (1)
46-59: Good dynamic checkpoint validation implementation.

The dynamic checkpoint resolution properly validates the requested model against available checkpoints and provides appropriate fallback behavior with logging.
dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (4)
43-48: LGTM! Metrics state properly typed and initialized.

The metrics state is well-structured with appropriate TypeScript types for tracking performance data.

290-310: CSV export implementation looks good.

The CSV export function properly handles the metrics data and creates a downloadable file with appropriate headers.

461-475: Metrics display integration is well-implemented.

The conditional rendering of metrics with the CSV download button and badge is properly integrated into the UI header.

176-181: ✅ Metrics extraction validated.

The backend sets comfy_response["metrics"] in both:
- dream_layer_backend/txt2img_server.py (lines 178-183)
- dream_layer_backend/img2img_server.py (lines 251-256)

Your TSX snippet in Img2ImgPage.tsx safely checks for and applies data.comfy_response.metrics. No further changes needed.

dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (2)
44-49: LGTM! Consistent metrics state implementation.

The metrics state structure matches the Img2Img implementation, maintaining consistency across components.

224-227: Good metrics extraction with safe navigation.

The code properly extracts metrics from the backend response with appropriate conditional checks.
dream_layer_backend/txt2img_workflow.py (2)
24-33: Good checkpoint discovery implementation.

The function properly handles the checkpoint directory discovery with appropriate error handling and logging.

76-88: Consistent dynamic checkpoint validation.

The implementation matches the pattern in img2img_workflow.py, maintaining consistency across workflows.
dream_layer_backend/img2img_server.py (6)
36-36: LGTM! Proper path resolution for COMFY_ROOT.

The change to use os.path.abspath with os.path.join ensures the path is properly resolved as an absolute path.
157-186: Good dynamic checkpoint validation with fallback.

The checkpoint validation logic properly handles invalid or missing checkpoints with appropriate fallback behavior and error handling.
196-204: Good GPU detection with fallback.

The GPU detection properly handles exceptions by falling back to CPU when NVML is not available.
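The NVML-with-CPU-fallback approach can be sketched roughly as below. This is an illustrative sketch, not the server's exact code: detect_gpu is a hypothetical name, and it assumes the pynvml bindings, whose nvmlDeviceGetName/nvmlSystemGetDriverVersion calls return bytes in older releases.

```python
def detect_gpu():
    """Best-effort GPU detection via NVML; falls back to CPU when NVML is unavailable."""
    try:
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        name = pynvml.nvmlDeviceGetName(handle)
        driver = pynvml.nvmlSystemGetDriverVersion()
        # Older pynvml versions return bytes rather than str
        if isinstance(name, bytes):
            name = name.decode()
        if isinstance(driver, bytes):
            driver = driver.decode()
        return name, driver
    except Exception:
        # No NVIDIA GPU, no driver, or pynvml not installed
        return "CPU", "N/A"
```

Catching broadly here is deliberate: metrics collection should never fail a generation request.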
206-221: Excellent performance metrics collection.

The timing instrumentation properly measures elapsed time and calculates per-image metrics with appropriate logging.
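The timing instrumentation amounts to wrapping the generation call with a monotonic clock; a minimal sketch (timed_generation is a hypothetical helper, not the server's actual function):

```python
import time

def timed_generation(generate):
    """Run a generation callable and return (images, elapsed_seconds, per_image_seconds)."""
    start = time.perf_counter()
    images = generate()
    elapsed = time.perf_counter() - start
    # Guard against division by zero when nothing was generated
    per_image = elapsed / len(images) if images else None
    return images, elapsed, per_image
```

time.perf_counter() is preferred over time.time() for intervals because it is monotonic and unaffected by system clock adjustments.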
223-248: Well-structured CSV logging for inference traces.

The CSV logging implementation properly creates the directory structure, handles headers, and appends metrics data in a structured format.
250-256: Good metrics integration in response.

The metrics are properly added to the response JSON, making them available to the frontend for display.
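Attaching metrics to the response is a small dict update; a sketch using the field names the frontend reads (elapsed_time_sec, time_per_image_sec, gpu, driver_version), with attach_metrics itself being an illustrative name:

```python
def attach_metrics(response, elapsed, images_generated, gpu_name, driver_version):
    """Attach a metrics payload to the response dict before it is returned to the frontend."""
    response["metrics"] = {
        "elapsed_time_sec": round(elapsed, 2),
        "time_per_image_sec": round(elapsed / images_generated, 2) if images_generated else None,
        "gpu": gpu_name,
        "driver_version": driver_version,
    }
    return response
```

Using None (serialized as null) for time_per_image_sec when no images were generated lets the frontend's optional-chaining check handle the missing value cleanly.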
logger.info("Injecting Refiner parameters...")
workflow = inject_refiner_parameters(workflow, refiner_data)

print(f"✅ Workflow transformation complete")
Remove extraneous f-string prefix
The print statement uses an f-string prefix but doesn't contain any placeholders.
- print(f"✅ Workflow transformation complete")
+ print("✅ Workflow transformation complete")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
print("✅ Workflow transformation complete")
🧰 Tools
🪛 Ruff (0.12.2)
249-249: f-string without any placeholders. Remove extraneous f prefix (F541)
🤖 Prompt for AI Agents
In dream_layer_backend/img2img_workflow.py around line 249, the print statement
uses an unnecessary f-string prefix for a static message; remove the leading "f"
so the call becomes a normal string literal (print("✅ Workflow transformation
complete")) to avoid misleading use of f-strings.
TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
Fix incorrect path construction for TEST_IMAGE_DIR.

The path construction assumes test_image is at the same level as the script, but the script is already inside the test_image directory. This will result in an incorrect path like dream_layer_backend/test_image/test_image/test_image.png.
-TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
-PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
-BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
+TEST_IMAGE_DIR = Path(__file__).parent
+PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
+BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
📝 Committable suggestion
TEST_IMAGE_DIR = Path(__file__).parent
PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
🤖 Prompt for AI Agents
In dream_layer_backend/test_image/test_img_endpoints.py around lines 15 to 17,
TEST_IMAGE_DIR is incorrectly constructed as Path(__file__).parent /
"test_image" causing a duplicate path segment; change TEST_IMAGE_DIR to refer
directly to the current directory (e.g., Path(__file__).parent) so PNG_PATH and
BASE64_PATH point to the actual files in the same folder.
r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
pretty_print_response("TXT2IMG Response", r1)
Add timeout and error handling for API requests.
Network requests should have timeouts and proper error handling to prevent the test from hanging indefinitely.
- r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
+ try:
+ r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload, timeout=30)
+ except requests.exceptions.RequestException as e:
+ print(f"Failed to connect to txt2img endpoint: {e}")
+ return
pretty_print_response("TXT2IMG Response", r1)
# IMG2IMG Request payload
# ... (payload code)
- r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload)
+ try:
+ r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload, timeout=30)
+ except requests.exceptions.RequestException as e:
+ print(f"Failed to connect to img2img endpoint: {e}")
+ return
pretty_print_response("IMG2IMG Response", r2)
Also applies to: 102-103
🤖 Prompt for AI Agents
In dream_layer_backend/test_image/test_img_endpoints.py around lines 82-83 (and
similarly 102-103), the requests.post calls lack timeouts and error handling;
update each POST to include a reasonable timeout (e.g., timeout=10) and wrap the
call in a try/except that catches requests.exceptions.RequestException, logging
or pretty_printing the exception and failing the test (e.g., raise or
pytest.fail) on exception; additionally, after the response returns, check
response.ok (or status_code) and handle non-2xx responses by printing
response.text and failing the test so the test won't hang or silently pass on
errors.
def log_inference_trace(total_time, images_generated, gpu_name, driver_version, ckpt_name):
    """Log inference details to CSV and console."""
    # Checking to see if images were generated, finding the time per image
    time_per_image = None if images_generated == 0 else total_time / images_generated

    # Converting to string format
    time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"

    # Console logging
    print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")

    # CSV logging
    ensure_csv_exists()
    with open(TRACE_CSV, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            time.time(),
            round(total_time, 4),
            images_generated,
            round(time_per_image, 4) if time_per_image is not None else "",
            gpu_name,
            driver_version,
            ckpt_name
        ])
🛠️ Refactor suggestion
Consider adding error handling for CSV operations.
The CSV writing operations could fail due to permissions or disk space issues, which would crash the request handling.
  def log_inference_trace(total_time, images_generated, gpu_name, driver_version, ckpt_name):
      """Log inference details to CSV and console."""
      # Checking to see if images were generated, finding the time per image
      time_per_image = None if images_generated == 0 else total_time / images_generated
      # Converting to string format
      time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"
      # Console logging
      print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")
      # CSV logging
-     ensure_csv_exists()
-     with open(TRACE_CSV, "a", newline="") as f:
-         writer = csv.writer(f)
-         writer.writerow([
-             time.time(),
-             round(total_time, 4),
-             images_generated,
-             round(time_per_image, 4) if time_per_image is not None else "",
-             gpu_name,
-             driver_version,
-             ckpt_name
-         ])
+     try:
+         ensure_csv_exists()
+         with open(TRACE_CSV, "a", newline="") as f:
+             writer = csv.writer(f)
+             writer.writerow([
+                 time.time(),
+                 round(total_time, 4),
+                 images_generated,
+                 round(time_per_image, 4) if time_per_image is not None else "",
+                 gpu_name,
+                 driver_version,
+                 ckpt_name
+             ])
+     except Exception as e:
+         print(f"Warning: Failed to log inference trace to CSV: {e}")
📝 Committable suggestion
def log_inference_trace(total_time, images_generated, gpu_name, driver_version, ckpt_name):
    """Log inference details to CSV and console."""
    # Checking to see if images were generated, finding the time per image
    time_per_image = None if images_generated == 0 else total_time / images_generated

    # Converting to string format
    time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"

    # Console logging
    print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")

    # CSV logging
    try:
        ensure_csv_exists()
        with open(TRACE_CSV, "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([
                time.time(),
                round(total_time, 4),
                images_generated,
                round(time_per_image, 4) if time_per_image is not None else "",
                gpu_name,
                driver_version,
                ckpt_name
            ])
    except Exception as e:
        print(f"Warning: Failed to log inference trace to CSV: {e}")
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_server.py around lines 47 to 70, the CSV file
operations (ensure_csv_exists and opening/writing TRACE_CSV) are unprotected and
can raise IOError/OSError (permissions, disk full), which will crash request
handling; wrap the CSV creation/open/write sequence in a try/except that catches
OSError/IOError/Exception, log the failure (use existing logger or print) with
the error details and a clear message including TRACE_CSV, and fall back to
continuing without CSV logging so the request still succeeds; ensure the
timestamp/metrics are preserved in memory or skipped when CSV fails and do not
re-raise the exception.
# Ensure dump directory exists
dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
os.makedirs(dump_dir, exist_ok=True)

# Save the workflow JSON
output_path = os.path.join(dump_dir, "last_workflow.json")
with open(output_path, "w") as f:
    json.dump(workflow, f, indent=2)

print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
print(f"🚀 Workflow JSON saved to {output_path}")
🛠️ Refactor suggestion
Add error handling for workflow dump operations
Similar to img2img_workflow.py, the file I/O operations should include error handling.
- # Ensure dump directory exists
- dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
- os.makedirs(dump_dir, exist_ok=True)
-
- # Save the workflow JSON
- output_path = os.path.join(dump_dir, "last_workflow.json")
- with open(output_path, "w") as f:
-     json.dump(workflow, f, indent=2)
-
- print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
- print(f"🚀 Workflow JSON saved to {output_path}")
+ # Ensure dump directory exists and save workflow
+ try:
+     dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+     os.makedirs(dump_dir, exist_ok=True)
+
+     # Save the workflow JSON
+     output_path = os.path.join(dump_dir, "last_workflow.json")
+     with open(output_path, "w") as f:
+         json.dump(workflow, f, indent=2)
+
+     logger.info(f"Workflow JSON saved to {output_path}")
+     print(f"🚀 Workflow JSON saved to {output_path}")
+ except Exception as e:
+     logger.error(f"Failed to save workflow JSON: {e}")
+     # Continue execution as this is non-critical
+
+ # Only print the full JSON in debug mode to avoid cluttering logs
+ logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")
📝 Committable suggestion
# Ensure dump directory exists and save workflow
try:
    dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
    os.makedirs(dump_dir, exist_ok=True)

    # Save the workflow JSON
    output_path = os.path.join(dump_dir, "last_workflow.json")
    with open(output_path, "w") as f:
        json.dump(workflow, f, indent=2)

    logger.info(f"Workflow JSON saved to {output_path}")
    print(f"🚀 Workflow JSON saved to {output_path}")
except Exception as e:
    logger.error(f"Failed to save workflow JSON: {e}")
    # Continue execution as this is non-critical

# Only print the full JSON in debug mode to avoid cluttering logs
logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_workflow.py around lines 244 to 254, the
directory creation and file write for dumping the workflow lack error handling;
wrap the os.makedirs, json.dump and open/write operations in a try/except that
catches Exceptions, log or print a clear error message with exception details,
and avoid crashing the process (optionally return or continue) — ensure you
still create the dump_dir only if needed, handle permission/IO errors, and
include the original workflow JSON or path in the error log for debugging.
{
  "prompt": {
    "3": {
      "class_type": "KSampler",
      "inputs": {
        "cfg": 7.0,
        "denoise": 1.0,
        "latent_image": ["5", 0],
        "model": ["4", 0],
        "negative": ["7", 0],
        "positive": ["6", 0],
        "sampler_name": "euler",
        "scheduler": "normal",
        "seed": 53064756,
        "steps": 20
      }
    },
    "4": {
      "class_type": "CheckpointLoaderSimple",
      "inputs": {
        "ckpt_name": "juggernautXL_v8Rundiffusion.safetensors"
      }
    },
    "5": {
      "class_type": "EmptyLatentImage",
      "inputs": {
        "batch_size": 4,
        "height": 512,
        "width": 512
      }
    },
    "6": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "clip": ["4", 1],
        "text": "Red fox"
      }
    },
    "7": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "clip": ["4", 1],
        "text": ""
      }
    },
    "8": {
      "class_type": "VAEDecode",
      "inputs": {
        "samples": ["3", 0],
        "vae": ["4", 2]
      }
    },
    "9": {
      "class_type": "SaveImage",
      "inputs": {
        "filename_prefix": "DreamLayer",
        "images": ["8", 0]
      }
    }
  },
  "extra_data": {}
}
🛠️ Refactor suggestion
Generated workflow dump should be a runtime artifact, not committed.
This file will change often (checkpoint name, seed, prompts), leading to noisy diffs and merge conflicts. It’s better treated as a runtime output.
Recommendations:
- Add to .gitignore to prevent future commits:
+dream_layer_backend/workflow_dumps/*.json
- If you need a documented example, commit a stable sample (e.g., last_workflow.sample.json) under docs/examples/ with a README explaining how it’s produced.
- Ensure the backend writes this file to a writable runtime path (e.g., a mounted volume) rather than the repo directory.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dream_layer_backend/workflow_dumps/last_workflow.json (lines 1-89): this
generated workflow dump is a runtime artifact and should not be committed;
remove it from version control (git rm --cached path/to/last_workflow.json and
commit), add workflow_dumps/ or last_workflow.json to .gitignore so it isn’t
re-added, create a stable example file (e.g.,
docs/examples/last_workflow.sample.json) with static values and a README
describing how it’s produced and used, and update the backend so it writes this
runtime dump to a writable runtime path (configurable directory or mounted
volume) instead of the repository directory.
const MetricsBadge: React.FC<{
  elapsedTimeSec: number;
  gpu: string;
  driver: string;
}> = ({ elapsedTimeSec, gpu, driver }) => {
  const timePerImage = elapsedTimeSec.toFixed(2);
  const shortGpu = gpu.length > 12 ? gpu.slice(0, 12) + "…" : gpu;

  return (
    <div
      className="ml-4 px-3 py-1 rounded bg-gray-200 text-gray-800 text-sm font-semibold select-none"
      title={`GPU: ${gpu} · Driver: ${driver}`} // optional tooltip
    >
      {timePerImage} s per image · {shortGpu}
    </div>
  );
};
🛠️ Refactor suggestion
Duplicate MetricsBadge component definition
This MetricsBadge component is identical to the one in Img2ImgPage.tsx, indicating code duplication.
Extract the MetricsBadge component to a shared components file:
// Create a new file: src/components/MetricsBadge.tsx
import React from 'react';

interface MetricsBadgeProps {
  elapsedTimeSec: number;
  gpu: string;
  driver: string;
}

export const MetricsBadge: React.FC<MetricsBadgeProps> = ({ elapsedTimeSec, gpu, driver }) => {
  const timePerImage = elapsedTimeSec.toFixed(2);
  const shortGpu = gpu.length > 12 ? gpu.slice(0, 12) + "…" : gpu;

  return (
    <div
      className="ml-4 px-3 py-1 rounded bg-gray-200 text-gray-800 text-sm font-semibold select-none"
      title={`GPU: ${gpu} · Driver: ${driver}`}
    >
      {timePerImage} s per image · {shortGpu}
    </div>
  );
};
Then import and use it in both pages:
import { MetricsBadge } from '@/components/MetricsBadge';
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines
274-290 there is a duplicate MetricsBadge component definition (identical to the
one in Img2ImgPage.tsx); extract this component to a shared file (e.g.,
src/components/MetricsBadge.tsx) with the same props and implementation, export
it, then replace the local component in Txt2ImgPage.tsx with an import: import {
MetricsBadge } from '@/components/MetricsBadge'; do the same in Img2ImgPage.tsx
so both pages reuse the shared component and remove the duplicate definitions.
const exportTxt2ImgMetricsToCSV = () => {
  if (!metrics) return;

  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version
  ];

  const csv = [headers.join(','), values.join(',')].join('\n');
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });

  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', `inference_trace_txt2img.csv`);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};
🛠️ Refactor suggestion
Duplicate CSV export function
The exportTxt2ImgMetricsToCSV function is nearly identical to exportImg2ImgMetricsToCSV, only differing in the filename.
Create a shared utility function:
// In a utils file (e.g., src/utils/metricsExport.ts)
export const exportMetricsToCSV = (
  metrics: {
    elapsed_time_sec: number;
    time_per_image_sec: number;
    gpu: string;
    driver_version: string;
  },
  filename: string
) => {
  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version
  ];

  const csv = [headers.join(','), values.join(',')].join('\n');
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });

  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', filename);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};
Then use it in both pages:
const handleExportMetrics = () => {
  if (!metrics) return;
  exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv');
};
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines
292-312, the exportTxt2ImgMetricsToCSV duplicates logic from
exportImg2ImgMetricsToCSV; extract the CSV-export logic into a shared utility
(e.g., src/utils/metricsExport.ts) that accepts the metrics object and a
filename, move header/values/blob/link creation into that function, export it,
and then replace exportTxt2ImgMetricsToCSV with a thin wrapper that calls the
shared exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv') (ensure
null-check for metrics before calling).
{metrics && (
  <div className="flex items-center space-x-3 p-2 rounded-lg bg-gray-100 dark:bg-gray-800 border border-gray-200 dark:border-gray-700">
    <div className="flex flex-col">
      <span className="font-semibold text-gray-800 dark:text-gray-200">
        {`${metrics.time_per_image_sec?.toFixed(2) ??
          (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count)).toFixed(2)
        } s per image`}
      </span>
      <span className="text-gray-600 dark:text-gray-400">
        {`${metrics.gpu} (Driver ${metrics.driver_version})`}
      </span>
    </div>
    <button
      onClick={exportTxt2ImgMetricsToCSV}
      className="ml-auto rounded-md border border-input bg-white dark:bg-gray-700 px-3 py-1 text-xs font-medium text-gray-800 dark:text-gray-200 transition-colors hover:bg-gray-200 dark:hover:bg-gray-600"
    >
      Download CSV
    </button>
  </div>
)}
🛠️ Refactor suggestion
Inconsistent metrics display compared to Img2Img
The metrics display implementation differs from Img2ImgPage, which uses the MetricsBadge component. This page has an inline implementation with different styling.
For consistency, use the same MetricsBadge component approach as in Img2ImgPage after extracting it to a shared component:
- {metrics && (
- <div className="flex items-center space-x-3 p-2 rounded-lg bg-gray-100 dark:bg-gray-800 border border-gray-200 dark:border-gray-700">
- <div className="flex flex-col">
- <span className="font-semibold text-gray-800 dark:text-gray-200">
- {`${metrics.time_per_image_sec?.toFixed(2) ??
- (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count)).toFixed(2)
- } s per image`}
- </span>
- <span className="text-gray-600 dark:text-gray-400">
- {`${metrics.gpu} (Driver ${metrics.driver_version})`}
- </span>
- </div>
- <button
- onClick={exportTxt2ImgMetricsToCSV}
- className="ml-auto rounded-md border border-input bg-white dark:bg-gray-700 px-3 py-1 text-xs font-medium text-gray-800 dark:text-gray-200 transition-colors hover:bg-gray-200 dark:hover:bg-gray-600"
- >
- Download CSV
- </button>
- </div>
- )}
+ {metrics && (
+ <>
+ <button
+ onClick={() => exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv')}
+ className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
+ >
+ Download CSV
+ </button>
+ <MetricsBadge
+ elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count))}
+ gpu={metrics.gpu}
+ driver={metrics.driver_version}
+ />
+ </>
+ )}
📝 Committable suggestion
{metrics && (
  <>
    <button
      onClick={() => exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv')}
      className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
    >
      Download CSV
    </button>
    <MetricsBadge
      elapsedTimeSec={
        metrics.time_per_image_sec ??
        (metrics.elapsed_time_sec /
          (coreSettings.batch_size * coreSettings.batch_count))
      }
      gpu={metrics.gpu}
      driver={metrics.driver_version}
    />
  </>
)}
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
dream_layer_backend/inference_traces/inference_trace_txt2img.csv is excluded by !**/*.csv
dream_layer_frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (2)
dream_layer_backend/workflow_dumps/last_workflow.json (1 hunks)
dream_layer_frontend/package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- dream_layer_backend/workflow_dumps/last_workflow.json
  "typescript": "^5.5.3",
  "typescript-eslint": "^8.0.1",
- "vite": "^5.4.1"
+ "vite": "^7.1.2"
💡 Verification agent
🧩 Analysis chain
Vite 7 major bump: verify React SWC plugin compatibility, Node engine, and config discovery
Upgrading to a new Vite major is non-trivial. Please verify the following before merging:
- @vitejs/plugin-react-swc compatibility with Vite 7 (the current ^3.5.0 may have older peer ranges).
- Enforce a Node runtime that satisfies Vite 7 (Node >= 18) via an engines field to avoid dev CI breakage.
- Ensure your Vite config is discoverable. If you added a timestamp-suffixed config (per PR summary), Vite won’t auto-pick it; rename to vite.config.ts|.mjs or pass --config in scripts.
Run this to check config discoverability and script wiring:
If the config filename is not vite.config.ts/js/mts/mjs and scripts don’t pass --config, Vite will ignore it.
Optional hardening:
- Add engines to enforce Node >= 18 (Vite 7 requirement):
{
"engines": {
"node": ">=18.0.0"
}
}
- If you must keep a nonstandard config filename, wire scripts explicitly:
{
"scripts": {
"dev": "vite --config vite.config.mjs",
"build": "vite build --config vite.config.mjs",
"preview": "vite preview --config vite.config.mjs"
}
}
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "1) Locate Vite config files:"
fd -I -a -g 'vite.config.*' || true
echo
echo "2) Check if scripts pass a custom --config:"
rg -n --json $'"vite[^"]*--config' package.json || true
echo
echo "3) Extract versions for vite + @vitejs/plugin-react-swc:"
jq -r '.devDependencies | to_entries[] | select(.key=="vite" or .key=="@vitejs/plugin-react-swc")' dream_layer_frontend/package.json | jq -r '.key + "@" + .value'
Length of output: 1078
Would you please run the following to verify scripts, engines, and plugin peer dependencies?
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "1) Scripts in dream_layer_frontend/package.json:"
jq '.scripts' dream_layer_frontend/package.json || echo "Unable to parse scripts"
echo
echo "2) Engines in dream_layer_frontend/package.json:"
jq '.engines' dream_layer_frontend/package.json || echo "No engines field"
echo
echo "3) PeerDependencies for @vitejs/plugin-react-swc@3.5.0:"
npm view @vitejs/plugin-react-swc@3.5.0 peerDependencies --json || echo "NPM lookup failed"
Length of output: 706
Fix Vite 7 compatibility: bump React SWC plugin, add Node engine, clean up config
- @vitejs/plugin-react-swc@^3.5.0 only supports Vite ^4 | ^5. Upgrade to a version with Vite 7 peer support (e.g. ^4.0.0 or the latest):

  dream_layer_frontend/package.json
   "devDependencies": {
  -  "@vitejs/plugin-react-swc": "^3.5.0",
  +  "@vitejs/plugin-react-swc": "^4.0.0",
     "vite": "^7.1.2",
     …
   }

- Enforce Node >= 18 for Vite 7 by adding an engines field:

  dream_layer_frontend/package.json
   {
  +  "engines": {
  +    "node": ">=18.0.0"
  +  },
     "devDependencies": { … }
   }

- Vite will auto-load vite.config.ts, so you don't need --config. Remove the timestamp-suffixed file (vite.config.ts.timestamp-*.mjs) to avoid confusion.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dream_layer_frontend/package.json around line 82, update Vite 7 compatibility
by bumping @vitejs/plugin-react-swc to a Vite‑7‑compatible release (e.g. ^4.0.0
or latest), add an "engines" field with "node": ">=18" to enforce Node 18+, and
remove any timestamp‑suffixed vite config file from the repo
(vite.config.ts.timestamp-*.mjs) and any package.json scripts that pass --config
pointing to that timestamped file so they rely on vite.config.ts instead.
Actionable comments posted: 4
♻️ Duplicate comments (9)
dream_layer_backend/txt2img_workflow.py (2)
5-5: Fix duplicate json import

The json module is imported twice (lines 1 and 5), which causes a redefinition warning.

 import json
 import random
 import os
 import logging
-import json
 from pathlib import Path
263-273: Add error handling for workflow dump operations

The file I/O operations should include error handling to prevent crashes on permission or disk space issues.

-    # Ensure dump directory exists
-    dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
-    os.makedirs(dump_dir, exist_ok=True)
-
-    # Save the workflow JSON
-    output_path = os.path.join(dump_dir, "last_workflow.json")
-    with open(output_path, "w") as f:
-        json.dump(workflow, f, indent=2)
-
-    print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-    print(f"🚀 Workflow JSON saved to {output_path}")
+    # Ensure dump directory exists and save workflow
+    try:
+        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+        os.makedirs(dump_dir, exist_ok=True)
+
+        # Save the workflow JSON
+        output_path = os.path.join(dump_dir, "last_workflow.json")
+        with open(output_path, "w") as f:
+            json.dump(workflow, f, indent=2)
+
+        logger.info(f"Workflow JSON saved to {output_path}")
+        print(f"🚀 Workflow JSON saved to {output_path}")
+    except Exception as e:
+        logger.error(f"Failed to save workflow JSON: {e}")
+        # Continue execution as this is non-critical
+
+    # Only print the full JSON in debug mode to avoid cluttering logs
+    logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")

dream_layer_backend/img2img_workflow.py (2)
259-259: Remove unnecessary f-string prefix

The print statement uses an f-string prefix but doesn't contain any placeholders.

-    print(f"✅ Workflow transformation complete")
+    print("✅ Workflow transformation complete")
260-270: Add error handling for workflow dump operations

The file I/O operations lack error handling for potential permission or disk space issues.

-    # Ensure dump directory exists
-    dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
-    os.makedirs(dump_dir, exist_ok=True)
-
-    # Save the workflow JSON
-    output_path = os.path.join(dump_dir, "last_workflow.json")
-    with open(output_path, "w") as f:
-        json.dump(workflow, f, indent=2)
-
-    print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-    print(f"🚀 Workflow JSON saved to {output_path}")
+    # Ensure dump directory exists and save workflow
+    try:
+        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+        os.makedirs(dump_dir, exist_ok=True)
+
+        # Save the workflow JSON
+        output_path = os.path.join(dump_dir, "last_workflow.json")
+        with open(output_path, "w") as f:
+            json.dump(workflow, f, indent=2)
+
+        logger.info(f"Workflow JSON saved to {output_path}")
+        print(f"🚀 Workflow JSON saved to {output_path}")
+    except Exception as e:
+        logger.error(f"Failed to save workflow JSON: {e}")
+        # Continue execution as this is non-critical
+
+    # Only print the full JSON in debug mode to avoid cluttering logs
+    logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")

dream_layer_backend/txt2img_server.py (5)
50-73: Add error handling for CSV operations

The CSV writing operations could fail due to permissions or disk space issues, which would crash the request handling.

 def log_inference_trace(total_time, images_generated, gpu_name, driver_version, ckpt_name):
     """Log inference details to CSV and console."""
     # Checking to see if images were generated, finding the time per image
     time_per_image = None if images_generated == 0 else total_time / images_generated
     # Converting to string format
     time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"
     # Console logging
     print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")
     # CSV logging
-    ensure_csv_exists()
-    with open(TRACE_CSV, "a", newline="") as f:
-        writer = csv.writer(f)
-        writer.writerow([
-            time.time(),
-            round(total_time, 4),
-            images_generated,
-            round(time_per_image, 4) if time_per_image is not None else "",
-            gpu_name,
-            driver_version,
-            ckpt_name
-        ])
+    try:
+        ensure_csv_exists()
+        with open(TRACE_CSV, "a", newline="") as f:
+            writer = csv.writer(f)
+            writer.writerow([
+                time.time(),
+                round(total_time, 4),
+                images_generated,
+                round(time_per_image, 4) if time_per_image is not None else "",
+                gpu_name,
+                driver_version,
+                ckpt_name
+            ])
+    except Exception as e:
+        print(f"Warning: Failed to log inference trace to CSV: {e}")
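Folding the suggestion above into one standalone function gives the following sketch; the trace-file path is passed in rather than read from a module-level TRACE_CSV, and the header-on-first-write behavior replaces ensure_csv_exists(), both assumptions made for illustration:

```python
import csv
import os
import time

CSV_HEADER = ["timestamp", "total_time_s", "images_generated",
              "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"]

def log_inference_trace(trace_csv, total_time, images_generated,
                        gpu_name, driver_version, ckpt_name):
    """Append one trace row; swallow I/O errors since logging is non-critical."""
    time_per_image = None if images_generated == 0 else total_time / images_generated
    try:
        parent = os.path.dirname(trace_csv)
        if parent:
            os.makedirs(parent, exist_ok=True)  # create folder if it doesn't exist
        is_new = not os.path.exists(trace_csv)
        with open(trace_csv, "a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(CSV_HEADER)  # header only on first write
            writer.writerow([
                time.time(),
                round(total_time, 4),
                images_generated,
                round(time_per_image, 4) if time_per_image is not None else "",
                gpu_name,
                driver_version,
                ckpt_name,
            ])
        return True
    except OSError as e:
        print(f"Warning: failed to log inference trace: {e}")
        return False
```

Catching only OSError (rather than bare Exception) keeps programming errors visible while still absorbing permission and disk-space failures.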
111-111: Fix COMFY_ROOT path calculation

The COMFY_ROOT is set to the dream_layer_backend directory, but ComfyUI is a sibling directory at the repository root level.

 # Get the absolute path to the ComfyUI root directory
-COMFY_ROOT = os.path.dirname(os.path.abspath(__file__))
+# Navigate up to the repository root where ComfyUI resides
+COMFY_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
133-139: Fix inverted checkpoint validation logic

The else branch executes when the checkpoint IS valid, but it returns an error. The error should only be returned when no checkpoints are available.

 # Validate checkpoint
 if not ckpt_name or ckpt_name not in ALLOWED_CKPTS:
     if ALLOWED_CKPTS:
         chosen_ckpt = ALLOWED_CKPTS[0]
         print(f"Checkpoint '{ckpt_name}' invalid or missing, falling back to '{chosen_ckpt}'")
         ckpt_name = chosen_ckpt
-else:
-    return jsonify({"error": "No checkpoints available on server"}), 500
+    else:
+        return jsonify({"error": "No checkpoints available on server"}), 500
152-159: Add NVML cleanup to prevent resource leaks

NVML should be properly shut down after use to prevent resource leaks in long-running servers.

 # Init NVML once at startup
 try:
     pynvml.nvmlInit()
     gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
     gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
     driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
+    pynvml.nvmlShutdown()
 except Exception:
     gpu_name = "CPU"
     driver_version = "N/A"
172-172: Fix undefined variable total_time (should be elapsed)

Line 172 references total_time, which doesn't exist. The elapsed time is stored in the elapsed variable.

 # Determine number of images generated and time per image
 images_generated = len(comfy_response.get("all_images", []))
-time_per_image = None if images_generated == 0 else total_time / images_generated
+time_per_image = None if images_generated == 0 else elapsed / images_generated
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- dream_layer_backend/img2img_server.py (4 hunks)
- dream_layer_backend/img2img_workflow.py (3 hunks)
- dream_layer_backend/txt2img_server.py (3 hunks)
- dream_layer_backend/txt2img_workflow.py (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
dream_layer_backend/txt2img_workflow.py (1)
- dream_layer_backend/img2img_workflow.py (1): get_available_checkpoints (25-39)

dream_layer_backend/img2img_server.py (3)
- dream_layer_backend/txt2img_server.py (1): get_allowed_checkpoints (120-128)
- dream_layer_backend/img2img_workflow.py (1): transform_to_img2img_workflow (41-271)
- dream_layer_backend/shared_utils.py (1): send_to_comfyui (161-223)

dream_layer_backend/txt2img_server.py (3)
- dream_layer_backend/shared_utils.py (1): send_to_comfyui (161-223)
- dream_layer_backend/img2img_server.py (1): get_allowed_checkpoints (168-176)
- dream_layer_backend/txt2img_workflow.py (1): transform_to_txt2img_workflow (35-280)

dream_layer_backend/img2img_workflow.py (1)
- dream_layer_backend/txt2img_workflow.py (1): get_available_checkpoints (25-33)
🪛 Ruff (0.12.2)
dream_layer_backend/txt2img_workflow.py
- 5-5: Redefinition of unused json from line 1. Remove definition: json (F811)

dream_layer_backend/img2img_server.py
- 19-19: Found useless expression. Either assign it to a variable or remove it. (B018)
- 19-19: Undefined name d (F821)

dream_layer_backend/txt2img_server.py
- 172-172: Undefined name total_time (F821)

dream_layer_backend/img2img_workflow.py
- 259-259: f-string without any placeholders. Remove extraneous f prefix (F541)
🔇 Additional comments (3)
dream_layer_backend/txt2img_workflow.py (1)
- 25-34: LGTM! Checkpoint discovery implementation looks good. The function properly handles filesystem errors and returns an empty list when the directory doesn't exist or errors occur, ensuring graceful degradation.

dream_layer_backend/img2img_workflow.py (1)
- 25-39: LGTM! Robust checkpoint discovery with detailed logging. The function includes comprehensive logging for debugging path resolution and gracefully handles missing directories.

dream_layer_backend/img2img_server.py (1)
- 40-40: Path calculation for COMFY_ROOT appears correct. The path correctly navigates up one level from dream_layer_backend/ to reach the repository root where ComfyUI/ resides.
try:
    pynvml.nvmlInit()
    gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
    driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
except Exception:
    gpu_name = "CPU"
    driver_version = "N/A"
🛠️ Refactor suggestion
Add NVML cleanup to prevent resource leaks
NVML should be properly shut down after use to prevent resource leaks in long-running servers.
try:
pynvml.nvmlInit()
gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
+ pynvml.nvmlShutdown()
except Exception:
gpu_name = "CPU"
driver_version = "N/A"
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dream_layer_backend/img2img_server.py around lines 200 to 208, NVML is
initialized but not shut down which can leak resources; wrap the nvmlInit and
subsequent NVML calls in a try/finally (or try/except/finally) and call
pynvml.nvmlShutdown() in the finally block, using a local flag to track
successful initialization so shutdown is only called if init succeeded, and
ensure exceptions during NVML calls still set gpu_name/driver_version fallback
values before shutdown.
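The pattern the prompt describes (an init flag plus try/finally so nvmlShutdown only runs after a successful nvmlInit) can be sketched as a self-contained helper; it assumes only the pynvml calls already used above, and guards the decode because newer pynvml releases return str rather than bytes:

```python
def query_gpu_info():
    """Return (gpu_name, driver_version), falling back to CPU values.

    Shuts NVML down in a finally block so a long-running server does
    not leak the NVML session. Sketch; assumes pynvml is installed.
    """
    gpu_name, driver_version = "CPU", "N/A"
    initialized = False
    try:
        import pynvml
        pynvml.nvmlInit()
        initialized = True
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        name = pynvml.nvmlDeviceGetName(handle)
        drv = pynvml.nvmlSystemGetDriverVersion()
        # Older pynvml returns bytes, newer returns str; handle both.
        gpu_name = name.decode() if isinstance(name, bytes) else name
        driver_version = drv.decode() if isinstance(drv, bytes) else drv
    except Exception:
        pass  # keep CPU fallback values
    finally:
        if initialized:
            try:
                import pynvml
                pynvml.nvmlShutdown()
            except Exception:
                pass
    return gpu_name, driver_version
```

Querying once at startup and caching the result, as the code above does, also avoids paying the init/shutdown cost per request.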
# Path for CSV log file
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
INFERENCE_TRACES_DIR = os.path.join(BASE_DIR, "inference_traces")
os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True)  # create folder if it doesn't exist
TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")

# Ensure CSV file exists and has header
if not os.path.exists(TRACE_CSV):
    with open(TRACE_CSV, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"])

# Append new row to CSV
with open(TRACE_CSV, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        time.time(),
        round(elapsed, 4),
        images_generated,
        round(time_per_image, 4) if time_per_image is not None else "",
        gpu_name,
        driver_version,
        ckpt_name
    ])
🛠️ Refactor suggestion
Consider adding error handling for CSV operations
The CSV writing operations could fail due to permissions or disk space issues, which would crash the request handling.
# Log info into CSV
# Path for CSV log file
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
INFERENCE_TRACES_DIR = os.path.join(BASE_DIR, "inference_traces")
- os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True) # create folder if it doesn't exist
- TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")
-
- # Ensure CSV file exists and has header
- if not os.path.exists(TRACE_CSV):
- with open(TRACE_CSV, "w", newline="") as f:
- writer = csv.writer(f)
- writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version","ckpt_name"])
-
- # Append new row to CSV
- with open(TRACE_CSV, "a", newline="") as f:
- writer = csv.writer(f)
- writer.writerow([
- time.time(),
- round(elapsed, 4),
- images_generated,
- round(time_per_image, 4) if time_per_image is not None else "",
- gpu_name,
- driver_version,
- ckpt_name
- ])
+ try:
+ os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True) # create folder if it doesn't exist
+ TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")
+
+ # Ensure CSV file exists and has header
+ if not os.path.exists(TRACE_CSV):
+ with open(TRACE_CSV, "w", newline="") as f:
+ writer = csv.writer(f)
+ writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version","ckpt_name"])
+
+ # Append new row to CSV
+ with open(TRACE_CSV, "a", newline="") as f:
+ writer = csv.writer(f)
+ writer.writerow([
+ time.time(),
+ round(elapsed, 4),
+ images_generated,
+ round(time_per_image, 4) if time_per_image is not None else "",
+ gpu_name,
+ driver_version,
+ ckpt_name
+ ])
+ except Exception as e:
+ logger.warning(f"Failed to log inference trace to CSV: {e}")
+ # Continue execution as this is non-critical
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

# Path for CSV log file
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
INFERENCE_TRACES_DIR = os.path.join(BASE_DIR, "inference_traces")
os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True)  # create folder if it doesn't exist
TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")

# Ensure CSV file exists and has header
if not os.path.exists(TRACE_CSV):
    with open(TRACE_CSV, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version", "ckpt_name"])

# Append new row to CSV
with open(TRACE_CSV, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        time.time(),
        round(elapsed, 4),
        images_generated,
        round(time_per_image, 4) if time_per_image is not None else "",
        gpu_name,
        driver_version,
        ckpt_name
    ])

After:

# Path for CSV log file
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
INFERENCE_TRACES_DIR = os.path.join(BASE_DIR, "inference_traces")
try:
    os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True)  # create folder if it doesn't exist
    TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")

    # Ensure CSV file exists and has header
    if not os.path.exists(TRACE_CSV):
        with open(TRACE_CSV, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([
                "timestamp",
                "total_time_s",
                "images_generated",
                "time_per_image_s",
                "gpu_name",
                "driver_version",
                "ckpt_name"
            ])

    # Append new row to CSV
    with open(TRACE_CSV, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            time.time(),
            round(elapsed, 4),
            images_generated,
            round(time_per_image, 4) if time_per_image is not None else "",
            gpu_name,
            driver_version,
            ckpt_name
        ])
except Exception as e:
    logger.warning(f"Failed to log inference trace to CSV: {e}")
    # Continue execution as this is non-critical
# Dynamically determine the model name that's being used and validate
requested_model = data.get("model_name")
available_models = get_available_checkpoints()
if not available_models:
    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors

# Use requested model if valid, else fallback to detected
if requested_model and requested_model in available_models:
    model_name = requested_model
else:
    # fallback to first available checkpoint and log the fallback
    model_name = available_models[0]
    logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

#model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors')  # was hardcoded

# Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']
Remove duplicate closed-source model definitions
The closed-source models list is defined twice with overlapping entries (lines 83-84 and 104). Consolidate into a single definition.
- # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, Stability AI, etc.)
- closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro',
- 'flux-dev', 'ideogram-v3', 'stability-sdxl', 'stability-sd-turbo']
-
# Dynamically determine the model name that's being used and validate
requested_model = data.get("model_name")
available_models = get_available_checkpoints()
if not available_models:
raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory") # could be changed to juggernautXL_v8Rundiffusion.safetensors
# Use requested model if valid, else fallback to detected
if requested_model and requested_model in available_models:
model_name = requested_model
else:
# fallback to first available checkpoint and log the fallback
model_name = available_models[0]
logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")
- #model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors') # was hardcoded
-
- # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
- closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']
+ # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, Stability AI, etc.)
+ closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3', 'stability-sdxl', 'stability-sd-turbo']
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

# Dynamically determine the model name that's being used and validate
requested_model = data.get("model_name")
available_models = get_available_checkpoints()
if not available_models:
    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors

# Use requested model if valid, else fallback to detected
if requested_model and requested_model in available_models:
    model_name = requested_model
else:
    # fallback to first available checkpoint and log the fallback
    model_name = available_models[0]
    logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

#model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors')  # was hardcoded

# Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']

After:

# Dynamically determine the model name that's being used and validate
requested_model = data.get("model_name")
available_models = get_available_checkpoints()
if not available_models:
    raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")

# Use requested model if valid, else fallback to detected
if requested_model and requested_model in available_models:
    model_name = requested_model
else:
    model_name = available_models[0]
    logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

# Check if it's a closed-source model (DALL-E, FLUX, Ideogram, Stability AI, etc.)
closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3', 'stability-sdxl', 'stability-sd-turbo']
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_workflow.py around lines 86 to 105, there are
duplicate definitions of the closed_source_models list; consolidate them into a
single definition (remove the second occurrence), place that single list in a
clear location before any checks that reference it (e.g., right after model
discovery/validation), and ensure all subsequent code references the unified
variable name so there is no shadowing or contradictory entries.
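The consolidated discovery-plus-fallback behavior described above can be sketched as two small functions; the names mirror the snippets in this thread, while the directory parameter and the extension filter are assumptions for illustration:

```python
import logging
import os

logger = logging.getLogger(__name__)

def get_available_checkpoints(ckpt_dir):
    """List checkpoint files in ckpt_dir, returning [] on any filesystem error."""
    try:
        return sorted(
            f for f in os.listdir(ckpt_dir)
            if f.endswith((".safetensors", ".ckpt"))
        )
    except OSError:
        return []

def resolve_checkpoint(requested, available):
    """Use the requested checkpoint if valid; otherwise fall back to the first one."""
    if not available:
        raise FileNotFoundError("No checkpoint models found")
    if requested in available:
        return requested
    logger.warning("Requested model %r not found. Falling back to %r.",
                   requested, available[0])
    return available[0]
```

Raising only when the checkpoint list is empty, and falling back otherwise, keeps the error path separate from the recoverable path, which is the inversion the earlier review comment flagged.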
Hey! Thanks for raising the PR, can you please upload the evidence requested? It will be easier for us to review.
Description
Brief description of what this PR does.
Changes Made
Evidence Required ✅
UI Screenshot
Generated Image
Logs
Tests (Optional)
Checklist
Summary by Sourcery
Add inference tracing, dynamic checkpoint handling, workflow saving and metrics display/export features, along with endpoint tests and updated build config.
New Features:
Enhancements:
Build:
Tests:
Summary by CodeRabbit