
Conversation

@jrb00013 commented Aug 13, 2025

Description

Brief description of what this PR does.

Changes Made

Evidence Required ✅

UI Screenshot

Generated Image

Logs

# Paste logs here

Tests (Optional)

# Test results

Checklist

  • UI screenshot provided
  • Generated image provided
  • Logs provided
  • Tests added (optional)
  • Code follows project style
  • Self-review completed

Summary by Sourcery

Add inference tracing, dynamic checkpoint handling, workflow saving, and metrics display/export features, along with endpoint tests and an updated build config.

New Features:

  • Record inference metrics (runtime, per-image time, GPU/driver, checkpoint) to CSV and include them in API responses
  • Display inference metrics badge and provide CSV export button in Txt2Img and Img2Img UIs
  • Dynamically discover and validate available model checkpoints, falling back to a valid checkpoint if needed
  • Save generated ComfyUI workflow JSON files to disk after transformation
  • Add integration test script for txt2img and img2img endpoints and a base64 conversion utility script

Enhancements:

  • Improve path resolution for COMFY_ROOT and checkpoint directories
  • Initialize NVML for GPU info and enhance logging in backend
  • Refactor transform_to_img2img and transform_to_txt2img workflows for dynamic model selection

Build:

  • Add Vite configuration file for frontend development server

Tests:

  • Add test_image/test_img_endpoints.py to validate backend endpoints
  • Add base64conversion.py utility in test_image folder

Summary by CodeRabbit

  • New Features
    • Runtime inference metrics for Txt2Img and Img2Img with in-app display and CSV export.
    • Automatic detection and selection of available model checkpoints for generation.
  • Performance
    • GPU/driver detection and per-request timing (total and per-image) surfaced to UI and API.
  • Bug Fixes
    • Temporary input images are removed after processing.
  • Tests
    • Added local test scripts for image endpoints and base64 conversion.
  • Chores
    • Pinned Python to 3.11.9; upgraded Vite and added a test script.

sourcery-ai bot (Contributor) commented Aug 13, 2025

Reviewer's Guide

This PR integrates performance tracing, dynamic checkpoint validation, and workflow dumping on both txt2img and img2img backends, surfaces inference metrics in the React frontend with CSV export, and adds basic integration tests alongside minor configuration fixes.

Sequence diagram for inference request with metrics logging and checkpoint validation

```mermaid
sequenceDiagram
  actor User
  participant Frontend
  participant Backend
  participant ComfyUI
  participant CSVFile

  User->>Frontend: Submit txt2img/img2img request
  Frontend->>Backend: POST /api/txt2img or /api/img2img
  Backend->>Backend: Validate checkpoint (dynamic)
  alt Checkpoint invalid or missing
    Backend->>Backend: Fallback to first available checkpoint
  end
  Backend->>ComfyUI: Send workflow
  ComfyUI-->>Backend: Return generated images
  Backend->>Backend: Collect metrics (elapsed time, GPU, etc.)
  Backend->>CSVFile: Log metrics to CSV
  Backend->>Frontend: Return images and metrics
  Frontend->>User: Display images and metrics
  User->>Frontend: Download metrics CSV (optional)
  Frontend->>User: Provide CSV file
```

ER diagram for inference trace CSV logging

```mermaid
erDiagram
  INFERENCE_TRACE_TXT2IMG {
    timestamp float
    total_time_s float
    images_generated int
    time_per_image_s float
    gpu_name string
    driver_version string
    ckpt_name string
  }
  INFERENCE_TRACE_IMG2IMG {
    timestamp float
    total_time_s float
    images_generated int
    time_per_image_s float
    gpu_name string
    driver_version string
    ckpt_name string
  }
```
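Both trace tables share the same seven columns. A minimal sketch of how ensure_csv_exists (named in the class diagram below) might bootstrap such a file; the path is an assumption based on the inference_traces directory mentioned elsewhere in this PR, and the exact implementation lives in the server modules:

```python
import csv
import os

# Columns mirror the ER diagram above; the path is an assumption.
TRACE_CSV = os.path.join("inference_traces", "inference_trace_txt2img.csv")
HEADERS = [
    "timestamp", "total_time_s", "images_generated",
    "time_per_image_s", "gpu_name", "driver_version", "ckpt_name",
]

def ensure_csv_exists() -> None:
    """Create the trace CSV with a header row if it does not exist yet."""
    os.makedirs(os.path.dirname(TRACE_CSV), exist_ok=True)
    if not os.path.exists(TRACE_CSV):
        with open(TRACE_CSV, "w", newline="") as f:
            csv.writer(f).writerow(HEADERS)
```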

Class diagram for backend workflow and checkpoint validation utilities

```mermaid
classDiagram
  class Txt2ImgWorkflow {
    +transform_to_txt2img_workflow(data)
    +get_available_checkpoints()
    -logger
  }
  class Img2ImgWorkflow {
    +transform_to_img2img_workflow(data)
    +get_available_checkpoints()
    -logger
  }
  class BackendServer {
    +handle_txt2img()
    +handle_img2img()
    +log_inference_trace(...)
    +ensure_csv_exists()
  }
  Txt2ImgWorkflow <.. BackendServer : uses
  Img2ImgWorkflow <.. BackendServer : uses
  BackendServer o-- "1" CSVFile : logs metrics
  BackendServer o-- "1" ComfyUI : sends workflow
```

File-Level Changes

Implement inference performance tracing and metrics collection in API servers
  • Create inference_traces directory and CSV files with headers
  • Add ensure_csv_exists and log_inference_trace helper functions
  • Initialize NVML to capture GPU name and driver version
  • Measure elapsed time and images generated around ComfyUI calls
  • Append data to CSV and inject metrics into JSON responses
Files: dream_layer_backend/txt2img_server.py, dream_layer_backend/img2img_server.py (see the timing sketch below)
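A minimal sketch of the timing-and-trace pattern this block describes, assuming the send_to_comfyui helper from shared_utils.py and the log_inference_trace signature shown later in this review; gpu_name, driver_version, and ckpt_name are assumed to be in scope:

```python
import time

start_time = time.perf_counter()
comfy_response = send_to_comfyui(workflow)  # existing helper in shared_utils.py
elapsed = time.perf_counter() - start_time

# Per-image time; log_inference_trace guards against an empty result set
images_generated = len(comfy_response.get("all_images", []))
log_inference_trace(elapsed, images_generated, gpu_name, driver_version, ckpt_name)
```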
Validate and fallback checkpoint selection in text2img and img2img endpoints
  • Add inline get_allowed_checkpoints to list .ckpt/.safetensors files
  • Check ckpt_name input and fallback to first available if missing or invalid
  • Return error if no checkpoints are present
  • Insert resolved ckpt_name back into request data
Files: dream_layer_backend/txt2img_server.py, dream_layer_backend/img2img_server.py (see the fallback sketch below)
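A sketch of the validation-and-fallback flow; get_allowed_checkpoints matches the helper named above, while resolve_checkpoint is a hypothetical wrapper added here for illustration:

```python
from pathlib import Path
from typing import Optional

def get_allowed_checkpoints(checkpoints_dir: Path) -> list:
    """List .ckpt/.safetensors files in the checkpoints directory."""
    if not checkpoints_dir.exists():
        return []
    return [f.name for f in checkpoints_dir.iterdir()
            if f.suffix in {".safetensors", ".ckpt"}]

def resolve_checkpoint(data: dict, checkpoints_dir: Path) -> Optional[str]:
    """Validate the requested ckpt_name, falling back to the first available one."""
    allowed = get_allowed_checkpoints(checkpoints_dir)
    if not allowed:
        return None                 # caller returns an error response
    ckpt = data.get("ckpt_name")
    if ckpt not in allowed:
        ckpt = allowed[0]           # fall back to first available checkpoint
    data["ckpt_name"] = ckpt        # insert resolved name back into request data
    return ckpt
```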
Enable dynamic checkpoint discovery and dump workflows for debugging
  • Add get_available_checkpoints to workflow modules
  • Validate requested_model against available checkpoints with fallback
  • Inject logging and warnings on model fallback
  • Ensure workflow_dumps directory exists and write last_workflow.json
Files: dream_layer_backend/img2img_workflow.py, dream_layer_backend/txt2img_workflow.py
Surface inference metrics in React frontend with badge and CSV export
  • Introduce metrics state and effect hooks to capture backend data
  • Implement MetricsBadge components to display time per image and GPU info
  • Provide exportTxt2ImgMetricsToCSV and exportImg2ImgMetricsToCSV functions
  • Embed metrics badge and Download CSV buttons into UI action areas
Files: dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx, dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx
Add integration tests and base64 conversion utility
  • Create test_img_endpoints.py to exercise txt2img/img2img APIs
  • Implement base64conversion.py to generate/read Base64 test images
  • Verify checkpoint presence and print truncated JSON responses
Files: dream_layer_backend/test_image/test_img_endpoints.py, dream_layer_backend/test_image/base64conversion.py
Apply miscellaneous configuration updates
  • Refactor COMFY_ROOT path resolution in img2img_server
  • Add Vite config with path alias in frontend
  • Include .python-version file for environment consistency
Files: dream_layer_backend/img2img_server.py, dream_layer_frontend/vite.config.ts, .python-version


coderabbitai bot (Contributor) commented Aug 13, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Pins Python version and adds dynamic checkpoint discovery/validation, NVML GPU/driver detection, timing and CSV inference tracing, and workflow dumps; frontend displays and exports runtime metrics. Also adds test utilities and minor tooling updates.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Version Pinning: `.python-version` | New file pinning Python to 3.11.9. |
| Img2Img Server: `dream_layer_backend/img2img_server.py` | Add dynamic checkpoint lookup/validation, NVML GPU/driver capture, timing around ComfyUI calls, CSV inference trace logging, metrics in JSON responses, temp file cleanup, and adjusted COMFY_ROOT resolution. |
| Txt2Img Server: `dream_layer_backend/txt2img_server.py` | Add dynamic checkpoint lookup/validation, NVML GPU/driver capture, timing/inference CSV tracing, metrics/results enrichment in responses, and CSV helper functions/paths. |
| Img2Img Workflow: `dream_layer_backend/img2img_workflow.py` | Add get_available_checkpoints(), select/validate ckpt dynamically, inject ckpt/vae/controlnet/lora, seed handling, optional custom workflow merging, and dump last workflow JSON. |
| Txt2Img Workflow: `dream_layer_backend/txt2img_workflow.py` | Add get_available_checkpoints(), dynamic model selection with fallback, logger, and write final workflow to workflow_dumps/last_workflow.json. |
| Workflow Dump: `dream_layer_backend/workflow_dumps/last_workflow.json` | Update serialized workflow (checkpoint reference, sampler seed change, preserved step structure). |
| Backend Test Utilities: `dream_layer_backend/test_image/base64conversion.py`, `dream_layer_backend/test_image/test_img_endpoints.py` | New scripts: base64 round-trip image encoder/decoder; endpoint tester that discovers checkpoints, builds payloads for txt2img/img2img, and pretty-prints responses. |
| Frontend (Img2Img UI): `dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx` | Add metrics state, MetricsBadge UI, CSV export for img2img metrics, and integrate metrics into header/action area. |
| Frontend (Txt2Img UI): `dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx` | Add metrics state, MetricsBadge UI, CSV export for txt2img metrics, and surface metrics + export button in action area. |
| Frontend Tooling: `dream_layer_frontend/vite.config.ts.timestamp-...mjs`, `dream_layer_frontend/package.json` | Add a Vite config module (host `::`, port 8080, react-swc, alias `@`), bump the vite devDependency to ^7.1.2, and add a jest test script. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant FE as Frontend
  participant BE as Backend (txt/img)
  participant WF as Workflow Builder
  participant CU as ComfyUI
  participant CSV as Trace CSV
  participant NV as NVML

  User->>FE: Click Generate
  FE->>BE: POST /txt2img or /img2img (payload, optional ckpt_name)
  BE->>BE: Discover available checkpoints
  BE->>BE: Validate/select ckpt_name (fallback if needed)
  BE->>NV: Init NVML, read GPU & driver
  BE->>WF: Build workflow (includes ckpt_name, image, seed, options)
  BE->>CU: Send workflow
  CU-->>BE: Comfy response (images, metadata)
  BE->>BE: Measure elapsed, compute per-image time
  BE->>CSV: Append trace row (timestamp, times, gpu, ckpt)
  BE-->>FE: Return comfy_response + metrics + generated_images
  FE->>FE: Update UI, show MetricsBadge, enable CSV export
```
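The metric keys the frontend consumes (time_per_image_sec, elapsed_time_sec, gpu, driver_version, visible in the UI snippets later in this thread) imply a response shape roughly like the following sketch; the exact server code is in the diff, not here:

```python
# Hypothetical assembly; per this review, the backend sets comfy_response["metrics"],
# which the frontend reads as data.comfy_response.metrics. Key names come from the UI code.
comfy_response["metrics"] = {
    "elapsed_time_sec": elapsed,            # total wall-clock time for the request
    "time_per_image_sec": time_per_image,   # None when no images were produced
    "gpu": gpu_name,
    "driver_version": driver_version,
}
```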

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • darcy3000

Poem

In my burrow I compile and hop,
Checkpoints found — I never stop.
Timers tick and GPUs hum,
CSV crumbs — nibble, thrum, thrum.
A rabbit's cheer: metrics saved — let’s ship the crop! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d84d791 and 04520bf.

⛔ Files ignored due to path filters (1)
  • dream_layer_backend/inference_traces/inference_trace_txt2img.csv is excluded by !**/*.csv
📒 Files selected for processing (3)
  • dream_layer_backend/img2img_server.py (3 hunks)
  • dream_layer_backend/workflow_dumps/last_workflow.json (1 hunks)
  • dream_layer_frontend/package.json (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • dream_layer_backend/workflow_dumps/last_workflow.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • dream_layer_frontend/package.json
  • dream_layer_backend/img2img_server.py

sourcery-ai bot (Contributor) left a comment

Hey @jrb00013 - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • Variable 'total_time' is undefined; should use 'elapsed'. (link)

General comments:

  • Consider extracting the repeated checkpoint‐validation and CSV logging logic into a shared utility module to reduce code duplication across txt2img and img2img handlers.
  • In the txt2img handler, time_per_image uses total_time which isn’t defined—update this to use the measured elapsed variable to avoid reference errors.
  • The frontend metrics badge and CSV export bits are duplicated in both Txt2ImgPage and Img2ImgPage—consider moving them into a shared React component to keep the UI code DRY.

def log_inference_trace(total_time, images_generated, gpu_name, driver_version,ckpt_name):
    """Log inference details to CSV and console."""
    # Checking to see if images were generated, finding the time per image
    time_per_image = None if images_generated == 0 else total_time / images_generated

issue (bug_risk): Variable 'total_time' is undefined; should use 'elapsed'.

Replace 'total_time' with 'elapsed' in the time_per_image calculation to prevent a NameError.
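A sketch of the intended fix, assuming the handler captures start_time with time.perf_counter() before the ComfyUI call:

```python
elapsed = time.perf_counter() - start_time
images_generated = len(comfy_response.get("all_images", []))
# Use the measured `elapsed`, not the undefined `total_time`
time_per_image = elapsed / images_generated if images_generated > 0 else None
```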

Comment on lines +469 to +472
<MetricsBadge
  elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / (batchSize * batchCount))}
  gpu={metrics.gpu}
  driver={metrics.driver_version}

issue (bug_risk): Fallback for time per image may divide by zero if batchSize or batchCount are zero.

Add a check to ensure batchSize * batchCount is not zero before dividing; if zero, display 'N/A' instead.

Comment on lines +249 to 261
print(f"✅ Workflow transformation complete")
# Ensure dump directory exists
dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
os.makedirs(dump_dir, exist_ok=True)

# Save the workflow JSON
output_path = os.path.join(dump_dir, "last_workflow.json")
with open(output_path, "w") as f:
json.dump(workflow, f, indent=2)

print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
print(f"🚀 Workflow JSON saved to {output_path}")
return workflow

suggestion: Console print statements may clutter logs in production.

Use the logger for these messages to ensure consistent log management and prevent unnecessary output in production.

Suggested change

-    print(f"✅ Workflow transformation complete")
+    logger.info("✅ Workflow transformation complete")
     # Ensure dump directory exists
     dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
     os.makedirs(dump_dir, exist_ok=True)

     # Save the workflow JSON
     output_path = os.path.join(dump_dir, "last_workflow.json")
     with open(output_path, "w") as f:
         json.dump(workflow, f, indent=2)

-    print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-    print(f"🚀 Workflow JSON saved to {output_path}")
+    logger.info(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
+    logger.info(f"🚀 Workflow JSON saved to {output_path}")
     return workflow

Comment on lines +29 to +38
def get_base64_image() -> str:
    """Return base64 string from .txt if available, else generate from PNG."""
    if BASE64_PATH.exists():
        print(f"Reading existing Base64 from {BASE64_PATH}")
        return BASE64_PATH.read_text().strip()
    elif PNG_PATH.exists():
        print(f"Encoding {PNG_PATH} to Base64...")
        encoded = base64.b64encode(PNG_PATH.read_bytes()).decode("utf-8")
        BASE64_PATH.write_text(encoded)
        print(f"Saved Base64 to {BASE64_PATH}")

suggestion (testing): No test for handling missing image files.

Please add a test to confirm that a FileNotFoundError is raised when both files are missing.
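A hedged sketch of such a test, assuming the script can be imported as a module and that BASE64_PATH and PNG_PATH are module-level constants a test can monkeypatch:

```python
import pytest
import test_img_endpoints as endpoints  # hypothetical import path for the script

def test_get_base64_image_raises_when_files_missing(tmp_path, monkeypatch):
    # Point both paths at files that do not exist
    monkeypatch.setattr(endpoints, "BASE64_PATH", tmp_path / "missing.txt")
    monkeypatch.setattr(endpoints, "PNG_PATH", tmp_path / "missing.png")
    with pytest.raises(FileNotFoundError):
        endpoints.get_base64_image()
```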

import { defineConfig } from "file:///Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend/node_modules/vite/dist/node/index.js";
import react from "file:///Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend/node_modules/@vitejs/plugin-react-swc/index.mjs";
import path from "path";
var __vite_injected_original_dirname = "/Users/clairehudson/DreamLayer/DreamLayer/dream_layer_frontend";

issue (code-quality): Use const or let instead of var. (avoid-using-var)

Explanation: `const` is preferred as it ensures you cannot reassign references (which can lead to buggy and confusing code). `let` may be used if you need to reassign references - it's preferred to `var` because it is block- rather than function-scoped.

From the Airbnb JavaScript Style Guide

Comment on lines +6 to +19
var vite_config_default = defineConfig({
  server: {
    host: "::",
    port: 8080
  },
  plugins: [
    react()
  ],
  resolve: {
    alias: {
      "@": path.resolve(__vite_injected_original_dirname, "./src")
    }
  }
});

issue (code-quality): Use const or let instead of var. (avoid-using-var)

Explanation: `const` is preferred as it ensures you cannot reassign references (which can lead to buggy and confusing code). `let` may be used if you need to reassign references - it's preferred to `var` because it is block- rather than function-scoped.

From the Airbnb JavaScript Style Guide

logger.info(f"Found checkpoint files: {models}")
return models

def transform_to_img2img_workflow(data):

issue (code-quality): We've found these issues:


Explanation
The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

  • Reduce the function length by extracting pieces of functionality out into
    their own functions. This is the most important thing you can do - ideally a
    function should be less than 10 lines.
  • Reduce nesting, perhaps by introducing guard clauses to return early.
  • Ensure that variables are tightly scoped, so that code using related concepts
    sits together within the function rather than being scattered.

Comment on lines +56 to +57
checkpoints = get_checkpoints()
if checkpoints:

suggestion (code-quality): Use named expression to simplify assignment and conditional (use-named-expression)

Suggested change

-    checkpoints = get_checkpoints()
-    if checkpoints:
+    if checkpoints := get_checkpoints():

Comment on lines +28 to +29
models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
return models

suggestion (code-quality): Inline variable that is immediately returned (inline-immediately-returned-variable)

Suggested change

-    models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
-    return models
+    return [
+        f.name
+        for f in checkpoints_dir.glob("*")
+        if f.suffix in ['.safetensors', '.ckpt']
+    ]

logger.error(f"Failed to list checkpoints: {e}")
return []

def transform_to_txt2img_workflow(data):

issue (code-quality): We've found these issues:


Explanation

The quality score for this function is below the quality threshold of 25%.
This score is a combination of the method length, cognitive complexity and working memory.

How can you solve this?

It might be worth refactoring this function to make it shorter and more readable.

  • Reduce the function length by extracting pieces of functionality out into
    their own functions. This is the most important thing you can do - ideally a
    function should be less than 10 lines.
  • Reduce nesting, perhaps by introducing guard clauses to return early.
  • Ensure that variables are tightly scoped, so that code using related concepts
    sits together within the function rather than being scattered.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 15

🔭 Outside diff range comments (2)
dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs (1)

1-24: Remove timestamped Vite artifact exposing local paths and add a proper config

The generated file
dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs
contains hard-coded file:///Users/... imports and __vite_injected_original_dirname, which:

  • Leaks a developer username/absolute paths
  • Will break on any other machine/CI
  • Bloats the repo

Action items:

  • Delete the timestamped .mjs file
  • Commit a clean dream_layer_frontend/vite.config.ts (not a transpiled artifact)
  • Update .gitignore to block any future timestamped artifacts

Proposed .gitignore update:

 # ignore Vite timestamped build artifacts
+dream_layer_frontend/*.timestamp-*.mjs
+**/*.timestamp-*.mjs

Proposed clean Vite config at dream_layer_frontend/vite.config.ts:

import { defineConfig } from "vite";
import react from "@vitejs/plugin-react-swc";
import { fileURLToPath, URL } from "node:url";

export default defineConfig({
  server: {
    host: "::", // or host: true
    port: 8080,
  },
  plugins: [react()],
  resolve: {
    alias: {
      "@": fileURLToPath(new URL("./src", import.meta.url)),
    },
  },
});
dream_layer_backend/test_image/base64conversion.py (1)

1-43: Avoid top-level side effects; fix misleading comment; make selection deterministic.

Importing this module currently executes file I/O and writes artifacts. Wrap the logic in main(), guard it with `if __name__ == "__main__":`, sort files for determinism, and fix the misleading comment.

Apply:

 import base64
 from pathlib import Path
 
-# Folder containing images
-folder = Path(__file__).parent  # assuming this script is inside test_image folder
-
-# Supported image extensions
-img_extensions = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}
-
-# Find first image file in folder
-image_files = [f for f in folder.iterdir() if f.suffix.lower() in img_extensions]
-
-if not image_files:
-    raise FileNotFoundError(f"No image files found in {folder}")
-
-input_img_path = image_files[0]
-base64_txt_path = folder / "base64_txt_test_image.txt"
-output_img_path = folder / "test_image.png"
-
-print(f"Using input image: {input_img_path.name}")
-
-# Read image bytes
-with open(input_img_path, "rb") as f:
-    img_bytes = f.read()
-
-# Encode to base64 string
-base64_img = base64.b64encode(img_bytes).decode("utf-8")
-
-# Save base64 string to 4.txt
-with open(base64_txt_path, "w") as f:
-    f.write(base64_img)
-
-print(f"Saved base64 string to {base64_txt_path}")
-
-# Decode base64 string back to bytes
-decoded_bytes = base64.b64decode(base64_img)
-
-# Save decoded bytes as new PNG file
-with open(output_img_path, "wb") as f:
-    f.write(decoded_bytes)
-
-print(f"Saved decoded image as {output_img_path}")
+def main() -> None:
+    # Folder containing images
+    folder = Path(__file__).parent  # assuming this script is inside test_image folder
+
+    # Supported image extensions
+    img_extensions = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}
+
+    # Find first image file in folder (sorted for determinism)
+    image_files = sorted(
+        f for f in folder.iterdir() if f.is_file() and f.suffix.lower() in img_extensions
+    )
+    if not image_files:
+        raise FileNotFoundError(f"No image files found in {folder}")
+
+    input_img_path = image_files[0]
+    base64_txt_path = folder / "base64_txt_test_image.txt"
+    output_img_path = folder / "test_image.png"
+
+    print(f"Using input image: {input_img_path.name}")
+
+    # Read image bytes
+    img_bytes = input_img_path.read_bytes()
+
+    # Encode to base64 string
+    base64_img = base64.b64encode(img_bytes).decode("utf-8")
+
+    # Save base64 string to base64_txt_test_image.txt
+    base64_txt_path.write_text(base64_img, encoding="utf-8")
+    print(f"Saved base64 string to {base64_txt_path}")
+
+    # Decode base64 string back to bytes and save as new PNG file
+    decoded_bytes = base64.b64decode(base64_img)
+    output_img_path.write_bytes(decoded_bytes)
+    print(f"Saved decoded image as {output_img_path}")
+
+
+if __name__ == "__main__":
+    main()
🧹 Nitpick comments (13)
ComfyUI/user/default/comfy.settings.json (1)

2-3: Committing default user settings may unintentionally override or conflict with local user preferences.

Storing user defaults under ComfyUI/user/default can be mistaken for a vendored upstream file and may be overwritten during upgrades. Also, forcing Comfy.TutorialCompleted and a ColorPalette globally could surprise users.

Consider:

  • Moving project-level defaults to an app config layer or provisioning step that writes this file at runtime.
  • If keeping in-repo, add a README explaining intended behavior and acceptable values for Comfy.ColorPalette (e.g., "light"/"dark"/"system"), and how users can override locally.
dream_layer_backend/img2img_workflow.py (4)

21-21: Remove duplicate import statement

The pathlib module import on line 21 appears after other imports, which disrupts the conventional import ordering where standard library imports should be grouped together at the beginning.

Move this import to be grouped with other standard library imports:

import json
import os
import random
import re
import logging
+from pathlib import Path
from dream_layer import get_directories 
from extras import COMFY_INPUT_DIR
-from pathlib import Path

25-39: Consider simplifying checkpoint discovery logic

The function includes verbose logging for debugging purposes. While helpful during development, this might clutter production logs.

Consider using debug-level logging for verbose path information:

def get_available_checkpoints():
-    logger.info(f"Current __file__ path: {__file__}")
+    logger.debug(f"Current __file__ path: {__file__}")
    root_dir = Path(__file__).resolve().parent.parent
-    logger.info(f"Resolved root_dir: {root_dir}")
+    logger.debug(f"Resolved root_dir: {root_dir}")
    
    checkpoints_dir = root_dir / "ComfyUI" / "models" / "checkpoints"
-    logger.info(f"Looking for checkpoints in: {checkpoints_dir}")
+    logger.debug(f"Looking for checkpoints in: {checkpoints_dir}")
    
    if not checkpoints_dir.exists():
        logger.error(f"Checkpoints directory does not exist: {checkpoints_dir}")
        return []
    
    models = [f.name for f in checkpoints_dir.glob("*") if f.suffix in ['.safetensors', '.ckpt']]
-    logger.info(f"Found checkpoint files: {models}")
+    logger.info(f"Found {len(models)} checkpoint files")
+    logger.debug(f"Checkpoint files: {models}")
    return models

60-61: Remove commented-out legacy code

Line 60 contains commented-out code that's no longer needed since the dynamic model resolution is now in place.

-    #model_name = data.get('model_name', 'v1-6-pruned-emaonly-fp16.safetensors') # was hardcoded 
-    

108-108: Remove duplicate commented code

This line duplicates the commented-out code from line 60.

-    #model_name = data.get('model_name', 'v1-6-pruned-emaonly-fp16.safetensors')
dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (1)

272-288: Consider extracting GPU truncation logic to a utility function

The MetricsBadge component contains GPU name truncation logic that could be reused elsewhere.

Consider extracting the truncation logic to a utility function:

// In a utils file
export const truncateText = (text: string, maxLength: number = 12): string => {
  return text.length > maxLength ? text.slice(0, maxLength) + "…" : text;
};

// Then in the component
const shortGpu = truncateText(gpu, 12);
dream_layer_backend/txt2img_workflow.py (1)

89-90: Remove commented legacy code

The commented line is no longer needed with the dynamic checkpoint resolution in place.

-        #model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors') # was hardcoded
-
dream_layer_backend/img2img_server.py (1)

189-190: Remove commented alternative approach

The commented line shows an alternative API that's not being used and should be removed for clarity.

        workflow = transform_to_img2img_workflow(data)
-        # workflow = transform_to_img2img_workflow(data, ckpt_name=ckpt_name)
-
dream_layer_backend/test_image/test_img_endpoints.py (3)

1-3: Fix the docstring to follow standard Python conventions.

The docstring should provide a clear description of the module's purpose.

 """
-Test file validation logic for testing model endpoint communication
+Test script for validating model endpoints and communication with txt2img/img2img servers.
 """

29-42: Consider adding error handling for base64 encoding failures.

The function should handle potential encoding errors more gracefully, especially for large files or corrupted images.

 def get_base64_image() -> str:
     """Return base64 string from .txt if available, else generate from PNG."""
     if BASE64_PATH.exists():
         print(f"Reading existing Base64 from {BASE64_PATH}")
         return BASE64_PATH.read_text().strip()
     elif PNG_PATH.exists():
-        print(f"Encoding {PNG_PATH} to Base64...")
-        encoded = base64.b64encode(PNG_PATH.read_bytes()).decode("utf-8")
-        BASE64_PATH.write_text(encoded)
-        print(f"Saved Base64 to {BASE64_PATH}")
-        return encoded
+        try:
+            print(f"Encoding {PNG_PATH} to Base64...")
+            image_bytes = PNG_PATH.read_bytes()
+            encoded = base64.b64encode(image_bytes).decode("utf-8")
+            BASE64_PATH.write_text(encoded)
+            print(f"Saved Base64 to {BASE64_PATH}")
+            return encoded
+        except Exception as e:
+            raise RuntimeError(f"Failed to encode {PNG_PATH} to Base64: {e}")
     else:
         raise FileNotFoundError(f"Neither {PNG_PATH} nor {BASE64_PATH} found.")

53-105: Consider making the test script more configurable and reusable.

The main block could benefit from being wrapped in a function and accepting command-line arguments for flexibility.

Consider refactoring the script to:

  1. Accept command-line arguments for API hosts, test image paths, and other parameters
  2. Wrap the main logic in a main() function for better structure
  3. Return exit codes to indicate success/failure for CI/CD integration

Example structure:

import argparse

def main():
    parser = argparse.ArgumentParser(description='Test DreamLayer API endpoints')
    parser.add_argument('--txt2img-host', default='http://127.0.0.1:5001')
    parser.add_argument('--img2img-host', default='http://127.0.0.1:5004')
    args = parser.parse_args()
    
    # ... test logic ...
    return 0  # Success

if __name__ == "__main__":
    sys.exit(main())
dream_layer_backend/txt2img_server.py (2)

172-172: Remove unused variable time_per_image_str.

The variable time_per_image_str is computed but never used in the response or logging.

-            # Converting to string format
-            time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"

143-143: Remove commented-out code.

Dead code should be removed to maintain code clarity.

             workflow = transform_to_txt2img_workflow(data)
-            #workflow = transform_to_txt2img_workflow(data, ckpt_name=ckpt_name)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a51a673 and 85222a9.

⛔ Files ignored due to path filters (8)
  • dream_layer_backend/dream_layer_backend_utils/inference_traces/inference_trace_img2img.csv is excluded by !**/*.csv
  • dream_layer_backend/dream_layer_backend_utils/inference_traces/inference_trace_txt2img.csv is excluded by !**/*.csv
  • dream_layer_backend/inference_traces/inference_trace_img2img.csv is excluded by !**/*.csv
  • dream_layer_backend/inference_traces/inference_trace_text2img.csv is excluded by !**/*.csv
  • dream_layer_backend/inference_traces/inference_trace_txt2img.csv is excluded by !**/*.csv
  • dream_layer_backend/test_image/examjam.png is excluded by !**/*.png
  • dream_layer_backend/test_image/test_image.png is excluded by !**/*.png
  • dream_layer_frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (12)
  • .python-version (1 hunks)
  • ComfyUI/user/default/comfy.settings.json (1 hunks)
  • dream_layer_backend/img2img_server.py (3 hunks)
  • dream_layer_backend/img2img_workflow.py (3 hunks)
  • dream_layer_backend/test_image/base64conversion.py (1 hunks)
  • dream_layer_backend/test_image/test_img_endpoints.py (1 hunks)
  • dream_layer_backend/txt2img_server.py (3 hunks)
  • dream_layer_backend/txt2img_workflow.py (4 hunks)
  • dream_layer_backend/workflow_dumps/last_workflow.json (1 hunks)
  • dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (6 hunks)
  • dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (3 hunks)
  • dream_layer_frontend/vite.config.ts.timestamp-1755070710337-0629aa119849b.mjs (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
dream_layer_backend/txt2img_workflow.py (1)
dream_layer_backend/img2img_workflow.py (1)
  • get_available_checkpoints (25-39)
dream_layer_backend/img2img_server.py (3)
dream_layer_backend/txt2img_server.py (1)
  • get_allowed_checkpoints (117-125)
dream_layer_backend/img2img_workflow.py (1)
  • transform_to_img2img_workflow (41-261)
dream_layer_backend/shared_utils.py (1)
  • send_to_comfyui (160-222)
dream_layer_backend/img2img_workflow.py (1)
dream_layer_backend/txt2img_workflow.py (1)
  • get_available_checkpoints (24-32)
dream_layer_backend/txt2img_server.py (3)
dream_layer_backend/shared_utils.py (1)
  • send_to_comfyui (160-222)
dream_layer_backend/img2img_server.py (1)
  • get_allowed_checkpoints (164-172)
dream_layer_backend/txt2img_workflow.py (1)
  • transform_to_txt2img_workflow (34-261)
🪛 Ruff (0.12.2)
dream_layer_backend/txt2img_workflow.py

5-5: Redefinition of unused json from line 1

Remove definition: json

(F811)

dream_layer_backend/img2img_workflow.py

249-249: f-string without any placeholders

Remove extraneous f prefix

(F541)

dream_layer_backend/txt2img_server.py

169-169: Undefined name total_time

(F821)


172-172: Local variable time_per_image_str is assigned to but never used

Remove assignment to unused variable time_per_image_str

(F841)

🔇 Additional comments (16)
.python-version (1)

1-1: All Python version declarations are aligned with 3.11.9

  • .python-version is pinned to 3.11.9
  • ComfyUI/pyproject.toml has requires-python = ">=3.9" (3.11.9 is within that range)
  • No FROM python lines found in any Dockerfiles
  • No actions/setup-python or python-version: entries in GitHub workflows

No mismatches detected—no changes needed.

dream_layer_backend/img2img_workflow.py (1)

46-59: Good dynamic checkpoint validation implementation

The dynamic checkpoint resolution properly validates the requested model against available checkpoints and provides appropriate fallback behavior with logging.

dream_layer_frontend/src/features/Img2Img/Img2ImgPage.tsx (4)

43-48: LGTM! Metrics state properly typed and initialized

The metrics state is well-structured with appropriate TypeScript types for tracking performance data.


290-310: CSV export implementation looks good

The CSV export function properly handles the metrics data and creates a downloadable file with appropriate headers.


461-475: Metrics display integration is well-implemented

The conditional rendering of metrics with the CSV download button and badge is properly integrated into the UI header.


176-181: ✅ Metrics extraction validated

The backend sets comfy_response["metrics"] in both:

  • dream_layer_backend/txt2img_server.py (lines 178–183)
  • dream_layer_backend/img2img_server.py (lines 251–256)

Your TSX snippet in Img2ImgPage.tsx safely checks for and applies data.comfy_response.metrics. No further changes needed.

dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx (2)

44-49: LGTM! Consistent metrics state implementation

The metrics state structure matches the Img2Img implementation, maintaining consistency across components.


224-227: Good metrics extraction with safe navigation

The code properly extracts metrics from the backend response with appropriate conditional checks.

dream_layer_backend/txt2img_workflow.py (2)

24-33: Good checkpoint discovery implementation

The function properly handles the checkpoint directory discovery with appropriate error handling and logging.


76-88: Consistent dynamic checkpoint validation

The implementation matches the pattern in img2img_workflow.py, maintaining consistency across workflows.

dream_layer_backend/img2img_server.py (6)

36-36: LGTM! Proper path resolution for COMFY_ROOT

The change to use os.path.abspath with os.path.join ensures the path is properly resolved as an absolute path.


157-186: Good dynamic checkpoint validation with fallback

The checkpoint validation logic properly handles invalid or missing checkpoints with appropriate fallback behavior and error handling.


196-204: Good GPU detection with fallback

The GPU detection properly handles exceptions by falling back to CPU when NVML is not available.
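For reference, a minimal sketch of that pattern using the pynvml API (single-device assumption; older pynvml versions return bytes, hence the decode guard):

```python
try:
    import pynvml
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    gpu_name = pynvml.nvmlDeviceGetName(handle)
    driver_version = pynvml.nvmlSystemGetDriverVersion()
    # Normalize bytes -> str across pynvml versions
    gpu_name = gpu_name.decode() if isinstance(gpu_name, bytes) else gpu_name
    driver_version = driver_version.decode() if isinstance(driver_version, bytes) else driver_version
except Exception:
    gpu_name, driver_version = "CPU", "N/A"  # fall back when NVML is unavailable
```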


206-221: Excellent performance metrics collection

The timing instrumentation properly measures elapsed time and calculates per-image metrics with appropriate logging.


223-248: Well-structured CSV logging for inference traces

The CSV logging implementation properly creates the directory structure, handles headers, and appends metrics data in a structured format.


250-256: Good metrics integration in response

The metrics are properly added to the response JSON, making them available to the frontend for display.

logger.info("Injecting Refiner parameters...")
workflow = inject_refiner_parameters(workflow, refiner_data)

print(f"✅ Workflow transformation complete")

⚠️ Potential issue

Remove extraneous f-string prefix

The print statement uses an f-string prefix but doesn't contain any placeholders.

-    print(f"✅ Workflow transformation complete")
+    print("✅ Workflow transformation complete")
Suggested change

-    print(f"✅ Workflow transformation complete")
+    print("✅ Workflow transformation complete")
🧰 Tools
🪛 Ruff (0.12.2)

249-249: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In dream_layer_backend/img2img_workflow.py around line 249, the print statement
uses an unnecessary f-string prefix for a static message; remove the leading "f"
so the call becomes a normal string literal (print("✅ Workflow transformation
complete")) to avoid misleading use of f-strings.

Comment on lines +15 to +17
TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"

⚠️ Potential issue

Fix incorrect path construction for TEST_IMAGE_DIR.

The path construction assumes test_image is at the same level as the script, but the script is already inside test_image directory. This will result in an incorrect path like dream_layer_backend/test_image/test_image/test_image.png.

-TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
-PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
-BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"  
+TEST_IMAGE_DIR = Path(__file__).parent
+PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
+BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
Suggested change

-TEST_IMAGE_DIR = Path(__file__).parent / "test_image"
+TEST_IMAGE_DIR = Path(__file__).parent
 PNG_PATH = TEST_IMAGE_DIR / "test_image.png"
 BASE64_PATH = TEST_IMAGE_DIR / "base64_txt_test_image.txt"
🤖 Prompt for AI Agents
In dream_layer_backend/test_image/test_img_endpoints.py around lines 15 to 17,
TEST_IMAGE_DIR is incorrectly constructed as Path(__file__).parent /
"test_image" causing a duplicate path segment; change TEST_IMAGE_DIR to refer
directly to the current directory (e.g., Path(__file__).parent) so PNG_PATH and
BASE64_PATH point to the actual files in the same folder.

Comment on lines +82 to +83
    r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
    pretty_print_response("TXT2IMG Response", r1)

⚠️ Potential issue

Add timeout and error handling for API requests.

Network requests should have timeouts and proper error handling to prevent the test from hanging indefinitely.

-    r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload)
+    try:
+        r1 = requests.post(f"{TXT2IMG_API_HOST}/api/txt2img", json=txt2img_payload, timeout=30)
+    except requests.exceptions.RequestException as e:
+        print(f"Failed to connect to txt2img endpoint: {e}")
+        return
     pretty_print_response("TXT2IMG Response", r1)

     # IMG2IMG Request payload
     # ... (payload code)
     
-    r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload)
+    try:
+        r2 = requests.post(f"{IMG2IMG_API_HOST}/api/img2img", json=img2img_payload, timeout=30)
+    except requests.exceptions.RequestException as e:
+        print(f"Failed to connect to img2img endpoint: {e}")
+        return
     pretty_print_response("IMG2IMG Response", r2)

Also applies to: 102-103

🤖 Prompt for AI Agents
In dream_layer_backend/test_image/test_img_endpoints.py around lines 82-83 (and
similarly 102-103), the requests.post calls lack timeouts and error handling;
update each POST to include a reasonable timeout (e.g., timeout=10) and wrap the
call in a try/except that catches requests.exceptions.RequestException, logging
or pretty_printing the exception and failing the test (e.g., raise or
pytest.fail) on exception; additionally, after the response returns, check
response.ok (or status_code) and handle non-2xx responses by printing
response.text and failing the test so the test won't hang or silently pass on
errors.

Comment on lines 47 to 70
def log_inference_trace(total_time, images_generated, gpu_name, driver_version,ckpt_name):
    """Log inference details to CSV and console."""
    # Checking to see if images were generated, finding the time per image
    time_per_image = None if images_generated == 0 else total_time / images_generated

    # Converting to string format
    time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"

    # Console logging
    print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")

    # CSV logging
    ensure_csv_exists()
    with open(TRACE_CSV, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            time.time(),
            round(total_time, 4),
            images_generated,
            round(time_per_image, 4) if time_per_image is not None else "",
            gpu_name,
            driver_version,
            ckpt_name
        ])

🛠️ Refactor suggestion

Consider adding error handling for CSV operations.

The CSV writing operations could fail due to permissions or disk space issues, which would crash the request handling.

 def log_inference_trace(total_time, images_generated, gpu_name, driver_version,ckpt_name):
     """Log inference details to CSV and console."""
     # Checking to see if images were generated, finding the time per image
     time_per_image = None if images_generated == 0 else total_time / images_generated
     
     # Converting to string format
     time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"

     # Console logging 
     print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")

     # CSV logging
-    ensure_csv_exists()
-    with open(TRACE_CSV, "a", newline="") as f:
-        writer = csv.writer(f)
-        writer.writerow([
-            time.time(),
-            round(total_time, 4),
-            images_generated,
-            round(time_per_image, 4) if time_per_image is not None else "",
-            gpu_name,
-            driver_version,
-            ckpt_name
-        ])
+    try:
+        ensure_csv_exists()
+        with open(TRACE_CSV, "a", newline="") as f:
+            writer = csv.writer(f)
+            writer.writerow([
+                time.time(),
+                round(total_time, 4),
+                images_generated,
+                round(time_per_image, 4) if time_per_image is not None else "",
+                gpu_name,
+                driver_version,
+                ckpt_name
+            ])
+    except Exception as e:
+        print(f"Warning: Failed to log inference trace to CSV: {e}")
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_server.py around lines 47 to 70, the CSV file
operations (ensure_csv_exists and opening/writing TRACE_CSV) are unprotected and
can raise IOError/OSError (permissions, disk full), which will crash request
handling; wrap the CSV creation/open/write sequence in a try/except that catches
OSError/IOError/Exception, log the failure (use existing logger or print) with
the error details and a clear message including TRACE_CSV, and fall back to
continuing without CSV logging so the request still succeeds; ensure the
timestamp/metrics are preserved in memory or skipped when CSV fails and do not
re-raise the exception.

Comment on lines +244 to +254
        # Ensure dump directory exists
        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
        os.makedirs(dump_dir, exist_ok=True)

        # Save the workflow JSON
        output_path = os.path.join(dump_dir, "last_workflow.json")
        with open(output_path, "w") as f:
            json.dump(workflow, f, indent=2)

        print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
        print(f"🚀 Workflow JSON saved to {output_path}")

🛠️ Refactor suggestion

Add error handling for workflow dump operations

Similar to img2img_workflow.py, the file I/O operations should include error handling.

-        # Ensure dump directory exists
-        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
-        os.makedirs(dump_dir, exist_ok=True)
-
-        # Save the workflow JSON
-        output_path = os.path.join(dump_dir, "last_workflow.json")
-        with open(output_path, "w") as f:
-            json.dump(workflow, f, indent=2)
-
-        print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-        print(f"🚀 Workflow JSON saved to {output_path}")
+        # Ensure dump directory exists and save workflow
+        try:
+            dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+            os.makedirs(dump_dir, exist_ok=True)
+            
+            # Save the workflow JSON
+            output_path = os.path.join(dump_dir, "last_workflow.json")
+            with open(output_path, "w") as f:
+                json.dump(workflow, f, indent=2)
+            
+            logger.info(f"Workflow JSON saved to {output_path}")
+            print(f"🚀 Workflow JSON saved to {output_path}")
+        except Exception as e:
+            logger.error(f"Failed to save workflow JSON: {e}")
+            # Continue execution as this is non-critical
+        
+        # Only print the full JSON in debug mode to avoid cluttering logs
+        logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_workflow.py around lines 244 to 254, the
directory creation and file write for dumping the workflow lack error handling;
wrap the os.makedirs, json.dump and open/write operations in a try/except that
catches Exceptions, log or print a clear error message with exception details,
and avoid crashing the process (optionally return or continue) — ensure you
still create the dump_dir only if needed, handle permission/IO errors, and
include the original workflow JSON or path in the error log for debugging.

Comment on lines 1 to 89
{
  "prompt": {
    "3": {
      "class_type": "KSampler",
      "inputs": {
        "cfg": 7.0,
        "denoise": 1.0,
        "latent_image": ["5", 0],
        "model": ["4", 0],
        "negative": ["7", 0],
        "positive": ["6", 0],
        "sampler_name": "euler",
        "scheduler": "normal",
        "seed": 53064756,
        "steps": 20
      }
    },
    "4": {
      "class_type": "CheckpointLoaderSimple",
      "inputs": {
        "ckpt_name": "juggernautXL_v8Rundiffusion.safetensors"
      }
    },
    "5": {
      "class_type": "EmptyLatentImage",
      "inputs": {
        "batch_size": 4,
        "height": 512,
        "width": 512
      }
    },
    "6": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "clip": ["4", 1],
        "text": "Red fox"
      }
    },
    "7": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "clip": ["4", 1],
        "text": ""
      }
    },
    "8": {
      "class_type": "VAEDecode",
      "inputs": {
        "samples": ["3", 0],
        "vae": ["4", 2]
      }
    },
    "9": {
      "class_type": "SaveImage",
      "inputs": {
        "filename_prefix": "DreamLayer",
        "images": ["8", 0]
      }
    }
  },
  "extra_data": {}
}

🛠️ Refactor suggestion

Generated workflow dump should be a runtime artifact, not committed.

This file will change often (checkpoint name, seed, prompts), leading to noisy diffs and merge conflicts. It’s better treated as a runtime output.

Recommendations:

  • Add to .gitignore to prevent future commits:
+dream_layer_backend/workflow_dumps/*.json
  • If you need a documented example, commit a stable sample (e.g., last_workflow.sample.json) under docs/examples/ with a README explaining how it’s produced.
  • Ensure the backend writes this file to a writable runtime path (e.g., a mounted volume) rather than the repo directory.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In dream_layer_backend/workflow_dumps/last_workflow.json (lines 1-89): this
generated workflow dump is a runtime artifact and should not be committed;
remove it from version control (git rm --cached path/to/last_workflow.json and
commit), add workflow_dumps/ or last_workflow.json to .gitignore so it isn’t
re-added, create a stable example file (e.g.,
docs/examples/last_workflow.sample.json) with static values and a README
describing how it’s produced and used, and update the backend so it writes this
runtime dump to a writable runtime path (configurable directory or mounted
volume) instead of the repository directory.
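
To make the dump location configurable, as the prompt suggests, the backend could resolve the directory from the environment and fall back to a temp path outside the repo. A sketch under that assumption (the DREAMLAYER_DUMP_DIR variable name is hypothetical, not an existing setting):

import json
import os
import tempfile

def resolve_dump_dir() -> str:
    """Return a writable runtime directory for workflow dumps, creating it if needed."""
    # DREAMLAYER_DUMP_DIR is a hypothetical override; default to the system temp dir.
    dump_dir = os.environ.get("DREAMLAYER_DUMP_DIR") or os.path.join(
        tempfile.gettempdir(), "dream_layer_workflow_dumps"
    )
    os.makedirs(dump_dir, exist_ok=True)
    return dump_dir

def save_workflow(workflow: dict) -> str:
    """Dump the workflow JSON into the runtime directory and return its path."""
    output_path = os.path.join(resolve_dump_dir(), "last_workflow.json")
    with open(output_path, "w") as f:
        json.dump(workflow, f, indent=2)
    return output_path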

Comment on lines +274 to +290
const MetricsBadge: React.FC<{
  elapsedTimeSec: number;
  gpu: string;
  driver: string;
}> = ({ elapsedTimeSec, gpu, driver }) => {
  const timePerImage = elapsedTimeSec.toFixed(2);
  const shortGpu = gpu.length > 12 ? gpu.slice(0, 12) + "…" : gpu;

  return (
    <div
      className="ml-4 px-3 py-1 rounded bg-gray-200 text-gray-800 text-sm font-semibold select-none"
      title={`GPU: ${gpu} · Driver: ${driver}`} // optional tooltip
    >
      {timePerImage} s per image · {shortGpu}
    </div>
  );
};

🛠️ Refactor suggestion

Duplicate MetricsBadge component definition

This MetricsBadge component is identical to the one in Img2ImgPage.tsx, indicating code duplication.

Extract the MetricsBadge component to a shared components file:

// Create a new file: src/components/MetricsBadge.tsx
import React from 'react';

interface MetricsBadgeProps {
  elapsedTimeSec: number;
  gpu: string;
  driver: string;
}

export const MetricsBadge: React.FC<MetricsBadgeProps> = ({ elapsedTimeSec, gpu, driver }) => {
  const timePerImage = elapsedTimeSec.toFixed(2);
  const shortGpu = gpu.length > 12 ? gpu.slice(0, 12) + "…" : gpu;

  return (
    <div
      className="ml-4 px-3 py-1 rounded bg-gray-200 text-gray-800 text-sm font-semibold select-none"
      title={`GPU: ${gpu} · Driver: ${driver}`}
    >
      {timePerImage} s per image · {shortGpu}
    </div>
  );
};

Then import and use it in both pages:

import { MetricsBadge } from '@/components/MetricsBadge';
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines
274-290 there is a duplicate MetricsBadge component definition (identical to the
one in Img2ImgPage.tsx); extract this component to a shared file (e.g.,
src/components/MetricsBadge.tsx) with the same props and implementation, export
it, then replace the local component in Txt2ImgPage.tsx with an import: import {
MetricsBadge } from '@/components/MetricsBadge'; do the same in Img2ImgPage.tsx
so both pages reuse the shared component and remove the duplicate definitions.

Comment on lines +292 to +312
const exportTxt2ImgMetricsToCSV = () => {
  if (!metrics) return;

  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version
  ];

  const csv = [headers.join(','), values.join(',')].join('\n');
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });

  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', `inference_trace_txt2img.csv`);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};

🛠️ Refactor suggestion

Duplicate CSV export function

The exportTxt2ImgMetricsToCSV function is nearly identical to exportImg2ImgMetricsToCSV, only differing in the filename.

Create a shared utility function:

// In a utils file (e.g., src/utils/metricsExport.ts)
export const exportMetricsToCSV = (
  metrics: {
    elapsed_time_sec: number;
    time_per_image_sec: number;
    gpu: string;
    driver_version: string;
  },
  filename: string
) => {
  const headers = ['elapsed_time_sec', 'time_per_image_sec', 'gpu', 'driver_version'];
  const values = [
    metrics.elapsed_time_sec,
    metrics.time_per_image_sec,
    metrics.gpu,
    metrics.driver_version
  ];

  const csv = [headers.join(','), values.join(',')].join('\n');
  const blob = new Blob([csv], { type: 'text/csv;charset=utf-8;' });

  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.setAttribute('download', filename);
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
};

Then use it in both pages:

const handleExportMetrics = () => {
  if (!metrics) return;
  exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv');
};
🤖 Prompt for AI Agents
In dream_layer_frontend/src/features/Txt2Img/Txt2ImgPage.tsx around lines
292-312, the exportTxt2ImgMetricsToCSV duplicates logic from
exportImg2ImgMetricsToCSV; extract the CSV-export logic into a shared utility
(e.g., src/utils/metricsExport.ts) that accepts the metrics object and a
filename, move header/values/blob/link creation into that function, export it,
and then replace exportTxt2ImgMetricsToCSV with a thin wrapper that calls the
shared exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv') (ensure
null-check for metrics before calling).

Comment on lines +331 to +350
{metrics && (
  <div className="flex items-center space-x-3 p-2 rounded-lg bg-gray-100 dark:bg-gray-800 border border-gray-200 dark:border-gray-700">
    <div className="flex flex-col">
      <span className="font-semibold text-gray-800 dark:text-gray-200">
        {`${metrics.time_per_image_sec?.toFixed(2) ??
          (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count)).toFixed(2)
        } s per image`}
      </span>
      <span className="text-gray-600 dark:text-gray-400">
        {`${metrics.gpu} (Driver ${metrics.driver_version})`}
      </span>
    </div>
    <button
      onClick={exportTxt2ImgMetricsToCSV}
      className="ml-auto rounded-md border border-input bg-white dark:bg-gray-700 px-3 py-1 text-xs font-medium text-gray-800 dark:text-gray-200 transition-colors hover:bg-gray-200 dark:hover:bg-gray-600"
    >
      Download CSV
    </button>
  </div>
)}

🛠️ Refactor suggestion

Inconsistent metrics display compared to Img2Img

The metrics display implementation differs from Img2ImgPage, which uses the MetricsBadge component. This page has an inline implementation with different styling.

For consistency, use the same MetricsBadge component approach as in Img2ImgPage after extracting it to a shared component:

-      {metrics && (
-          <div className="flex items-center space-x-3 p-2 rounded-lg bg-gray-100 dark:bg-gray-800 border border-gray-200 dark:border-gray-700">
-            <div className="flex flex-col">
-              <span className="font-semibold text-gray-800 dark:text-gray-200">
-                {`${metrics.time_per_image_sec?.toFixed(2) ??
-                  (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count)).toFixed(2)
-                } s per image`}
-              </span>
-              <span className="text-gray-600 dark:text-gray-400">
-                {`${metrics.gpu} (Driver ${metrics.driver_version})`}
-              </span>
-            </div>
-            <button
-              onClick={exportTxt2ImgMetricsToCSV}
-              className="ml-auto rounded-md border border-input bg-white dark:bg-gray-700 px-3 py-1 text-xs font-medium text-gray-800 dark:text-gray-200 transition-colors hover:bg-gray-200 dark:hover:bg-gray-600"
-            >
-              Download CSV
-            </button>
-          </div>
-        )}
+      {metrics && (
+        <>
+          <button
+            onClick={() => exportMetricsToCSV(metrics, 'inference_trace_txt2img.csv')}
+            className="rounded-md border border-input bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground"
+          >
+            Download CSV
+          </button>
+          <MetricsBadge
+            elapsedTimeSec={metrics.time_per_image_sec ?? (metrics.elapsed_time_sec / (coreSettings.batch_size * coreSettings.batch_count))}
+            gpu={metrics.gpu}
+            driver={metrics.driver_version}
+          />
+        </>
+      )}

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 85222a9 and b1e5927.

⛔ Files ignored due to path filters (2)
  • dream_layer_backend/inference_traces/inference_trace_txt2img.csv is excluded by !**/*.csv
  • dream_layer_frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (2)
  • dream_layer_backend/workflow_dumps/last_workflow.json (1 hunks)
  • dream_layer_frontend/package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • dream_layer_backend/workflow_dumps/last_workflow.json

"typescript": "^5.5.3",
"typescript-eslint": "^8.0.1",
"vite": "^5.4.1"
"vite": "^7.1.2"

💡 Verification agent

🧩 Analysis chain

Vite 7 major bump: verify React SWC plugin compatibility, Node engine, and config discovery

Upgrading to a new Vite major is non-trivial. Please verify the following before merging:

  • @vitejs/plugin-react-swc compatibility with Vite 7 (the current ^3.5.0 may have older peer ranges).
  • Enforce a Node runtime that satisfies Vite 7 (Node >= 18) via an engines field to avoid dev CI breakage.
  • Ensure your Vite config is discoverable. If you added a timestamp-suffixed config (per PR summary), Vite won’t auto-pick it; rename to vite.config.ts|.mjs or pass --config in scripts.

Run this to check config discoverability and script wiring:

If the config filename is not vite.config.ts/js/mts/mjs and scripts don’t pass --config, Vite will ignore it.

Optional hardening:

  • Add engines to enforce Node >= 18 (Vite 7 requirement):
{
  "engines": {
    "node": ">=18.0.0"
  }
}
  • If you must keep a nonstandard config filename, wire scripts explicitly:
{
  "scripts": {
    "dev": "vite --config vite.config.mjs",
    "build": "vite build --config vite.config.mjs",
    "preview": "vite preview --config vite.config.mjs"
  }
}

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Locate Vite config files:"
fd -I -a -g 'vite.config.*' || true

echo
echo "2) Check if scripts pass a custom --config:"
rg -n --json $'"vite[^"]*--config' package.json || true

echo
echo "3) Extract versions for vite + @vitejs/plugin-react-swc:"
jq -r '.devDependencies | to_entries[] | select(.key=="vite" or .key=="@vitejs/plugin-react-swc")' dream_layer_frontend/package.json | jq -r '.key + "@" + .value'

Length of output: 1078


Would you please run the following to verify scripts, engines, and plugin peer dependencies?


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "1) Scripts in dream_layer_frontend/package.json:"
jq '.scripts' dream_layer_frontend/package.json || echo "Unable to parse scripts"

echo
echo "2) Engines in dream_layer_frontend/package.json:"
jq '.engines' dream_layer_frontend/package.json || echo "No engines field"

echo
echo "3) PeerDependencies for @vitejs/[email protected]:"
npm view @vitejs/[email protected] peerDependencies --json || echo "NPM lookup failed"

Length of output: 706


Fix Vite 7 compatibility: bump React SWC plugin, add Node engine, clean up config

  • @vitejs/plugin-react-swc@^3.5.0 only supports Vite ^4 | ^5. Upgrade to a version with Vite 7 peer support (e.g. ^4.0.0 or the latest):
    dream_layer_frontend/package.json
     "devDependencies": {
    -  "@vitejs/plugin-react-swc": "^3.5.0",
    +  "@vitejs/plugin-react-swc": "^4.0.0",
        "vite": "^7.1.2",
        …
     }
  • Enforce Node >= 18 for Vite 7 by adding an engines field:
    dream_layer_frontend/package.json
     {
    +  "engines": {
    +    "node": ">=18.0.0"
    +  },
        "devDependencies": { … }
     }
  • Vite will auto-load vite.config.ts, so you don’t need --config. Remove the timestamp-suffixed file (vite.config.ts.timestamp-*.mjs) to avoid confusion.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In dream_layer_frontend/package.json around line 82, update Vite 7 compatibility
by bumping @vitejs/plugin-react-swc to a Vite‑7‑compatible release (e.g. ^4.0.0
or latest), add an "engines" field with "node": ">=18" to enforce Node 18+, and
remove any timestamp‑suffixed vite config file from the repo
(vite.config.ts.timestamp-*.mjs) and any package.json scripts that pass --config
pointing to that timestamped file so they rely on vite.config.ts instead.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (9)
dream_layer_backend/txt2img_workflow.py (2)

5-5: Fix duplicate json import

The json module is imported twice (lines 1 and 5), which causes a redefinition warning.

 import json
 import random
 import os
 import logging
-import json
 from pathlib import Path

263-273: Add error handling for workflow dump operations

The file I/O operations should include error handling to prevent crashes on permission or disk space issues.

-        # Ensure dump directory exists
-        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
-        os.makedirs(dump_dir, exist_ok=True)
-
-        # Save the workflow JSON
-        output_path = os.path.join(dump_dir, "last_workflow.json")
-        with open(output_path, "w") as f:
-            json.dump(workflow, f, indent=2)
-
-        print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-        print(f"🚀 Workflow JSON saved to {output_path}")
+        # Ensure dump directory exists and save workflow
+        try:
+            dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+            os.makedirs(dump_dir, exist_ok=True)
+            
+            # Save the workflow JSON
+            output_path = os.path.join(dump_dir, "last_workflow.json")
+            with open(output_path, "w") as f:
+                json.dump(workflow, f, indent=2)
+            
+            logger.info(f"Workflow JSON saved to {output_path}")
+            print(f"🚀 Workflow JSON saved to {output_path}")
+        except Exception as e:
+            logger.error(f"Failed to save workflow JSON: {e}")
+            # Continue execution as this is non-critical
+        
+        # Only print the full JSON in debug mode to avoid cluttering logs
+        logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")
dream_layer_backend/img2img_workflow.py (2)

259-259: Remove unnecessary f-string prefix

The print statement uses an f-string prefix but doesn't contain any placeholders.

-    print(f"✅ Workflow transformation complete")
+    print("✅ Workflow transformation complete")

260-270: Add error handling for workflow dump operations

The file I/O operations lack error handling for potential permission or disk space issues.

-     # Ensure dump directory exists
-    dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
-    os.makedirs(dump_dir, exist_ok=True)
-    
-    # Save the workflow JSON
-    output_path = os.path.join(dump_dir, "last_workflow.json")
-    with open(output_path, "w") as f:
-        json.dump(workflow, f, indent=2)
-
-    print(f"📋 Generated workflow JSON: {json.dumps(workflow, indent=2)}")
-    print(f"🚀 Workflow JSON saved to {output_path}")
+    # Ensure dump directory exists and save workflow
+    try:
+        dump_dir = os.path.join(os.path.dirname(__file__), "workflow_dumps")
+        os.makedirs(dump_dir, exist_ok=True)
+        
+        # Save the workflow JSON
+        output_path = os.path.join(dump_dir, "last_workflow.json")
+        with open(output_path, "w") as f:
+            json.dump(workflow, f, indent=2)
+        
+        logger.info(f"Workflow JSON saved to {output_path}")
+        print(f"🚀 Workflow JSON saved to {output_path}")
+    except Exception as e:
+        logger.error(f"Failed to save workflow JSON: {e}")
+        # Continue execution as this is non-critical
+    
+    # Only print the full JSON in debug mode to avoid cluttering logs
+    logger.debug(f"Generated workflow JSON: {json.dumps(workflow, indent=2)}")
dream_layer_backend/txt2img_server.py (5)

50-73: Add error handling for CSV operations

The CSV writing operations could fail due to permissions or disk space issues, which would crash the request handling.

 def log_inference_trace(total_time, images_generated, gpu_name, driver_version,ckpt_name):
     """Log inference details to CSV and console."""
     # Checking to see if images were generated, finding the time per image
     time_per_image = None if images_generated == 0 else total_time / images_generated
     
     # Converting to string format
     time_per_image_str = "N/A" if time_per_image is None else f"{time_per_image:.2f}"

     # Console logging 
     print(f"⏱ {total_time:.2f}s total · {time_per_image_str}s/img · {gpu_name} · Driver {driver_version}")

     # CSV logging
-    ensure_csv_exists()
-    with open(TRACE_CSV, "a", newline="") as f:
-        writer = csv.writer(f)
-        writer.writerow([
-            time.time(),
-            round(total_time, 4),
-            images_generated,
-            round(time_per_image, 4) if time_per_image is not None else "",
-            gpu_name,
-            driver_version,
-            ckpt_name
-        ])
+    try:
+        ensure_csv_exists()
+        with open(TRACE_CSV, "a", newline="") as f:
+            writer = csv.writer(f)
+            writer.writerow([
+                time.time(),
+                round(total_time, 4),
+                images_generated,
+                round(time_per_image, 4) if time_per_image is not None else "",
+                gpu_name,
+                driver_version,
+                ckpt_name
+            ])
+    except Exception as e:
+        print(f"Warning: Failed to log inference trace to CSV: {e}")

111-111: Fix COMFY_ROOT path calculation

The COMFY_ROOT is set to the dream_layer_backend directory, but ComfyUI is a sibling directory at the repository root level.

             # Get the absolute path to the ComfyUI root directory 
-            COMFY_ROOT = os.path.dirname(os.path.abspath(__file__))
+            # Navigate up to the repository root where ComfyUI resides
+            COMFY_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

133-139: Fix inverted checkpoint validation logic

The else branch executes when the checkpoint IS valid, but it returns an error. The error should only be returned when no checkpoints are available.

             # Validate checkpoint 
             if not ckpt_name or ckpt_name not in ALLOWED_CKPTS:
                 if ALLOWED_CKPTS:
                     chosen_ckpt = ALLOWED_CKPTS[0]
                     print(f"Checkpoint '{ckpt_name}' invalid or missing, falling back to '{chosen_ckpt}'")
                     ckpt_name = chosen_ckpt
-            else:
-                return jsonify({"error": "No checkpoints available on server"}), 500
+                else:
+                    return jsonify({"error": "No checkpoints available on server"}), 500

152-159: Add NVML cleanup to prevent resource leaks

NVML should be properly shut down after use to prevent resource leaks in long-running servers.

             # Init NVML once at startup
             try:
                 pynvml.nvmlInit()
                 gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
                 gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
                 driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
+                pynvml.nvmlShutdown()
             except Exception:
                 gpu_name = "CPU"
                 driver_version = "N/A"

172-172: Fix undefined variable total_time - should be elapsed

Line 172 references total_time which doesn't exist. The elapsed time is stored in the elapsed variable.

             # Determine number of images generated and time per image
             images_generated = len(comfy_response.get("all_images", []))
-            time_per_image = None if images_generated == 0 else total_time / images_generated
+            time_per_image = None if images_generated == 0 else elapsed / images_generated
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b1e5927 and d84d791.

📒 Files selected for processing (4)
  • dream_layer_backend/img2img_server.py (4 hunks)
  • dream_layer_backend/img2img_workflow.py (3 hunks)
  • dream_layer_backend/txt2img_server.py (3 hunks)
  • dream_layer_backend/txt2img_workflow.py (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
dream_layer_backend/txt2img_workflow.py (1)
dream_layer_backend/img2img_workflow.py (1)
  • get_available_checkpoints (25-39)
dream_layer_backend/img2img_server.py (3)
dream_layer_backend/txt2img_server.py (1)
  • get_allowed_checkpoints (120-128)
dream_layer_backend/img2img_workflow.py (1)
  • transform_to_img2img_workflow (41-271)
dream_layer_backend/shared_utils.py (1)
  • send_to_comfyui (161-223)
dream_layer_backend/txt2img_server.py (3)
dream_layer_backend/shared_utils.py (1)
  • send_to_comfyui (161-223)
dream_layer_backend/img2img_server.py (1)
  • get_allowed_checkpoints (168-176)
dream_layer_backend/txt2img_workflow.py (1)
  • transform_to_txt2img_workflow (35-280)
dream_layer_backend/img2img_workflow.py (1)
dream_layer_backend/txt2img_workflow.py (1)
  • get_available_checkpoints (25-33)
🪛 Ruff (0.12.2)
dream_layer_backend/txt2img_workflow.py

5-5: Redefinition of unused json from line 1

Remove definition: json

(F811)

dream_layer_backend/img2img_server.py

19-19: Found useless expression. Either assign it to a variable or remove it.

(B018)


19-19: Undefined name d

(F821)

dream_layer_backend/txt2img_server.py

172-172: Undefined name total_time

(F821)

dream_layer_backend/img2img_workflow.py

259-259: f-string without any placeholders

Remove extraneous f prefix

(F541)

🔇 Additional comments (3)
dream_layer_backend/txt2img_workflow.py (1)

25-34: LGTM! Checkpoint discovery implementation looks good

The function properly handles filesystem errors and returns an empty list when the directory doesn't exist or errors occur, ensuring graceful degradation.
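
For context, the pattern being praised looks roughly like this (a sketch assuming the standard ComfyUI models/checkpoints layout; the PR's actual function takes no arguments and may differ in details):

import os

def get_available_checkpoints(comfy_root: str) -> list:
    """List checkpoint files, returning [] when the directory is missing or unreadable."""
    ckpt_dir = os.path.join(comfy_root, "models", "checkpoints")
    try:
        return sorted(
            name for name in os.listdir(ckpt_dir)
            if name.endswith((".safetensors", ".ckpt"))
        )
    except OSError:
        # Graceful degradation: callers treat an empty list as "no checkpoints available".
        return []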

dream_layer_backend/img2img_workflow.py (1)

25-39: LGTM! Robust checkpoint discovery with detailed logging

The function includes comprehensive logging for debugging path resolution and gracefully handles missing directories.

dream_layer_backend/img2img_server.py (1)

40-40: Path calculation for COMFY_ROOT appears correct

The path correctly navigates up one level from dream_layer_backend/ to reach the repository root where ComfyUI/ resides.

Comment on lines +200 to +208
        try:
            pynvml.nvmlInit()
            gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
            gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
            driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
        except Exception:
            gpu_name = "CPU"
            driver_version = "N/A"


🛠️ Refactor suggestion

Add NVML cleanup to prevent resource leaks

NVML should be properly shut down after use to prevent resource leaks in long-running servers.

         try:
             pynvml.nvmlInit()
             gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)
             gpu_name = pynvml.nvmlDeviceGetName(gpu_handle).decode()
             driver_version = pynvml.nvmlSystemGetDriverVersion().decode()
+            pynvml.nvmlShutdown()
         except Exception:
             gpu_name = "CPU"
             driver_version = "N/A"

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In dream_layer_backend/img2img_server.py around lines 200 to 208, NVML is
initialized but not shut down which can leak resources; wrap the nvmlInit and
subsequent NVML calls in a try/finally (or try/except/finally) and call
pynvml.nvmlShutdown() in the finally block, using a local flag to track
successful initialization so shutdown is only called if init succeeded, and
ensure exceptions during NVML calls still set gpu_name/driver_version fallback
values before shutdown.
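
A sketch of the init-flag pattern that prompt describes (illustrative only; the CPU/N/A fallbacks mirror the existing code, and the bytes/str normalization accounts for pynvml versions that return either type):

import pynvml

def get_gpu_info() -> tuple:
    """Query GPU name and driver version, shutting NVML down even on partial failure."""
    gpu_name, driver_version = "CPU", "N/A"
    nvml_ready = False
    try:
        pynvml.nvmlInit()
        nvml_ready = True
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        gpu_name = pynvml.nvmlDeviceGetName(handle)
        driver_version = pynvml.nvmlSystemGetDriverVersion()
        # Older pynvml releases return bytes; newer ones return str.
        if isinstance(gpu_name, bytes):
            gpu_name = gpu_name.decode()
        if isinstance(driver_version, bytes):
            driver_version = driver_version.decode()
    except Exception:
        gpu_name, driver_version = "CPU", "N/A"
    finally:
        if nvml_ready:
            pynvml.nvmlShutdown()
    return gpu_name, driver_version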

Comment on lines +228 to +253

        # Path for CSV log file
        BASE_DIR = os.path.dirname(os.path.abspath(__file__))
        INFERENCE_TRACES_DIR = os.path.join(BASE_DIR, "inference_traces")
        os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True)  # create folder if it doesn't exist
        TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")

        # Ensure CSV file exists and has header
        if not os.path.exists(TRACE_CSV):
            with open(TRACE_CSV, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version","ckpt_name"])

        # Append new row to CSV
        with open(TRACE_CSV, "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([
                time.time(),
                round(elapsed, 4),
                images_generated,
                round(time_per_image, 4) if time_per_image is not None else "",
                gpu_name,
                driver_version,
                ckpt_name
            ])


🛠️ Refactor suggestion

Consider adding error handling for CSV operations

The CSV writing operations could fail due to permissions or disk space issues, which would crash the request handling.

         # Log info into CSV
         
         # Path for CSV log file
         BASE_DIR = os.path.dirname(os.path.abspath(__file__))
         INFERENCE_TRACES_DIR = os.path.join(BASE_DIR, "inference_traces")
-        os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True)  # create folder if it doesn't exist
-        TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")
-
-        # Ensure CSV file exists and has header
-        if not os.path.exists(TRACE_CSV):
-            with open(TRACE_CSV, "w", newline="") as f:
-                writer = csv.writer(f)
-                writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version","ckpt_name"])
-
-        # Append new row to CSV
-        with open(TRACE_CSV, "a", newline="") as f:
-            writer = csv.writer(f)
-            writer.writerow([
-                time.time(),
-                round(elapsed, 4),
-                images_generated,
-                round(time_per_image, 4) if time_per_image is not None else "",
-                gpu_name,
-                driver_version,
-                ckpt_name
-            ])
+        try:
+            os.makedirs(INFERENCE_TRACES_DIR, exist_ok=True)  # create folder if it doesn't exist
+            TRACE_CSV = os.path.join(INFERENCE_TRACES_DIR, "inference_trace_img2img.csv")
+    
+            # Ensure CSV file exists and has header
+            if not os.path.exists(TRACE_CSV):
+                with open(TRACE_CSV, "w", newline="") as f:
+                    writer = csv.writer(f)
+                    writer.writerow(["timestamp", "total_time_s", "images_generated", "time_per_image_s", "gpu_name", "driver_version","ckpt_name"])
+    
+            # Append new row to CSV
+            with open(TRACE_CSV, "a", newline="") as f:
+                writer = csv.writer(f)
+                writer.writerow([
+                    time.time(),
+                    round(elapsed, 4),
+                    images_generated,
+                    round(time_per_image, 4) if time_per_image is not None else "",
+                    gpu_name,
+                    driver_version,
+                    ckpt_name
+                ])
+        except Exception as e:
+            logger.warning(f"Failed to log inference trace to CSV: {e}")
+            # Continue execution as this is non-critical

Comment on lines +86 to +105

        # Dynamically determine the model name that's being used and validate
        requested_model = data.get("model_name")
        available_models = get_available_checkpoints()
        if not available_models:
            raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors

        # Use requested model if valid, else fallback to detected
        if requested_model and requested_model in available_models:
            model_name = requested_model
        else:
            # fallback to first available checkpoint and log the fallback
            model_name = available_models[0]
            logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")

        #model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors') # was hardcoded

        # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
        closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']


⚠️ Potential issue

Remove duplicate closed-source model definitions

The closed-source models list is defined twice with overlapping entries (lines 83-84 and 104). Consolidate into a single definition.

-        # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, Stability AI, etc.)
-        closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro',
-                                'flux-dev', 'ideogram-v3', 'stability-sdxl', 'stability-sd-turbo']
-
         
         # Dynamically determine the model name that's being used and validate
         requested_model = data.get("model_name")
         available_models = get_available_checkpoints()
         if not available_models:
             raise FileNotFoundError("No checkpoint models found in ComfyUI models/checkpoints directory")  # could be changed to juggernautXL_v8Rundiffusion.safetensors
 
         # Use requested model if valid, else fallback to detected
         if requested_model and requested_model in available_models:
                 model_name = requested_model
         else:
             # fallback to first available checkpoint and log the fallback
             model_name = available_models[0]
             logger.warning(f"Requested model '{requested_model}' not found. Falling back to '{model_name}'.")
         
-        #model_name = data.get('model_name', 'juggernautXL_v8Rundiffusion.safetensors') # was hardcoded
-
-        # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, etc.)
-        closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3']
+        # Check if it's a closed-source model (DALL-E, FLUX, Ideogram, Stability AI, etc.)
+        closed_source_models = ['dall-e-3', 'dall-e-2', 'flux-pro', 'flux-dev', 'ideogram-v3', 'stability-sdxl', 'stability-sd-turbo']
🤖 Prompt for AI Agents
In dream_layer_backend/txt2img_workflow.py around lines 86 to 105, there are
duplicate definitions of the closed_source_models list; consolidate them into a
single definition (remove the second occurrence), place that single list in a
clear location before any checks that reference it (e.g., right after model
discovery/validation), and ensure all subsequent code references the unified
variable name so there is no shadowing or contradictory entries.

@darcy3000
Collaborator

Hey! Thanks for raising the PR. Can you please upload the evidence requested? It will make it easier for us to review.
