Conversation

@Swiftyos Swiftyos commented Jan 7, 2026

Frontend changes extracted from the hackathon/copilot branch to support copilot feature development.

Changes 🏗️

  • New Chat system with contextual components (Chat, ChatDrawer, ChatContainer, ChatMessage, etc.)
  • Form renderer system with RJSF v6 integration and new input renderers
  • Enhanced credentials management with improved OAuth flow and credential selection
  • New output renderers for various content types (Code, Image, JSON, Markdown, Text, Video)
  • Scrollable tabs component for better UI organization
  • Marketplace update notifications and publishing workflow improvements
  • Draft recovery feature with IndexedDB persistence
  • Safe mode toggle functionality
  • Various UI/UX improvements across the platform

Checklist 📋

For code changes:

  • I have clearly listed my changes in the PR description
  • I have made a test plan
  • I have tested my changes according to the test plan:
    • Test new Chat components functionality
    • Verify form renderer with various input types
    • Test credential management flows
    • Verify output renderers display correctly
    • Test draft recovery feature

For configuration changes:

  • .env.default is updated or already compatible with my changes
  • docker-compose.yml is updated or already compatible with my changes
  • I have included a list of my configuration changes in the PR description (under Changes)

@Swiftyos Swiftyos requested review from a team as code owners January 7, 2026 08:30
@Swiftyos Swiftyos requested review from kcze and majdyz and removed request for a team January 7, 2026 08:30
@github-project-automation github-project-automation bot moved this to 🆕 Needs initial review in AutoGPT development kanban Jan 7, 2026

github-actions bot commented Jan 7, 2026

This PR targets the master branch but does not come from dev or a hotfix/* branch.

Automatically setting the base branch to dev.


coderabbitai bot commented Jan 7, 2026

Warning

Rate limit exceeded

@0ubbe has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 14 minutes and 19 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 22efc77 and 7c3c91e.

📒 Files selected for processing (15)
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/Chat.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/ChatContainer.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/helpers.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatMessage/ChatMessage.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageList/MessageList.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/QuickActionsWelcome/QuickActionsWelcome.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChat.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/usePageContext.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentInputs/RunAgentInputs.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/components/contextual/RunAgentInputs/RunAgentInputs.tsx

Walkthrough

Centralizes many UI imports, adds a pluggable OutputRenderers system with multiple renderers and copy/download utilities, introduces RunAgentInputs and related upload hook, implements a new modular chat system with streaming (including a POST SSE proxy), and adds CSS loader/shimmer styles.
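
For reviewers unfamiliar with the pattern, here is a minimal sketch of what a pluggable output-renderer registry generally looks like. The type and method names below are illustrative assumptions, not the actual API introduced in this PR.

```typescript
import type { ReactNode } from "react";

// Hypothetical shapes -- the real OutputRenderer/globalRegistry in this PR may differ.
export interface OutputRenderer {
  name: string;
  canRender(value: unknown): boolean;
  render(value: unknown): ReactNode;
}

export class OutputRendererRegistry {
  private renderers: OutputRenderer[] = [];

  register(renderer: OutputRenderer): void {
    this.renderers.push(renderer);
  }

  // Returns the first registered renderer that claims the value, if any.
  getRenderer(value: unknown): OutputRenderer | undefined {
    return this.renderers.find((r) => r.canRender(value));
  }
}

// A single shared instance lets feature code register renderers at startup
// and lets UI components look one up by value at render time.
export const globalRegistry = new OutputRendererRegistry();
```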

Changes

  • Output Renderers (centralized)
    Files: frontend/src/components/contextual/OutputRenderers/*, .../renderers/*
    New contextual renderer registry, types, utilities, and six renderer implementations (Text, Code, JSON, Markdown, Image, Video). Adds OutputItem and OutputActions components plus copy/download helpers.
  • Import path consolidation — renderers/credentials/inputs
    Files: .../build/.../AgentOutputs.tsx, .../FlowEditor/.../ContentRenderer.tsx, .../NodeDataViewer.tsx, .../SelectedRunView/RunOutputs.tsx, .../agent-run-output-view.tsx, many .../library/... and onboarding files
    Replaced deep, platform-specific import paths with centralized @/components/contextual/... imports (OutputRenderers, CredentialsInput, RunAgentInputs) across many files; no logic changes.
  • RunAgentInputs (inputs UI & uploads)
    Files: frontend/src/components/contextual/RunAgentInputs/RunAgentInputs.tsx, useRunAgentInputs.ts
    New schema-driven input renderer supporting many input types and a hook for file uploads with progress.
  • Chat: new modular components & hooks
    Files: frontend/src/app/(platform)/chat/components/Chat/*, useChat.ts, usePageContext.ts, useChatStream.ts, useChatContainer.ts
    New Chat component, ChatDrawer, container, input components, message components, page-context capture, and a streaming hook that normalizes stream chunks and exposes send/stop APIs.
  • Chat: message & UI components
    Files: .../Chat/components/* (ChatMessage, ChatContainer, MessageList, MessageBubble, ToolResponseMessage, ToolCallMessage, StreamingMessage, ThinkingMessage, QuickActionsWelcome, SessionsDrawer, etc.)
    Large set of new/rewritten chat UI components handling credentials, inputs_needed messages, streaming, tool responses, and session drawer. Some old chat files removed and replaced by new modular versions.
  • SSE proxy route — POST support
    Files: frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts
    Added authenticated POST handler forwarding message/context to backend stream endpoint and returning backend SSE; includes token handling and error paths.
  • Clipboard / Download utilities (see the sketch after this list)
    Files: frontend/src/components/contextual/OutputRenderers/utils/{copy,download}.ts
    New copyToClipboard, fetchAndCopyImage, supported-type checks, and downloadOutputs for batch downloads with concatenation logic.
  • CredentialsInput client directive & imports
    Files: frontend/src/components/contextual/CredentialsInput/CredentialsInput.tsx, many files updated
    Added "use client" to CredentialsInput and updated many imports to use centralized CredentialsInput path.
  • Styling & layout tweaks
    Files: frontend/src/app/globals.css, frontend/src/app/layout.tsx, .../NewAgentLibraryView.tsx, components/atoms/Input/Input.tsx
    New shimmer and loader keyframes/classes; body gets min-h-screen; minor layout and input style tweaks.
  • Utilities relocation
    Files: frontend/src/lib/utils.ts, .../chat/helpers.ts
    Moved isValidUUID into shared lib/utils and removed duplicate from chat helpers.
  • Removed legacy chat files
    Files: frontend/src/app/(platform)/chat/* (old useChatStream, old container/components)
    Deleted older chat streaming/container/message implementations replaced by the new modularized system.
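
As a rough illustration of the clipboard helpers listed above, here is a sketch built only on standard browser APIs; the actual copy.ts utilities in this PR may handle more content types and edge cases.

```typescript
// Sketch only -- the PR's real copyToClipboard/fetchAndCopyImage may differ.
export async function copyTextToClipboard(text: string): Promise<boolean> {
  try {
    await navigator.clipboard.writeText(text);
    return true;
  } catch {
    return false; // Clipboard access denied or unavailable (e.g. insecure context).
  }
}

export async function fetchAndCopyImage(url: string): Promise<boolean> {
  try {
    const response = await fetch(url);
    const blob = await response.blob();
    // ClipboardItem requires a secure context and usually a user gesture.
    await navigator.clipboard.write([new ClipboardItem({ [blob.type]: blob })]);
    return true;
  } catch {
    return false;
  }
}
```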

Sequence Diagram(s)

mermaid
sequenceDiagram
participant Client as Client (UI)
participant Proxy as Frontend API (POST /stream)
participant Backend as Backend Stream Endpoint
Client->>Proxy: POST /api/chat/sessions/{id}/stream (message + context, auth)
Proxy->>Backend: Forward request with auth headers, body
Backend-->>Proxy: SSE stream (text/event-stream)
Proxy-->>Client: Relay SSE to browser client
Client->>Client: useChatStream normalizes chunks → UI components render
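
To make the diagram concrete, here is a hedged sketch of an SSE pass-through POST handler in the Next.js App Router style. The backend URL, auth helper signature, and response headers below are assumptions; the PR's actual route.ts may differ.

```typescript
// Sketch of an authenticated SSE proxy route; not the PR's actual implementation.
declare function getServerAuthToken(): Promise<string | null>; // assumed auth helper

export async function POST(
  request: Request,
  { params }: { params: { sessionId: string } },
) {
  const token = await getServerAuthToken();
  const body = await request.json();

  const upstream = await fetch(
    `${process.env.BACKEND_URL}/chat/sessions/${params.sessionId}/stream`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        ...(token ? { Authorization: `Bearer ${token}` } : {}),
      },
      body: JSON.stringify(body),
    },
  );

  if (!upstream.ok || !upstream.body) {
    return new Response(await upstream.text(), { status: upstream.status });
  }

  // Relay the backend's SSE body unchanged so the browser reader sees the
  // same text/event-stream the backend produced.
  return new Response(upstream.body, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```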

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • kcze
  • Abhi1992002

Poem

🐰 I hopped through code and stitched the streams,
Renderers, inputs, and clipboard dreams,
I planted loaders with a shimmer bright,
Wired chat to stream by day and night,
Now outputs dance and messages sing — hooray, what a spring! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning
    Explanation: Docstring coverage is 6.45% which is insufficient. The required threshold is 80.00%.
    Resolution: Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check — ✅ Passed
    The title accurately summarizes the main change: extracting frontend changes from the hackathon/copilot branch for copilot feature development.
  • Description check — ✅ Passed
    The description is comprehensive and directly related to the changeset, detailing new Chat system, output renderers, credentials management, and various other frontend improvements that align with the actual code changes.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions github-actions bot changed the base branch from master to dev January 7, 2026 08:30
@github-actions github-actions bot added platform/frontend AutoGPT Platform - Front end platform/backend AutoGPT Platform - Back end platform/blocks labels Jan 7, 2026
@qodo-code-review

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 5 🔵🔵🔵🔵🔵
🧪 PR contains tests
🔒 Security concerns

Access control / unintended data exposure:
GoogleDocsShareBlock and GoogleDocsSetPublicAccessBlock can grant anyone permissions (link/public access) and share documents to arbitrary emails. This can unintentionally expose sensitive document contents if invoked without adequate authorization/guardrails. Validate that only authorized users/agents can invoke these blocks, that the UI/UX or policy layer requires explicit intent for public sharing, and that the OAuth scopes used are constrained to the minimum necessary.

⚡ Recommended focus areas for review

Permissions

The sharing blocks can create anyone permissions (public/link sharing) and share with arbitrary emails. This should be validated against intended product policy (e.g., disallowing public sharing by default, requiring explicit user confirmation, and ensuring the OAuth scopes used actually allow permission changes). Also verify that link-sharing and public access changes behave correctly for different Drive permission configurations and that errors from the Drive API are surfaced clearly.

class GoogleDocsShareBlock(Block):
    """Share a Google Doc with specific users."""

    class Input(BlockSchemaInput):
        document: GoogleDriveFile = GoogleDriveFileField(
            title="Document",
            description="Select a Google Doc to share",
            allowed_views=["DOCUMENTS"],
        )
        email: str = SchemaField(
            default="",
            description="Email address to share with. Leave empty for link sharing.",
        )
        role: ShareRole = SchemaField(
            default=ShareRole.READER,
            description="Permission role for the user",
        )
        send_notification: bool = SchemaField(
            default=True,
            description="Send notification email to the user",
        )
        message: str = SchemaField(
            default="",
            description="Optional message to include in notification email",
        )

    class Output(BlockSchemaOutput):
        result: dict = SchemaField(description="Result of the share operation")
        share_link: str = SchemaField(description="Link to the document")
        document: GoogleDriveFile = SchemaField(description="The document for chaining")
        error: str = SchemaField(description="Error message if share failed")

    def __init__(self):
        super().__init__(
            id="4e7ec771-4cc8-4eb7-ae3d-46377ecdb5d2",
            description="Share a Google Doc with specific users",
            categories={BlockCategory.DATA},
            input_schema=GoogleDocsShareBlock.Input,
            output_schema=GoogleDocsShareBlock.Output,
            disabled=GOOGLE_DOCS_DISABLED,
            test_input={
                "document": {
                    "id": "1abc123def456",
                    "name": "Test Document",
                    "mimeType": "application/vnd.google-apps.document",
                },
                "email": "test@example.com",
                "role": "reader",
            },
            test_credentials=TEST_CREDENTIALS,
            test_output=[
                ("result", {"success": True}),
                ("share_link", "https://docs.google.com/document/d/1abc123def456/edit"),
                (
                    "document",
                    GoogleDriveFile(
                        id="1abc123def456",
                        name="Test Document",
                        mimeType="application/vnd.google-apps.document",
                        url="https://docs.google.com/document/d/1abc123def456/edit",
                        iconUrl="https://www.gstatic.com/images/branding/product/1x/docs_48dp.png",
                        isFolder=False,
                        _credentials_id=None,
                    ),
                ),
            ],
            test_mock={
                "_share_document": lambda *args, **kwargs: {
                    "success": True,
                    "share_link": "https://docs.google.com/document/d/1abc123def456/edit",
                },
            },
        )

    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        if not input_data.document:
            yield "error", "No document selected"
            return

        validation_error = _validate_document_file(input_data.document)
        if validation_error:
            yield "error", validation_error
            return

        try:
            service = _build_drive_service(credentials)
            result = await asyncio.to_thread(
                self._share_document,
                service,
                input_data.document.id,
                input_data.email,
                input_data.role,
                input_data.send_notification,
                input_data.message,
            )
            yield "result", {"success": True}
            yield "share_link", result["share_link"]
            yield "document", _make_document_output(input_data.document)
        except Exception as e:
            yield "error", f"Failed to share document: {str(e)}"

    def _share_document(
        self,
        service,
        document_id: str,
        email: str,
        role: ShareRole,
        send_notification: bool,
        message: str,
    ) -> dict:
        share_link = f"https://docs.google.com/document/d/{document_id}/edit"

        if email:
            # Share with specific user
            permission = {"type": "user", "role": role.value, "emailAddress": email}

            kwargs: dict[str, Any] = {
                "fileId": document_id,
                "body": permission,
                "sendNotificationEmail": send_notification,
            }
            if message:
                kwargs["emailMessage"] = message

            service.permissions().create(**kwargs).execute()
        else:
            # Create "anyone with the link" permission for link sharing
            permission = {"type": "anyone", "role": role.value}
            service.permissions().create(
                fileId=document_id,
                body=permission,
            ).execute()

        return {"success": True, "share_link": share_link}


class GoogleDocsSetPublicAccessBlock(Block):
    """Make a Google Doc publicly accessible or private."""

    class Input(BlockSchemaInput):
        document: GoogleDriveFile = GoogleDriveFileField(
            title="Document",
            description="Select a Google Doc",
            allowed_views=["DOCUMENTS"],
        )
        public: bool = SchemaField(
            default=True,
            description="True to make public, False to make private",
        )
        role: PublicAccessRole = SchemaField(
            default=PublicAccessRole.READER,
            description="Permission role for public access",
        )

    class Output(BlockSchemaOutput):
        result: dict = SchemaField(description="Result of the operation")
        share_link: str = SchemaField(description="Link to the document")
        document: GoogleDriveFile = SchemaField(description="The document for chaining")
        error: str = SchemaField(description="Error message if operation failed")

    def __init__(self):
        super().__init__(
            id="d104f6e1-80af-4fe9-b5a1-3cab20081b6c",
            description="Make a Google Doc public or private",
            categories={BlockCategory.DATA},
            input_schema=GoogleDocsSetPublicAccessBlock.Input,
            output_schema=GoogleDocsSetPublicAccessBlock.Output,
            disabled=GOOGLE_DOCS_DISABLED,
            test_input={
                "document": {
                    "id": "1abc123def456",
                    "name": "Test Document",
                    "mimeType": "application/vnd.google-apps.document",
                },
                "public": True,
            },
            test_credentials=TEST_CREDENTIALS,
            test_output=[
                ("result", {"success": True, "is_public": True}),
                (
                    "share_link",
                    "https://docs.google.com/document/d/1abc123def456/edit?usp=sharing",
                ),
                (
                    "document",
                    GoogleDriveFile(
                        id="1abc123def456",
                        name="Test Document",
                        mimeType="application/vnd.google-apps.document",
                        url="https://docs.google.com/document/d/1abc123def456/edit",
                        iconUrl="https://www.gstatic.com/images/branding/product/1x/docs_48dp.png",
                        isFolder=False,
                        _credentials_id=None,
                    ),
                ),
            ],
            test_mock={
                "_set_public_access": lambda *args, **kwargs: {
                    "success": True,
                    "is_public": True,
                    "share_link": "https://docs.google.com/document/d/1abc123def456/edit?usp=sharing",
                },
            },
        )

    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        if not input_data.document:
            yield "error", "No document selected"
            return

        validation_error = _validate_document_file(input_data.document)
        if validation_error:
            yield "error", validation_error
            return

        try:
            service = _build_drive_service(credentials)
            result = await asyncio.to_thread(
                self._set_public_access,
                service,
                input_data.document.id,
                input_data.public,
                input_data.role,
            )
            yield "result", {"success": True, "is_public": result["is_public"]}
            yield "share_link", result["share_link"]
            yield "document", _make_document_output(input_data.document)
        except Exception as e:
            yield "error", f"Failed to set public access: {str(e)}"

    def _set_public_access(
        self, service, document_id: str, public: bool, role: PublicAccessRole
    ) -> dict:
        share_link = f"https://docs.google.com/document/d/{document_id}/edit"

        if public:
            permission = {"type": "anyone", "role": role.value}
            service.permissions().create(fileId=document_id, body=permission).execute()
            share_link += "?usp=sharing"
        else:
            permissions = service.permissions().list(fileId=document_id).execute()
            for perm in permissions.get("permissions", []):
                if perm.get("type") == "anyone":
                    service.permissions().delete(
                        fileId=document_id, permissionId=perm["id"]
                    ).execute()

        return {"success": True, "is_public": public, "share_link": share_link}

Performance

The export block reads the entire exported file into memory and base64-encodes binary formats, which can be large and cause memory/time issues. Validate expected document sizes, consider streaming/chunking where possible, and ensure the output contract (string/base64) is acceptable for downstream consumers and transport limits.

class GoogleDocsExportBlock(Block):
    """Export a Google Doc to various formats."""

    class Input(BlockSchemaInput):
        document: GoogleDriveFile = GoogleDriveFileField(
            title="Document",
            description="Select a Google Doc to export",
            allowed_views=["DOCUMENTS"],
        )
        format: ExportFormat = SchemaField(
            default=ExportFormat.PDF,
            description="Export format",
        )

    class Output(BlockSchemaOutput):
        content: str = SchemaField(
            description="Exported content (base64 encoded for binary formats)"
        )
        mime_type: str = SchemaField(description="MIME type of exported content")
        document: GoogleDriveFile = SchemaField(description="The document for chaining")
        error: str = SchemaField(description="Error message if export failed")

    def __init__(self):
        super().__init__(
            id="e32d5642-7b51-458c-bd83-75ff96fec299",
            description="Export a Google Doc to PDF, Word, text, or other formats",
            categories={BlockCategory.DATA},
            input_schema=GoogleDocsExportBlock.Input,
            output_schema=GoogleDocsExportBlock.Output,
            disabled=GOOGLE_DOCS_DISABLED,
            test_input={
                "document": {
                    "id": "1abc123def456",
                    "name": "Test Document",
                    "mimeType": "application/vnd.google-apps.document",
                },
                "format": ExportFormat.TXT,
            },
            test_credentials=TEST_CREDENTIALS,
            test_output=[
                ("content", "This is the document content as plain text."),
                ("mime_type", "text/plain"),
                (
                    "document",
                    GoogleDriveFile(
                        id="1abc123def456",
                        name="Test Document",
                        mimeType="application/vnd.google-apps.document",
                        url="https://docs.google.com/document/d/1abc123def456/edit",
                        iconUrl="https://www.gstatic.com/images/branding/product/1x/docs_48dp.png",
                        isFolder=False,
                        _credentials_id=None,
                    ),
                ),
            ],
            test_mock={
                "_export_document": lambda *args, **kwargs: {
                    "content": "This is the document content as plain text.",
                    "mime_type": "text/plain",
                },
            },
        )

    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        if not input_data.document:
            yield "error", "No document selected"
            return

        validation_error = _validate_document_file(input_data.document)
        if validation_error:
            yield "error", validation_error
            return

        try:
            drive_service = _build_drive_service(credentials)
            result = await asyncio.to_thread(
                self._export_document,
                drive_service,
                input_data.document.id,
                input_data.format.value,
            )
            yield "content", result["content"]
            yield "mime_type", result["mime_type"]
            yield "document", _make_document_output(input_data.document)
        except Exception as e:
            yield "error", f"Failed to export document: {str(e)}"

    def _export_document(self, service, document_id: str, mime_type: str) -> dict:
        import base64

        response = (
            service.files().export(fileId=document_id, mimeType=mime_type).execute()
        )

        # For text formats, return as string; for binary, base64 encode
        if mime_type in ["text/plain", "text/html"]:
            content = (
                response.decode("utf-8") if isinstance(response, bytes) else response
            )
        else:
            content = base64.b64encode(response).decode("utf-8")

        return {"content": content, "mime_type": mime_type}

Correctness

The table insertion/population logic relies on heuristics for identifying the newly inserted table and uses multiple API calls (including fetching the entire doc). Validate correctness when multiple tables exist, when inserting near the end vs middle, and when content rows have ragged lengths. Also confirm index calculations remain correct when inserting markdown per-cell (many batch updates) and that rate limits are handled gracefully.

class GoogleDocsInsertTableBlock(Block):
    """Insert a table into a Google Doc, optionally with content."""

    class Input(BlockSchemaInput):
        document: GoogleDriveFile = GoogleDriveFileField(
            title="Document",
            description="Select a Google Doc",
            allowed_views=["DOCUMENTS"],
        )
        rows: int = SchemaField(
            default=3,
            description="Number of rows (ignored if content provided)",
        )
        columns: int = SchemaField(
            default=3,
            description="Number of columns (ignored if content provided)",
        )
        content: list[list[str]] = SchemaField(
            default=[],
            description="Optional 2D array of cell content, e.g. [['Header1', 'Header2'], ['Row1Col1', 'Row1Col2']]. If provided, rows/columns are derived from this.",
        )
        index: int = SchemaField(
            default=0,
            description="Position to insert table (0 = end of document)",
        )
        format_as_markdown: bool = SchemaField(
            default=False,
            description="Format cell content as Markdown (headers, bold, links, etc.)",
        )

    class Output(BlockSchemaOutput):
        result: dict = SchemaField(description="Result of table insertion")
        document: GoogleDriveFile = SchemaField(description="The document for chaining")
        error: str = SchemaField(description="Error message if operation failed")

    def __init__(self):
        super().__init__(
            id="e104b3ab-dfef-45f9-9702-14e950988f53",
            description="Insert a table into a Google Doc, optionally with content and Markdown formatting",
            categories={BlockCategory.DATA},
            input_schema=GoogleDocsInsertTableBlock.Input,
            output_schema=GoogleDocsInsertTableBlock.Output,
            disabled=GOOGLE_DOCS_DISABLED,
            test_input={
                "document": {
                    "id": "1abc123def456",
                    "name": "Test Document",
                    "mimeType": "application/vnd.google-apps.document",
                },
                "content": [["Header1", "Header2"], ["Row1Col1", "Row1Col2"]],
            },
            test_credentials=TEST_CREDENTIALS,
            test_output=[
                (
                    "result",
                    {
                        "success": True,
                        "rows": 2,
                        "columns": 2,
                        "cells_populated": 4,
                        "cells_found": 4,
                    },
                ),
                (
                    "document",
                    GoogleDriveFile(
                        id="1abc123def456",
                        name="Test Document",
                        mimeType="application/vnd.google-apps.document",
                        url="https://docs.google.com/document/d/1abc123def456/edit",
                        iconUrl="https://www.gstatic.com/images/branding/product/1x/docs_48dp.png",
                        isFolder=False,
                        _credentials_id=None,
                    ),
                ),
            ],
            test_mock={
                "_insert_table": lambda *args, **kwargs: {
                    "success": True,
                    "rows": 2,
                    "columns": 2,
                    "cells_populated": 4,
                    "cells_found": 4,
                },
            },
        )

    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        if not input_data.document:
            yield "error", "No document selected"
            return

        validation_error = _validate_document_file(input_data.document)
        if validation_error:
            yield "error", validation_error
            return

        # Determine rows/columns from content if provided
        content = input_data.content

        # Check if content is valid:
        # 1. Has at least one row with at least one cell (even if empty string)
        # 2. Has at least one non-empty cell value
        has_valid_structure = bool(content and any(len(row) > 0 for row in content))
        has_content = has_valid_structure and any(
            cell for row in content for cell in row
        )

        if has_content:
            # Use content dimensions - filter out empty rows for row count,
            # use max column count across all rows
            rows = len(content)
            columns = max(len(row) for row in content)
        else:
            # No valid content - use explicit rows/columns, clear content
            rows = input_data.rows
            columns = input_data.columns
            content = []  # Clear so we skip population step

        try:
            service = _build_docs_service(credentials)
            result = await asyncio.to_thread(
                self._insert_table,
                service,
                input_data.document.id,
                rows,
                columns,
                input_data.index,
                content,
                input_data.format_as_markdown,
            )
            yield "result", result
            yield "document", _make_document_output(input_data.document)
        except Exception as e:
            yield "error", f"Failed to insert table: {str(e)}"

    def _insert_table(
        self,
        service,
        document_id: str,
        rows: int,
        columns: int,
        index: int,
        content: list[list[str]],
        format_as_markdown: bool,
    ) -> dict:
        # If index is 0, insert at end of document
        if index == 0:
            index = _get_document_end_index(service, document_id)

        # Insert the empty table structure
        requests = [
            {
                "insertTable": {
                    "rows": rows,
                    "columns": columns,
                    "location": {"index": index},
                }
            }
        ]

        service.documents().batchUpdate(
            documentId=document_id, body={"requests": requests}
        ).execute()

        # If no content provided, we're done
        if not content:
            return {"success": True, "rows": rows, "columns": columns}

        # Fetch the document to find cell indexes
        doc = service.documents().get(documentId=document_id).execute()
        body_content = doc.get("body", {}).get("content", [])

        # Find all tables and pick the one we just inserted
        # (the one with highest startIndex that's >= our insert point, or the last one if inserted at end)
        tables_found = []
        for element in body_content:
            if "table" in element:
                tables_found.append(element)

        if not tables_found:
            return {
                "success": True,
                "rows": rows,
                "columns": columns,
                "warning": "Table created but could not find it to populate",
            }

        # If we inserted at end (index was high), take the last table
        # Otherwise, take the first table at or after our insert index
        table_element = None
        # Heuristic: rows * columns * 2 estimates the minimum index space a table
        # occupies (each cell has at least a start index and structural overhead).
        # This helps determine if our insert point was near the document end.
        estimated_table_size = rows * columns * 2
        if (
            index
            >= _get_document_end_index(service, document_id) - estimated_table_size
        ):
            # Likely inserted at end - use last table
            table_element = tables_found[-1]
        else:
            for tbl in tables_found:
                if tbl.get("startIndex", 0) >= index:
                    table_element = tbl
                    break
            if not table_element:
                table_element = tables_found[-1]

        # Extract cell start indexes from the table structure
        # Structure: table -> tableRows -> tableCells -> content[0] -> startIndex
        cell_positions: list[tuple[int, int, int]] = []  # (row, col, start_index)
        table_data = table_element.get("table", {})
        table_rows_list = table_data.get("tableRows", [])

        for row_idx, table_row in enumerate(table_rows_list):
            cells = table_row.get("tableCells", [])
            for col_idx, cell in enumerate(cells):
                cell_content = cell.get("content", [])
                if cell_content:
                    # Get the start index of the first element in the cell
                    first_element = cell_content[0]
                    cell_start = first_element.get("startIndex")
                    if cell_start is not None:
                        cell_positions.append((row_idx, col_idx, cell_start))

        if not cell_positions:
            return {
                "success": True,
                "rows": rows,
                "columns": columns,
                "warning": f"Table created but could not extract cell positions. Table has {len(table_rows_list)} rows.",
            }

        # Sort by index descending so we can insert in reverse order
        # (inserting later content first preserves earlier indexes)
        cell_positions.sort(key=lambda x: x[2], reverse=True)

        cells_populated = 0

        if format_as_markdown:
            # Markdown formatting: process each cell individually since
            # gravitas-md2gdocs requests may have complex interdependencies
            for row_idx, col_idx, cell_start in cell_positions:
                if row_idx < len(content) and col_idx < len(content[row_idx]):
                    cell_text = content[row_idx][col_idx]
                    if not cell_text:
                        continue
                    md_requests = to_requests(cell_text, start_index=cell_start)
                    if md_requests:
                        service.documents().batchUpdate(
                            documentId=document_id, body={"requests": md_requests}
                        ).execute()
                        cells_populated += 1
        else:
            # Plain text: batch all insertions into a single API call
            # Cells are sorted by index descending, so earlier requests
            # don't affect indices of later ones
            all_text_requests = []
            for row_idx, col_idx, cell_start in cell_positions:
                if row_idx < len(content) and col_idx < len(content[row_idx]):
                    cell_text = content[row_idx][col_idx]
                    if not cell_text:
                        continue
                    all_text_requests.append(
                        {
                            "insertText": {
                                "location": {"index": cell_start},
                                "text": cell_text,
                            }
                        }
                    )
                    cells_populated += 1

            if all_text_requests:
                service.documents().batchUpdate(
                    documentId=document_id, body={"requests": all_text_requests}
                ).execute()

        return {
            "success": True,
            "rows": rows,
            "columns": columns,
            "cells_populated": cells_populated,
            "cells_found": len(cell_positions),
        }

@AutoGPT-Agent

Thanks for submitting this PR to extract frontend changes from the hackathon/copilot branch. The changes look well-organized and the PR description provides good context.

However, there's an issue that needs to be addressed before this can be merged:

  • The checklist in your PR description has an incomplete test plan. All items in the checklist must be fully checked off. Currently, the test items under "I have tested my changes according to the test plan" are not checked off.

Please update your PR by:

  1. Either completing all the tests and checking them off, or
  2. Revising your test plan to include only tests you've actually completed

The code changes themselves look good and align with the PR description, but we need to ensure all checklist items are properly addressed before merging.

Let me know if you have any questions or need assistance completing the checklist.

@AutoGPT-Agent

Thank you for submitting your PR to extract frontend changes from the hackathon/copilot branch! The changes look substantial and include important new features like the chat system, form renderer, and output renderers.

However, before we can merge this PR, there's one issue that needs to be addressed:

Required Changes

  • Test Plan Completion: Your PR checklist includes a test plan with 5 items, but none of them are checked off. According to our requirements, all checklist items must be completed before a PR can be merged. Please complete the testing for:
    • Test new Chat components functionality
    • Verify form renderer with various input types
    • Test credential management flows
    • Verify output renderers display correctly
    • Test draft recovery feature

Once you've completed these tests, please update your PR by checking off these items.

Additional Notes

The changes look well-structured, with components being moved to more appropriate locations. The new chat system appears to be a significant feature addition that will enhance the platform's capabilities.

Please complete the test plan checklist, and we'll be happy to review this PR again for merging.

@majdyz majdyz removed their request for review January 13, 2026 16:21
@github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Jan 15, 2026
@github-actions

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

@github-actions github-actions bot removed the conflicts Automatically applied to PRs with merge conflicts label Jan 16, 2026
@github-actions

Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.

@github-actions github-actions bot added conflicts Automatically applied to PRs with merge conflicts and removed platform/backend AutoGPT Platform - Back end platform/blocks labels Jan 16, 2026
Comment on lines 188 to 190
const hasAllRequiredScopes = new Set(requiredScopes).isSubsetOf(
  grantedScopes,
);

This comment was marked as outdated.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 20

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In `@autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/helpers.ts`:
- Around line 137-219: The parseToolResponse function currently overwrites the
originating toolName and collapses "no_results" into a generic tool_response,
losing suggestions/session info and the original tool identity; update
parseToolResponse to (1) preserve and return the incoming toolName and toolId
for all responses instead of replacing toolName with response-type labels, (2)
emit a distinct "no_results" (or keep responseType === "no_results") return
shape that includes parsedResult.message, any suggestions/session_id/agent_info
present, and success flag, and (3) for special responseType branches
(agent_carousel, execution_started, need_login, setup_requirements) ensure you
map responseType to the correct returned "type" value but keep toolId/toolName
from the parameters and include any additional parsedResult fields (agents,
execution_id, session_id, agent_info, total_count, etc.) so no useful fields are
dropped; locate changes in parseToolResponse to adjust returned objects for
responseType checks and to stop setting toolName =
"agent_carousel"/"execution_started"/"login_needed".

In `@autogpt_platform/frontend/src/components/contextual/Chat/components/ChatCredentialsSetup/useChatCredentialsSetup.ts`:
- Around line 1-3: The import of CredentialsMetaInput from
"@/lib/autogpt-server-api" is deprecated; replace it with the generated OpenAPI
type (or a local type alias) and update any usages in this hook: change the
import statement that references CredentialsMetaInput and update types in
useChatCredentialsSetup and related symbols (e.g., CredentialInfo) to use the
new generated type/hook names from the OpenAPI client (or your local type) so
the hook aligns with the current API layer and frontend guidelines.

In `@autogpt_platform/frontend/src/components/contextual/Chat/components/ChatInput/ChatInput.tsx`:
- Around line 29-45: The Input component isn't forwarding aria-describedby to
its textarea, so update the textarea rendering in Input (where it conditionally
sets aria-label) to also set aria-describedby={props['aria-describedby']} (or
equivalent from the incoming props) so the hint can be associated with the
control; then in ChatInput, add aria-describedby="chat-input-hint" to the <Input
... /> instance so the screen-reader-only hint is connected to the textarea.
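
A minimal sketch of the forwarding described above, assuming the Input atom wraps a textarea and spreads incoming props; prop names and hint text are illustrative, not the design system's actual API.

```tsx
import React from "react";

// Simplified Input atom -- illustrative, not the real component.
function Input({
  "aria-describedby": ariaDescribedBy,
  ...rest
}: React.TextareaHTMLAttributes<HTMLTextAreaElement>) {
  // Forward aria-describedby explicitly so callers can associate hint text.
  return <textarea aria-describedby={ariaDescribedBy} {...rest} />;
}

// Usage: connect the screen-reader-only hint to the textarea.
export function ChatInputSketch() {
  return (
    <>
      <Input aria-describedby="chat-input-hint" aria-label="Chat message" />
      <span id="chat-input-hint" className="sr-only">
        Press Enter to send your message
      </span>
    </>
  );
}
```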

In `@autogpt_platform/frontend/src/components/contextual/Chat/components/StreamingMessage/useStreamingMessage.ts`:
- Around line 8-24: useStreamingMessage never updates isComplete because the
setter _setIsComplete is unused; change isComplete to be derived from the
incoming chunks instead of local state (in useStreamingMessage) by detecting the
stream-complete condition based on your chunk shape (e.g., check a done flag on
the last chunk like chunk.done === true or a sentinel string such as '[DONE]'),
remove _setIsComplete, compute isComplete from chunks, and keep the useEffect
that calls onComplete when that derived isComplete flips true so onComplete
fires reliably.
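
A small sketch of deriving completion from the chunk array instead of tracking it in state; the chunk shape (a done flag) is an assumption and may not match the PR's actual stream type.

```typescript
import { useEffect, useMemo } from "react";

// Assumed chunk shape -- the real stream chunk type in this PR may differ.
interface StreamChunk {
  text: string;
  done?: boolean;
}

export function useStreamingMessage(
  chunks: StreamChunk[],
  onComplete?: () => void,
) {
  // Derive completion from the data instead of a setter that never fires.
  const isComplete = useMemo(
    () => chunks.length > 0 && chunks[chunks.length - 1].done === true,
    [chunks],
  );

  useEffect(() => {
    if (isComplete) onComplete?.();
  }, [isComplete, onComplete]);

  return { isComplete };
}
```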

In `@autogpt_platform/frontend/src/components/contextual/Chat/components/ThinkingMessage/ThinkingMessage.tsx`:
- Line 48: Replace the raw CSS-driven loader in ThinkingMessage (the div with
className="loader" and the inline animation style) with Tailwind utilities or
the design-system loading component: remove usage of the global .loader and
shimmer keyframes, import and render the canonical spinner/loading component
from src/components (e.g., Spinner or LoadingIndicator) if available, or replace
the div with a Tailwind-styled element using utility classes (fixed
width/height, rounded, bg-gradient or bg-gray with animate-pulse/animate-spin as
appropriate) instead of inline animation: "shimmer..."; ensure the new element
uses Tailwind classes only and lives inside the ThinkingMessage component to
match the design system API.

In `@autogpt_platform/frontend/src/components/contextual/Chat/components/ToolResponseMessage/ToolResponseMessage.tsx`:
- Around line 82-115: The duplicated rendering logic for agent_output and
block_output should be extracted into a shared component (e.g., OutputsGrid) and
reused; create an OutputsGrid component that accepts outputs: Record<string,
unknown[]> and optional className, move the map logic that iterates
Object.entries(outputs), calls globalRegistry.getRenderer(value), returns
<OutputItem .../> or the fallback <div> with Text and JSON.stringify, and
replace the existing duplicated blocks in ToolResponseMessage (the sections
rendering agent_output and block_output) with <OutputsGrid
outputs={agent_output} /> and <OutputsGrid outputs={block_output} />
respectively; ensure you preserve keys (`${outputName}-${index}`), props (value,
renderer, label) and any className/cn usage so behavior and styling remain
identical.

In `@autogpt_platform/frontend/src/components/contextual/Chat/useChatStream.ts`:
- Around line 315-345: The retry logic in useChatStream.ts currently re-calls
sendMessage with identical params causing duplicate persisted messages; fix by
adding an idempotency token: generate a unique id (e.g., idempotencyKeyRef or
messageIdRef) when starting a send, pass that token into sendMessage and ensure
the client includes it in the API request, and wire the backend to use that
token to detect and skip duplicate persists; update places referencing
sendMessage, retryTimeoutRef, retryCountRef and MAX_RETRIES so retries reuse the
same idempotency token instead of creating a new message on each retry.
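
A hedged sketch of the idempotency-token idea: generate one key per logical send, reuse it across retries, and let the backend dedupe on it. The transport signature and key name are assumptions, not the PR's actual sendMessage API.

```typescript
import { useRef } from "react";

const MAX_RETRIES = 3;

export function useIdempotentSend(
  // Assumed transport -- the real sendMessage/API shape in this PR may differ.
  post: (body: { message: string; idempotencyKey: string }) => Promise<Response>,
) {
  const keyRef = useRef<string | null>(null);

  async function send(message: string) {
    // One key per logical message; retries reuse it so the backend can dedupe.
    keyRef.current = keyRef.current ?? crypto.randomUUID();

    for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      try {
        const res = await post({ message, idempotencyKey: keyRef.current });
        if (res.ok) {
          keyRef.current = null; // The next message gets a fresh key.
          return res;
        }
      } catch {
        // Network error: fall through and retry with the same key.
      }
    }
    throw new Error("send failed after retries");
  }

  return { send };
}
```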

In `@autogpt_platform/frontend/src/components/contextual/CredentialsInputs/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts`:
- Around line 5-8: Replace the deprecated import of BlockIOCredentialsSubSchema
and CredentialsMetaInput in the useAPIKeyCredentialsModal hook with the
generated OpenAPI types (or a local type definition) used by the frontend;
locate the import statement that brings in BlockIOCredentialsSubSchema and
CredentialsMetaInput and either import their equivalents from the generated
OpenAPI client/types package used elsewhere in the frontend or define a small
local interface matching the fields the hook needs, then update any references
inside useAPIKeyCredentialsModal to use the new type names so the hook no longer
depends on "@/lib/autogpt-server-api/types".
- Around line 52-67: Wrap the async body of onSubmit in a try/catch around the
call to credentials.createAPIKeyCredentials so failures are handled: on error
call the app toast error helper (e.g., toast.error) with a user-friendly message
and capture the exception with Sentry.captureException(error), then return/exit
so onCredentialsCreate is not invoked for failed creation; keep the existing
successful flow (creating newCredentials and calling onCredentialsCreate)
unchanged inside the try block and include references to APIKeyFormValues,
values.expiresAt conversion, and credentials.provider/id/title as-is.

In `@autogpt_platform/frontend/src/components/contextual/CredentialsInputs/components/HotScopedCredentialsModal/HotScopedCredentialsModal.tsx`:
- Around line 8-13: The imports in HotScopedCredentialsModal.tsx reference the
disallowed legacy module (__legacy__/ui/form) for Form, FormDescription,
FormField, and FormLabel; replace that import with the current design-system
form components (the modern form module used across the frontend) and update any
component usage to match the new API (ensure prop names and component wrappers
used in HotScopedCredentialsModal still align with the new Form, FormField,
FormLabel, and FormDescription exports), preserving behavior and types.
- Around line 16-18: The component HotScopedCredentialsModal imports
BlockIOCredentialsSubSchema and CredentialsMetaInput from the deprecated
"@/lib/autogpt-server-api/types"; replace those with the generated OpenAPI types
(or a local equivalent) used elsewhere in the frontend, updating the import to
the generated types module (or local type file) and adjusting any usages/props
in HotScopedCredentialsModal to match the generated type names and shapes;
ensure you update any type references within the component (e.g., prop/type
annotations and form handlers) to the new types so there are no remaining
references to "@/lib/autogpt-server-api/types".
- Around line 84-102: The handlers addHeaderPair, removeHeaderPair, and
updateHeaderPair are implemented as arrow functions; change them to named
function declarations (e.g., function addHeaderPair() { ... }, function
removeHeaderPair(index: number) { ... }, function updateHeaderPair(index:
number, field: "key" | "value", value: string) { ... }) to follow frontend
conventions, keep their internal logic identical (including the setHeaderPairs
calls and guard in removeHeaderPair), and ensure any references or exports to
these names remain valid after the refactor.
- Around line 104-128: Wrap the createHostScopedCredentials call inside onSubmit
in a try/catch: import useToast and Sentry (import { useToast } from
"@/components/molecules/Toast/use-toast" and import * as Sentry from
"@sentry/nextjs"), call const { toast } = useToast() in the component, then in
onSubmit await createHostScopedCredentials inside try; on success proceed to
call onCredentialsCreate as before; in catch call toast({ title: "Failed to
create credentials", description: error.message || "An unexpected error
occurred", variant: "destructive" }) and Sentry.captureException(error) and
return/exit early to avoid calling onCredentialsCreate with undefined data.

In `@autogpt_platform/frontend/src/components/contextual/CredentialsInputs/useCredentialsInput.ts`:
- Around line 131-238: The handleOAuthLogin function has two problems:
api.oAuthLogin can throw and the OAUTH timeout callback runs even after a
successful flow, overwriting state. Fix by wrapping the api.oAuthLogin(...) call
in try/catch and setOAuthError + return on failure; create a timeoutId from
setTimeout and store it in a local variable, then clearTimeout(timeoutId) when
the flow completes successfully (in the try block after oAuthCallback) or when
controller.abort is invoked (e.g., in controller.signal.onabort), and/or make
the timeout handler first check controller.signal.aborted before mutating state;
reference functions/vars: handleOAuthLogin, api.oAuthLogin, oAuthCallback,
controller.abort, controller.signal.onabort, setTimeout/clearTimeout, and
OAUTH_TIMEOUT_MS.
- Around line 56-71: The onSuccess callback in deleteCredentialsMutation closes
over credentialToDelete and can read a stale value; fix by using a stable
identifier instead of the state capture: either capture the id at mutation
invocation (e.g., const idToDelete = credentialToDelete?.id and pass it into the
mutate call or mutation context) or maintain a ref (credentialToDeleteRef) that
you update whenever credentialToDelete changes and read
credentialToDeleteRef.current inside onSuccess; then compare
selectedCredential?.id to that stable id and call onSelectCredential(undefined)
accordingly, keeping the existing invalidateQueries and
setCredentialToDelete(null) behavior.
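
One way to avoid the stale closure is to use TanStack Query's onSuccess variables argument so the deleted id comes from the mutate() call rather than component state. The mutation function and query key below are assumptions for illustration.

```typescript
import { useMutation, useQueryClient } from "@tanstack/react-query";

// Assumed API call -- stands in for the real delete-credentials request.
declare function deleteCredential(id: string): Promise<void>;

export function useDeleteCredential(
  selectedCredentialId: string | undefined,
  onSelectCredential: (id: string | undefined) => void,
) {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (credentialId: string) => deleteCredential(credentialId),
    // `credentialId` here is exactly what was passed to mutate(), so it cannot
    // go stale the way a captured `credentialToDelete` state value can.
    onSuccess: (_data, credentialId) => {
      if (selectedCredentialId === credentialId) {
        onSelectCredential(undefined);
      }
      queryClient.invalidateQueries({ queryKey: ["credentials"] });
    },
  });
}
```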

In `@autogpt_platform/frontend/src/components/contextual/OutputRenderers/index.ts`:
- Around line 1-20: This file creates a barrel index.ts and registers renderers;
replace it with explicit exports and a dedicated bootstrap module: move the
registration logic (calls to globalRegistry.register for videoRenderer,
imageRenderer, codeRenderer, markdownRenderer, jsonRenderer, textRenderer) into
a new named module (e.g., renderersRegistry or registerRenderers) that you
import where the app initializes, and remove the re-export barrel exports;
instead export symbols directly from their source files (export { globalRegistry
} from "./types"; export type { OutputRenderer, OutputMetadata, DownloadContent
} from "./types"; export { OutputItem } from "./components/OutputItem"; export {
OutputActions } from "./components/OutputActions";) so there is no index.ts
barrel—ensure globalRegistry and renderer identifiers remain referenced from
their original modules.

In `@autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/ImageRenderer.tsx`:
- Around line 106-124: renderImage currently coerces value to String(value)
which yields "[object Object]" when value is an object; update renderImage to
mirror canRenderImage's behavior by extracting the actual image source from an
object (prefer value.url, then value.data, then value.path) and falling back to
String(value) for primitives, and ensure data (base64/data URLs) are returned
as-is; apply the same extraction logic to getCopyContentImage and
getDownloadContentImage so all three functions consistently handle objects with
url/data/path properties.

In `@autogpt_platform/frontend/src/components/contextual/RunAgentInputs/RunAgentInputs.tsx`:
- Around line 14-21: The RunAgentInputs.tsx file imports deprecated backend
types (BlockIOObjectSubSchema, BlockIOSubSchema, BlockIOTableSubSchema,
DataType, determineDataType, TableRow) from "@/lib/autogpt-server-api/*";
replace these with the corresponding types from the generated frontend API
client or a local non-deprecated abstraction. Locate usages of
determineDataType, DataType, TableRow, BlockIOSubSchema, BlockIOObjectSubSchema
and BlockIOTableSubSchema in RunAgentInputs and swap their imports to the
generated endpoints' types (or create a small local adapter type mirroring only
the properties used), update any code references to match the new type names,
and remove the deprecated import line so the component only relies on generated
or local frontend-safe types.
- Around line 184-206: The SELECT branch in RunAgentInputs.tsx (case
DataType.SELECT) currently falls through to DataType.MULTI_SELECT when
schema.enum is missing and also drops valid falsy enum values; fix by ensuring
the SELECT case always exits (add an explicit break/return at the end of the
DataType.SELECT case so it cannot fall through to MULTI_SELECT when schema.enum
is absent) and change the options filtering for DSSelect from .filter((opt) =>
opt) to .filter((opt) => opt !== undefined && opt !== null) so 0, false, and ""
are preserved while only null/undefined are removed; keep the DSSelect usage (id
`${baseId}-select`, label, value, onValueChange, placeholder) unchanged.
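
A compact, self-contained sketch of the null-safe filtering and explicit case exit being suggested; the DataType values and helper below are assumptions, not the component's actual code.

```typescript
// Illustrative only -- DataType members and the helper shape are assumptions.
enum DataType {
  SELECT = "select",
  MULTI_SELECT = "multi_select",
}

function buildOptions(
  dataType: DataType,
  schema: { enum?: unknown[] },
): string[] {
  switch (dataType) {
    case DataType.SELECT:
      // Keep valid falsy values (0, false, "") and drop only null/undefined,
      // and return explicitly so SELECT never falls through to MULTI_SELECT.
      return (schema.enum ?? [])
        .filter((opt) => opt !== undefined && opt !== null)
        .map(String);
    case DataType.MULTI_SELECT:
      return (schema.enum ?? []).map(String);
    default:
      return [];
  }
}
```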

In `@autogpt_platform/frontend/src/components/contextual/RunAgentInputs/useRunAgentInputs.ts`:
- Line 1: The import of the deprecated BackendAPI should be removed and replaced
by the generated React hook for the corresponding backend endpoint (use the
pattern use{Method}{Version}{OperationName}, e.g., usePostV2UploadFile) from
`@/app/api/__generated__/endpoints/` and invoked inside the useRunAgentInputs hook
instead of calling BackendAPI methods; run pnpm generate:api to regenerate the
hooks if the OpenAPI spec changed, import the correct hook(s) matching the
operation you need, and update any calls that referenced BackendAPI to call the
hook's mutate/execute function and handle its returned state
(loading/error/data) accordingly.

🟡 Minor comments (25)
autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts-44-46 (1)

44-46: Token check allows invalid sentinel value to be sent.

Per the getServerAuthToken() implementation, it returns "no-token-found" string when no session exists. The current truthy check if (token) will pass for this sentinel value, sending Authorization: Bearer no-token-found to the backend.

Consider checking for the sentinel value explicitly:

Proposed fix
-    if (token) {
+    if (token && token !== "no-token-found") {
       headers["Authorization"] = `Bearer ${token}`;
     }
autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts-58-64 (1)

58-64: Content-Type mismatch on error response.

The error response sets Content-Type: application/json but returns response.text() which may not be valid JSON depending on what the backend returns. This could cause client-side parsing errors.

Proposed fix
     if (!response.ok) {
       const error = await response.text();
       return new Response(error, {
         status: response.status,
-        headers: { "Content-Type": "application/json" },
+        headers: { "Content-Type": response.headers.get("Content-Type") || "text/plain" },
       });
     }
autogpt_platform/frontend/src/components/contextual/CredentialsInputs/components/CredentialRow/CredentialRow.tsx-71-76 (1)

71-76: Typo in className: lex-[0_0_40%] should be flex-[0_0_40%].

Missing 'f' prefix causes the Tailwind class to not apply, breaking the flex sizing for the masked key display.

Fix
         <Text
           variant="large"
-          className="lex-[0_0_40%] relative top-1 hidden overflow-hidden whitespace-nowrap font-mono tracking-tight md:block"
+          className="flex-[0_0_40%] relative top-1 hidden overflow-hidden whitespace-nowrap font-mono tracking-tight md:block"
         >
           {"*".repeat(MASKED_KEY_LENGTH)}
         </Text>
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx-17-24 (1)

17-24: Unused props: className, agentName, and onCancel are defined but not used.

className is declared in Props (line 23) but never destructured or applied. agentName and onCancel are destructured with underscore prefix but not utilized. Either implement these props or remove them from the interface.

🔧 Suggested fix

If these props are needed for future use, add a TODO comment. Otherwise, remove them:

 interface Props {
   credentials: CredentialInfo[];
-  agentName?: string;
   message: string;
   onAllCredentialsComplete: () => void;
-  onCancel: () => void;
   className?: string;
 }
 
 export function ChatCredentialsSetup({
   credentials,
-  agentName: _agentName,
   message,
   onAllCredentialsComplete,
-  onCancel: _onCancel,
+  className,
 }: Props) {

Then apply className to the root div:

-    <div className="group relative flex w-full justify-start gap-3 px-4 py-3">
+    <div className={cn("group relative flex w-full justify-start gap-3 px-4 py-3", className)}>

Also applies to: 41-47

autogpt_platform/frontend/src/components/contextual/RunAgentInputs/useRunAgentInputs.ts-4-13 (1)

4-13: API instance created on every render; missing error handling and progress reset.

Three issues:

  1. new BackendAPI() is instantiated on every render. Consider memoizing or moving outside the hook.
  2. uploadProgress is never reset before a new upload starts, which could show stale progress.
  3. No error handling—failed uploads will cause unhandled promise rejections.
Suggested improvements
+import { useCallback, useState } from "react";
+
+const api = new BackendAPI(); // Move outside if BackendAPI is stateless
+
 export function useRunAgentInputs() {
-  const api = new BackendAPI();
   const [uploadProgress, setUploadProgress] = useState(0);
 
-  async function handleUploadFile(file: File) {
-    const result = await api.uploadFile(file, "gcs", 24, (progress) =>
-      setUploadProgress(progress),
-    );
-    return result;
-  }
+  const handleUploadFile = useCallback(async (file: File) => {
+    setUploadProgress(0); // Reset progress
+    try {
+      const result = await api.uploadFile(file, "gcs", 24, (progress) =>
+        setUploadProgress(progress),
+      );
+      return result;
+    } catch (error) {
+      setUploadProgress(0);
+      throw error; // Re-throw for caller to handle
+    }
+  }, []);
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatLoadingState/ChatLoadingState.tsx-4-9 (1)

4-9: The message prop is declared but never used.

The message prop is defined in ChatLoadingStateProps but is not destructured in the function signature or rendered in the component. This appears to be either incomplete implementation or dead code.

Option 1: Remove unused prop
 export interface ChatLoadingStateProps {
-  message?: string;
   className?: string;
 }
 
 export function ChatLoadingState({ className }: ChatLoadingStateProps) {
Option 2: Implement the message display
-export function ChatLoadingState({ className }: ChatLoadingStateProps) {
+export function ChatLoadingState({ message, className }: ChatLoadingStateProps) {
   return (
     <div
       className={cn("flex flex-1 items-center justify-center p-6", className)}
     >
       <div className="flex flex-col items-center gap-4 text-center">
         <LoadingSpinner />
+        {message && (
+          <p className="text-sm text-muted-foreground">{message}</p>
+        )}
       </div>
     </div>
   );
 }
autogpt_platform/frontend/src/components/contextual/Chat/components/ExecutionStartedMessage/ExecutionStartedMessage.tsx-56-62 (1)

56-62: Avoid always appending ellipsis to short execution IDs.

When executionId is ≤16 chars, the UI still appends "..." even though nothing was truncated, misleadingly suggesting the ID is cut off. Prefer conditional truncation.

🐛 Proposed fix
-            <Text variant="small" className="font-mono text-green-800">
-              {executionId.slice(0, 16)}...
-            </Text>
+            <Text variant="small" className="font-mono text-green-800">
+              {executionId.length > 16
+                ? `${executionId.slice(0, 16)}...`
+                : executionId}
+            </Text>
autogpt_platform/frontend/src/components/contextual/Chat/components/StreamingMessage/StreamingMessage.tsx-7-18 (1)

7-18: Implement completion detection in useStreamingMessage or remove the onComplete prop.

useStreamingMessage initializes isComplete to false (line 12) but never calls _setIsComplete, so the useEffect condition if (isComplete && onComplete) will never be true. Either add logic to detect when streaming is complete and set isComplete = true, or remove the onComplete prop entirely to avoid misleading callers.
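If completion detection is kept, one possible sketch (it assumes the caller can supply an isStreaming flag, which the current hook does not receive):

import { useEffect, useRef, useState } from "react";

// Hypothetical sketch: today the hook has no completion signal at all, which is
// the gap described above.
export function useStreamingMessage(isStreaming: boolean, onComplete?: () => void) {
  const [isComplete, setIsComplete] = useState(false);
  const wasStreamingRef = useRef(false);

  useEffect(() => {
    if (isStreaming) {
      wasStreamingRef.current = true;
      return;
    }
    // Fire once, on the streaming -> finished transition.
    if (wasStreamingRef.current && !isComplete) {
      setIsComplete(true);
      onComplete?.();
    }
  }, [isStreaming, isComplete, onComplete]);

  return { isComplete };
}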

autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/VideoRenderer.tsx-126-145 (1)

126-145: Data URL parsing assumes base64 encoding without validation.

The code assumes data:video/...;base64,... format, but data URLs can also use URL encoding (without the ;base64 part). Calling atob() on URL-encoded data will fail.

🐛 Proposed fix
   if (videoUrl.startsWith("data:")) {
     const [mimeInfo, base64Data] = videoUrl.split(",");
     const mimeType = mimeInfo.match(/data:([^;]+)/)?.[1] || "video/mp4";
+    const isBase64 = mimeInfo.includes(";base64");
+    
+    let byteArray: Uint8Array;
+    if (isBase64) {
       const byteCharacters = atob(base64Data);
       const byteNumbers = new Array(byteCharacters.length);
-
       for (let i = 0; i < byteCharacters.length; i++) {
         byteNumbers[i] = byteCharacters.charCodeAt(i);
       }
-
-    const byteArray = new Uint8Array(byteNumbers);
+      byteArray = new Uint8Array(byteNumbers);
+    } else {
+      // URL-encoded data
+      const decoded = decodeURIComponent(base64Data);
+      byteArray = new TextEncoder().encode(decoded);
+    }
     const blob = new Blob([byteArray], { type: mimeType });
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/VideoRenderer.tsx-55-57 (1)

55-57: URL extension matching is too permissive and may cause false positives.

Using .includes(ext) matches the extension anywhere in the URL, not just at the end. For example, https://example.com/.mp4-folder/document.txt would incorrectly match .mp4.

🐛 Proposed fix
     if (value.startsWith("http://") || value.startsWith("https://")) {
-      return videoExtensions.some((ext) => value.toLowerCase().includes(ext));
+      const url = value.toLowerCase();
+      return videoExtensions.some((ext) => {
+        const extIndex = url.lastIndexOf(ext);
+        // Check if extension is at the end or followed by query params/hash
+        return extIndex !== -1 && (extIndex + ext.length === url.length || 
+          url[extIndex + ext.length] === '?' || url[extIndex + ext.length] === '#');
+      });
     }

Alternatively, use a URL parser:

const urlPath = new URL(value).pathname.toLowerCase();
return videoExtensions.some((ext) => urlPath.endsWith(ext));
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/VideoRenderer.tsx-109-118 (1)

109-118: Add error handling for fetch in copy content.

The async fetch for remote URLs has no error handling. Network failures will cause unhandled promise rejections that could break clipboard operations.

🐛 Proposed fix
   return {
     mimeType: mimeType,
     data: async () => {
-      const response = await fetch(videoUrl);
-      return await response.blob();
+      try {
+        const response = await fetch(videoUrl);
+        if (!response.ok) {
+          throw new Error(`Failed to fetch video: ${response.status}`);
+        }
+        return await response.blob();
+      } catch {
+        // Return URL as fallback text on fetch failure
+        return videoUrl;
+      }
     },
     alternativeMimeTypes: ["text/plain"],
     fallbackText: videoUrl,
   };
autogpt_platform/frontend/src/components/contextual/OutputRenderers/utils/download.ts-36-50 (1)

36-50: Non-URL string data is silently skipped.

When downloadContent.data is a string that doesn't start with "http", the item is silently ignored. This could result in unexpected data loss for items with data URLs (e.g., data:...) or relative paths.

🐛 Proposed fix
       if (downloadContent) {
         if (typeof downloadContent.data === "string") {
           if (downloadContent.data.startsWith("http")) {
             const link = document.createElement("a");
             link.href = downloadContent.data;
             link.download = downloadContent.filename;
             link.click();
+          } else {
+            // Handle non-URL strings (data URLs, relative paths, or raw content)
+            const blob = new Blob([downloadContent.data], { type: downloadContent.mimeType });
+            nonConcatenableDownloads.push({
+              blob,
+              filename: downloadContent.filename,
+            });
           }
         } else {
autogpt_platform/frontend/src/components/contextual/OutputRenderers/utils/copy.ts-3-14 (1)

3-14: Add SSR safety guard for window access.

isClipboardTypeSupported accesses window directly without checking if it exists. This can cause issues during server-side rendering.

🐛 Proposed fix
 export function isClipboardTypeSupported(mimeType: string): boolean {
+  if (typeof window === "undefined") {
+    return false;
+  }
+
   // ClipboardItem.supports() is the proper way to check
   if ("ClipboardItem" in window && "supports" in ClipboardItem) {
     return ClipboardItem.supports(mimeType);
   }
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/JSONRenderer.tsx-12-31 (1)

12-31: canRenderJSON may be overly permissive for object types.

The check at lines 17-18 returns true for any non-null object, which includes DOM nodes, class instances, functions (which are objects), and other non-serializable values. This could cause JSON.stringify to fail or produce unexpected results in getCopyContentJSON and getDownloadContentJSON.

🐛 Suggested fix
 function canRenderJSON(value: unknown, _metadata?: OutputMetadata): boolean {
   if (_metadata?.type === "json") {
     return true;
   }
 
   if (typeof value === "object" && value !== null) {
-    return true;
+    // Verify it's a plain object or array that can be serialized
+    try {
+      JSON.stringify(value);
+      return true;
+    } catch {
+      return false;
+    }
   }
 
   if (typeof value === "string") {
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatMessage/ChatMessage.tsx-27-42 (1)

27-42: Unused onDismissLogin prop in destructuring.

The onDismissLogin prop is defined in ChatMessageProps (line 30) but is not destructured in the function parameters (line 39-42) and therefore not used anywhere in the component.

🐛 Proposed fix

Either remove it from the interface if not needed:

 export interface ChatMessageProps {
   message: ChatMessageData;
   className?: string;
-  onDismissLogin?: () => void;
   onDismissCredentials?: () => void;
   onSendMessage?: (content: string, isUserMessage?: boolean) => void;
   agentOutput?: ChatMessageData;
 }

Or destructure and use it if intended:

 export function ChatMessage({
   message,
   className,
+  onDismissLogin,
   onDismissCredentials,
   onSendMessage,
   agentOutput,
 }: ChatMessageProps) {
autogpt_platform/frontend/src/components/contextual/OutputRenderers/utils/download.ts-19-29 (1)

19-29: Incomplete handling of CopyContent.data types.

Per the CopyContent interface in types.ts, data can be Blob | string | (() => Promise<Blob | string>). This code only handles string and fallbackText, skipping items where data is a Blob or async function.

🐛 Proposed fix
       if (copyContent) {
         // Extract text from CopyContent
         let text: string;
         if (typeof copyContent.data === "string") {
           text = copyContent.data;
+        } else if (typeof copyContent.data === "function") {
+          const resolved = await copyContent.data();
+          if (typeof resolved === "string") {
+            text = resolved;
+          } else if (copyContent.fallbackText) {
+            text = copyContent.fallbackText;
+          } else {
+            continue;
+          }
+        } else if (copyContent.data instanceof Blob) {
+          // Try to read blob as text if it's a text type
+          if (copyContent.data.type.startsWith("text/")) {
+            text = await copyContent.data.text();
+          } else if (copyContent.fallbackText) {
+            text = copyContent.fallbackText;
+          } else {
+            continue;
+          }
         } else if (copyContent.fallbackText) {
           text = copyContent.fallbackText;
         } else {
           continue;
         }
autogpt_platform/frontend/src/components/contextual/OutputRenderers/components/OutputActions.tsx-10-18 (1)

10-18: Wire className into the root container.

className is accepted but never applied, so consumers can’t style the wrapper.

🐛 Proposed fix
-export function OutputActions({
-  items,
-  isPrimary = false,
-}: OutputActionsProps) {
+export function OutputActions({
+  items,
+  isPrimary = false,
+  className,
+}: OutputActionsProps) {
...
-  return (
-    <div className="flex items-center gap-3">
+  return (
+    <div className={cn("flex items-center gap-3", className)}>

Also applies to: 67-68

autogpt_platform/frontend/src/components/contextual/OutputRenderers/index.ts-9-15 (1)

9-15: Add duplicate-prevention logic to the renderer registry to ensure idempotency.

The register() method in OutputRendererRegistry lacks duplicate checks. If the module is re-evaluated (e.g., during HMR in development), all 6 renderers will be re-registered, adding duplicates to the array. Guard against this by checking if a renderer with the same name already exists before registration, or wrap registrations in an initialization guard.
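A minimal sketch of an idempotent register(), assuming each renderer exposes a unique name (the real OutputRenderer interface may key on something else):

// Illustrative only: keeps registration idempotent across HMR re-evaluation.
interface NamedRenderer {
  name: string;
}

class IdempotentRendererRegistry<T extends NamedRenderer> {
  private renderers: T[] = [];

  register(renderer: T): void {
    if (this.renderers.some((r) => r.name === renderer.name)) {
      return; // already registered; skip duplicate
    }
    this.renderers.push(renderer);
  }
}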

autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/MarkdownRenderer.tsx-145-152 (1)

145-152: Stateful regex with g/m flags can cause intermittent detection failures.

Regex patterns with the g flag (lines 22, 30-34) maintain lastIndex state between .test() calls. On repeated invocations of canRenderMarkdown, the pattern may start matching from a non-zero index, causing false negatives.

Suggested fix: reset lastIndex before testing
  for (const pattern of markdownPatterns) {
+   pattern.lastIndex = 0; // Reset stateful regex
    if (pattern.test(value)) {
      matchCount++;
      if (matchCount >= requiredMatches) {
        return true;
      }
    }
  }
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/helpers.ts-254-301 (1)

254-301: Fix singular grammar in credentials message.
For a single credential, the message still says the plural "credentials" instead of "1 credential."

✏️ Suggested tweak
-      return {
+      const countLabel =
+        credentials.length === 1
+          ? "1 credential"
+          : `${credentials.length} credentials`;
+      return {
         type: "credentials_needed",
         toolName,
         credentials,
-        message: `To run ${agentName}, you need to add ${credentials.length === 1 ? "credentials" : `${credentials.length} credentials`}.`,
+        message: `To run ${agentName}, you need to add ${countLabel}.`,
         agentName,
         timestamp: new Date(),
       };
autogpt_platform/frontend/src/components/contextual/Chat/usePageContext.ts-29-33 (1)

29-33: Regex order makes the second replacement ineffective.

Line 31 replaces all whitespace (including newlines) with single spaces, so line 32's \n\s*\n pattern will never match. If you want to preserve paragraph breaks, swap the order:

♻️ Suggested fix
     // Clean up whitespace
     const cleanedContent = content
-      .replace(/\s+/g, " ")
-      .replace(/\n\s*\n/g, "\n")
+      .replace(/\n\s*\n/g, "\n")  // Collapse multiple blank lines first
+      .replace(/[ \t]+/g, " ")     // Then collapse horizontal whitespace
       .trim();
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/createStreamEventDispatcher.ts-17-58 (1)

17-58: Add missing handler for credentials_needed chunk type.

The StreamChunk type includes credentials_needed as a valid chunk type and it's listed in LEGACY_STREAM_TYPES, but the dispatcher has no case for it. If the backend emits this event, it will fall through to the default case and only log a warning, missing an opportunity to handle credentials prompts properly.

Suggested handler
      case "login_needed":
      case "need_login":
        handleLoginNeeded(chunk, deps);
        break;

+     case "credentials_needed":
+       // TODO: Handle credentials_needed - prompt user for credentials
+       console.warn("Credentials needed event not yet implemented:", chunk);
+       break;
+
      case "stream_end":
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/useChatContainer.ts-16-25 (1)

16-25: Unused parameter onRefreshSession in hook args.

The onRefreshSession is defined in UseChatContainerArgs but is not destructured or used in the hook implementation. Either remove it from the interface or implement its usage.

🐛 Either remove or use the parameter

If unused, remove from interface:

 interface UseChatContainerArgs {
   sessionId: string | null;
   initialMessages: SessionDetailResponse["messages"];
-  onRefreshSession: () => Promise<void>;
 }

 export function useChatContainer({
   sessionId,
   initialMessages,
-}: UseChatContainerArgs) {
+}: Omit<UseChatContainerArgs, 'onRefreshSession'>) {

Or if needed, destructure and use it:

 export function useChatContainer({
   sessionId,
   initialMessages,
+  onRefreshSession,
 }: UseChatContainerArgs) {
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/useChatContainer.handlers.ts-91-96 (1)

91-96: Add defensive checks for optional properties.

chunk.result and chunk.tool_id are optional per the StreamChunk type, but non-null assertions are used here. If these are undefined, parseToolResponse will receive invalid arguments.

🛡️ Suggested fix
+  if (!chunk.tool_id || chunk.result === undefined) {
+    console.warn("[Tool Response] Missing tool_id or result:", chunk);
+    return;
+  }
   const responseMessage = parseToolResponse(
-    chunk.result!,
-    chunk.tool_id!,
+    chunk.result,
+    chunk.tool_id,
     toolName,
     new Date(),
   );
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/useChatContainer.handlers.ts-216-223 (1)

216-223: Error is not surfaced to the user.

The error message is logged to console but not added to chat messages. Users won't see what went wrong in the chat UI. Consider adding an error message to the chat or displaying a toast notification.

💡 Suggested approach to surface errors
 export function handleError(chunk: StreamChunk, deps: HandlerDependencies) {
   const errorMessage = chunk.message || chunk.content || "An error occurred";
   console.error("Stream error:", errorMessage);
+  
+  const errorChatMessage: ChatMessageData = {
+    type: "message",
+    role: "assistant",
+    content: `⚠️ ${errorMessage}`,
+    timestamp: new Date(),
+  };
+  deps.setMessages((prev) => [...prev, errorChatMessage]);
+  
   deps.setIsStreamingInitiated(false);
   deps.setHasTextChunks(false);
   deps.setStreamingChunks([]);
   deps.streamingChunksRef.current = [];
 }

Alternatively, if error handling is intentionally done elsewhere, consider adding a comment documenting this design decision.

@github-actions github-actions bot removed the conflicts Automatically applied to PRs with merge conflicts label Jan 16, 2026
@github-actions
Copy link
Contributor

Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AuthPromptWidget/AuthPromptWidget.tsx (1)

29-38: Use next parameter instead of returnUrl to match login/signup page expectations.

The AuthPromptWidget passes returnUrl query parameter (lines 38, 50) but the login and signup pages read from next parameter (useLoginPage.ts:25, useSignupPage.ts:25). This causes the redirect to fail silently—users will be sent to /marketplace instead of returning to the chat after authentication.

Additionally, the unconditional ?session_id= append could break if returnUrl contained existing query params. Use URLSearchParams or URL.searchParams for safe query string composition, and ensure the parameter value is a relative same-origin path.

✅ Suggested fix
   function handleSignIn() {
     if (typeof window !== "undefined") {
       localStorage.setItem("pending_chat_session", sessionId);
       if (agentInfo) {
         localStorage.setItem("pending_agent_setup", JSON.stringify(agentInfo));
       }
     }
-    const returnUrlWithSession = `${returnUrl}?session_id=${sessionId}`;
-    const encodedReturnUrl = encodeURIComponent(returnUrlWithSession);
-    router.push(`/login?returnUrl=${encodedReturnUrl}`);
+    const safeReturnUrl = returnUrl.startsWith("/") ? returnUrl : "/chat";
+    const url = new URL(safeReturnUrl, window.location.origin);
+    url.searchParams.set("session_id", sessionId);
+    const nextUrl = encodeURIComponent(`${url.pathname}${url.search}`);
+    router.push(`/login?next=${nextUrl}`);
   }

   function handleSignUp() {
     if (typeof window !== "undefined") {
       localStorage.setItem("pending_chat_session", sessionId);
       if (agentInfo) {
         localStorage.setItem("pending_agent_setup", JSON.stringify(agentInfo));
       }
     }
-    const returnUrlWithSession = `${returnUrl}?session_id=${sessionId}`;
-    const encodedReturnUrl = encodeURIComponent(returnUrlWithSession);
-    router.push(`/signup?returnUrl=${encodedReturnUrl}`);
+    const safeReturnUrl = returnUrl.startsWith("/") ? returnUrl : "/chat";
+    const url = new URL(safeReturnUrl, window.location.origin);
+    url.searchParams.set("session_id", sessionId);
+    const nextUrl = encodeURIComponent(`${url.pathname}${url.search}`);
+    router.push(`/signup?next=${nextUrl}`);
   }

Also applies to: 41-50

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChat.ts (1)

56-66: Silent error swallowing may hide issues.

The sendStreamMessage call uses an empty callback and the .catch(() => {}) silently discards any errors. If the login notification fails, there's no feedback or logging. Consider at minimum logging the error for debugging purposes.

🔧 Suggested improvement
         claimSession(sessionIdFromHook)
           .then(() => {
             sendStreamMessage(
               sessionIdFromHook,
               "User has successfully logged in.",
               () => {},
               false,
-            ).catch(() => {});
+            ).catch((err) => {
+              console.warn("Failed to send login notification:", err);
+            });
           })
🤖 Fix all issues with AI agents
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/ChatDrawer.tsx`:
- Around line 68-72: The Close button inside ChatDrawer (headerActions)
currently uses <button aria-label="Close" onClick={close} className="size-8">
with no visible hover/focus styles; update the button styling to match other
header buttons by adding accessible hover and focus states (e.g., hover
background/foreground change and a focus-visible outline or ring) and ensure
keyboard users see a clear focus indicator; target the element referenced by
headerActions / the close handler close and the X icon to apply the same utility
classes or CSS module used by other header buttons so hover/focus behavior is
consistent and meets accessibility expectations.
- Around line 54-60: The onInteractOutside prop on Drawer.Content (the prop
using onInteractOutside={blurBackground ? close : undefined}) is unreliable with
Vaul when modal={false} and is redundant because the custom backdrop (the
element at lines ~42-47 that sets pointerEvents: "auto" and handles clicks when
blurBackground is true) already implements outside-click closing; either remove
the onInteractOutside prop entirely or add a clear inline comment next to the
Drawer.Content usage explaining the Vaul limitation (that onInteractOutside is
flaky for non-modal drawers) and that the custom backdrop is the intended
outside-click handler when blurBackground is true; if you prefer to keep it,
guard it by only passing it when the Drawer is modal to avoid false
expectations.

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx`:
- Around line 45-70: The validation and submission ignore default values and
treat every non-hidden field as required; update allRequiredInputsAreSet and
allCredentialsAreSet to merge defaults into inputValues/credentialsValues before
checking, and update handleRun to pass the merged values to onRun; also accept
an optional requiredFields?: string[] prop (or use a provided requiredFields
array) so allRequiredInputsAreSet filters non-hidden fields by requiredFields
instead of assuming every field is required. Specifically modify the functions
allRequiredInputsAreSet, allCredentialsAreSet, canRun, and handleRun to compute
mergedInputValues = { ...defaultsFromSchema, ...inputValues } and
mergedCredentialsValues = { ...defaultsFromCredentialsSchema,
...credentialsValues } and use merged* for validation and passing to onRun, and
change the required-field logic to reference requiredFields when determining
which non-hidden keys must be present.
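A sketch of the merge-then-validate flow (defaultsFromSchema, visibleFields, and requiredFields are names taken from the wording above, not the component's actual identifiers):

// Illustrative only.
type JsonRecord = Record<string, unknown>;

function mergeWithDefaults(defaults: JsonRecord, values: JsonRecord): JsonRecord {
  return { ...defaults, ...values };
}

function allRequiredInputsAreSet(
  merged: JsonRecord,
  visibleFields: string[],
  requiredFields?: string[],
): boolean {
  // Only fields actually marked required must be present, instead of
  // treating every non-hidden field as required.
  const fieldsToCheck = requiredFields
    ? visibleFields.filter((field) => requiredFields.includes(field))
    : visibleFields;
  return fieldsToCheck.every(
    (field) =>
      merged[field] !== undefined && merged[field] !== null && merged[field] !== "",
  );
}

// handleRun would then pass mergeWithDefaults(defaultsFromSchema, inputValues)
// (and the credentials equivalent) to onRun.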

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/helpers.ts`:
- Around line 2-21: The sanitizer removePageContext currently strips any
occurrence of "Page URL:", "Page Content:" and "User Message:" anywhere in the
text; update the regexes in removePageContext to only match these markers at the
start of a line using the multiline flag so legitimate inline user content isn't
removed (e.g., change the patterns to anchor with ^\s* and use the m flag for
the replacements and match), apply the anchored replacement for "Page URL:" and
"Page Content:" and use an anchored match for "User Message:" when extracting
the trailing user text, and keep the same cleanup on the cleaned variable
afterwards.
- Around line 303-374: The inputSchema currently sets per-property required
booleans which RJSF v6 (Draft-07) ignores; inside extractInputsNeeded, change
how inputSchema is built by creating a properties object (e.g., properties[name]
= { title, description, type, default, enum, format } ) and collect required
property names into a requiredProps string[] during inputs.forEach; after the
loop set inputSchema to an object with type: "object", properties, and include
required: requiredProps only if it has entries, removing per-property required
flags so the schema complies with Draft-07 for RJSF v6.
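Two quick sketches for the helpers.ts items above. First, the anchored markers (how much of the page-context block the real helper strips after each marker is an assumption):

// Illustrative only.
function removePageContext(text: string): string {
  const cleaned = text
    .replace(/^\s*Page URL:.*$/gm, "")
    .replace(/^\s*Page Content:.*$/gm, "");

  // Extract the trailing user text only when the marker starts a line.
  const match = cleaned.match(/^\s*User Message:\s*([\s\S]*)$/m);
  return (match ? match[1] : cleaned).trim();
}

Second, the Draft-07-compliant schema shape (the input descriptor fields are assumptions based on the bullet above):

// Illustrative only.
interface InputSpec {
  name: string;
  title?: string;
  description?: string;
  type?: string;
  default?: unknown;
  enum?: unknown[];
  format?: string;
  required?: boolean;
}

function buildInputSchema(inputs: InputSpec[]) {
  const properties: Record<string, object> = {};
  const requiredProps: string[] = [];

  inputs.forEach(({ name, required, ...rest }) => {
    // Draft-07 ignores per-property required booleans; collect a top-level array.
    properties[name] = rest;
    if (required) requiredProps.push(name);
  });

  return {
    type: "object" as const,
    properties,
    ...(requiredProps.length > 0 ? { required: requiredProps } : {}),
  };
}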

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx`:
- Around line 1-5: The import of BlockIOCredentialsSubSchema from
"@/lib/autogpt-server-api" is deprecated for frontend use; update
ChatCredentialsSetup.tsx to use the generated OpenAPI frontend types (or a local
equivalent) instead: remove the import of BlockIOCredentialsSubSchema and
replace all references with the appropriate generated type (or a new local type)
used by the frontend API client, ensuring the component (ChatCredentialsSetup,
CredentialsInput) type annotations are updated accordingly and the deprecated
module is no longer imported.

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx`:
- Around line 47-58: The Send button in ChatInput lacks an explicit type which
defaults to "submit" in forms; update the JSX for the button element inside the
ChatInput component (the element using onClick={handleSend}) to include
type="button" to prevent accidental form submissions when ChatInput is rendered
inside a form.

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatLoadingState/ChatLoadingState.tsx`:
- Around line 4-16: The ChatLoadingState component declares a message prop in
ChatLoadingStateProps but never uses it; either remove message from the
ChatLoadingStateProps and from the ChatLoadingState parameter list to keep the
API minimal, or render the message (for example, under LoadingSpinner) inside
ChatLoadingState so the prop is actually displayed; update the
ChatLoadingStateProps, the ChatLoadingState function signature, and the
component JSX (referencing ChatLoadingStateProps, ChatLoadingState, message,
LoadingSpinner, className, and cn) accordingly to keep types and implementation
consistent.

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatMessage/ChatMessage.tsx`:
- Around line 168-198: The branch in ChatMessage.tsx that handles message.type
values "no_results", "agent_carousel", and "execution_started" drops their
payloads and only renders a generic ToolResponseMessage; update the rendering
logic so those types pass their specific payloads or use dedicated components:
detect each type (message.type === "no_results", "agent_carousel",
"execution_started") and either (a) call ToolResponseMessage with the
appropriate props (e.g., pass message.message, message.agents,
message.executionId or message.result) or (b) render new specialized components
(e.g., NoResultsMessage, AgentCarouselMessage, ExecutionStartedMessage) that
consume the payload; ensure getToolActionPhrase is only used for actual tool
responses and keep the existing agent_output parsing logic intact.
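One possible shape for option (b); the dedicated components and payload fields below follow the naming in the item above and are not guaranteed to exist in this form (imports omitted):

// Illustrative only: route payload-carrying message types to dedicated components.
function renderSpecialMessage(message: ChatMessageData) {
  switch (message.type) {
    case "no_results":
      return <NoResultsMessage message={message.message} />;
    case "agent_carousel":
      return <AgentCarouselMessage agents={message.agents} />;
    case "execution_started":
      return <ExecutionStartedMessage executionId={message.executionId} />;
    default:
      // Only genuine tool responses keep using getToolActionPhrase + ToolResponseMessage.
      return (
        <ToolResponseMessage
          toolName={message.toolName}
          result={message.type === "tool_response" ? message.result : undefined}
        />
      );
  }
}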

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageList/MessageList.tsx`:
- Around line 1-6: This file uses React hooks (useMessageList) and must be a
Next.js client component — add the "use client" directive as the very first line
of the file (before any imports) so hooks work correctly; update the top of the
MessageList.tsx file to include the directive and keep existing imports for cn,
ChatMessage, StreamingMessage, ThinkingMessage and useMessageList unchanged.

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/QuickActionsWelcome/QuickActionsWelcome.tsx`:
- Around line 1-3: This file is missing the "use client" directive required for
interactive client components; add the literal string "use client" as the very
first line of QuickActionsWelcome.tsx (before any imports) so the
QuickActionsWelcome component and its onClick handlers are treated as a client
component.

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts`:
- Around line 153-163: stopStreaming and another reset are zeroing
retryCountRef.current which causes each retry to start at 0 and bypass
MAX_RETRIES; remove resetting of retryCountRef.current in stopStreaming (and the
other spot that unconditionally sets it to 0) and instead only reset
retryCountRef.current when a brand-new stream is initiated or after a
successful/terminal completion in the start/handle stream logic (reference
retryCountRef, stopStreaming, the start/stream retry path that checks
MAX_RETRIES, and MAX_RETRIES) so retries increment across attempts and
eventually hit the limit.
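A small sketch of where the reset belongs (ref and constant names follow the item above):

// Illustrative only: reset the retry counter when a brand-new stream starts,
// not when a stream is stopped or a retry begins.
function startNewStream(retryCountRef: { current: number }, begin: () => void) {
  retryCountRef.current = 0; // fresh stream: the only place the counter resets
  begin();
}

function shouldRetry(retryCountRef: { current: number }, MAX_RETRIES: number): boolean {
  if (retryCountRef.current >= MAX_RETRIES) return false;
  retryCountRef.current += 1; // increments persist across attempts
  return true;
}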

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/usePageContext.ts`:
- Around line 19-38: The current page text extraction (in usePageContext: clone,
scripts, body, cleanedContent) can leak sensitive inputs and produce huge
payloads; update the logic to first remove/ignore sensitive elements
(querySelectorAll for input, textarea, [contenteditable], password fields, form
elements, and any elements with a data-sensitive attribute) and strip value/text
from inputs rather than their displayed values, then proceed to extract text;
finally enforce a maximum content size (e.g., MAX_CONTENT_CHARS) and truncate
cleanedContent to that limit (adding an ellipsis or marker), so the returned
content is both privacy-hardened and bounded.
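A sketch of the hardening described above (the selector list and the MAX_CONTENT_CHARS value are assumptions):

// Illustrative only.
const MAX_CONTENT_CHARS = 10_000; // assumed cap

function extractSafePageText(root: HTMLElement): string {
  const clone = root.cloneNode(true) as HTMLElement;

  // Drop elements whose contents should never leave the page.
  clone
    .querySelectorAll(
      "script, style, input, textarea, [contenteditable], form, [data-sensitive]",
    )
    .forEach((el) => el.remove());

  const text = (clone.textContent ?? "").replace(/\s+/g, " ").trim();

  // Bound the payload and mark the truncation.
  return text.length > MAX_CONTENT_CHARS
    ? `${text.slice(0, MAX_CONTENT_CHARS)}…`
    : text;
}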
♻️ Duplicate comments (3)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx (1)

29-45: Connect the sr-only hint via aria-describedby.

This was already raised previously: add aria-describedby="chat-input-hint" to the <Input /> and ensure the underlying textarea forwards aria-describedby.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ThinkingMessage/ThinkingMessage.tsx (1)

47-59: Replace raw CSS loader/shimmer with Tailwind or design-system spinner.
Current .loader + inline animation violates the Tailwind-only styling rule. Based on learnings, use Tailwind utilities or a design-system spinner.

Proposed change
-import { cn } from "@/lib/utils";
+import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
+import { cn } from "@/lib/utils";
 ...
               {showSlowLoader ? (
                 <div className="flex flex-col items-center gap-3 py-2">
-                  <div className="loader" style={{ flexShrink: 0 }} />
+                  <LoadingSpinner size="small" />
                   <p className="text-sm text-slate-700">
                     Taking a bit longer to think, wait a moment please
                   </p>
                 </div>
               ) : (
-                <span
-                  className="inline-block bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-clip-text text-transparent"
-                  style={{
-                    backgroundSize: "200% 100%",
-                    animation: "shimmer 2s ease-in-out infinite",
-                  }}
-                >
+                <span className="inline-block animate-pulse text-slate-600">
                   Thinking...
                 </span>
               )}
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts (1)

324-334: Retries can duplicate persisted user messages.

The retry path re-calls sendMessage (Line 324-334). If the backend persists the user message before streaming begins, each retry can append duplicate messages. Consider an idempotency key or retry only before the server persists the message.
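For illustration, one way an idempotency key could work, assuming the backend can de-duplicate on a client-supplied ID (it currently may not):

// Illustrative only: reuse one client-generated ID across retries so the backend
// could de-duplicate the persisted user message.
async function sendWithRetries(
  send: (body: { content: string; clientMessageId: string }) => Promise<Response>,
  content: string,
  maxRetries: number,
): Promise<Response | undefined> {
  const clientMessageId = crypto.randomUUID(); // same ID on every attempt
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await send({ content, clientMessageId });
    } catch (err) {
      if (attempt === maxRetries) throw err;
    }
  }
}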

🧹 Nitpick comments (21)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/QuickActionsWelcome/QuickActionsWelcome.tsx (1)

38-48: Hoist the theme object outside the map callback.

The theme object is identical for every action but is recreated on each iteration. Moving it outside the loop (or outside the component entirely as a constant) eliminates unnecessary object allocations per render.

Proposed refactor
+const actionTheme = {
+  bg: "bg-slate-50/10",
+  border: "border-slate-100",
+  hoverBg: "hover:bg-slate-50/20",
+  hoverBorder: "hover:border-slate-200",
+  gradient: "from-slate-200/20 via-slate-300/10 to-transparent",
+  text: "text-slate-900",
+  hoverText: "group-hover:text-slate-900",
+};
+
 export function QuickActionsWelcome({
   ...
 }: QuickActionsWelcomeProps) {
   return (
     ...
         <div className="grid gap-3 sm:grid-cols-2">
           {actions.map((action) => {
-            // Use slate theme for all cards
-            const theme = {
-              bg: "bg-slate-50/10",
-              border: "border-slate-100",
-              hoverBg: "hover:bg-slate-50/20",
-              hoverBorder: "hover:border-slate-200",
-              gradient: "from-slate-200/20 via-slate-300/10 to-transparent",
-              text: "text-slate-900",
-              hoverText: "group-hover:text-slate-900",
-            };
-
             return (
               <button
                 ...
-                  theme.bg,
-                  theme.border,
+                  actionTheme.bg,
+                  actionTheme.border,
                   ...
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ExecutionStartedMessage/ExecutionStartedMessage.tsx (1)

81-87: Consider adding explicit color class for consistency.

The Text component on line 83 relies on CSS color inheritance from the parent div's text-green-600, whereas all other Text components in this file specify explicit color classes (e.g., text-green-900, text-green-800). While inheritance works correctly here, adding an explicit class would improve consistency and make the styling intent clearer.

💅 Optional: Add explicit color class
       <div className="flex items-center gap-2 text-green-600">
         <Play size={16} weight="fill" />
-        <Text variant="small">
+        <Text variant="small" className="text-green-600">
           Your agent is now running. You can monitor its progress in the monitor
           page.
         </Text>
       </div>
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/NoResultsMessage/NoResultsMessage.tsx (1)

19-53: Consider replacing hardcoded gray palette with design tokens.

With dark-mode branches removed, this is a good moment to migrate these bg-gray-* / text-gray-* classes to semantic design tokens for consistency and easier future theming. As per coding guidelines, use design tokens for frontend styling.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (3)

79-89: Unconventional state read pattern.

Using setMessages solely to read current state (returning prev unchanged) is an anti-pattern. Although the functional updater does receive the latest state, using it purely as a read is semantically confusing and could behave unexpectedly in concurrent mode.

Consider passing messages as a dependency or using a ref to track the latest messages:

♻️ Alternative approach using a messagesRef
// In HandlerDependencies, add:
messagesRef: MutableRefObject<ChatMessageData[]>;

// Then in handleToolResponse:
if (!chunk.tool_name || chunk.tool_name === "unknown") {
  const matchingToolCall = [...deps.messagesRef.current]
    .reverse()
    .find(
      (msg) => msg.type === "tool_call" && msg.toolId === chunk.tool_id,
    );
  if (matchingToolCall && matchingToolCall.type === "tool_call") {
    toolName = matchingToolCall.toolName;
  }
}

91-96: Non-null assertions on potentially undefined values.

chunk.result! and chunk.tool_id! use non-null assertions, but tool_response chunks could arrive malformed. Consider adding defensive guards or early returns:

+if (!chunk.result || !chunk.tool_id) {
+  console.warn("[Tool Response] Missing result or tool_id:", chunk);
+  return;
+}
 const responseMessage = parseToolResponse(
-  chunk.result!,
-  chunk.tool_id!,
+  chunk.result,
+  chunk.tool_id,
   toolName,
   new Date(),
 );

33-33: Consider removing or gating verbose console logs for production.

The file contains numerous console.log and console.warn statements (lines 33, 61-65, 72-76, 133-138, 141-145, 173-175, 184-203, 207, 213, 218). While useful during development, these could clutter browser consoles in production. Consider:

  1. Removing them before merge, or
  2. Gating behind a debug flag (e.g., if (process.env.NODE_ENV === 'development'))
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx (1)

13-25: Allow a unique inputId to avoid DOM collisions.

useChatInput relies on document.getElementById(inputId), so hard-coding "chat-input" can break if more than one ChatInput renders on the page. Consider exposing inputId as an optional prop (or generate a unique id) and pass it through.

♻️ Example change
 export interface ChatInputProps {
   onSend: (message: string) => void;
   disabled?: boolean;
   placeholder?: string;
   className?: string;
+  inputId?: string;
 }

 export function ChatInput({
   onSend,
   disabled = false,
   placeholder = "Type your message...",
   className,
+  inputId = "chat-input",
 }: ChatInputProps) {
-  const inputId = "chat-input";
   const { value, setValue, handleKeyDown, handleSend } = useChatInput({
     onSend,
     disabled,
     maxRows: 5,
     inputId,
   });
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/useChatInput.ts (1)

18-40: Guard the DOM lookup to avoid wrong-element updates.

document.getElementById assumes a unique textarea; if the id is duplicated or the element isn’t a <textarea>, the resize/reset logic can misbehave. Consider guarding the element type (or passing a ref) so the hook only updates the intended textarea.

♻️ Suggested guard to prevent wrong-element updates
-    const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
-    if (!textarea) return;
+    const textarea = document.getElementById(inputId);
+    if (!(textarea instanceof HTMLTextAreaElement)) return;
     textarea.style.height = "auto";
     const lineHeight = parseInt(
       window.getComputedStyle(textarea).lineHeight,
       10,
     );
@@
-    const textarea = document.getElementById(inputId) as HTMLTextAreaElement;
-    if (textarea) {
-      textarea.style.height = "auto";
-    }
+    const textarea = document.getElementById(inputId);
+    if (textarea instanceof HTMLTextAreaElement) {
+      textarea.style.height = "auto";
+    }
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/usePageContext.ts (1)

1-1: Add an explicit client boundary for this hook.
This file uses React hooks and window/document; adding "use client" makes the boundary explicit and prevents accidental server imports.

✅ Suggested change
+"use client";
+
 import { useCallback } from "react";

As per coding guidelines, default to client components.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts (2)

16-25: Clarify the onRefreshSession contract.

UseChatContainerArgs requires onRefreshSession, but the hook doesn’t consume it. Either wire it into the send/refresh flow or remove it from the interface so callers aren’t forced to pass an unused callback.


154-201: Consider guarding against overlapping sends.

If concurrent streams aren’t supported, add a defensive early return to avoid resetting streaming state mid-stream.

♻️ Suggested guard
   const sendMessage = useCallback(
     async function sendMessage(
       content: string,
       isUserMessage: boolean = true,
       context?: { url: string; content: string },
     ) {
+      if (isStreaming) {
+        return;
+      }
       if (!sessionId) {
         console.error("Cannot send message: no session ID");
         return;
       }
@@
     },
-    [sessionId, sendStreamMessage],
+    [isStreaming, sessionId, sendStreamMessage],
   );
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatLoadingState/ChatLoadingState.tsx (1)

1-2: Add "use client" if this component is consumed by client components.
This avoids RSC boundary issues and aligns with the default-to-client guideline. Based on learnings, default to client components unless there’s a server-only reason.

Proposed change
+"use client";
+
 import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
 import { cn } from "@/lib/utils";
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/ChatContainer.tsx (2)

31-38: Consider a function declaration for sendMessageWithContext.

Line 31-38 defines a non‑inline handler as an arrow function; frontend guidelines prefer function declarations for handlers. Consider switching to a named function or a named function expression inside useCallback to align with the convention. As per coding guidelines, use function declarations for handlers.


48-56: Move the inline background pattern into Tailwind utilities/classes.

Line 48-56 uses inline styles, which bypass the Tailwind-only styling guideline. Consider converting these to Tailwind arbitrary values or a reusable class.

♻️ Proposed Tailwind-only variant
-    <div
-      className={cn("flex h-full flex-col", className)}
-      style={{
-        backgroundColor: "#ffffff",
-        backgroundImage:
-          "radial-gradient(`#e5e5e5` 0.5px, transparent 0.5px), radial-gradient(`#e5e5e5` 0.5px, `#ffffff` 0.5px)",
-        backgroundSize: "20px 20px",
-        backgroundPosition: "0 0, 10px 10px",
-      }}
-    >
+    <div
+      className={cn(
+        "flex h-full flex-col bg-white " +
+          "[background-image:radial-gradient(`#e5e5e5_0.5px`,transparent_0.5px),radial-gradient(`#e5e5e5_0.5px`,`#ffffff_0.5px`)] " +
+          "[background-size:20px_20px] [background-position:0_0,10px_10px]",
+        className,
+      )}
+    >

As per coding guidelines, use Tailwind-only styling.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts (1)

193-209: Confirm whether streaming should bypass generated API hooks.

Line 193-209 uses raw fetch; frontend guidelines prefer generated API hooks. If streaming isn’t supported by the generated client, consider documenting the exception or wrapping the call in a typed helper. As per coding guidelines, prefer generated API hooks for data fetching.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageBubble/MessageBubble.tsx (1)

15-27: Consider moving theme objects outside the component.

The userTheme and assistantTheme objects are recreated on every render. Since these are static values, extracting them as module-level constants would avoid unnecessary object allocations.

♻️ Suggested refactor
+const userTheme = {
+  bg: "bg-slate-900",
+  border: "border-slate-800",
+  gradient: "from-slate-900/30 via-slate-800/20 to-transparent",
+  text: "text-slate-50",
+};
+
+const assistantTheme = {
+  bg: "bg-slate-50/20",
+  border: "border-slate-100",
+  gradient: "from-slate-200/20 via-slate-300/10 to-transparent",
+  text: "text-slate-900",
+};
+
 export function MessageBubble({
   children,
   variant,
   className,
 }: MessageBubbleProps) {
-  const userTheme = {
-    bg: "bg-slate-900",
-    border: "border-slate-800",
-    gradient: "from-slate-900/30 via-slate-800/20 to-transparent",
-    text: "text-slate-50",
-  };
-
-  const assistantTheme = {
-    bg: "bg-slate-50/20",
-    border: "border-slate-100",
-    gradient: "from-slate-200/20 via-slate-300/10 to-transparent",
-    text: "text-slate-900",
-  };
-
   const theme = variant === "user" ? userTheme : assistantTheme;
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (2)

122-153: Unused mutation hook in dependency array.

createSessionMutation is listed in the dependency array but the function calls postV2CreateSession directly instead. Either use the mutation hook for consistency with the React Query pattern or remove it from dependencies.

♻️ Suggested fix - use the mutation hook
   const createSession = useCallback(
     async function createSession() {
       try {
         setError(null);
-        const response = await postV2CreateSession({
-          body: JSON.stringify({}),
-        });
+        const response = await createSessionMutation({
+          body: JSON.stringify({}),
+        });
         if (response.status !== 200) {

Or remove from dependencies if direct call is intentional:

-    [createSessionMutation],
+    [],

224-245: Consider extracting the 404 detection logic.

The 404 error detection spans multiple conditions checking both err.status and err.response.status. This verbose pattern could be extracted into a helper function for reusability and readability.

♻️ Suggested helper extraction
// In helpers.ts
export function isNotFoundError(err: unknown): boolean {
  if (typeof err !== "object" || err === null) return false;
  if ("status" in err && err.status === 404) return true;
  if (
    "response" in err &&
    typeof err.response === "object" &&
    err.response !== null &&
    "status" in err.response &&
    err.response.status === 404
  ) {
    return true;
  }
  return false;
}

Then in claimSession:

-        const is404 =
-          (typeof err === "object" &&
-            err !== null &&
-            "status" in err &&
-            err.status === 404) ||
-          (typeof err === "object" &&
-            err !== null &&
-            "response" in err &&
-            typeof err.response === "object" &&
-            err.response !== null &&
-            "status" in err.response &&
-            err.response.status === 404);
-        if (!is404) {
+        if (!isNotFoundError(err)) {
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/Chat.tsx (1)

47-58: Prefer function declarations for handlers (repo standard).

Convert handleNewChat / handleSelectSession to function declarations to match the frontend guidelines.

♻️ Suggested refactor
-  const handleNewChat = () => {
+  function handleNewChat() {
     clearSession();
     onNewChat?.();
-  };
+  }
 
-  const handleSelectSession = async (sessionId: string) => {
+  async function handleSelectSession(sessionId: string) {
     try {
       await loadSession(sessionId);
     } catch (err) {
       console.error("Failed to load session:", err);
     }
-  };
+  }

As per coding guidelines, prefer function declarations for handlers.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageList/MessageList.tsx (1)

41-99: Avoid index as the React key for messages.

Using the array index can cause incorrect reuse when messages are inserted/filtered (e.g., when agent_output messages are skipped). Prefer a stable identifier (toolId, timestamp, or a derived stable key).
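A possible derivation of a stable key (MessageLike is a stand-in for the ChatMessageData fields used here):

// Illustrative only: prefer message identity over array position.
type MessageLike = { type: string; toolId?: string; timestamp?: Date };

function messageKey(message: MessageLike, index: number): string {
  if (message.toolId) {
    return `${message.type}-${message.toolId}`;
  }
  // Fall back to timestamp + index only when no better identity exists.
  return `${message.type}-${message.timestamp?.valueOf() ?? "t"}-${index}`;
}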

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatMessage/ChatMessage.tsx (1)

194-197: Pass raw toolName into ToolResponseMessage.

ToolResponseMessage already applies getToolActionPhrase and Title Case formatting. Passing a pre-formatted phrase can double-transform and break the snake_case formatting path.

♻️ Suggested refactor
-        <ToolResponseMessage
-          toolName={getToolActionPhrase(message.toolName)}
-          result={message.type === "tool_response" ? message.result : undefined}
-        />
+        <ToolResponseMessage
+          toolName={message.toolName}
+          result={message.type === "tool_response" ? message.result : undefined}
+        />
-                    <ToolResponseMessage
-                      toolName={
-                        agentOutput.toolName
-                          ? getToolActionPhrase(agentOutput.toolName)
-                          : "Agent Output"
-                      }
-                      result={agentOutput.result}
-                    />
+                    <ToolResponseMessage
+                      toolName={agentOutput.toolName ?? "Agent Output"}
+                      result={agentOutput.result}
+                    />

Also applies to: 233-238

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (1)

91-96: Non-null assertion on chunk.result may cause runtime error.

At line 92, chunk.result! uses a non-null assertion, but if chunk.result is undefined, parseToolResponse will receive undefined as a string, potentially causing unexpected behavior or errors.

Suggested fix
+  if (!chunk.result) {
+    console.warn("[Tool Response] No result in chunk:", chunk.tool_id);
+    return;
+  }
   const responseMessage = parseToolResponse(
-    chunk.result!,
-    chunk.tool_id!,
+    chunk.result,
+    chunk.tool_id ?? `unknown-${Date.now()}`,
     toolName,
     new Date(),
   );
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (1)

61-65: createSessionMutation is unused; dependency array is incorrect.

usePostV2CreateSession is destructured at lines 62-65, but createSessionMutation is never called. The createSession callback at line 126 uses postV2CreateSession directly instead, yet the dependency array at line 152 includes createSessionMutation.

This is inconsistent: either use the mutation hook (which provides better React Query integration with automatic cache invalidation) or remove the unused hook and fix the dependency array.

Option 1: Use the mutation hook (recommended)
  const createSession = useCallback(
    async function createSession() {
      try {
        setError(null);
-       const response = await postV2CreateSession({
-         body: JSON.stringify({}),
-       });
-       if (response.status !== 200) {
-         throw new Error("Failed to create session");
-       }
-       const newSessionId = response.data.id;
+       const response = await createSessionMutation({
+         body: JSON.stringify({}),
+       });
+       const newSessionId = response.id;
        setSessionId(newSessionId);
        ...
      }
    },
    [createSessionMutation],
  );
Option 2: Remove unused hook and fix dependency
- const {
-   mutateAsync: createSessionMutation,
-   isPending: isCreating,
-   error: createError,
- } = usePostV2CreateSession();
+ const [isCreating, setIsCreating] = useState(false);
+ const [createError, setCreateError] = useState<Error | null>(null);

  // ... in createSession callback:
  const createSession = useCallback(
    async function createSession() {
+     setIsCreating(true);
      try {
        ...
      } catch (err) {
+       setCreateError(err instanceof Error ? err : new Error("..."));
        ...
+     } finally {
+       setIsCreating(false);
      }
    },
-   [createSessionMutation],
+   [],
  );

Also applies to: 122-153

🤖 Fix all issues with AI agents
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx`:
- Around line 8-11: The code imports deprecated schema types BlockIOSubSchema
and BlockIOCredentialsSubSchema from "@/lib/autogpt-server-api/types"; extract
and define these schema descriptor types into a new non-deprecated module (e.g.,
src/types/block-schema.ts), export them as BlockIOSubSchema and
BlockIOCredentialsSubSchema, then update imports in AgentInputsSetup.tsx (and
other files that import these types) to import from the new module; do the same
for CredentialsMetaInput referenced in useAgentInputsSetup.ts by moving or
re-exporting it from a non-deprecated types file or coordinating with
Orval-generated models, and run a repository-wide replace to update all ~52
affected files to the new import path.
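As an interim step, the new module could simply re-export the existing shapes so call sites can switch paths first (location and contents are illustrative):

// src/types/block-schema.ts (hypothetical location)
// Interim shim: one non-deprecated import target for the ~52 call sites; the
// definitions can later be moved here or replaced by Orval-generated models.
// CredentialsMetaInput would be handled the same way from wherever it lives today.
export type {
  BlockIOSubSchema,
  BlockIOCredentialsSubSchema,
} from "@/lib/autogpt-server-api/types";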

In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts`:
- Around line 22-25: The hook useChatContainer currently ignores the
onRefreshSession field declared in UseChatContainerArgs; either remove
onRefreshSession from the UseChatContainerArgs type if unused, or destructure it
from the function signature (add onRefreshSession to the parameter list
alongside sessionId and initialMessages) and call it where session refresh logic
occurs (for example after session-updating effects or error recovery flows
inside useChatContainer) so the callback is invoked when a session refresh is
needed; update any callers/types accordingly to keep the signature consistent.

In
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx:
- Around line 6-7: Replace the deeply nested relative imports for the
CredentialsInput module with the project path alias; specifically update the
imports that reference CredentialsInput and isSystemCredential so they use the
"@/components/contextual/CredentialsInput/CredentialsInput" and
"@/components/contextual/CredentialsInput/helpers" module paths (referencing the
CredentialsInput component and isSystemCredential helper) to match the existing
alias usage in this file.

In
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx:
- Line 6: The import for CredentialsInput in SelectedTriggerView.tsx uses a long
relative path and should be replaced with the project path-alias; update the
import of CredentialsInput to use the '@/...' alias consistent with the other
imports in this file (replace the
"../../../../../../../../../../components/contextual/CredentialsInput/CredentialsInput"
import with the aliased path, e.g.
'@/components/contextual/CredentialsInput/CredentialsInput') so the
CredentialsInput symbol is imported via the alias instead of deep relative
traversal.
♻️ Duplicate comments (1)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx (1)

45-70: Defaults shown in the UI aren’t used for validation or submission.

The form renders schema defaults, but allRequiredInputsAreSet and onRun only use inputValues, so defaults can block canRun and aren’t sent unless the user edits a field. Also, all non-hidden fields are treated as required. This matches a prior review comment.

Also applies to: 95-101
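
A minimal sketch of one way to close that gap, assuming the agent's input schema follows JSON-Schema conventions with per-property default values; getSchemaDefaults, getEffectiveInputs, and the property shape below are illustrative names, not the component's actual API.

// Illustrative only: names and shapes are assumptions, not AgentInputsSetup's API.
type SchemaProperties = Record<string, { default?: unknown; hidden?: boolean }>;

function getSchemaDefaults(properties: SchemaProperties): Record<string, unknown> {
  const defaults: Record<string, unknown> = {};
  for (const [key, prop] of Object.entries(properties)) {
    if (prop.default !== undefined) defaults[key] = prop.default;
  }
  return defaults;
}

// Layer user edits over schema defaults so defaults count for validation and submission.
function getEffectiveInputs(
  properties: SchemaProperties,
  inputValues: Record<string, unknown>,
): Record<string, unknown> {
  return { ...getSchemaDefaults(properties), ...inputValues };
}

allRequiredInputsAreSet and onRun would then read from getEffectiveInputs(...) rather than inputValues directly, and the required check could consult the schema's required list instead of treating every non-hidden field as required.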

🧹 Nitpick comments (6)
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx (1)

25-25: Prefer a path alias over the deep relative import.
The long relative path is brittle and hard to scan; using the @/ alias keeps imports consistent and resilient to folder moves.

♻️ Suggested change
-import { getSystemCredentials } from "../../../../../../../../../../components/contextual/CredentialsInput/helpers";
+import { getSystemCredentials } from "@/components/contextual/CredentialsInput/helpers";
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx (1)

7-7: Use path alias @/ instead of deep relative traversal.

The import uses 10 levels of ../ which is fragile and inconsistent with other imports in this file (lines 3-6 all use @/).

♻️ Suggested fix
-import { CredentialsInput } from "../../../../../../../../../../components/contextual/CredentialsInput/CredentialsInput";
+import { CredentialsInput } from "@/components/contextual/CredentialsInput/CredentialsInput";
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx (1)

135-143: Consider using function declarations for handlers.

Per coding guidelines, function declarations are preferred over arrow functions for handlers. However, this is a minor stylistic concern.

Suggested refactor
-  const handleCredentialSelect = (
-    provider: string,
-    credential?: CredentialsMetaInput,
-  ) => {
+  function handleCredentialSelect(
+    provider: string,
+    credential?: CredentialsMetaInput,
+  ) {
     setSelectedCredentials((prev) => ({
       ...prev,
       [provider]: credential,
     }));
-  };
+  }

Apply the same pattern to handleComplete and handleCancel.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (1)

77-90: Consider refactoring state-reading pattern in setMessages.

Using setMessages callback solely to read state (returning prev unchanged) is an anti-pattern. The outer toolName variable is mutated inside the callback closure, which works but is unconventional.

Alternative approach

Consider passing a messages ref from the dependencies to read current messages directly:

// In HandlerDependencies, add:
messagesRef: MutableRefObject<ChatMessageData[]>;

// Then use directly:
if (!chunk.tool_name || chunk.tool_name === "unknown") {
  const matchingToolCall = [...deps.messagesRef.current]
    .reverse()
    .find((msg) => msg.type === "tool_call" && msg.toolId === chunk.tool_id);
  if (matchingToolCall && matchingToolCall.type === "tool_call") {
    toolName = matchingToolCall.toolName;
  }
}
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts (1)

52-54: TODO: Handle usage display.

The usage chunk type is received but not yet processed. Consider implementing usage metrics display or creating a tracking issue.

Would you like me to help implement the usage handling or open an issue to track this work?
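
If usage display does get implemented, a rough sketch of the accumulation side could look like the following, assuming the stream's usage chunk carries token counts; the field names prompt_tokens and completion_tokens are guesses about the payload, not the actual API.

// Sketch only: the usage chunk's exact shape is an assumption.
interface UsageChunk {
  type: "usage";
  prompt_tokens?: number;
  completion_tokens?: number;
}

interface UsageTotals {
  promptTokens: number;
  completionTokens: number;
}

function accumulateUsage(totals: UsageTotals, chunk: UsageChunk): UsageTotals {
  return {
    promptTokens: totals.promptTokens + (chunk.prompt_tokens ?? 0),
    completionTokens: totals.completionTokens + (chunk.completion_tokens ?? 0),
  };
}

The dispatcher's usage branch could feed a totals state (or ref) that a small footer component renders, instead of dropping the chunk.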

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (1)

224-245: Consider extracting the 404 detection logic.

The 404 detection at lines 227-238 is verbose and handles multiple error response shapes. Consider extracting this into a reusable helper function for clarity and consistency across the codebase.

Suggested extraction
function isNotFoundError(err: unknown): boolean {
  if (typeof err !== "object" || err === null) return false;
  if ("status" in err && err.status === 404) return true;
  if (
    "response" in err &&
    typeof err.response === "object" &&
    err.response !== null &&
    "status" in err.response &&
    err.response.status === 404
  ) {
    return true;
  }
  return false;
}

Then use: if (!isNotFoundError(err)) { ... }

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f496740 and 492c640.

📒 Files selected for processing (34)
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/NodeInputs.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/helpers.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/CredentialsGroupedView/CredentialsGroupedView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/helpers.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/CredentialsInput.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/APIKeyCredentialsModal.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialRow/CredentialRow.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialsAccordionView/CredentialsAccordionView.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialsFlatView/CredentialsFlatView.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialsSelect/CredentialsSelect.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/DeleteConfirmationModal/DeleteConfirmationModal.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/HotScopedCredentialsModal/HotScopedCredentialsModal.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/OAuthWaitingModal/OAuthWaitingModal.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/PasswordCredentialsModal/PasswordCredentialsModal.tsx
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/helpers.ts
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/useCredentialsInput.ts
  • autogpt_platform/frontend/src/components/contextual/GoogleDrivePicker/GoogleDrivePicker.tsx
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
💤 Files with no reviewable changes (1)
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/helpers.ts
✅ Files skipped from review due to trivial changes (5)
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/CredentialsGroupedView/CredentialsGroupedView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/helpers.ts
  • autogpt_platform/frontend/src/components/contextual/CredentialsInput/CredentialsInput.tsx
  • autogpt_platform/frontend/src/components/contextual/GoogleDrivePicker/GoogleDrivePicker.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/NodeInputs.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx
🧰 Additional context used
📓 Path-based instructions (10)
autogpt_platform/frontend/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx}: Always run pnpm install before frontend development, then use pnpm dev to start development server on port 3000
For frontend code formatting and linting, always run pnpm format

If adding protected frontend routes, update frontend/lib/supabase/middleware.ts

autogpt_platform/frontend/**/*.{ts,tsx}: Use generated API hooks from @/app/api/__generated__/endpoints/ for data fetching in frontend
Use function declarations (not arrow functions) for components and handlers in frontend
Only use Phosphor Icons in frontend; never use other icon libraries
Never use src/components/__legacy__/* or deprecated BackendAPI in frontend

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**/*.{ts,tsx,json}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use Node.js 21+ with pnpm package manager for frontend development

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/src/**/*.{ts,tsx}: Use generated API hooks from @/app/api/__generated__/endpoints/ (generated via Orval from backend OpenAPI spec). Pattern: use{Method}{Version}{OperationName} (e.g., useGetV2ListLibraryAgents). Regenerate with: pnpm generate:api. Never use deprecated BackendAPI or src/lib/autogpt-server-api/*
Use function declarations for components and handlers (not arrow functions). Only arrow functions for small inline lambdas (map, filter, etc.)
Use PascalCase for components, camelCase with use prefix for hooks
No barrel files or index.ts re-exports in frontend
For frontend render errors, use the <ErrorCard> component. For mutation errors, display with toast notifications. For manual exceptions, use Sentry.captureException()
Default to client components (use client). Use server components only for SEO or extreme TTFB needs. Use React Query for server state via generated hooks. Co-locate UI state in components/hooks

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**/*.{js,ts,jsx,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Format frontend code using pnpm format

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**

📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)

autogpt_platform/frontend/**: Install frontend dependencies using pnpm i instead of npm
Generate API client from OpenAPI spec using pnpm generate:api
Regenerate API client hooks using pnpm generate:api when OpenAPI spec changes

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/**/*.tsx

📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)

Use design system components from src/components/ (atoms, molecules, organisms) in frontend

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/app/**/*.tsx

📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)

Create frontend pages in src/app/(platform)/feature-name/page.tsx with corresponding usePageName.ts hook and local components/ subfolder

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)

Use only Tailwind CSS for styling in frontend, with design tokens and Phosphor Icons

Files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/lib/utils.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/components/**/*.tsx

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/src/components/**/*.tsx: Separate frontend component render logic from data/behavior. Structure: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts. Small components (3-4 lines) can be inline. Render-only components can be direct files without folders
Use Tailwind CSS utilities only for styling in frontend. Use design system components from src/components/ (atoms, molecules, organisms). Never use src/components/__legacy__/*
Only use Phosphor Icons (@phosphor-icons/react) for icon components in frontend
Prefer design tokens over hardcoded values in frontend styling

Files:

  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
autogpt_platform/frontend/src/components/**/*.{ts,tsx}

📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)

autogpt_platform/frontend/src/components/**/*.{ts,tsx}: Separate render logic from data/behavior in components
Structure frontend components as ComponentName/ComponentName.tsx plus useComponentName.ts hook plus helpers.ts file

Files:

  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
🧠 Learnings (18)
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Separate frontend component render logic from data/behavior. Structure: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts. Small components (3-4 lines) can be inline. Render-only components can be direct files without folders

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx
  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Never use `src/components/__legacy__/*` or deprecated `BackendAPI` in frontend

Applied to files:

  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
  • autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use PascalCase for components, camelCase with use prefix for hooks

Applied to files:

  • autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts
  • autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use generated API hooks from `@/app/api/__generated__/endpoints/` (generated via Orval from backend OpenAPI spec). Pattern: use{Method}{Version}{OperationName} (e.g., useGetV2ListLibraryAgents). Regenerate with: pnpm generate:api. Never use deprecated BackendAPI or src/lib/autogpt-server-api/*

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx
  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : No barrel files or index.ts re-exports in frontend

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Use design system components from `src/components/` (atoms, molecules, organisms) in frontend

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Use function declarations (not arrow functions) for components and handlers in frontend

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:48:55.844Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T08:48:55.844Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : If adding protected frontend routes, update `frontend/lib/supabase/middleware.ts`

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{ts,tsx} : Structure frontend components as `ComponentName/ComponentName.tsx` plus `useComponentName.ts` hook plus `helpers.ts` file

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/src/app/**/*.tsx : Create frontend pages in `src/app/(platform)/feature-name/page.tsx` with corresponding `usePageName.ts` hook and local `components/` subfolder

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Prefer design tokens over hardcoded values in frontend styling

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Use generated API hooks from `@/app/api/__generated__/endpoints/` for data fetching in frontend

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Default to client components (use client). Use server components only for SEO or extreme TTFB needs. Use React Query for server state via generated hooks. Co-locate UI state in components/hooks

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Only use Phosphor Icons (@phosphor-icons/react) for icon components in frontend

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Use Tailwind CSS utilities only for styling in frontend. Use design system components from src/components/ (atoms, molecules, organisms). Never use src/components/__legacy__/*

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Frontend UI components use shadcn/ui (Radix UI primitives) with Tailwind CSS styling

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use function declarations for components and handlers (not arrow functions). Only arrow functions for small inline lambdas (map, filter, etc.)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Frontend state management uses React Query for server state and co-located UI state in components/hooks

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
🧬 Code graph analysis (1)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (1)
autogpt_platform/frontend/src/services/storage/local-storage.ts (1)
  • storage (44-48)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: test
  • GitHub Check: types
  • GitHub Check: Check PR Status
🔇 Additional comments (19)
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx (1)

32-32: LGTM — centralized import is clearer.

autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx (1)

28-184: LGTM on component structure and logic.

The component correctly follows function declaration patterns, uses ErrorCard for error display, and properly separates loading/error/success states. The credential and input field handling is well-structured.

autogpt_platform/frontend/src/lib/utils.ts (1)

500-504: LGTM!

The UUID validation function is well-implemented with a correct regex pattern for the standard 8-4-4-4-12 format. Centralizing this utility in @/lib/utils is a good refactor, making it reusable across the codebase.
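
For reference, this is roughly what such a helper looks like; the actual code in utils.ts may differ in detail, for example in whether it constrains the version and variant nibbles.

// Sketch of a validator for the standard 8-4-4-4-12 hex UUID format.
const UUID_REGEX =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

export function isValidUUID(value: string): boolean {
  return UUID_REGEX.test(value);
}

isValidUUID("123e4567-e89b-12d3-a456-426614174000") returns true; anything that deviates from the hex-and-dash layout returns false.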

autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx (1)

3-7: LGTM!

The import path change for isValidUUID from @/app/(platform)/chat/helpers to @/lib/utils aligns with the broader centralization refactor in this PR. This improves maintainability by keeping general-purpose utilities in a single location.

autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx (1)

17-90: LGTM!

The component logic is well-structured with proper conditional rendering, appropriate use of function declarations, and correct handling of the read-only state for both inputs and credentials.

autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx (1)

3-3: LGTM!

The import path consolidation to @/components/contextual/CredentialsInput/CredentialsInput aligns with the broader PR effort to centralize the CredentialsInput component.

autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx (1)

3-3: LGTM!

The import path update to the centralized contextual module is correct. The component properly follows function declaration patterns as per coding guidelines.

autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx (2)

8-8: LGTM!

The import path consolidation to the centralized contextual module is correct and aligns with the broader PR refactoring effort.


98-98: The setup-wizard route is already properly protected. The /auth/integrations path is included in PROTECTED_PAGES in frontend/lib/supabase/helpers.ts, and the middleware's isProtectedPage() function uses startsWith() matching, so all subroutes including /auth/integrations/setup-wizard will require authentication and redirect unauthenticated users to the login page. No additional middleware configuration is needed.
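
A sketch of the matching logic described above; PROTECTED_PAGES and isProtectedPage live in frontend/lib/supabase/helpers.ts per the comment, but the entries and exact code below are approximations, not the real source.

// Approximation of the prefix matching described above, not the literal helper.
const PROTECTED_PAGES = ["/auth/integrations", "/library", "/build"]; // entries illustrative

export function isProtectedPage(pathname: string): boolean {
  return PROTECTED_PAGES.some((page) => pathname.startsWith(page));
}

// isProtectedPage("/auth/integrations/setup-wizard") === true, so the wizard route is covered.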

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts (3)

140-148: LGTM on the fallback message handling.

The role type assertion at line 144 is acceptable since explicit checks for user, assistant, and tool roles occur earlier in the control flow, leaving only legitimate "other" roles (like system) to reach this branch.


154-201: LGTM on sendMessage implementation.

The callback properly validates sessionId, manages streaming state, and handles errors with appropriate toast notifications as per coding guidelines. The streaming state cleanup is correctly handled in both the error path (catch block) and success path (via the dispatcher's handleStreamEnd).


203-210: LGTM on the hook's public API.

The return object provides a clean interface with derived isStreaming state and proper separation of concerns. The hook follows naming conventions (camelCase with use prefix) as per coding guidelines.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (2)

216-222: LGTM on handleError state cleanup.

The error handler comprehensively resets all streaming-related state (isStreamingInitiated, hasTextChunks, streamingChunks, streamingChunksRef). The error notification is appropriately handled at the dispatcher level via toast.


107-123: LGTM on the input and credential extraction logic.

The conditional properly handles both run_agent and run_block tool names, with null-safe access to parsedResult.type. The independent extraction and addition of inputs and credentials messages is clean.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts (1)

14-59: LGTM on the stream event dispatcher implementation.

The dispatcher cleanly routes stream chunks to appropriate handlers using a switch statement. The error handling approach (toast at dispatcher level) is well-documented to avoid circular dependencies. The default case provides good defensive logging for unknown chunk types.
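
For readers who have not opened the file, a condensed sketch of the routing pattern being described; the chunk and handler names below are placeholders rather than the dispatcher's real types.

// Condensed sketch: chunk and handler names are placeholders for the real ones.
type StreamChunk =
  | { type: "text"; content: string }
  | { type: "tool_call"; tool_id: string; tool_name: string }
  | { type: "tool_response"; tool_id?: string; result?: unknown }
  | { type: "usage" }
  | { type: "error"; message: string };

function dispatchStreamChunk(
  chunk: StreamChunk,
  handlers: {
    onText: (content: string) => void;
    onToolCall: (chunk: Extract<StreamChunk, { type: "tool_call" }>) => void;
    onToolResponse: (chunk: Extract<StreamChunk, { type: "tool_response" }>) => void;
    onError: (message: string) => void;
  },
) {
  switch (chunk.type) {
    case "text":
      handlers.onText(chunk.content);
      break;
    case "tool_call":
      handlers.onToolCall(chunk);
      break;
    case "tool_response":
      handlers.onToolResponse(chunk);
      break;
    case "usage":
      break; // not surfaced yet (see the TODO above)
    case "error":
      handlers.onError(chunk.message); // toast lives at this level to avoid circular deps
      break;
    default:
      console.warn("[Stream] Unknown chunk type:", chunk);
  }
}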

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (4)

86-99: LGTM!

Good use of optimistic UI pattern - providing a synthetic session object immediately after creation while the real data loads.
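
For readers unfamiliar with the pattern, a rough illustration of what the synthetic session amounts to; the field names are placeholders, not the generated session model.

// Placeholder shape: the real session type comes from the generated API models.
interface ChatSessionLike {
  id: string;
  created_at: string;
  messages: unknown[];
}

// Seed state with a minimal object right after creation so the UI renders
// immediately; the React Query refetch then replaces it with server data.
function makeSyntheticSession(id: string): ChatSessionLike {
  return { id, created_at: new Date().toISOString(), messages: [] };
}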


190-207: LGTM!

Clean implementation of session refresh using the React Query refetch function.


251-256: LGTM!

Complete session cleanup implementation with proper state and storage reset.


167-179: No action needed. The code correctly handles the fetchQuery response shape.

The fetchQuery result at line 174 properly receives the full response object { status: number; data: T; headers: Headers } from the custom mutator, and the check ("status" in result && result.status !== 200) is appropriate. The deliberate omission of select: okData here is correct—the code validates the raw response status before proceeding. Using select: okData in the useGetV2GetSession hook (line 75) and not using it in the loadSession callback are different patterns serving different purposes: the hook needs transformed data for UI consumption, while loadSession needs the raw response to validate the status.

Likely an incorrect or invalid review comment.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

@github-project-automation github-project-automation bot moved this from 🆕 Needs initial review to 👍🏼 Mergeable in AutoGPT development kanban Jan 16, 2026
@0ubbe 0ubbe merged commit 4a9b13a into dev Jan 16, 2026
24 checks passed
@0ubbe 0ubbe deleted the hackathon-copilot-frontend branch January 16, 2026 15:15
@github-project-automation github-project-automation bot moved this from 👍🏼 Mergeable to ✅ Done in AutoGPT development kanban Jan 16, 2026
@github-project-automation github-project-automation bot moved this to Done in Frontend Jan 16, 2026
@coderabbitai coderabbitai bot mentioned this pull request Feb 5, 2026
11 tasks

Projects

Status: Done
Status: Done

Development

Successfully merging this pull request may close these issues.

4 participants