feat(frontend): extract frontend changes from hackathon/copilot branch #11717
Conversation
This PR targets the … branch. Automatically setting the base branch to …
Walkthrough
Centralizes many UI imports, adds a pluggable OutputRenderers system with multiple renderers and copy/download utilities, introduces RunAgentInputs and a related upload hook, implements a new modular chat system with streaming (including a POST SSE proxy), and adds CSS loader/shimmer styles.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
Thanks for submitting this PR to extract frontend changes from the hackathon/copilot branch. The changes look well-organized and the PR description provides good context. However, there's an issue that needs to be addressed before this can be merged:
Please update your PR by:
The code changes themselves look good and align with the PR description, but we need to ensure all checklist items are properly addressed before merging. Let me know if you have any questions or need assistance completing the checklist.
Thank you for submitting your PR to extract frontend changes from the hackathon/copilot branch! The changes look substantial and include important new features like the chat system, form renderer, and output renderers. However, before we can merge this PR, there's one issue that needs to be addressed:
Required Changes
Once you've completed these tests, please update your PR by checking off these items.
Additional Notes
The changes look well-structured, with components being moved to more appropriate locations. The new chat system appears to be a significant feature addition that will enhance the platform's capabilities. Please complete the test plan checklist, and we'll be happy to review this PR again for merging.
This pull request has conflicts with the base branch; please resolve them so we can evaluate the pull request.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
Actionable comments posted: 20
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In
`@autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/helpers.ts`:
- Around line 137-219: The parseToolResponse function currently overwrites the
originating toolName and collapses "no_results" into a generic tool_response,
losing suggestions/session info and the original tool identity; update
parseToolResponse to (1) preserve and return the incoming toolName and toolId
for all responses instead of replacing toolName with response-type labels, (2)
emit a distinct "no_results" (or keep responseType === "no_results") return
shape that includes parsedResult.message, any suggestions/session_id/agent_info
present, and success flag, and (3) for special responseType branches
(agent_carousel, execution_started, need_login, setup_requirements) ensure you
map responseType to the correct returned "type" value but keep toolId/toolName
from the parameters and include any additional parsedResult fields (agents,
execution_id, session_id, agent_info, total_count, etc.) so no useful fields are
dropped; locate changes in parseToolResponse to adjust returned objects for
responseType checks and to stop setting toolName =
"agent_carousel"/"execution_started"/"login_needed".
In
`@autogpt_platform/frontend/src/components/contextual/Chat/components/ChatCredentialsSetup/useChatCredentialsSetup.ts`:
- Around line 1-3: The import of CredentialsMetaInput from
"@/lib/autogpt-server-api" is deprecated; replace it with the generated OpenAPI
type (or a local type alias) and update any usages in this hook: change the
import statement that references CredentialsMetaInput and update types in
useChatCredentialsSetup and related symbols (e.g., CredentialInfo) to use the
new generated type/hook names from the OpenAPI client (or your local type) so
the hook aligns with the current API layer and frontend guidelines.
In
`@autogpt_platform/frontend/src/components/contextual/Chat/components/ChatInput/ChatInput.tsx`:
- Around line 29-45: The Input component isn't forwarding aria-describedby to
its textarea, so update the textarea rendering in Input (where it conditionally
sets aria-label) to also set aria-describedby={props['aria-describedby']} (or
equivalent from the incoming props) so the hint can be associated with the
control; then in ChatInput, add aria-describedby="chat-input-hint" to the <Input
... /> instance so the screen-reader-only hint is connected to the textarea.
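A sketch of the wiring, assuming the design-system `Input` renders a textarea and spreads its remaining props onto it; the component and hint markup here are illustrative only:

```tsx
import { forwardRef, type TextareaHTMLAttributes } from "react";

type InputProps = TextareaHTMLAttributes<HTMLTextAreaElement> & { label?: string };

// Sketch: forward aria-describedby (alongside aria-label) to the real control.
export const Input = forwardRef<HTMLTextAreaElement, InputProps>(
  function Input({ label, ...props }, ref) {
    return (
      <textarea
        ref={ref}
        aria-label={label ? undefined : props["aria-label"]}
        aria-describedby={props["aria-describedby"]}
        {...props}
      />
    );
  },
);

// Usage in ChatInput: the sr-only hint is now announced with the textarea.
export function ChatInputField() {
  return (
    <>
      <Input aria-describedby="chat-input-hint" placeholder="Type your message..." />
      <span id="chat-input-hint" className="sr-only">
        Press Enter to send, Shift+Enter for a new line
      </span>
    </>
  );
}
```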
In
`@autogpt_platform/frontend/src/components/contextual/Chat/components/StreamingMessage/useStreamingMessage.ts`:
- Around line 8-24: useStreamingMessage never updates isComplete because the
setter _setIsComplete is unused; change isComplete to be derived from the
incoming chunks instead of local state (in useStreamingMessage) by detecting the
stream-complete condition based on your chunk shape (e.g., check a done flag on
the last chunk like chunk.done === true or a sentinel string such as '[DONE]'),
remove _setIsComplete, compute isComplete from chunks, and keep the useEffect
that calls onComplete when that derived isComplete flips true so onComplete
fires reliably.
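A sketch of deriving completion from the incoming chunks instead of dead local state; the `done`-flag chunk shape is an assumption and should be matched to the real `StreamChunk` type:

```typescript
import { useEffect, useMemo } from "react";

// Hypothetical chunk shape: adjust to the real stream chunk type.
interface Chunk {
  text: string;
  done?: boolean;
}

export function useStreamingMessage(chunks: Chunk[], onComplete?: () => void) {
  // Derived: complete when the last chunk carries the done flag.
  const isComplete = useMemo(
    () => chunks.length > 0 && chunks[chunks.length - 1].done === true,
    [chunks],
  );

  useEffect(() => {
    if (isComplete && onComplete) {
      onComplete();
    }
  }, [isComplete, onComplete]);

  const text = useMemo(() => chunks.map((c) => c.text).join(""), [chunks]);

  return { text, isComplete };
}
```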
In
`@autogpt_platform/frontend/src/components/contextual/Chat/components/ThinkingMessage/ThinkingMessage.tsx`:
- Line 48: Replace the raw CSS-driven loader in ThinkingMessage (the div with
className="loader" and the inline animation style) with Tailwind utilities or
the design-system loading component: remove usage of the global .loader and
shimmer keyframes, import and render the canonical spinner/loading component
from src/components (e.g., Spinner or LoadingIndicator) if available, or replace
the div with a Tailwind-styled element using utility classes (fixed
width/height, rounded, bg-gradient or bg-gray with animate-pulse/animate-spin as
appropriate) instead of inline animation: "shimmer..."; ensure the new element
uses Tailwind classes only and lives inside the ThinkingMessage component to
match the design system API.
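A Tailwind-only sketch of the replacement; the `LoadingSpinner` import path is an assumption about where the design-system spinner lives:

```tsx
// Option A: design-system spinner (path assumed).
// import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner";
// <LoadingSpinner size="small" />

// Option B: pure Tailwind utilities, no global .loader or shimmer keyframes.
export function ThinkingIndicator() {
  return (
    <span className="inline-block animate-pulse text-neutral-500">
      Thinking...
    </span>
  );
}
```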
In
`@autogpt_platform/frontend/src/components/contextual/Chat/components/ToolResponseMessage/ToolResponseMessage.tsx`:
- Around line 82-115: The duplicated rendering logic for agent_output and
block_output should be extracted into a shared component (e.g., OutputsGrid) and
reused; create an OutputsGrid component that accepts outputs: Record<string,
unknown[]> and optional className, move the map logic that iterates
Object.entries(outputs), calls globalRegistry.getRenderer(value), returns
<OutputItem .../> or the fallback <div> with Text and JSON.stringify, and
replace the existing duplicated blocks in ToolResponseMessage (the sections
rendering agent_output and block_output) with <OutputsGrid
outputs={agent_output} /> and <OutputsGrid outputs={block_output} />
respectively; ensure you preserve keys (`${outputName}-${index}`), props (value,
renderer, label) and any className/cn usage so behavior and styling remain
identical.
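A sketch of the extracted component; the `globalRegistry`/`OutputItem` names come from the comment above, while their import paths and exact prop signatures are assumptions:

```tsx
import { cn } from "@/lib/utils";
import { globalRegistry, OutputItem } from "@/components/contextual/OutputRenderers";

interface OutputsGridProps {
  outputs: Record<string, unknown[]>;
  className?: string;
}

// Shared renderer for the agent_output and block_output sections.
export function OutputsGrid({ outputs, className }: OutputsGridProps) {
  return (
    <div className={cn("flex flex-col gap-2", className)}>
      {Object.entries(outputs).flatMap(([outputName, values]) =>
        values.map((value, index) => {
          const renderer = globalRegistry.getRenderer(value);
          return renderer ? (
            <OutputItem
              key={`${outputName}-${index}`}
              value={value}
              renderer={renderer}
              label={outputName}
            />
          ) : (
            <div key={`${outputName}-${index}`}>{JSON.stringify(value)}</div>
          );
        }),
      )}
    </div>
  );
}
```

The two duplicated sections would then collapse to `<OutputsGrid outputs={agent_output} />` and `<OutputsGrid outputs={block_output} />`.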
In `@autogpt_platform/frontend/src/components/contextual/Chat/useChatStream.ts`:
- Around line 315-345: The retry logic in useChatStream.ts currently re-calls
sendMessage with identical params causing duplicate persisted messages; fix by
adding an idempotency token: generate a unique id (e.g., idempotencyKeyRef or
messageIdRef) when starting a send, pass that token into sendMessage and ensure
the client includes it in the API request, and wire the backend to use that
token to detect and skip duplicate persists; update places referencing
sendMessage, retryTimeoutRef, retryCountRef and MAX_RETRIES so retries reuse the
same idempotency token instead of creating a new message on each retry.
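A sketch of carrying one idempotency token across retries; the `sendMessage` signature and option name are assumptions, and the backend would still need to honor the key:

```typescript
import { useRef } from "react";

export function useIdempotentSend(
  sendMessage: (text: string, opts: { idempotencyKey: string }) => Promise<void>,
) {
  const idempotencyKeyRef = useRef<string | null>(null);

  async function send(text: string) {
    // Generate the key once per logical message, not once per attempt.
    idempotencyKeyRef.current ??= crypto.randomUUID();
    await sendMessage(text, { idempotencyKey: idempotencyKeyRef.current });
    // Clear only after terminal success so retries reuse the same key.
    idempotencyKeyRef.current = null;
  }

  return { send };
}
```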
In
`@autogpt_platform/frontend/src/components/contextual/CredentialsInputs/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts`:
- Around line 5-8: Replace the deprecated import of BlockIOCredentialsSubSchema
and CredentialsMetaInput in the useAPIKeyCredentialsModal hook with the
generated OpenAPI types (or a local type definition) used by the frontend;
locate the import statement that brings in BlockIOCredentialsSubSchema and
CredentialsMetaInput and either import their equivalents from the generated
OpenAPI client/types package used elsewhere in the frontend or define a small
local interface matching the fields the hook needs, then update any references
inside useAPIKeyCredentialsModal to use the new type names so the hook no longer
depends on "@/lib/autogpt-server-api/types".
- Around line 52-67: Wrap the async body of onSubmit in a try/catch around the
call to credentials.createAPIKeyCredentials so failures are handled: on error
call the app toast error helper (e.g., toast.error) with a user-friendly message
and capture the exception with Sentry.captureException(error), then return/exit
so onCredentialsCreate is not invoked for failed creation; keep the existing
successful flow (creating newCredentials and calling onCredentialsCreate)
unchanged inside the try block and include references to APIKeyFormValues,
values.expiresAt conversion, and credentials.provider/id/title as-is.
In
`@autogpt_platform/frontend/src/components/contextual/CredentialsInputs/components/HotScopedCredentialsModal/HotScopedCredentialsModal.tsx`:
- Around line 8-13: The imports in HotScopedCredentialsModal.tsx reference the
disallowed legacy module (__legacy__/ui/form) for Form, FormDescription,
FormField, and FormLabel; replace that import with the current design-system
form components (the modern form module used across the frontend) and update any
component usage to match the new API (ensure prop names and component wrappers
used in HotScopedCredentialsModal still align with the new Form, FormField,
FormLabel, and FormDescription exports), preserving behavior and types.
- Around line 16-18: The component HotScopedCredentialsModal imports
BlockIOCredentialsSubSchema and CredentialsMetaInput from the deprecated
"@/lib/autogpt-server-api/types"; replace those with the generated OpenAPI types
(or a local equivalent) used elsewhere in the frontend, updating the import to
the generated types module (or local type file) and adjusting any usages/props
in HotScopedCredentialsModal to match the generated type names and shapes;
ensure you update any type references within the component (e.g., prop/type
annotations and form handlers) to the new types so there are no remaining
references to "@/lib/autogpt-server-api/types".
- Around line 84-102: The handlers addHeaderPair, removeHeaderPair, and
updateHeaderPair are implemented as arrow functions; change them to named
function declarations (e.g., function addHeaderPair() { ... }, function
removeHeaderPair(index: number) { ... }, function updateHeaderPair(index:
number, field: "key" | "value", value: string) { ... }) to follow frontend
conventions, keep their internal logic identical (including the setHeaderPairs
calls and guard in removeHeaderPair), and ensure any references or exports to
these names remain valid after the refactor.
- Around line 104-128: Wrap the createHostScopedCredentials call inside onSubmit
in a try/catch: import useToast and Sentry (import { useToast } from
"@/components/molecules/Toast/use-toast" and import * as Sentry from
"@sentry/nextjs"), call const { toast } = useToast() in the component, then in
onSubmit await createHostScopedCredentials inside try; on success proceed to
call onCredentialsCreate as before; in catch call toast({ title: "Failed to
create credentials", description: error.message || "An unexpected error
occurred", variant: "destructive" }) and Sentry.captureException(error) and
return/exit early to avoid calling onCredentialsCreate with undefined data.
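A sketch of the guarded submit flow; the imports are the ones named above, while the value shape and hook wrapper are assumptions for illustration:

```typescript
import * as Sentry from "@sentry/nextjs";
import { useToast } from "@/components/molecules/Toast/use-toast";

type HostScopedValues = { host: string; headers: Record<string, string> };

export function useGuardedSubmit(
  createHostScopedCredentials: (values: HostScopedValues) => Promise<unknown>,
  onCredentialsCreate: (credentials: unknown) => void,
) {
  const { toast } = useToast();

  async function onSubmit(values: HostScopedValues) {
    try {
      const newCredentials = await createHostScopedCredentials(values);
      onCredentialsCreate(newCredentials);
    } catch (error) {
      toast({
        title: "Failed to create credentials",
        description:
          error instanceof Error ? error.message : "An unexpected error occurred",
        variant: "destructive",
      });
      Sentry.captureException(error);
      // Early exit: never call onCredentialsCreate after a failed creation.
    }
  }

  return { onSubmit };
}
```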
In
`@autogpt_platform/frontend/src/components/contextual/CredentialsInputs/useCredentialsInput.ts`:
- Around line 131-238: The handleOAuthLogin function has two problems:
api.oAuthLogin can throw and the OAUTH timeout callback runs even after a
successful flow, overwriting state. Fix by wrapping the api.oAuthLogin(...) call
in try/catch and setOAuthError + return on failure; create a timeoutId from
setTimeout and store it in a local variable, then clearTimeout(timeoutId) when
the flow completes successfully (in the try block after oAuthCallback) or when
controller.abort is invoked (e.g., in controller.signal.onabort), and/or make
the timeout handler first check controller.signal.aborted before mutating state;
reference functions/vars: handleOAuthLogin, api.oAuthLogin, oAuthCallback,
controller.abort, controller.signal.onabort, setTimeout/clearTimeout, and
OAUTH_TIMEOUT_MS.
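A sketch of the timeout handling described above; `api.oAuthLogin`, `oAuthCallback`, and `OAUTH_TIMEOUT_MS` are taken from the comment, the rest is assumed:

```typescript
async function handleOAuthLoginSketch(
  api: { oAuthLogin: (provider: string) => Promise<string> },
  oAuthCallback: (code: string) => Promise<void>,
  setOAuthError: (message: string | null) => void,
  OAUTH_TIMEOUT_MS: number,
) {
  const controller = new AbortController();

  const timeoutId = setTimeout(() => {
    // Never overwrite state after a successful or aborted flow.
    if (controller.signal.aborted) return;
    setOAuthError("OAuth flow timed out");
    controller.abort();
  }, OAUTH_TIMEOUT_MS);

  try {
    const code = await api.oAuthLogin("provider");
    await oAuthCallback(code);
  } catch (error) {
    setOAuthError(error instanceof Error ? error.message : "OAuth login failed");
  } finally {
    // Success, failure, or abort: the pending timeout must not fire afterwards.
    clearTimeout(timeoutId);
  }
}
```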
- Around line 56-71: The onSuccess callback in deleteCredentialsMutation closes
over credentialToDelete and can read a stale value; fix by using a stable
identifier instead of the state capture: either capture the id at mutation
invocation (e.g., const idToDelete = credentialToDelete?.id and pass it into the
mutate call or mutation context) or maintain a ref (credentialToDeleteRef) that
you update whenever credentialToDelete changes and read
credentialToDeleteRef.current inside onSuccess; then compare
selectedCredential?.id to that stable id and call onSelectCredential(undefined)
accordingly, keeping the existing invalidateQueries and
setCredentialToDelete(null) behavior.
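A sketch of the stable-identifier variant, assuming TanStack Query: passing the id as the mutation variable lets `onSuccess` read it from `variables` instead of a possibly stale state capture. The names are illustrative:

```typescript
import { useMutation, useQueryClient } from "@tanstack/react-query";

type Credential = { id: string };

export function useDeleteCredential(
  deleteCredential: (id: string) => Promise<void>,
  selectedCredential: Credential | undefined,
  onSelectCredential: (credential: Credential | undefined) => void,
) {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (idToDelete: string) => deleteCredential(idToDelete),
    onSuccess: (_data, idToDelete) => {
      // idToDelete is the value captured at mutate() time, not a stale closure.
      if (selectedCredential?.id === idToDelete) {
        onSelectCredential(undefined);
      }
      queryClient.invalidateQueries();
    },
  });
}
```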
In
`@autogpt_platform/frontend/src/components/contextual/OutputRenderers/index.ts`:
- Around line 1-20: This file creates a barrel index.ts and registers renderers;
replace it with explicit exports and a dedicated bootstrap module: move the
registration logic (calls to globalRegistry.register for videoRenderer,
imageRenderer, codeRenderer, markdownRenderer, jsonRenderer, textRenderer) into
a new named module (e.g., renderersRegistry or registerRenderers) that you
import where the app initializes, and remove the re-export barrel exports;
instead export symbols directly from their source files (export { globalRegistry
} from "./types"; export type { OutputRenderer, OutputMetadata, DownloadContent
} from "./types"; export { OutputItem } from "./components/OutputItem"; export {
OutputActions } from "./components/OutputActions";) so there is no index.ts
barrel—ensure globalRegistry and renderer identifiers remain referenced from
their original modules.
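A sketch of the dedicated bootstrap module; the renderer import paths mirror the names in the comment and are assumptions:

```typescript
// registerRenderers.ts — imported once where the app initializes; no barrel file.
import { globalRegistry } from "./types";
import { videoRenderer } from "./renderers/VideoRenderer";
import { imageRenderer } from "./renderers/ImageRenderer";
import { codeRenderer } from "./renderers/CodeRenderer";
import { markdownRenderer } from "./renderers/MarkdownRenderer";
import { jsonRenderer } from "./renderers/JSONRenderer";
import { textRenderer } from "./renderers/TextRenderer";

let registered = false;

export function registerRenderers() {
  // Guard keeps registration idempotent (e.g., across HMR re-evaluation).
  if (registered) return;
  registered = true;

  [
    videoRenderer,
    imageRenderer,
    codeRenderer,
    markdownRenderer,
    jsonRenderer,
    textRenderer,
  ].forEach((renderer) => globalRegistry.register(renderer));
}
```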
In
`@autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/ImageRenderer.tsx`:
- Around line 106-124: renderImage currently coerces value to String(value)
which yields "[object Object]" when value is an object; update renderImage to
mirror canRenderImage's behavior by extracting the actual image source from an
object (prefer value.url, then value.data, then value.path) and falling back to
String(value) for primitives, and ensure data (base64/data URLs) are returned
as-is; apply the same extraction logic to getCopyContentImage and
getDownloadContentImage so all three functions consistently handle objects with
url/data/path properties.
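A sketch of the shared extraction helper; the `url`/`data`/`path` priority follows the comment above, everything else is assumed:

```typescript
// Shared by renderImage, getCopyContentImage, and getDownloadContentImage.
export function extractImageSrc(value: unknown): string {
  if (typeof value === "object" && value !== null) {
    const obj = value as { url?: string; data?: string; path?: string };
    // Prefer url, then data (base64 / data URLs returned as-is), then path.
    return obj.url ?? obj.data ?? obj.path ?? "";
  }
  // Primitives fall back to their string form.
  return String(value);
}
```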
In
`@autogpt_platform/frontend/src/components/contextual/RunAgentInputs/RunAgentInputs.tsx`:
- Around line 14-21: The RunAgentInputs.tsx file imports deprecated backend
types (BlockIOObjectSubSchema, BlockIOSubSchema, BlockIOTableSubSchema,
DataType, determineDataType, TableRow) from "@/lib/autogpt-server-api/*";
replace these with the corresponding types from the generated frontend API
client or a local non-deprecated abstraction. Locate usages of
determineDataType, DataType, TableRow, BlockIOSubSchema, BlockIOObjectSubSchema
and BlockIOTableSubSchema in RunAgentInputs and swap their imports to the
generated endpoints' types (or create a small local adapter type mirroring only
the properties used), update any code references to match the new type names,
and remove the deprecated import line so the component only relies on generated
or local frontend-safe types.
- Around line 184-206: The SELECT branch in RunAgentInputs.tsx (case
DataType.SELECT) currently falls through to DataType.MULTI_SELECT when
schema.enum is missing and also drops valid falsy enum values; fix by ensuring
the SELECT case always exits (add an explicit break/return at the end of the
DataType.SELECT case so it cannot fall through to MULTI_SELECT when schema.enum
is absent) and change the options filtering for DSSelect from .filter((opt) =>
opt) to .filter((opt) => opt !== undefined && opt !== null) so 0, false, and ""
are preserved while only null/undefined are removed; keep the DSSelect usage (id
`${baseId}-select`, label, value, onValueChange, placeholder) unchanged.
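A sketch of the corrected branch extracted into a helper; `DSSelect` and its `options` prop shape are assumptions based on the comment:

```tsx
// Sketch of the SELECT branch only; DSSelect's import and option shape are assumed.
function renderSelectInput(
  schema: { enum?: unknown[] },
  baseId: string,
  label: string,
  value: string,
  onValueChange: (next: string) => void,
  placeholder?: string,
) {
  const options = (schema.enum ?? [])
    // Keep 0, false, and "" as valid options; drop only null/undefined.
    .filter((opt) => opt !== undefined && opt !== null)
    .map((opt) => String(opt));

  return (
    <DSSelect
      id={`${baseId}-select`}
      label={label}
      value={value}
      onValueChange={onValueChange}
      placeholder={placeholder}
      options={options.map((opt) => ({ label: opt, value: opt }))}
    />
  );
  // In the switch, return (or break) here so SELECT never falls through to MULTI_SELECT.
}
```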
In
`@autogpt_platform/frontend/src/components/contextual/RunAgentInputs/useRunAgentInputs.ts`:
- Line 1: The import of the deprecated BackendAPI should be removed and replaced
by the generated React hook for the corresponding backend endpoint (use the
pattern use{Method}{Version}{OperationName}, e.g., usePostV2UploadFile) from
`@/app/api/__generated__/endpoints/` and invoked inside the useRunAgentInputs hook
instead of calling BackendAPI methods; run pnpm generate:api to regenerate the
hooks if the OpenAPI spec changed, import the correct hook(s) matching the
operation you need, and update any calls that referenced BackendAPI to call the
hook's mutate/execute function and handle its returned state
(loading/error/data) accordingly.
🟡 Minor comments (25)
autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts-44-46 (1)
44-46: Token check allows an invalid sentinel value to be sent. Per the `getServerAuthToken()` implementation, it returns the string `"no-token-found"` when no session exists. The current truthy check `if (token)` will pass for this sentinel value, sending `Authorization: Bearer no-token-found` to the backend. Consider checking for the sentinel value explicitly:
Proposed fix
```diff
-  if (token) {
+  if (token && token !== "no-token-found") {
     headers["Authorization"] = `Bearer ${token}`;
   }
```
autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts-58-64 (1)
58-64: Content-Type mismatch on error response. The error response sets `Content-Type: application/json` but returns `response.text()`, which may not be valid JSON depending on what the backend returns. This could cause client-side parsing errors.
Proposed fix
```diff
   if (!response.ok) {
     const error = await response.text();
     return new Response(error, {
       status: response.status,
-      headers: { "Content-Type": "application/json" },
+      headers: { "Content-Type": response.headers.get("Content-Type") || "text/plain" },
     });
   }
```
autogpt_platform/frontend/src/components/contextual/CredentialsInputs/components/CredentialRow/CredentialRow.tsx-71-76 (1)
71-76: Typo in className: `lex-[0_0_40%]` should be `flex-[0_0_40%]`. The missing 'f' prefix causes the Tailwind class to not apply, breaking the flex sizing for the masked key display.
Fix
```diff
 <Text
   variant="large"
-  className="lex-[0_0_40%] relative top-1 hidden overflow-hidden whitespace-nowrap font-mono tracking-tight md:block"
+  className="flex-[0_0_40%] relative top-1 hidden overflow-hidden whitespace-nowrap font-mono tracking-tight md:block"
 >
   {"*".repeat(MASKED_KEY_LENGTH)}
 </Text>
```
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx-17-24 (1)
17-24: Unused props: `className`, `agentName`, and `onCancel` are defined but not used.
`className` is declared in Props (line 23) but never destructured or applied. `agentName` and `onCancel` are destructured with an underscore prefix but not utilized. Either implement these props or remove them from the interface.
🔧 Suggested fix
If these props are needed for future use, add a TODO comment. Otherwise, remove them:
interface Props { credentials: CredentialInfo[]; - agentName?: string; message: string; onAllCredentialsComplete: () => void; - onCancel: () => void; className?: string; } export function ChatCredentialsSetup({ credentials, - agentName: _agentName, message, onAllCredentialsComplete, - onCancel: _onCancel, + className, }: Props) {Then apply
classNameto the root div:- <div className="group relative flex w-full justify-start gap-3 px-4 py-3"> + <div className={cn("group relative flex w-full justify-start gap-3 px-4 py-3", className)}>Also applies to: 41-47
autogpt_platform/frontend/src/components/contextual/RunAgentInputs/useRunAgentInputs.ts-4-13 (1)
4-13: API instance created on every render; missing error handling and progress reset. Three issues:
- `new BackendAPI()` is instantiated on every render. Consider memoizing or moving outside the hook.
- `uploadProgress` is never reset before a new upload starts, which could show stale progress.
- No error handling: failed uploads will cause unhandled promise rejections.
Suggested improvements
+import { useCallback, useState } from "react"; + +const api = new BackendAPI(); // Move outside if BackendAPI is stateless + export function useRunAgentInputs() { - const api = new BackendAPI(); const [uploadProgress, setUploadProgress] = useState(0); - async function handleUploadFile(file: File) { - const result = await api.uploadFile(file, "gcs", 24, (progress) => - setUploadProgress(progress), - ); - return result; - } + const handleUploadFile = useCallback(async (file: File) => { + setUploadProgress(0); // Reset progress + try { + const result = await api.uploadFile(file, "gcs", 24, (progress) => + setUploadProgress(progress), + ); + return result; + } catch (error) { + setUploadProgress(0); + throw error; // Re-throw for caller to handle + } + }, []);autogpt_platform/frontend/src/components/contextual/Chat/components/ChatLoadingState/ChatLoadingState.tsx-4-9 (1)
4-9: The `message` prop is declared but never used. The `message` prop is defined in `ChatLoadingStateProps` but is not destructured in the function signature or rendered in the component. This appears to be either incomplete implementation or dead code.
Option 1: Remove unused prop
```diff
 export interface ChatLoadingStateProps {
-  message?: string;
   className?: string;
 }

 export function ChatLoadingState({ className }: ChatLoadingStateProps) {
```
Option 2: Implement the message display
-export function ChatLoadingState({ className }: ChatLoadingStateProps) { +export function ChatLoadingState({ message, className }: ChatLoadingStateProps) { return ( <div className={cn("flex flex-1 items-center justify-center p-6", className)} > <div className="flex flex-col items-center gap-4 text-center"> <LoadingSpinner /> + {message && ( + <p className="text-sm text-muted-foreground">{message}</p> + )} </div> </div> ); }autogpt_platform/frontend/src/components/contextual/Chat/components/ExecutionStartedMessage/ExecutionStartedMessage.tsx-56-62 (1)
56-62: Avoid always appending an ellipsis to short execution IDs. When `executionId` is ≤16 chars, the UI still shows `...` and hides the full ID. Prefer conditional truncation.
🐛 Proposed fix
```diff
- <Text variant="small" className="font-mono text-green-800">
-   {executionId.slice(0, 16)}...
- </Text>
+ <Text variant="small" className="font-mono text-green-800">
+   {executionId.length > 16
+     ? `${executionId.slice(0, 16)}...`
+     : executionId}
+ </Text>
```
autogpt_platform/frontend/src/components/contextual/Chat/components/StreamingMessage/StreamingMessage.tsx-7-18 (1)
7-18: Implement completion detection in `useStreamingMessage` or remove the `onComplete` prop.
`useStreamingMessage` initializes `isComplete` to `false` (line 12) but never calls `_setIsComplete`, so the `useEffect` condition `if (isComplete && onComplete)` will never be true. Either add logic to detect when streaming is complete and set `isComplete = true`, or remove the `onComplete` prop entirely to avoid misleading callers.
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/VideoRenderer.tsx-126-145 (1)
126-145: Data URL parsing assumes base64 encoding without validation. The code assumes the `data:video/...;base64,...` format, but data URLs can also use URL encoding (without the `;base64` part). Calling `atob()` on URL-encoded data will fail.
🐛 Proposed fix
if (videoUrl.startsWith("data:")) { const [mimeInfo, base64Data] = videoUrl.split(","); const mimeType = mimeInfo.match(/data:([^;]+)/)?.[1] || "video/mp4"; + const isBase64 = mimeInfo.includes(";base64"); + + let byteArray: Uint8Array; + if (isBase64) { const byteCharacters = atob(base64Data); const byteNumbers = new Array(byteCharacters.length); - for (let i = 0; i < byteCharacters.length; i++) { byteNumbers[i] = byteCharacters.charCodeAt(i); } - - const byteArray = new Uint8Array(byteNumbers); + byteArray = new Uint8Array(byteNumbers); + } else { + // URL-encoded data + const decoded = decodeURIComponent(base64Data); + byteArray = new TextEncoder().encode(decoded); + } const blob = new Blob([byteArray], { type: mimeType });autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/VideoRenderer.tsx-55-57 (1)
55-57: URL extension matching is too permissive and may cause false positives. Using `.includes(ext)` matches the extension anywhere in the URL, not just at the end. For example, `https://example.com/.mp4-folder/document.txt` would incorrectly match `.mp4`.
🐛 Proposed fix
```diff
 if (value.startsWith("http://") || value.startsWith("https://")) {
-  return videoExtensions.some((ext) => value.toLowerCase().includes(ext));
+  const url = value.toLowerCase();
+  return videoExtensions.some((ext) => {
+    const extIndex = url.lastIndexOf(ext);
+    // Check if extension is at the end or followed by query params/hash
+    return extIndex !== -1 && (extIndex + ext.length === url.length ||
+      url[extIndex + ext.length] === '?' || url[extIndex + ext.length] === '#');
+  });
 }
```
Alternatively, use a URL parser:
```ts
const urlPath = new URL(value).pathname.toLowerCase();
return videoExtensions.some((ext) => urlPath.endsWith(ext));
```
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/VideoRenderer.tsx-109-118 (1)
109-118: Add error handling for fetch in copy content. The async fetch for remote URLs has no error handling. Network failures will cause unhandled promise rejections that could break clipboard operations.
🐛 Proposed fix
return { mimeType: mimeType, data: async () => { - const response = await fetch(videoUrl); - return await response.blob(); + try { + const response = await fetch(videoUrl); + if (!response.ok) { + throw new Error(`Failed to fetch video: ${response.status}`); + } + return await response.blob(); + } catch { + // Return URL as fallback text on fetch failure + return videoUrl; + } }, alternativeMimeTypes: ["text/plain"], fallbackText: videoUrl, };autogpt_platform/frontend/src/components/contextual/OutputRenderers/utils/download.ts-36-50 (1)
36-50: Non-URL string data is silently skipped. When `downloadContent.data` is a string that doesn't start with "http", the item is silently ignored. This could result in unexpected data loss for items with data URLs (e.g., `data:...`) or relative paths.
🐛 Proposed fix
if (downloadContent) { if (typeof downloadContent.data === "string") { if (downloadContent.data.startsWith("http")) { const link = document.createElement("a"); link.href = downloadContent.data; link.download = downloadContent.filename; link.click(); + } else { + // Handle non-URL strings (data URLs, relative paths, or raw content) + const blob = new Blob([downloadContent.data], { type: downloadContent.mimeType }); + nonConcatenableDownloads.push({ + blob, + filename: downloadContent.filename, + }); } } else {autogpt_platform/frontend/src/components/contextual/OutputRenderers/utils/copy.ts-3-14 (1)
3-14: Add SSR safety guard for window access.
`isClipboardTypeSupported` accesses `window` directly without checking if it exists. This can cause issues during server-side rendering.
🐛 Proposed fix
```diff
 export function isClipboardTypeSupported(mimeType: string): boolean {
+  if (typeof window === "undefined") {
+    return false;
+  }
+
   // ClipboardItem.supports() is the proper way to check
   if ("ClipboardItem" in window && "supports" in ClipboardItem) {
     return ClipboardItem.supports(mimeType);
   }
```
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/JSONRenderer.tsx-12-31 (1)
12-31: `canRenderJSON` may be overly permissive for object types. The check at lines 17-18 returns `true` for any non-null object, which includes DOM nodes, class instances, functions (which are objects), and other non-serializable values. This could cause `JSON.stringify` to fail or produce unexpected results in `getCopyContentJSON` and `getDownloadContentJSON`.
🐛 Suggested fix
function canRenderJSON(value: unknown, _metadata?: OutputMetadata): boolean { if (_metadata?.type === "json") { return true; } if (typeof value === "object" && value !== null) { - return true; + // Verify it's a plain object or array that can be serialized + try { + JSON.stringify(value); + return true; + } catch { + return false; + } } if (typeof value === "string") {autogpt_platform/frontend/src/components/contextual/Chat/components/ChatMessage/ChatMessage.tsx-27-42 (1)
27-42: Unused `onDismissLogin` prop in destructuring. The `onDismissLogin` prop is defined in `ChatMessageProps` (line 30) but is not destructured in the function parameters (lines 39-42) and therefore not used anywhere in the component.
🐛 Proposed fix
Either remove it from the interface if not needed:
```diff
 export interface ChatMessageProps {
   message: ChatMessageData;
   className?: string;
-  onDismissLogin?: () => void;
   onDismissCredentials?: () => void;
   onSendMessage?: (content: string, isUserMessage?: boolean) => void;
   agentOutput?: ChatMessageData;
 }
```
Or destructure and use it if intended:
```diff
 export function ChatMessage({
   message,
   className,
+  onDismissLogin,
   onDismissCredentials,
   onSendMessage,
   agentOutput,
 }: ChatMessageProps) {
```
autogpt_platform/frontend/src/components/contextual/OutputRenderers/utils/download.ts-19-29 (1)
19-29: Incomplete handling of `CopyContent.data` types. Per the `CopyContent` interface in `types.ts`, `data` can be `Blob | string | (() => Promise<Blob | string>)`. This code only handles `string` and `fallbackText`, skipping items where `data` is a `Blob` or async function.
🐛 Proposed fix
if (copyContent) { // Extract text from CopyContent let text: string; if (typeof copyContent.data === "string") { text = copyContent.data; + } else if (typeof copyContent.data === "function") { + const resolved = await copyContent.data(); + if (typeof resolved === "string") { + text = resolved; + } else if (copyContent.fallbackText) { + text = copyContent.fallbackText; + } else { + continue; + } + } else if (copyContent.data instanceof Blob) { + // Try to read blob as text if it's a text type + if (copyContent.data.type.startsWith("text/")) { + text = await copyContent.data.text(); + } else if (copyContent.fallbackText) { + text = copyContent.fallbackText; + } else { + continue; + } } else if (copyContent.fallbackText) { text = copyContent.fallbackText; } else { continue; }autogpt_platform/frontend/src/components/contextual/OutputRenderers/components/OutputActions.tsx-10-18 (1)
10-18: Wire `className` into the root container.
`className` is accepted but never applied, so consumers can't style the wrapper.
🐛 Proposed fix
-export function OutputActions({ - items, - isPrimary = false, -}: OutputActionsProps) { +export function OutputActions({ + items, + isPrimary = false, + className, +}: OutputActionsProps) { ... - return ( - <div className="flex items-center gap-3"> + return ( + <div className={cn("flex items-center gap-3", className)}>Also applies to: 67-68
autogpt_platform/frontend/src/components/contextual/OutputRenderers/index.ts-9-15 (1)
9-15: Add duplicate-prevention logic to the renderer registry to ensure idempotency. The `register()` method in `OutputRendererRegistry` lacks duplicate checks. If the module is re-evaluated (e.g., during HMR in development), all 6 renderers will be re-registered, adding duplicates to the array. Guard against this by checking if a renderer with the same name already exists before registration, or wrap registrations in an initialization guard.
autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/MarkdownRenderer.tsx-145-152 (1)
145-152: Stateful regex with `g`/`m` flags can cause intermittent detection failures. Regex patterns with the `g` flag (lines 22, 30-34) maintain `lastIndex` state between `.test()` calls. On repeated invocations of `canRenderMarkdown`, the pattern may start matching from a non-zero index, causing false negatives.
Suggested fix: reset lastIndex before testing
```diff
 for (const pattern of markdownPatterns) {
+  pattern.lastIndex = 0; // Reset stateful regex
   if (pattern.test(value)) {
     matchCount++;
     if (matchCount >= requiredMatches) {
       return true;
     }
   }
 }
```
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/helpers.ts-254-301 (1)
254-301: Fix singular grammar in credentials message.
For a single credential, the message currently says “1 credentials.”
✏️ Suggested tweak
```diff
-  return {
+  const countLabel =
+    credentials.length === 1
+      ? "1 credential"
+      : `${credentials.length} credentials`;
+  return {
     type: "credentials_needed",
     toolName,
     credentials,
-    message: `To run ${agentName}, you need to add ${credentials.length === 1 ? "credentials" : `${credentials.length} credentials`}.`,
+    message: `To run ${agentName}, you need to add ${countLabel}.`,
     agentName,
     timestamp: new Date(),
   };
```
autogpt_platform/frontend/src/components/contextual/Chat/usePageContext.ts-29-33 (1)
29-33: Regex order makes the second replacement ineffective. Line 31 replaces all whitespace (including newlines) with single spaces, so line 32's `\n\s*\n` pattern will never match. If you want to preserve paragraph breaks, swap the order:
♻️ Suggested fix
```diff
 // Clean up whitespace
 const cleanedContent = content
-  .replace(/\s+/g, " ")
-  .replace(/\n\s*\n/g, "\n")
+  .replace(/\n\s*\n/g, "\n") // Collapse multiple blank lines first
+  .replace(/[ \t]+/g, " ") // Then collapse horizontal whitespace
   .trim();
```
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/createStreamEventDispatcher.ts-17-58 (1)
17-58: Add missing handler for the `credentials_needed` chunk type. The `StreamChunk` type includes `credentials_needed` as a valid chunk type and it's listed in `LEGACY_STREAM_TYPES`, but the dispatcher has no case for it. If the backend emits this event, it will fall through to the default case and only log a warning, missing an opportunity to handle credentials prompts properly.
Suggested handler
case "login_needed": case "need_login": handleLoginNeeded(chunk, deps); break; + case "credentials_needed": + // TODO: Handle credentials_needed - prompt user for credentials + console.warn("Credentials needed event not yet implemented:", chunk); + break; + case "stream_end":autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/useChatContainer.ts-16-25 (1)
16-25: Unused parameter `onRefreshSession` in hook args. The `onRefreshSession` is defined in `UseChatContainerArgs` but is not destructured or used in the hook implementation. Either remove it from the interface or implement its usage.
🐛 Either remove or use the parameter
If unused, remove from interface:
```diff
 interface UseChatContainerArgs {
   sessionId: string | null;
   initialMessages: SessionDetailResponse["messages"];
-  onRefreshSession: () => Promise<void>;
 }

 export function useChatContainer({
   sessionId,
   initialMessages,
-}: UseChatContainerArgs) {
+}: Omit<UseChatContainerArgs, 'onRefreshSession'>) {
```
Or if needed, destructure and use it:
```diff
 export function useChatContainer({
   sessionId,
   initialMessages,
+  onRefreshSession,
 }: UseChatContainerArgs) {
```
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/useChatContainer.handlers.ts-91-96 (1)
91-96: Add defensive checks for optional properties.
`chunk.result` and `chunk.tool_id` are optional per the `StreamChunk` type, but non-null assertions are used here. If these are undefined, `parseToolResponse` will receive invalid arguments.
🛡️ Suggested fix
```diff
+ if (!chunk.tool_id || chunk.result === undefined) {
+   console.warn("[Tool Response] Missing tool_id or result:", chunk);
+   return;
+ }
  const responseMessage = parseToolResponse(
-   chunk.result!,
-   chunk.tool_id!,
+   chunk.result,
+   chunk.tool_id,
    toolName,
    new Date(),
  );
```
autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/useChatContainer.handlers.ts-216-223 (1)
216-223: Error is not surfaced to the user. The error message is logged to console but not added to chat messages. Users won't see what went wrong in the chat UI. Consider adding an error message to the chat or displaying a toast notification.
💡 Suggested approach to surface errors
export function handleError(chunk: StreamChunk, deps: HandlerDependencies) { const errorMessage = chunk.message || chunk.content || "An error occurred"; console.error("Stream error:", errorMessage); + + const errorChatMessage: ChatMessageData = { + type: "message", + role: "assistant", + content: `⚠️ ${errorMessage}`, + timestamp: new Date(), + }; + deps.setMessages((prev) => [...prev, errorChatMessage]); + deps.setIsStreamingInitiated(false); deps.setHasTextChunks(false); deps.setStreamingChunks([]); deps.streamingChunksRef.current = []; }Alternatively, if error handling is intentionally done elsewhere, consider adding a comment documenting this design decision.
Resolved review threads (files):
- autogpt_platform/frontend/src/components/contextual/Chat/components/ChatContainer/helpers.ts
- ...nd/src/components/contextual/Chat/components/ChatCredentialsSetup/useChatCredentialsSetup.ts (outdated)
- ...platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx
- ...m/frontend/src/components/contextual/Chat/components/StreamingMessage/useStreamingMessage.ts (outdated)
- ...ntend/src/app/(platform)/chat/components/Chat/components/ThinkingMessage/ThinkingMessage.tsx
- autogpt_platform/frontend/src/components/contextual/OutputRenderers/index.ts
- autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/ImageRenderer.tsx
- autogpt_platform/frontend/src/components/contextual/RunAgentInputs/RunAgentInputs.tsx (two threads)
- autogpt_platform/frontend/src/components/contextual/RunAgentInputs/useRunAgentInputs.ts
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AuthPromptWidget/AuthPromptWidget.tsx (1)
29-38: Use the `next` parameter instead of `returnUrl` to match login/signup page expectations. The AuthPromptWidget passes a `returnUrl` query parameter (lines 38, 50) but the login and signup pages read from the `next` parameter (useLoginPage.ts:25, useSignupPage.ts:25). This causes the redirect to fail silently: users will be sent to `/marketplace` instead of returning to the chat after authentication.
Additionally, the unconditional `?session_id=` append could break if `returnUrl` contained existing query params. Use `URLSearchParams` or `URL.searchParams` for safe query string composition, and ensure the parameter value is a relative same-origin path.
✅ Suggested fix
function handleSignIn() { if (typeof window !== "undefined") { localStorage.setItem("pending_chat_session", sessionId); if (agentInfo) { localStorage.setItem("pending_agent_setup", JSON.stringify(agentInfo)); } } - const returnUrlWithSession = `${returnUrl}?session_id=${sessionId}`; - const encodedReturnUrl = encodeURIComponent(returnUrlWithSession); - router.push(`/login?returnUrl=${encodedReturnUrl}`); + const safeReturnUrl = returnUrl.startsWith("/") ? returnUrl : "/chat"; + const url = new URL(safeReturnUrl, window.location.origin); + url.searchParams.set("session_id", sessionId); + const nextUrl = encodeURIComponent(`${url.pathname}${url.search}`); + router.push(`/login?next=${nextUrl}`); } function handleSignUp() { if (typeof window !== "undefined") { localStorage.setItem("pending_chat_session", sessionId); if (agentInfo) { localStorage.setItem("pending_agent_setup", JSON.stringify(agentInfo)); } } - const returnUrlWithSession = `${returnUrl}?session_id=${sessionId}`; - const encodedReturnUrl = encodeURIComponent(returnUrlWithSession); - router.push(`/signup?returnUrl=${encodedReturnUrl}`); + const safeReturnUrl = returnUrl.startsWith("/") ? returnUrl : "/chat"; + const url = new URL(safeReturnUrl, window.location.origin); + url.searchParams.set("session_id", sessionId); + const nextUrl = encodeURIComponent(`${url.pathname}${url.search}`); + router.push(`/signup?next=${nextUrl}`); }Also applies to: 41-50
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChat.ts (1)
56-66: Silent error swallowing may hide issues. The `sendStreamMessage` call uses an empty callback and the `.catch(() => {})` silently discards any errors. If the login notification fails, there's no feedback or logging. Consider at minimum logging the error for debugging purposes.
🔧 Suggested improvement
```diff
 claimSession(sessionIdFromHook)
   .then(() => {
     sendStreamMessage(
       sessionIdFromHook,
       "User has successfully logged in.",
       () => {},
       false,
-    ).catch(() => {});
+    ).catch((err) => {
+      console.warn("Failed to send login notification:", err);
+    });
   })
```
🤖 Fix all issues with AI agents
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/ChatDrawer.tsx`:
- Around line 68-72: The Close button inside ChatDrawer (headerActions)
currently uses <button aria-label="Close" onClick={close} className="size-8">
with no visible hover/focus styles; update the button styling to match other
header buttons by adding accessible hover and focus states (e.g., hover
background/foreground change and a focus-visible outline or ring) and ensure
keyboard users see a clear focus indicator; target the element referenced by
headerActions / the close handler close and the X icon to apply the same utility
classes or CSS module used by other header buttons so hover/focus behavior is
consistent and meets accessibility expectations.
- Around line 54-60: The onInteractOutside prop on Drawer.Content (the prop
using onInteractOutside={blurBackground ? close : undefined}) is unreliable with
Vaul when modal={false} and is redundant because the custom backdrop (the
element at lines ~42-47 that sets pointerEvents: "auto" and handles clicks when
blurBackground is true) already implements outside-click closing; either remove
the onInteractOutside prop entirely or add a clear inline comment next to the
Drawer.Content usage explaining the Vaul limitation (that onInteractOutside is
flaky for non-modal drawers) and that the custom backdrop is the intended
outside-click handler when blurBackground is true; if you prefer to keep it,
guard it by only passing it when the Drawer is modal to avoid false
expectations.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx`:
- Around line 45-70: The validation and submission ignore default values and
treat every non-hidden field as required; update allRequiredInputsAreSet and
allCredentialsAreSet to merge defaults into inputValues/credentialsValues before
checking, and update handleRun to pass the merged values to onRun; also accept
an optional requiredFields?: string[] prop (or use a provided requiredFields
array) so allRequiredInputsAreSet filters non-hidden fields by requiredFields
instead of assuming every field is required. Specifically modify the functions
allRequiredInputsAreSet, allCredentialsAreSet, canRun, and handleRun to compute
mergedInputValues = { ...defaultsFromSchema, ...inputValues } and
mergedCredentialsValues = { ...defaultsFromCredentialsSchema,
...credentialsValues } and use merged* for validation and passing to onRun, and
change the required-field logic to reference requiredFields when determining
which non-hidden keys must be present.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/helpers.ts`:
- Around line 2-21: The sanitizer removePageContext currently strips any
occurrence of "Page URL:", "Page Content:" and "User Message:" anywhere in the
text; update the regexes in removePageContext to only match these markers at the
start of a line using the multiline flag so legitimate inline user content isn't
removed (e.g., change the patterns to anchor with ^\s* and use the m flag for
the replacements and match), apply the anchored replacement for "Page URL:" and
"Page Content:" and use an anchored match for "User Message:" when extracting
the trailing user text, and keep the same cleanup on the cleaned variable
afterwards.
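A minimal sketch of the anchored variant, assuming the marker strings above and a simplified flow (it drops only whole marker lines rather than reproducing the original multi-line stripping):

```typescript
// Only strip markers that start a line (multiline flag), so legitimate inline
// user text containing "Page URL:" is not removed.
export function removePageContext(text: string): string {
  // If an anchored "User Message:" marker exists, everything after it is the user's text.
  const userMessageMatch = text.match(/^\s*User Message:\s*([\s\S]*)$/m);
  if (userMessageMatch) {
    return userMessageMatch[1].trim();
  }

  // Otherwise drop only whole lines that begin with the context markers.
  const cleaned = text
    .replace(/^\s*Page URL:.*$/gm, "")
    .replace(/^\s*Page Content:.*$/gm, "");

  return cleaned.trim();
}
```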
- Around line 303-374: The inputSchema currently sets per-property required
booleans which RJSF v6 (Draft-07) ignores; inside extractInputsNeeded, change
how inputSchema is built by creating a properties object (e.g., properties[name]
= { title, description, type, default, enum, format } ) and collect required
property names into a requiredProps string[] during inputs.forEach; after the
loop set inputSchema to an object with type: "object", properties, and include
required: requiredProps only if it has entries, removing per-property required
flags so the schema complies with Draft-07 for RJSF v6.
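A sketch of the rebuilt schema, assuming a hypothetical `AgentInput` shape that carries the fields listed above:

```typescript
interface AgentInput {
  name: string;
  title?: string;
  description?: string;
  type?: string;
  default?: unknown;
  enum?: unknown[];
  format?: string;
  required?: boolean;
}

// Draft-07 / RJSF v6: "required" is an array on the parent object,
// not a boolean on each property.
export function buildInputSchema(inputs: AgentInput[]) {
  const properties: Record<string, object> = {};
  const requiredProps: string[] = [];

  inputs.forEach((input) => {
    properties[input.name] = {
      title: input.title,
      description: input.description,
      type: input.type,
      default: input.default,
      enum: input.enum,
      format: input.format,
    };
    if (input.required) {
      requiredProps.push(input.name);
    }
  });

  return {
    type: "object" as const,
    properties,
    ...(requiredProps.length > 0 ? { required: requiredProps } : {}),
  };
}
```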
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx`:
- Around line 1-5: The import of BlockIOCredentialsSubSchema from
"@/lib/autogpt-server-api" is deprecated for frontend use; update
ChatCredentialsSetup.tsx to use the generated OpenAPI frontend types (or a local
equivalent) instead: remove the import of BlockIOCredentialsSubSchema and
replace all references with the appropriate generated type (or a new local type)
used by the frontend API client, ensuring the component (ChatCredentialsSetup,
CredentialsInput) type annotations are updated accordingly and the deprecated
module is no longer imported.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx`:
- Around line 47-58: The Send button in ChatInput lacks an explicit type which
defaults to "submit" in forms; update the JSX for the button element inside the
ChatInput component (the element using onClick={handleSend}) to include
type="button" to prevent accidental form submissions when ChatInput is rendered
inside a form.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatLoadingState/ChatLoadingState.tsx`:
- Around line 4-16: The ChatLoadingState component declares a message prop in
ChatLoadingStateProps but never uses it; either remove message from the
ChatLoadingStateProps and from the ChatLoadingState parameter list to keep the
API minimal, or render the message (for example, under LoadingSpinner) inside
ChatLoadingState so the prop is actually displayed; update the
ChatLoadingStateProps, the ChatLoadingState function signature, and the
component JSX (referencing ChatLoadingStateProps, ChatLoadingState, message,
LoadingSpinner, className, and cn) accordingly to keep types and implementation
consistent.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatMessage/ChatMessage.tsx`:
- Around line 168-198: The branch in ChatMessage.tsx that handles message.type
values "no_results", "agent_carousel", and "execution_started" drops their
payloads and only renders a generic ToolResponseMessage; update the rendering
logic so those types pass their specific payloads or use dedicated components:
detect each type (message.type === "no_results", "agent_carousel",
"execution_started") and either (a) call ToolResponseMessage with the
appropriate props (e.g., pass message.message, message.agents,
message.executionId or message.result) or (b) render new specialized components
(e.g., NoResultsMessage, AgentCarouselMessage, ExecutionStartedMessage) that
consume the payload; ensure getToolActionPhrase is only used for actual tool
responses and keep the existing agent_output parsing logic intact.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageList/MessageList.tsx`:
- Around line 1-6: This file uses React hooks (useMessageList) and must be a
Next.js client component — add the "use client" directive as the very first line
of the file (before any imports) so hooks work correctly; update the top of the
MessageList.tsx file to include the directive and keep existing imports for cn,
ChatMessage, StreamingMessage, ThinkingMessage and useMessageList unchanged.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/QuickActionsWelcome/QuickActionsWelcome.tsx`:
- Around line 1-3: This file is missing the "use client" directive required for
interactive client components; add the literal string "use client" as the very
first line of QuickActionsWelcome.tsx (before any imports) so the
QuickActionsWelcome component and its onClick handlers are treated as a client
component.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts`:
- Around line 153-163: stopStreaming and another reset are zeroing
retryCountRef.current which causes each retry to start at 0 and bypass
MAX_RETRIES; remove resetting of retryCountRef.current in stopStreaming (and the
other spot that unconditionally sets it to 0) and instead only reset
retryCountRef.current when a brand-new stream is initiated or after a
successful/terminal completion in the start/handle stream logic (reference
retryCountRef, stopStreaming, the start/stream retry path that checks
MAX_RETRIES, and MAX_RETRIES) so retries increment across attempts and
eventually hit the limit.
In
`@autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/usePageContext.ts`:
- Around line 19-38: The current page text extraction (in usePageContext: clone,
scripts, body, cleanedContent) can leak sensitive inputs and produce huge
payloads; update the logic to first remove/ignore sensitive elements
(querySelectorAll for input, textarea, [contenteditable], password fields, form
elements, and any elements with a data-sensitive attribute) and strip value/text
from inputs rather than their displayed values, then proceed to extract text;
finally enforce a maximum content size (e.g., MAX_CONTENT_CHARS) and truncate
cleanedContent to that limit (adding an ellipsis or marker), so the returned
content is both privacy-hardened and bounded.
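A sketch of the hardened extraction; `MAX_CONTENT_CHARS`, the selector list, and the function name are assumptions illustrating the approach:

```typescript
const MAX_CONTENT_CHARS = 8000; // assumed limit

export function extractSafePageText(): string {
  const clone = document.body.cloneNode(true) as HTMLElement;

  // Drop scripts plus anything that may echo sensitive user input.
  clone
    .querySelectorAll(
      "script, style, input, textarea, select, form, [contenteditable], [data-sensitive]",
    )
    .forEach((el) => el.remove());

  const cleaned = (clone.textContent ?? "")
    .replace(/\n\s*\n/g, "\n")
    .replace(/[ \t]+/g, " ")
    .trim();

  // Bound the payload size and mark the truncation.
  return cleaned.length > MAX_CONTENT_CHARS
    ? `${cleaned.slice(0, MAX_CONTENT_CHARS)}…`
    : cleaned;
}
```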
♻️ Duplicate comments (3)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx (1)
29-45: Connect the sr-only hint via `aria-describedby`. This was already raised previously: add `aria-describedby="chat-input-hint"` to the `<Input />` and ensure the underlying textarea forwards `aria-describedby`.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ThinkingMessage/ThinkingMessage.tsx (1)
47-59: Replace raw CSS loader/shimmer with Tailwind or design-system spinner.
The current `.loader` + inline animation violates the Tailwind-only styling rule. Based on learnings, use Tailwind utilities or a design-system spinner.
Proposed change
-import { cn } from "@/lib/utils"; +import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner"; +import { cn } from "@/lib/utils"; ... {showSlowLoader ? ( <div className="flex flex-col items-center gap-3 py-2"> - <div className="loader" style={{ flexShrink: 0 }} /> + <LoadingSpinner size="small" /> <p className="text-sm text-slate-700"> Taking a bit longer to think, wait a moment please </p> </div> ) : ( - <span - className="inline-block bg-gradient-to-r from-neutral-400 via-neutral-600 to-neutral-400 bg-clip-text text-transparent" - style={{ - backgroundSize: "200% 100%", - animation: "shimmer 2s ease-in-out infinite", - }} - > + <span className="inline-block animate-pulse text-slate-600"> Thinking... </span> )}autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts (1)
324-334: Retries can duplicate persisted user messages. The retry path re-calls `sendMessage` (lines 324-334). If the backend persists the user message before streaming begins, each retry can append duplicate messages. Consider an idempotency key or retry only before the server persists the message.
🧹 Nitpick comments (21)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/QuickActionsWelcome/QuickActionsWelcome.tsx (1)
38-48: Hoist the `theme` object outside the map callback. The `theme` object is identical for every action but is recreated on each iteration. Moving it outside the loop (or outside the component entirely as a constant) eliminates unnecessary object allocations per render.
Proposed refactor
+const actionTheme = { + bg: "bg-slate-50/10", + border: "border-slate-100", + hoverBg: "hover:bg-slate-50/20", + hoverBorder: "hover:border-slate-200", + gradient: "from-slate-200/20 via-slate-300/10 to-transparent", + text: "text-slate-900", + hoverText: "group-hover:text-slate-900", +}; + export function QuickActionsWelcome({ ... }: QuickActionsWelcomeProps) { return ( ... <div className="grid gap-3 sm:grid-cols-2"> {actions.map((action) => { - // Use slate theme for all cards - const theme = { - bg: "bg-slate-50/10", - border: "border-slate-100", - hoverBg: "hover:bg-slate-50/20", - hoverBorder: "hover:border-slate-200", - gradient: "from-slate-200/20 via-slate-300/10 to-transparent", - text: "text-slate-900", - hoverText: "group-hover:text-slate-900", - }; - return ( <button ... - theme.bg, - theme.border, + actionTheme.bg, + actionTheme.border, ...autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ExecutionStartedMessage/ExecutionStartedMessage.tsx (1)
81-87: Consider adding explicit color class for consistency.The
The `Text` component on line 83 relies on CSS color inheritance from the parent div's `text-green-600`, whereas all other `Text` components in this file specify explicit color classes (e.g., `text-green-900`, `text-green-800`). While inheritance works correctly here, adding an explicit class would improve consistency and make the styling intent clearer.
💅 Optional: Add explicit color class
```diff
 <div className="flex items-center gap-2 text-green-600">
   <Play size={16} weight="fill" />
-  <Text variant="small">
+  <Text variant="small" className="text-green-600">
     Your agent is now running. You can monitor its progress in the
     monitor page.
   </Text>
 </div>
```
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/NoResultsMessage/NoResultsMessage.tsx (1)
19-53: Consider replacing hardcoded gray palette with design tokens.With dark-mode branches removed, this is a good moment to migrate these
`bg-gray-*`/`text-gray-*` classes to semantic design tokens for consistency and easier future theming. As per coding guidelines, use design tokens for frontend styling.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (3)
79-89: Unconventional state read pattern.Using
setMessagessolely to read current state (returningprevunchanged) is an anti-pattern. While it works because React's setState callback is synchronous, it's semantically confusing and could behave unexpectedly in concurrent mode.Consider passing messages as a dependency or using a ref to track the latest messages:
♻️ Alternative approach using a messagesRef
// In HandlerDependencies, add: messagesRef: MutableRefObject<ChatMessageData[]>; // Then in handleToolResponse: if (!chunk.tool_name || chunk.tool_name === "unknown") { const matchingToolCall = [...deps.messagesRef.current] .reverse() .find( (msg) => msg.type === "tool_call" && msg.toolId === chunk.tool_id, ); if (matchingToolCall && matchingToolCall.type === "tool_call") { toolName = matchingToolCall.toolName; } }
91-96: Non-null assertions on potentially undefined values.

`chunk.result!` and `chunk.tool_id!` use non-null assertions, but `tool_response` chunks could arrive malformed. Consider adding defensive guards or early returns:

```diff
+if (!chunk.result || !chunk.tool_id) {
+  console.warn("[Tool Response] Missing result or tool_id:", chunk);
+  return;
+}
 const responseMessage = parseToolResponse(
-  chunk.result!,
-  chunk.tool_id!,
+  chunk.result,
+  chunk.tool_id,
   toolName,
   new Date(),
 );
```
33-33: Consider removing or gating verbose console logs for production.

The file contains numerous `console.log` and `console.warn` statements (lines 33, 61-65, 72-76, 133-138, 141-145, 173-175, 184-203, 207, 213, 218). While useful during development, these could clutter browser consoles in production. Consider:

- Removing them before merge, or
- Gating them behind a debug flag (e.g., `if (process.env.NODE_ENV === "development")`), as in the sketch below
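A minimal sketch of such a gate. The `debugLog` helper name is illustrative, not part of the PR:

```ts
// Hypothetical helper: funnel chat debug output through one development-only gate.
const isDev = process.env.NODE_ENV === "development";

function debugLog(...args: unknown[]): void {
  if (isDev) {
    console.log("[Chat]", ...args);
  }
}

// Usage: debugLog("Tool response received", chunk.tool_id);
```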
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/ChatInput.tsx (1)
13-25: Allow a unique `inputId` to avoid DOM collisions.

`useChatInput` relies on `document.getElementById(inputId)`, so hard-coding `"chat-input"` can break if more than one `ChatInput` renders on the page. Consider exposing `inputId` as an optional prop (or generate a unique id) and pass it through.

♻️ Example change
export interface ChatInputProps { onSend: (message: string) => void; disabled?: boolean; placeholder?: string; className?: string; + inputId?: string; } export function ChatInput({ onSend, disabled = false, placeholder = "Type your message...", className, + inputId = "chat-input", }: ChatInputProps) { - const inputId = "chat-input"; const { value, setValue, handleKeyDown, handleSend } = useChatInput({ onSend, disabled, maxRows: 5, inputId, });autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatInput/useChatInput.ts (1)
18-40: Guard the DOM lookup to avoid wrong-element updates.

`document.getElementById` assumes a unique textarea; if the id is duplicated or the element isn't a `<textarea>`, the resize/reset logic can misbehave. Consider guarding the element type (or passing a ref) so the hook only updates the intended textarea.

♻️ Suggested guard to prevent wrong-element updates
- const textarea = document.getElementById(inputId) as HTMLTextAreaElement; - if (!textarea) return; + const textarea = document.getElementById(inputId); + if (!(textarea instanceof HTMLTextAreaElement)) return; textarea.style.height = "auto"; const lineHeight = parseInt( window.getComputedStyle(textarea).lineHeight, 10, ); @@ - const textarea = document.getElementById(inputId) as HTMLTextAreaElement; - if (textarea) { - textarea.style.height = "auto"; - } + const textarea = document.getElementById(inputId); + if (textarea instanceof HTMLTextAreaElement) { + textarea.style.height = "auto"; + }autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/usePageContext.ts (1)
1-1: Add an explicit client boundary for this hook.

This file uses React hooks and `window`/`document`; adding `"use client"` makes the boundary explicit and prevents accidental server imports.

✅ Suggested change
+"use client"; + import { useCallback } from "react";As per coding guidelines, default to client components.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts (2)
16-25: Clarify the `onRefreshSession` contract.

`UseChatContainerArgs` requires `onRefreshSession`, but the hook doesn't consume it. Either wire it into the send/refresh flow (see the sketch below) or remove it from the interface so callers aren't forced to pass an unused callback.
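A rough sketch of the "wire it in" option. The surrounding hook internals and the error-recovery call site are assumptions, not taken from the PR:

```ts
// Sketch only: the real arg type lives in useChatContainer.ts; this just shows the
// callback being consumed rather than declared and ignored.
interface UseChatContainerArgsSketch {
  sessionId: string | null;
  onRefreshSession?: () => Promise<void>;
}

function useChatContainerSketch({
  sessionId,
  onRefreshSession,
}: UseChatContainerArgsSketch) {
  async function recoverFromStreamError(err: unknown): Promise<void> {
    console.error("Stream failed for session", sessionId, err);
    await onRefreshSession?.(); // let the owner refresh or recreate the session
  }

  return { recoverFromStreamError };
}
```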
154-201: Consider guarding against overlapping sends.

If concurrent streams aren't supported, add a defensive early return to avoid resetting streaming state mid-stream.
♻️ Suggested guard
const sendMessage = useCallback( async function sendMessage( content: string, isUserMessage: boolean = true, context?: { url: string; content: string }, ) { + if (isStreaming) { + return; + } if (!sessionId) { console.error("Cannot send message: no session ID"); return; } @@ }, - [sessionId, sendStreamMessage], + [isStreaming, sessionId, sendStreamMessage], );autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatLoadingState/ChatLoadingState.tsx (1)
1-2: Add `"use client"` if this component is consumed by client components.

This avoids RSC boundary issues and aligns with the default-to-client guideline. Based on learnings, default to client components unless there's a server-only reason.

Proposed change
+"use client"; + import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner"; import { cn } from "@/lib/utils";autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/ChatContainer.tsx (2)
31-38: Consider a function declaration for `sendMessageWithContext`.

Lines 31-38 define a non-inline handler as an arrow function; frontend guidelines prefer function declarations for handlers. Consider switching to a named `function` or a named function expression inside `useCallback` to align with the convention, as in the sketch below. As per coding guidelines, use function declarations for handlers.
48-56: Move the inline background pattern into Tailwind utilities/classes.

Lines 48-56 use inline styles, which bypass the Tailwind-only styling guideline. Consider converting these to Tailwind arbitrary values or a reusable class.
♻️ Proposed Tailwind-only variant
- <div - className={cn("flex h-full flex-col", className)} - style={{ - backgroundColor: "#ffffff", - backgroundImage: - "radial-gradient(`#e5e5e5` 0.5px, transparent 0.5px), radial-gradient(`#e5e5e5` 0.5px, `#ffffff` 0.5px)", - backgroundSize: "20px 20px", - backgroundPosition: "0 0, 10px 10px", - }} - > + <div + className={cn( + "flex h-full flex-col bg-white " + + "[background-image:radial-gradient(`#e5e5e5_0.5px`,transparent_0.5px),radial-gradient(`#e5e5e5_0.5px`,`#ffffff_0.5px`)] " + + "[background-size:20px_20px] [background-position:0_0,10px_10px]", + className, + )} + >As per coding guidelines, use Tailwind-only styling.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts (1)
193-209: Confirm whether streaming should bypass generated API hooks.

Lines 193-209 use raw `fetch`; frontend guidelines prefer generated API hooks. If streaming isn't supported by the generated client, consider documenting the exception or wrapping the call in a typed helper. As per coding guidelines, prefer generated API hooks for data fetching.

autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageBubble/MessageBubble.tsx (1)
15-27: Consider moving theme objects outside the component.

The `userTheme` and `assistantTheme` objects are recreated on every render. Since these are static values, extracting them as module-level constants would avoid unnecessary object allocations.

♻️ Suggested refactor
+const userTheme = { + bg: "bg-slate-900", + border: "border-slate-800", + gradient: "from-slate-900/30 via-slate-800/20 to-transparent", + text: "text-slate-50", +}; + +const assistantTheme = { + bg: "bg-slate-50/20", + border: "border-slate-100", + gradient: "from-slate-200/20 via-slate-300/10 to-transparent", + text: "text-slate-900", +}; + export function MessageBubble({ children, variant, className, }: MessageBubbleProps) { - const userTheme = { - bg: "bg-slate-900", - border: "border-slate-800", - gradient: "from-slate-900/30 via-slate-800/20 to-transparent", - text: "text-slate-50", - }; - - const assistantTheme = { - bg: "bg-slate-50/20", - border: "border-slate-100", - gradient: "from-slate-200/20 via-slate-300/10 to-transparent", - text: "text-slate-900", - }; - const theme = variant === "user" ? userTheme : assistantTheme;autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (2)
122-153: Unused mutation hook in dependency array.

`createSessionMutation` is listed in the dependency array but the function calls `postV2CreateSession` directly instead. Either use the mutation hook for consistency with the React Query pattern or remove it from dependencies.

♻️ Suggested fix - use the mutation hook
const createSession = useCallback( async function createSession() { try { setError(null); - const response = await postV2CreateSession({ - body: JSON.stringify({}), - }); + const response = await createSessionMutation({ + body: JSON.stringify({}), + }); if (response.status !== 200) {Or remove from dependencies if direct call is intentional:
- [createSessionMutation], + [],
224-245: Consider extracting the 404 detection logic.

The 404 error detection spans multiple conditions checking both `err.status` and `err.response.status`. This verbose pattern could be extracted into a helper function for reusability and readability.

♻️ Suggested helper extraction
// In helpers.ts export function isNotFoundError(err: unknown): boolean { if (typeof err !== "object" || err === null) return false; if ("status" in err && err.status === 404) return true; if ( "response" in err && typeof err.response === "object" && err.response !== null && "status" in err.response && err.response.status === 404 ) { return true; } return false; }Then in claimSession:
- const is404 = - (typeof err === "object" && - err !== null && - "status" in err && - err.status === 404) || - (typeof err === "object" && - err !== null && - "response" in err && - typeof err.response === "object" && - err.response !== null && - "status" in err.response && - err.response.status === 404); - if (!is404) { + if (!isNotFoundError(err)) {autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/Chat.tsx (1)
47-58: Prefer function declarations for handlers (repo standard).

Convert `handleNewChat`/`handleSelectSession` to function declarations to match the frontend guidelines.

♻️ Suggested refactor
- const handleNewChat = () => { + function handleNewChat() { clearSession(); onNewChat?.(); - }; + } - const handleSelectSession = async (sessionId: string) => { + async function handleSelectSession(sessionId: string) { try { await loadSession(sessionId); } catch (err) { console.error("Failed to load session:", err); } - }; + }As per coding guidelines, prefer function declarations for handlers.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/MessageList/MessageList.tsx (1)
41-99: Avoid `index` as the React key for messages.

Using the array index can cause incorrect reuse when messages are inserted/filtered (e.g., when agent_output messages are skipped). Prefer a stable identifier (toolId, timestamp, or a derived stable key), as in the sketch below.
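A sketch of one way to derive a stable key. The field names follow the message shapes discussed above; the fallback scheme is illustrative, not the component's actual implementation:

```tsx
// Illustrative key derivation: prefer an intrinsic id, fall back to type + timestamp,
// and only use the index as a last resort.
type ChatMessageLike = {
  type: string;
  toolId?: string;
  timestamp?: string | Date;
};

function getMessageKey(message: ChatMessageLike, index: number): string {
  if (message.toolId) return `${message.type}-${message.toolId}`;
  if (message.timestamp) {
    return `${message.type}-${new Date(message.timestamp).getTime()}`;
  }
  return `${message.type}-${index}`;
}

// Usage in the list:
// {messages.map((m, i) => <ChatMessage key={getMessageKey(m, i)} message={m} />)}
```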
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatMessage/ChatMessage.tsx (1)
194-197: Pass raw toolName into `ToolResponseMessage`.

`ToolResponseMessage` already applies `getToolActionPhrase` and Title Case formatting. Passing a pre-formatted phrase can double-transform and break the snake_case formatting path.

♻️ Suggested refactor
- <ToolResponseMessage - toolName={getToolActionPhrase(message.toolName)} - result={message.type === "tool_response" ? message.result : undefined} - /> + <ToolResponseMessage + toolName={message.toolName} + result={message.type === "tool_response" ? message.result : undefined} + />- <ToolResponseMessage - toolName={ - agentOutput.toolName - ? getToolActionPhrase(agentOutput.toolName) - : "Agent Output" - } - result={agentOutput.result} - /> + <ToolResponseMessage + toolName={agentOutput.toolName ?? "Agent Output"} + result={agentOutput.result} + />Also applies to: 233-238
Resolved review threads:

- autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/ChatDrawer.tsx (2 threads, outdated)
- ...end/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx (outdated)
- ...latform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/helpers.ts (2 threads)
- ...form/frontend/src/app/(platform)/chat/components/Chat/components/ChatMessage/ChatMessage.tsx
- ...form/frontend/src/app/(platform)/chat/components/Chat/components/MessageList/MessageList.tsx
- ...c/app/(platform)/chat/components/Chat/components/QuickActionsWelcome/QuickActionsWelcome.tsx
- autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatStream.ts
- autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/usePageContext.ts (outdated)
- autogpt_platform/frontend/src/components/contextual/RunAgentInputs/RunAgentInputs.tsx (2 threads, outdated)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (1)
91-96: Non-null assertion on `chunk.result` may cause runtime error.

At line 92, `chunk.result!` uses a non-null assertion, but if `chunk.result` is `undefined`, `parseToolResponse` will receive `undefined` as a string, potentially causing unexpected behavior or errors.

Suggested fix
+ if (!chunk.result) { + console.warn("[Tool Response] No result in chunk:", chunk.tool_id); + return; + } const responseMessage = parseToolResponse( - chunk.result!, - chunk.tool_id!, + chunk.result, + chunk.tool_id ?? `unknown-${Date.now()}`, toolName, new Date(), );autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (1)
61-65:createSessionMutationis unused; dependency array is incorrect.
usePostV2CreateSessionis destructured at lines 62-65, butcreateSessionMutationis never called. ThecreateSessioncallback at line 126 usespostV2CreateSessiondirectly instead, yet the dependency array at line 152 includescreateSessionMutation.This is inconsistent: either use the mutation hook (which provides better React Query integration with automatic cache invalidation) or remove the unused hook and fix the dependency array.
Option 1: Use the mutation hook (recommended)
const createSession = useCallback( async function createSession() { try { setError(null); - const response = await postV2CreateSession({ - body: JSON.stringify({}), - }); - if (response.status !== 200) { - throw new Error("Failed to create session"); - } - const newSessionId = response.data.id; + const response = await createSessionMutation({ + body: JSON.stringify({}), + }); + const newSessionId = response.id; setSessionId(newSessionId); ... } }, [createSessionMutation], );Option 2: Remove unused hook and fix dependency
- const { - mutateAsync: createSessionMutation, - isPending: isCreating, - error: createError, - } = usePostV2CreateSession(); + const [isCreating, setIsCreating] = useState(false); + const [createError, setCreateError] = useState<Error | null>(null); // ... in createSession callback: const createSession = useCallback( async function createSession() { + setIsCreating(true); try { ... } catch (err) { + setCreateError(err instanceof Error ? err : new Error("...")); ... + } finally { + setIsCreating(false); } }, - [createSessionMutation], + [], );Also applies to: 122-153
🤖 Fix all issues with AI agents
In `autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx`:
- Around line 8-11: The code imports deprecated schema types BlockIOSubSchema
and BlockIOCredentialsSubSchema from "@/lib/autogpt-server-api/types"; extract
and define these schema descriptor types into a new non-deprecated module (e.g.,
src/types/block-schema.ts), export them as BlockIOSubSchema and
BlockIOCredentialsSubSchema, then update imports in AgentInputsSetup.tsx (and
other files that import these types) to import from the new module; do the same
for CredentialsMetaInput referenced in useAgentInputsSetup.ts by moving or
re-exporting it from a non-deprecated types file or coordinating with
Orval-generated models, and run a repository-wide replace to update all ~52
affected files to the new import path.
In `autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts`:
- Around line 22-25: The hook useChatContainer currently ignores the
onRefreshSession field declared in UseChatContainerArgs; either remove
onRefreshSession from the UseChatContainerArgs type if unused, or destructure it
from the function signature (add onRefreshSession to the parameter list
alongside sessionId and initialMessages) and call it where session refresh logic
occurs (for example after session-updating effects or error recovery flows
inside useChatContainer) so the callback is invoked when a session refresh is
needed; update any callers/types accordingly to keep the signature consistent.
In `autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx`:
- Around line 6-7: Replace the deeply nested relative imports for the
CredentialsInput module with the project path alias; specifically update the
imports that reference CredentialsInput and isSystemCredential so they use the
"@/components/contextual/CredentialsInput/CredentialsInput" and
"@/components/contextual/CredentialsInput/helpers" module paths (referencing the
CredentialsInput component and isSystemCredential helper) to match the existing
alias usage in this file.
In `autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx`:
- Line 6: The import for CredentialsInput in SelectedTriggerView.tsx uses a long
relative path and should be replaced with the project path-alias; update the
import of CredentialsInput to use the '@/...' alias consistent with the other
imports in this file (replace the
"../../../../../../../../../../components/contextual/CredentialsInput/CredentialsInput"
import with the aliased path, e.g.
'@/components/contextual/CredentialsInput/CredentialsInput') so the
CredentialsInput symbol is imported via the alias instead of deep relative
traversal.
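For illustration, the alias form of that import would look roughly like this. This is a sketch of the instruction above, not a verbatim diff from the PR:

```diff
-import { CredentialsInput } from "../../../../../../../../../../components/contextual/CredentialsInput/CredentialsInput";
+import { CredentialsInput } from "@/components/contextual/CredentialsInput/CredentialsInput";
```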
♻️ Duplicate comments (1)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsx (1)
45-70: Defaults shown in the UI aren't used for validation or submission.

The form renders schema defaults, but `allRequiredInputsAreSet` and `onRun` only use `inputValues`, so defaults can block `canRun` and aren't sent unless the user edits a field. Also, all non-hidden fields are treated as required. This matches a prior review comment. One option is to merge defaults into the effective values, as sketched below.

Also applies to: 95-101
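A minimal sketch of folding schema defaults into the effective values before validation and submission. The helper and property shapes here are illustrative, not the component's actual API:

```ts
// Hypothetical helper: overlay user-entered values on top of schema defaults so that
// canRun checks and the run payload both see the same effective inputs.
type JsonSchemaProperty = { default?: unknown; hidden?: boolean };

function getEffectiveInputs(
  properties: Record<string, JsonSchemaProperty>,
  inputValues: Record<string, unknown>,
): Record<string, unknown> {
  const withDefaults: Record<string, unknown> = {};
  for (const [key, prop] of Object.entries(properties)) {
    if (prop.default !== undefined) {
      withDefaults[key] = prop.default;
    }
  }
  // User-entered values take precedence over defaults.
  return { ...withDefaults, ...inputValues };
}
```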
🧹 Nitpick comments (6)
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsx (1)
25-25: Prefer a path alias over the deep relative import.
The long relative path is brittle and hard to scan; using the `@/` alias keeps imports consistent and resilient to folder moves.

♻️ Suggested change
-import { getSystemCredentials } from "../../../../../../../../../../components/contextual/CredentialsInput/helpers"; +import { getSystemCredentials } from "@/components/contextual/CredentialsInput/helpers";autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx (1)
7-7: Use path alias `@/` instead of deep relative traversal.

The import uses 10 levels of `../`, which is fragile and inconsistent with other imports in this file (lines 3-6 all use `@/`).

♻️ Suggested fix
-import { CredentialsInput } from "../../../../../../../../../../components/contextual/CredentialsInput/CredentialsInput"; +import { CredentialsInput } from "@/components/contextual/CredentialsInput/CredentialsInput";autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx (1)
135-143: Consider using function declarations for handlers.

Per coding guidelines, function declarations are preferred over arrow functions for handlers. However, this is a minor stylistic concern.
Suggested refactor
- const handleCredentialSelect = ( - provider: string, - credential?: CredentialsMetaInput, - ) => { + function handleCredentialSelect( + provider: string, + credential?: CredentialsMetaInput, + ) { setSelectedCredentials((prev) => ({ ...prev, [provider]: credential, })); - }; + }Apply the same pattern to
handleCompleteandhandleCancel.autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (1)
77-90: Consider refactoring state-reading pattern insetMessages.Using
setMessagescallback solely to read state (returningprevunchanged) is an anti-pattern. The outertoolNamevariable is mutated inside the callback closure, which works but is unconventional.Alternative approach
Consider passing a messages ref from the dependencies to read current messages directly:
// In HandlerDependencies, add: messagesRef: MutableRefObject<ChatMessageData[]>; // Then use directly: if (!chunk.tool_name || chunk.tool_name === "unknown") { const matchingToolCall = [...deps.messagesRef.current] .reverse() .find((msg) => msg.type === "tool_call" && msg.toolId === chunk.tool_id); if (matchingToolCall && matchingToolCall.type === "tool_call") { toolName = matchingToolCall.toolName; } }autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts (1)
52-54: TODO: Handle usage display.The
usagechunk type is received but not yet processed. Consider implementing usage metrics display or creating a tracking issue.Would you like me to help implement the usage handling or open an issue to track this work?
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (1)
224-245: Consider extracting the 404 detection logic.The 404 detection at lines 227-238 is verbose and handles multiple error response shapes. Consider extracting this into a reusable helper function for clarity and consistency across the codebase.
Suggested extraction
function isNotFoundError(err: unknown): boolean { if (typeof err !== "object" || err === null) return false; if ("status" in err && err.status === 404) return true; if ( "response" in err && typeof err.response === "object" && err.response !== null && "status" in err.response && err.response.status === 404 ) { return true; } return false; }Then use:
if (!isNotFoundError(err)) { ... }
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (34)
autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsxautogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/NodeInputs.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/helpers.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/CredentialsGroupedView/CredentialsGroupedView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/helpers.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/CredentialsInput.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/APIKeyCredentialsModal.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.tsautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialRow/CredentialRow.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialsAccordionView/CredentialsAccordionView.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialsFlatView/CredentialsFlatView.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/CredentialsSelect/CredentialsSelect.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/DeleteConfirmationModal/DeleteConfirmationModal.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/HotScopedCredentialsModal/HotScopedCredentialsModal.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/components/OAuthWaitingModal/OAuthWaitingModal.tsxautogpt_platform/frontend/src/components/contextual/Crede
ntialsInput/components/PasswordCredentialsModal/PasswordCredentialsModal.tsxautogpt_platform/frontend/src/components/contextual/CredentialsInput/helpers.tsautogpt_platform/frontend/src/components/contextual/CredentialsInput/useCredentialsInput.tsautogpt_platform/frontend/src/components/contextual/GoogleDrivePicker/GoogleDrivePicker.tsxautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/lib/utils.ts
💤 Files with no reviewable changes (1)
- autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/helpers.ts
✅ Files skipped from review due to trivial changes (5)
- autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/CredentialsGroupedView/CredentialsGroupedView.tsx
- autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/components/helpers.ts
- autogpt_platform/frontend/src/components/contextual/CredentialsInput/CredentialsInput.tsx
- autogpt_platform/frontend/src/components/contextual/GoogleDrivePicker/GoogleDrivePicker.tsx
- autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/NodeInputs.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatCredentialsSetup/ChatCredentialsSetup.tsx
🧰 Additional context used
📓 Path-based instructions (10)
autogpt_platform/frontend/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx}: Always run pnpm install before frontend development, then use pnpm dev to start development server on port 3000
For frontend code formatting and linting, always run pnpm formatIf adding protected frontend routes, update
frontend/lib/supabase/middleware.ts
autogpt_platform/frontend/**/*.{ts,tsx}: Use generated API hooks from@/app/api/__generated__/endpoints/for data fetching in frontend
Use function declarations (not arrow functions) for components and handlers in frontend
Only use Phosphor Icons in frontend; never use other icon libraries
Never usesrc/components/__legacy__/*or deprecatedBackendAPIin frontend
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/lib/utils.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**/*.{ts,tsx,json}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use Node.js 21+ with pnpm package manager for frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/lib/utils.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/src/**/*.{ts,tsx}: Use generated API hooks from@/app/api/__generated__/endpoints/(generated via Orval from backend OpenAPI spec). Pattern: use{Method}{Version}{OperationName} (e.g., useGetV2ListLibraryAgents). Regenerate with: pnpm generate:api. Never use deprecated BackendAPI or src/lib/autogpt-server-api/*
Use function declarations for components and handlers (not arrow functions). Only arrow functions for small inline lambdas (map, filter, etc.)
Use PascalCase for components, camelCase with use prefix for hooks
No barrel files or index.ts re-exports in frontend
For frontend render errors, use component. For mutation errors, display with toast notifications. For manual exceptions, use Sentry.captureException()
Default to client components (use client). Use server components only for SEO or extreme TTFB needs. Use React Query for server state via generated hooks. Co-locate UI state in components/hooks
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/lib/utils.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**/*.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Format frontend code using
pnpm format
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/lib/utils.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**
📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)
autogpt_platform/frontend/**: Install frontend dependencies usingpnpm iinstead of npm
Generate API client from OpenAPI spec usingpnpm generate:api
Regenerate API client hooks usingpnpm generate:apiwhen OpenAPI spec changes
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/lib/utils.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/**/*.tsx
📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)
Use design system components from
src/components/(atoms, molecules, organisms) in frontend
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/app/**/*.tsx
📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)
Create frontend pages in
src/app/(platform)/feature-name/page.tsxwith correspondingusePageName.tshook and localcomponents/subfolder
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/**/*.{ts,tsx,css}
📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)
Use only Tailwind CSS for styling in frontend, with design tokens and Phosphor Icons
Files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/lib/utils.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.tsautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.tsautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsxautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
autogpt_platform/frontend/src/components/**/*.tsx
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/src/components/**/*.tsx: Separate frontend component render logic from data/behavior. Structure: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts. Small components (3-4 lines) can be inline. Render-only components can be direct files without folders
Use Tailwind CSS utilities only for styling in frontend. Use design system components from src/components/ (atoms, molecules, organisms). Never use src/components/legacy/*
Only use Phosphor Icons (@phosphor-icons/react) for icon components in frontend
Prefer design tokens over hardcoded values in frontend styling
Files:
autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
autogpt_platform/frontend/src/components/**/*.{ts,tsx}
📄 CodeRabbit inference engine (autogpt_platform/CLAUDE.md)
autogpt_platform/frontend/src/components/**/*.{ts,tsx}: Separate render logic from data/behavior in components
Structure frontend components asComponentName/ComponentName.tsxplususeComponentName.tshook plushelpers.tsfile
Files:
autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
🧠 Learnings (18)
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Separate frontend component render logic from data/behavior. Structure: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts. Small components (3-4 lines) can be inline. Render-only components can be direct files without folders
Applied to files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/AgentInputsSetup/AgentInputsSetup.tsxautogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Never use `src/components/__legacy__/*` or deprecated `BackendAPI` in frontend
Applied to files:
autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsxautogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use PascalCase for components, camelCase with use prefix for hooks
Applied to files:
autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsxautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.tsautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.tsautogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use generated API hooks from `@/app/api/__generated__/endpoints/` (generated via Orval from backend OpenAPI spec). Pattern: use{Method}{Version}{OperationName} (e.g., useGetV2ListLibraryAgents). Regenerate with: pnpm generate:api. Never use deprecated BackendAPI or src/lib/autogpt-server-api/*
Applied to files:
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentModal/useAgentRunModal.tsxautogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : No barrel files or index.ts re-exports in frontend
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Use design system components from `src/components/` (atoms, molecules, organisms) in frontend
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Use function declarations (not arrow functions) for components and handlers in frontend
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:48:55.844Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T08:48:55.844Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : If adding protected frontend routes, update `frontend/lib/supabase/middleware.ts`
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{ts,tsx} : Structure frontend components as `ComponentName/ComponentName.tsx` plus `useComponentName.ts` hook plus `helpers.ts` file
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsxautogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTemplateView/SelectedTemplateView.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/src/app/**/*.tsx : Create frontend pages in `src/app/(platform)/feature-name/page.tsx` with corresponding `usePageName.ts` hook and local `components/` subfolder
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Prefer design tokens over hardcoded values in frontend styling
Applied to files:
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Use generated API hooks from `@/app/api/__generated__/endpoints/` for data fetching in frontend
Applied to files:
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Default to client components (use client). Use server components only for SEO or extreme TTFB needs. Use React Query for server state via generated hooks. Co-locate UI state in components/hooks
Applied to files:
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsxautogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Only use Phosphor Icons (phosphor-icons/react) for icon components in frontend
Applied to files:
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.tsx : Use Tailwind CSS utilities only for styling in frontend. Use design system components from src/components/ (atoms, molecules, organisms). Never use src/components/__legacy__/*
Applied to files:
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Frontend UI components use shadcn/ui (Radix UI primitives) with Tailwind CSS styling
Applied to files:
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx
📚 Learning: 2025-11-25T08:48:33.246Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-25T08:48:33.246Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use function declarations for components and handlers (not arrow functions). Only arrow functions for small inline lambdas (map, filter, etc.)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/createStreamEventDispatcher.ts
📚 Learning: 2025-11-25T08:49:03.583Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2025-11-25T08:49:03.583Z
Learning: Frontend state management uses React Query for server state and co-located UI state in components/hooks
Applied to files:
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts
🧬 Code graph analysis (1)
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (1)
autogpt_platform/frontend/src/services/storage/local-storage.ts (1)
storage (44-48)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: test
- GitHub Check: types
- GitHub Check: Check PR Status
🔇 Additional comments (19)
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/OldAgentLibraryView/components/agent-run-draft-view.tsx (1)
32-32: LGTM. The centralized import is clearer.
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedTriggerView/SelectedTriggerView.tsx (1)
28-184: LGTM on component structure and logic. The component correctly follows function declaration patterns, uses ErrorCard for error display, and properly separates loading/error/success states. The credential and input field handling is well-structured.
autogpt_platform/frontend/src/lib/utils.ts (1)
500-504: LGTM! The UUID validation function is well-implemented, with a correct regex pattern for the standard 8-4-4-4-12 format. Centralizing this utility in `@/lib/utils` is a good refactor, making it reusable across the codebase.
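A minimal sketch of what such a centralized validator typically looks like; the actual `isValidUUID` in `@/lib/utils` is not reproduced here, so treat this as illustrative only:

```typescript
// Hypothetical sketch of a UUID validator for the 8-4-4-4-12 format;
// the real isValidUUID in @/lib/utils may differ in detail.
export function isValidUUID(value: string): boolean {
  const uuidPattern =
    /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  return uuidPattern.test(value);
}

// Example usage:
// isValidUUID("123e4567-e89b-12d3-a456-426614174000") -> true
// isValidUUID("not-a-uuid") -> false
```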
autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/nodes/CustomNode/components/WebhookDisclaimer.tsx (1)
3-7: LGTM! The import path change for `isValidUUID` from `@/app/(platform)/chat/helpers` to `@/lib/utils` aligns with the broader centralization refactor in this PR. This improves maintainability by keeping general-purpose utilities in a single location.
autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/AgentInputsReadOnly/AgentInputsReadOnly.tsx (1)
17-90: LGTM! The component logic is well-structured with proper conditional rendering, appropriate use of function declarations, and correct handling of the read-only state for both inputs and credentials.
autogpt_platform/frontend/src/components/renderers/InputRenderer/custom/CredentialField/CredentialField.tsx (1)
3-3: LGTM! The import path consolidation to `@/components/contextual/CredentialsInput/CredentialsInput` aligns with the broader PR effort to centralize the CredentialsInput component.
autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx (1)
3-3: LGTM! The import path update to the centralized contextual module is correct. The component properly follows function declaration patterns as per coding guidelines.
autogpt_platform/frontend/src/app/(platform)/auth/integrations/setup-wizard/page.tsx (2)
8-8: LGTM! The import path consolidation to the centralized contextual module is correct and aligns with the broader PR refactoring effort.
98-98: The setup-wizard route is already properly protected. The `/auth/integrations` path is included in `PROTECTED_PAGES` in `frontend/lib/supabase/helpers.ts`, and the middleware's `isProtectedPage()` function uses `startsWith()` matching, so all subroutes, including `/auth/integrations/setup-wizard`, will require authentication and redirect unauthenticated users to the login page. No additional middleware configuration is needed.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.ts (3)
140-148: LGTM on the fallback message handling. The role type assertion at line 144 is acceptable since explicit checks for `user`, `assistant`, and `tool` roles occur earlier in the control flow, leaving only legitimate "other" roles (like `system`) to reach this branch.
154-201: LGTM on the `sendMessage` implementation. The callback properly validates `sessionId`, manages streaming state, and handles errors with appropriate toast notifications as per coding guidelines. The streaming state cleanup is correctly handled in both the error path (catch block) and the success path (via the dispatcher's `handleStreamEnd`).
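An illustrative sketch of that pattern follows; the names used here (`streamChat`, `setIsStreaming`, `showErrorToast`) are assumptions for the example, not the actual `useChatContainer` API:

```typescript
// Sketch of a sendMessage callback that validates the session, flags
// streaming state, and cleans up on error; success cleanup is assumed to
// happen in the stream-end handler, as described in the review comment.
type StreamFn = (sessionId: string, message: string) => Promise<void>;

function makeSendMessage(
  streamChat: StreamFn,
  setIsStreaming: (value: boolean) => void,
  showErrorToast: (message: string) => void,
) {
  return async function sendMessage(sessionId: string | null, message: string) {
    if (!sessionId) return; // validate the session before doing anything

    setIsStreaming(true);
    try {
      // On success, the stream-end event handler clears the streaming flag.
      await streamChat(sessionId, message);
    } catch {
      // Error path: notify the user and clear streaming state here.
      showErrorToast("Failed to send message");
      setIsStreaming(false);
    }
  };
}
```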
203-210: LGTM on the hook's public API. The return object provides a clean interface with derived `isStreaming` state and proper separation of concerns. The hook follows naming conventions (camelCase with a `use` prefix) as per coding guidelines.
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/components/ChatContainer/useChatContainer.handlers.ts (2)
216-222: LGTM on the `handleError` state cleanup. The error handler comprehensively resets all streaming-related state (`isStreamingInitiated`, `hasTextChunks`, `streamingChunks`, `streamingChunksRef`). The error notification is appropriately handled at the dispatcher level via toast.
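As a rough illustration of that reset, assuming a React state/ref shape similar to the names above (the real handler lives in `useChatContainer.handlers.ts` and may differ):

```typescript
import { useRef, useState } from "react";

// Hypothetical streaming-state hook showing the error-path reset.
function useStreamingState() {
  const [isStreamingInitiated, setIsStreamingInitiated] = useState(false);
  const [hasTextChunks, setHasTextChunks] = useState(false);
  const [streamingChunks, setStreamingChunks] = useState<string[]>([]);
  const streamingChunksRef = useRef<string[]>([]);

  function handleError() {
    // Reset every piece of streaming-related state so the UI returns to idle.
    setIsStreamingInitiated(false);
    setHasTextChunks(false);
    setStreamingChunks([]);
    streamingChunksRef.current = [];
  }

  return { isStreamingInitiated, hasTextChunks, streamingChunks, handleError };
}
```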
107-123: LGTM on the input and credential extraction logic. The conditional properly handles both `run_agent` and `run_block` tool names, with null-safe access to `parsedResult.type`. The independent extraction and addition of inputs and credentials messages is clean.
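A hypothetical sketch of that check; the field names (`toolName`, `parsedResult`, `inputs`, `credentials`) mirror the review comment rather than the exact handler code:

```typescript
interface ParsedResult {
  type?: string;
  inputs?: Record<string, unknown>;
  credentials?: Record<string, unknown>;
}

// Only run_agent and run_block tool results carry inputs/credentials;
// parsedResult and its type field are accessed null-safely.
function extractRunPayload(toolName: string, parsedResult: ParsedResult | null) {
  if (toolName !== "run_agent" && toolName !== "run_block") return null;

  return {
    resultType: parsedResult?.type ?? "unknown",
    inputs: parsedResult?.inputs ?? {},
    credentials: parsedResult?.credentials ?? {},
  };
}
```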
14-59: LGTM on the stream event dispatcher implementation. The dispatcher cleanly routes stream chunks to appropriate handlers using a switch statement. The error handling approach (toast at the dispatcher level) is well-documented to avoid circular dependencies. The default case provides good defensive logging for unknown chunk types.
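A sketch of that dispatch shape; the chunk type names used here are assumptions, not the actual stream protocol of `createStreamEventDispatcher`:

```typescript
interface StreamChunk {
  type: string;
  payload?: unknown;
}

interface StreamHandlers {
  onText: (chunk: StreamChunk) => void;
  onToolCall: (chunk: StreamChunk) => void;
  onStreamEnd: () => void;
  onError: (chunk: StreamChunk) => void;
}

// Routes each incoming chunk to its handler via a switch statement,
// with defensive logging for unrecognized chunk types.
function createDispatcher(handlers: StreamHandlers) {
  return function dispatch(chunk: StreamChunk) {
    switch (chunk.type) {
      case "text":
        handlers.onText(chunk);
        break;
      case "tool_call":
        handlers.onToolCall(chunk);
        break;
      case "stream_end":
        handlers.onStreamEnd();
        break;
      case "error":
        handlers.onError(chunk);
        break;
      default:
        console.warn("Unknown stream chunk type:", chunk.type);
    }
  };
}
```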
autogpt_platform/frontend/src/app/(platform)/chat/components/Chat/useChatSession.ts (4)
86-99: LGTM! Good use of the optimistic UI pattern: providing a synthetic session object immediately after creation while the real data loads.
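A rough sketch of the optimistic-session idea; the `Session` shape and the `createSession`/`refetchSession` names are assumptions for illustration only:

```typescript
interface Session {
  id: string;
  title: string;
  createdAt: string;
  messages: unknown[];
}

async function createSessionOptimistically(
  createSession: () => Promise<{ id: string }>,
  setSession: (session: Session) => void,
  refetchSession: (id: string) => Promise<void>,
) {
  const { id } = await createSession();

  // Show a synthetic session immediately so the UI does not wait on the
  // follow-up fetch; the real data replaces it once the query resolves.
  setSession({
    id,
    title: "New chat",
    createdAt: new Date().toISOString(),
    messages: [],
  });

  await refetchSession(id);
}
```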
190-207: LGTM! Clean implementation of session refresh using the React Query `refetch` function.
251-256: LGTM! Complete session cleanup implementation with proper state and storage reset.
167-179: No action needed. The code correctly handles the `fetchQuery` response shape. The `fetchQuery` result at line 174 properly receives the full response object `{ status: number; data: T; headers: Headers }` from the custom mutator, and the check (`"status" in result && result.status !== 200`) is appropriate. The deliberate omission of `select: okData` here is correct: the code validates the raw response status before proceeding. Using `select: okData` in the `useGetV2GetSession` hook (line 75) and not using it in the `loadSession` callback are different patterns serving different purposes: the hook needs transformed data for UI consumption, while `loadSession` needs the raw response to validate the status.
Likely an incorrect or invalid review comment.
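A minimal sketch of that status-check pattern; the response shape mirrors the review comment, while the `loadSession` and `fetchSession` names here are assumptions:

```typescript
interface ApiResponse<T> {
  status: number;
  data: T;
  headers: Headers;
}

// Validates the raw response status before using the payload, rather than
// relying on a select transform to unwrap the data.
async function loadSession<T>(
  fetchSession: () => Promise<ApiResponse<T>>,
): Promise<T | null> {
  const result = await fetchSession();

  if ("status" in result && result.status !== 200) {
    return null;
  }
  return result.data;
}
```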
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
Frontend changes extracted from the hackathon/copilot branch for copilot feature development.
Changes 🏗️
New chat components (Chat, ChatDrawer, ChatContainer, ChatMessage, etc.)

Checklist 📋
For code changes:
For configuration changes:
`.env.default` is updated or already compatible with my changes
`docker-compose.yml` is updated or already compatible with my changes