Added the HIL Feature Using LangChain's interrupt() command #130
base: main
Conversation
Fixed a circular import error by moving the necessary import into the function where it is used.
…tory context and pause workflow for user input. Enhanced the react_supervisor_node to check for missing repository/project context before proceeding with troubleshooting actions. If the context is absent, the node triggers a Human-in-the-Loop interrupt by setting waiting_for_user_input and sending a prompt asking the user to specify their project or repository. This ensures the agent workflow pauses appropriately to gather critical information from the user early in the interaction, improving accuracy and enabling a collaborative, stepwise support experience. The existing interrupt logic for action confirmations remains intact and integrated with this new context-gathering step.
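A minimal sketch of what this repository-context check could look like inside the supervisor node (a hypothetical illustration: the field names waiting_for_user_input, interrupt_details, final_response, and current_task follow the review discussion later in this thread, while the exact signature and prompt wording are assumptions):

```python
from app.agents.state import AgentState

# Hypothetical sketch of the repository-context interrupt; field names follow the
# review discussion in this PR, the signature and prompt wording are assumptions.
async def react_supervisor_node(state: AgentState) -> dict:
    ctx = state.context or {}
    if not ctx.get("repository"):
        prompt = "Are you working in a specific project or repository? Please specify."
        updated_context = {
            **ctx,
            "waiting_for_user_input": True,
            "interrupt_details": {"reason": "missing_repository_context", "prompt": prompt},
        }
        # Pause the workflow here; AgentCoordinator resumes it once the user replies.
        return {
            "context": updated_context,
            "current_task": "waiting_for_user_input_repo",
            "final_response": prompt,        # surfaces the question to the user
            "requires_human_review": True,
        }
    # Repository context is present: continue with the normal ReAct decision loop.
    ...
```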
… AgentCoordinator Enhanced the AgentCoordinator to support Human-in-the-Loop workflows by managing agent state persistence with interrupt flags and user feedback. Added handling for continuation messages triggered by user confirmation or cancellation in Discord interactions. Incorporated logic to update the agent context upon receiving human responses, disable waiting flags, and safely resume the agent workflow. Improved error handling and logging related to HIL events to ensure robustness. This enables seamless pausing, resuming, and user interaction management within multi-turn conversational troubleshooting sessions.
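A rough sketch of the feedback-handling path described here (method names such as process_human_feedback, load_agent_state, and save_agent_state mirror those referenced in the review comments below; the body is an illustrative assumption, not the exact diff):

```python
# Illustrative method-level sketch only; method names mirror those referenced in the
# review comments on this PR (process_human_feedback, load_agent_state, save_agent_state).
async def process_human_feedback(self, session_id: str, feedback: str) -> None:
    agent_state = await self.load_agent_state(session_id)
    if not agent_state:
        logger.warning(f"No agent state found for session {session_id}; dropping feedback")
        return
    # Record the human response and clear the waiting flag so the graph can resume.
    agent_state.context["human_feedback"] = feedback
    agent_state.context["waiting_for_user_input"] = False
    await self.save_agent_state(agent_state)
```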
…nd retrieval Implemented the WeaviateAgentStateOperations class to handle creation, updating, loading, and searching of agent conversational states within the Weaviate vector database. This includes converting between AgentState objects and their JSON representations, managing serialization of complex fields like messages and context, and using Weaviate’s async client for CRUD operations on the agent_states collection. This enhancement supports persistent, scalable agent session management critical for long-running human-in-the-loop conversational workflows.
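As a complement to the description above, a hedged sketch of the load side of that round trip (the Filter/fetch_objects calls are the Weaviate v4 async client APIs as I understand them; treat the exact query details as assumptions):

```python
import json

from weaviate.classes.query import Filter

from app.agents.state import AgentState
from app.database.weaviate.client import get_weaviate_client

# Hedged sketch of deserializing a persisted state back into an AgentState;
# assumes the async get_weaviate_client() helper referenced elsewhere in this PR.
async def load_agent_state(self, session_id: str) -> AgentState | None:
    async with get_weaviate_client() as client:
        collection = client.collections.get(self.collection_name)
        result = await collection.query.fetch_objects(
            filters=Filter.by_property("session_id").equal(session_id),
            limit=1,
        )
    if not result.objects:
        return None
    props = dict(result.objects[0].properties)
    # messages and context were stored as JSON strings; decode them back.
    props["messages"] = json.loads(props.get("messages") or "[]")
    props["context"] = json.loads(props.get("context") or "{}")
    return AgentState(**props)
```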
Fixed the circular import error by moving the import to where it is required
… state processing Updated the Discord bot to detect when the agent workflow requires human input and manage Human-in-the-Loop session state accordingly. The bot now relies on plain text message responses from users to continue the workflow. Enhanced message processing to route ongoing conversations within Discord threads and maintain session continuity. This change improves bot stability by avoiding button interaction issues while preserving interactive multi-turn dialogues for technical support.
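A small sketch of the plain-text continuation flow this commit describes, using discord.py's wait_for (the helper name, prompt text, and 5-minute timeout are assumptions for illustration):

```python
import asyncio

# Hypothetical helper illustrating the text-reply HIL flow; the name, prompt text,
# and timeout are assumptions, not the exact implementation in this PR.
async def _await_text_reply(self, thread, session_id: str, prompt: str) -> None:
    await thread.send(f"{prompt}\nReply in this thread within 5 minutes to continue.")

    def _check(message) -> bool:
        # Only accept replies from humans in the same thread.
        return message.channel.id == thread.id and not message.author.bot

    try:
        reply = await self.wait_for("message", check=_check, timeout=300)
        await self.process_human_feedback(session_id, reply.content)
    except asyncio.TimeoutError:
        await thread.send("Timed out waiting for a response. Continuing without human input.")
```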
…tup orchestration Implemented main.py as the primary entry point for the Discord bot application. This script initializes and configures the bot, loads environment variables, sets intents, and starts the asynchronous event loop. It centralizes bot startup logic, providing a clean separation from business logic and handlers. The file ensures proper configuration for Discord API access and integrates queued message processing, enabling smooth, scalable bot operation.
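A hedged sketch of how such an entry point is typically wired (class names follow the modules listed in this PR; the constructor signatures, a queue_manager.start() counterpart to the documented stop(), and the DISCORD_BOT_TOKEN variable name are assumptions):

```python
import asyncio
import os

from dotenv import load_dotenv

from app.core.orchestration.queue_manager import AsyncQueueManager
from app.core.orchestration.agent_coordinator import AgentCoordinator
from integrations.discord.bot import DiscordBot

async def main() -> None:
    load_dotenv()  # environment variables (tokens, database URLs)

    queue_manager = AsyncQueueManager()
    agent_coordinator = AgentCoordinator(queue_manager)
    discord_bot = DiscordBot(queue_manager)
    discord_bot.agent_coordinator = agent_coordinator  # wiring noted in the review below

    # start() is assumed here as the counterpart of the documented stop()
    await queue_manager.start()
    await discord_bot.start(os.environ["DISCORD_BOT_TOKEN"])

if __name__ == "__main__":
    asyncio.run(main())
```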
Walkthrough
Adds human-in-the-loop interruptions to the DevRel supervisor and Discord UI, introduces Weaviate-backed persistent AgentState with session-based load/save in AgentCoordinator, adds Weaviate operations/schema for agent state, performs several lazy-import refactors, and adjusts app initialization wiring.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant U as User (Discord)
participant D as DiscordBot
participant AC as AgentCoordinator
participant AG as DevRel Agent
participant W as Weaviate (AgentState)
U->>D: Message (optional session_id)
D->>AC: handle_devrel_request(message)
AC->>W: load_agent_state(session_id?)
W-->>AC: AgentState or None
AC->>AG: run(agent_state)
AG-->>AC: result_state (+final_response?, waiting_for_user_input?)
AC->>W: save_agent_state(result_state)
alt waiting_for_user_input
AC-->>D: Response with prompt + waiting flag
D->>U: HumanReviewView (Confirm/Cancel)
alt User provides feedback within timeout
U-->>D: Feedback
D->>AC: process_human_feedback(session_id, feedback)
AC->>W: load_agent_state + save updated (waiting=false, feedback)
D->>AC: enqueue continuation (HIGH priority)
else Timeout / no input
D-->>U: Continue without human input
D->>AC: enqueue continuation
end
else final_response present
AC-->>D: final_response chunks
D-->>U: Send messages
end
sequenceDiagram
autonumber
participant S as ReAct Supervisor
participant C as Supervisor Context
participant L as Logs
S->>C: Read repository context
alt Repository missing
S->>C: Set waiting_for_user_input=true + interrupt_details
S->>L: Log repo-context interrupt
S-->>C: current_task="waiting_for_user_input_repo"
else Repository present
S->>C: Decide next action
alt Action in HIL_INTERRUPT_ACTIONS
S->>C: Set waiting_for_user_input=true + interrupt_details
S->>L: Log HIL action interrupt
S-->>C: current_task="waiting_for_user_input"
else Continue autonomously
S-->>C: current_task="supervisor_decided_<action>"
end
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backend/integrations/discord/bot.py (1)
150-167: Include session_id and context in your discord_response payload
The HIL handler in backend/integrations/discord/bot.py (lines 150–167) expects every response_data to carry both a session_id and a context dict (with keys like waiting_for_user_input and interrupt_details). Our inspection shows that the producer in backend/app/core/orchestration/agent_coordinator.py (_send_response_to_platform, lines 131–139) only emits:
• type
• thread_id
• channel_id
• response
• original_message_id
and omits the crucial session_id and context. Without those, the Human-in-the-Loop view will never render correctly after button clicks. Please update the producer as follows:
• At the call site (around line 81), pass the full result_state into the response helper:
- await self._send_response_to_platform(message_data, result_state.final_response)
+ await self._send_response_to_platform(
+     original_message=message_data,
+     response=result_state.final_response,
+     session_id=result_state.session_id,
+     context=result_state.context or {}
+ )
• Adjust the signature of _send_response_to_platform (around line 126) to accept the new parameters:
- async def _send_response_to_platform(self, original_message: Dict[str, Any], response: str):
+ async def _send_response_to_platform(
+     self,
+     original_message: Dict[str, Any],
+     response: str,
+     session_id: str,
+     context: Dict[str, Any],
+ ):
• And augment the response_message dict (lines 133–139) to include them:
  response_message = {
      "type": "discord_response",
      "thread_id": original_message.get("thread_id"),
      "channel_id": original_message.get("channel_id"),
      "response": response,
      "original_message_id": original_message.get("id"),
+     "session_id": session_id,
+     "context": context,
  }
These changes ensure that downstream Discord handlers receive the state they need to drive the HIL workflow.
backend/app/database/weaviate/operations.py (1)
419-444: Pydantic v2 compatibility: replace .dict() with .model_dump(); avoid overlapping client contexts.
- Using .dict() is inconsistent with v2 usage and may not respect field serialization. Switch to .model_dump().
- Minor: you open a client context and then call find_state_by_session_id, which opens another client context. Move the existence check before the async with to avoid overlapping connections.
- state_dict = agent_state.dict()
+ state_dict = agent_state.model_dump()
  # Serialize complex fields
  state_dict['messages'] = json.dumps(state_dict.get('messages', []))
  state_dict['context'] = json.dumps(state_dict.get('context', {}))
- async with get_weaviate_client() as client:
-     collection = client.collections.get(self.collection_name)
-     existing_uuid = await self.find_state_by_session_id(agent_state.session_id)
+ # Check existence first to avoid nested client contexts
+ existing_uuid = await self.find_state_by_session_id(agent_state.session_id)
+ async with get_weaviate_client() as client:
+     collection = client.collections.get(self.collection_name)
  if existing_uuid:
      await collection.data.update(
          uuid=existing_uuid,
          properties=state_dict
      )
      logger.info(f"Updated agent state for session {agent_state.session_id}")
  else:
      await collection.data.insert(
          properties=state_dict
      )
      logger.info(f"Created new agent state for session {agent_state.session_id}")
Optional: also catch weaviate_exceptions.WeaviateBaseError alongside Exception for clearer diagnostics.
backend/app/core/orchestration/agent_coordinator.py (1)
149-161: Duplicate method definitions shadow persistence (bypasses Weaviate).
load_agent_state and save_agent_state are defined twice. The later definitions (lines 149–161 and 158–160) override the earlier ones and remove persistence, breaking the PR's core objective. Remove the duplicates.
- async def load_agent_state(self, session_id: str):
-     # Get state from in-memory cache first
-     state = self.active_sessions.get(session_id)
-     if state:
-         return state
-
-     # TODO: Add persistent storage retrieval here if applicable
-     return None
-
- async def save_agent_state(self, agent_state: AgentState):
-     # Save state to in-memory cache
-     self.active_sessions[agent_state.session_id] = agent_state
+ # (Removed duplicate, persistence-breaking definitions)
Also applies to: 158-160
🧹 Nitpick comments (10)
backend/app/services/github/issue_processor.py (2)
33-36: Lazy import is fine; consider a guarded import for clearer failures.
Moving GitHubUserProfiler into the method is a sensible perf/coupling tradeoff. If the module is missing or misnamed at runtime, surfacing an explicit ImportError with guidance can speed up debugging.
- from app.services.github.user.profiling import GitHubUserProfiler
+ try:
+     from app.services.github.user.profiling import GitHubUserProfiler
+ except ImportError as e:
+     logger.error("Failed to import GitHubUserProfiler. Check app.services.github.user.profiling.")
+     raise
81-83: Re-raise the original exception to preserve the traceback.
raise e can sometimes obscure the original traceback context. Prefer a bare raise for clearer debugging.
- except Exception as e:
-     logger.error(f"Error processing issue {self.owner}/{self.repo}#{self.issue_number}: {str(e)}")
-     raise e
+ except Exception as e:
+     logger.error(
+         f"Error processing issue {self.owner}/{self.repo}#{self.issue_number}: {str(e)}",
+         exc_info=True
+     )
+     raise
backend/app/database/weaviate/scripts/create_schemas.py (2)
37-47: Agent state schema: add timestamps and plan for filtering by session_id.
The base schema is good. Two improvements:
- Persist created_at/updated_at to support retention and debugging.
- You'll likely filter on session_id; ensure it's stored as a simple text field (already is) and consider setting a deterministic UUID when inserting (using session_id), so upserts are stable.
  async def create_agent_state_schema(client):
      properties = [
          wc.Property(name="session_id", data_type=wc.DataType.TEXT),
          wc.Property(name="user_id", data_type=wc.DataType.TEXT),
          wc.Property(name="platform", data_type=wc.DataType.TEXT),
          wc.Property(name="messages", data_type=wc.DataType.TEXT),
          wc.Property(name="context", data_type=wc.DataType.TEXT),
-         # Add more fields if needed
+         wc.Property(name="created_at", data_type=wc.DataType.DATE),
+         wc.Property(name="updated_at", data_type=wc.DataType.DATE),
      ]
      await create_schema(client, "agent_states", properties)
Tip for inserts/updates (outside schema): use uuid=session_id when inserting objects, to make updates idempotent.
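For illustration, one hedged way to apply that tip with the v4 client's deterministic UUID helper (generate_uuid5 from weaviate.util; the surrounding variables are assumed from the snippets above):

```python
from weaviate.util import generate_uuid5

# Deriving the object UUID from session_id makes repeated inserts behave like upserts.
object_uuid = generate_uuid5(agent_state.session_id)
await collection.data.insert(properties=state_dict, uuid=object_uuid)
```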
48-58: Docstring and log message are out of date now that multiple schemas are created.
Update the wording to reflect both the user profile and agent state schemas.
  async def create_all_schemas():
      """
-     Create only the user profile schema as per the model structure.
+     Create required schemas (user profile, agent state).
      """
  @@
-     await create_user_profile_schema(client)
-     await create_agent_state_schema(client)
-     print("✅ User profile schema created successfully.")
+     await create_user_profile_schema(client)
+     await create_agent_state_schema(client)
+     print("✅ Schemas created successfully: weaviate_user_profile, agent_states.")
backend/app/agents/devrel/github/tools/contributor_recommendation.py (1)
47-51: Replace debug print with structured logging and avoid dumping the full LLM object.
print(response) is noisy and can leak PII to stdout. Use logger.debug with a bounded preview of response.content.
- try:
-     import json
-     print(response)
+ try:
+     import json
+     logger.debug("Alignment LLM raw response (first 200 chars): %s", response.content[:200])
      result = json.loads(response.content.strip())
backend/integrations/discord/bot.py (1)
164-177: PR text says buttons were deferred, but the code uses buttons. Provide a text-reply fallback to de-risk.
The PR description claims "button interactions were deferred; text-based replies used," yet this code presents buttons. Add a simple typed-response fallback if UI components fail or are undesired.
- if context.get("waiting_for_user_input", False):
-     view = HumanReviewView(self, response_data.get("session_id"))
-     await thread.send(content=context["interrupt_details"]["prompt"], view=view)
-
-     # Wait for user interaction or timeout
-     await view.wait()
-
-     if view.user_response is not None:
-         # Process user feedback with helper method
-         await self.process_human_feedback(response_data.get("session_id"), view.user_response)
-     else:
-         await thread.send("Timed out waiting for response. Continuing without human input.")
+ if context.get("waiting_for_user_input", False):
+     prompt_text = context["interrupt_details"]["prompt"]
+     try:
+         view = HumanReviewView(self, response_data.get("session_id"))
+         await thread.send(content=prompt_text, view=view)
+         await view.wait()
+         if view.user_response is not None:
+             await self.process_human_feedback(response_data.get("session_id"), view.user_response)
+             return
+     except Exception as e:
+         logger.warning("Button UI fallback engaged due to error: %s", e)
+     # Text fallback path
+     await thread.send(f"{prompt_text}\nReply with 'confirm' or 'cancel' within 5 minutes.")
+     def _check(m): return m.channel.id == thread.id and m.author != self.user and m.content.lower() in {"confirm", "cancel"}
+     try:
+         msg = await self.wait_for("message", check=_check, timeout=300)
+         await self.process_human_feedback(response_data.get("session_id"), msg.content.lower())
+     except asyncio.TimeoutError:
+         await thread.send("Timed out waiting for response. Continuing without human input.")
backend/app/agents/devrel/nodes/react_supervisor.py (2)
10-10: Single source of truth for allowed actions (avoid drift).
You hardcode the action list twice: once in HIL_INTERRUPT_ACTIONS and again inside _parse_supervisor_decision. Unify via a SUPERVISOR_ACTIONS constant and derive HIL_INTERRUPT_ACTIONS from it to prevent divergence when actions evolve.
- HIL_INTERRUPT_ACTIONS = ["web_search", "faq_handler", "onboarding", "github_toolkit"]
+ SUPERVISOR_ACTIONS = ["web_search", "faq_handler", "onboarding", "github_toolkit", "complete"]
+ HIL_INTERRUPT_ACTIONS = [a for a in SUPERVISOR_ACTIONS if a != "complete"]
- elif line.startswith("ACT:"):
+ elif line.startswith("ACT:"):
      action = line.replace("ACT:", "").strip().lower()
-     if action in ["web_search", "faq_handler", "onboarding", "github_toolkit", "complete"]:
-         decision["action"] = action
+     if action in SUPERVISOR_ACTIONS:
+         decision["action"] = action
Also applies to: 96-99
107-119: Router safety: guard unknown actions.
Even with parsing guards, a malformed supervisor_decision in context could route to an unexpected string. Add an explicit guard against unknown actions.
  def supervisor_decision_router(state: AgentState) -> Literal["web_search", "faq_handler", "onboarding", "github_toolkit", "complete"]:
  @@
-     action = decision.get("action", "complete")
+     action = decision.get("action", "complete")
  @@
-     return action
+     # Ensure only supported routes are returned
+     if action not in SUPERVISOR_ACTIONS:
+         return "complete"
+     return action
backend/app/core/orchestration/agent_coordinator.py (2)
24-36: Add type hints and return annotations to state I/O.
Tighten the earlier (kept) definitions with explicit typing.
- async def load_agent_state(self, session_id):
+ async def load_agent_state(self, session_id: str) -> AgentState | None:
  @@
- async def save_agent_state(self, agent_state):
+ async def save_agent_state(self, agent_state: AgentState) -> None:
79-82: Add Human-in-the-Loop prompt handling in AgentCoordinator.
Ensure that when the agent graph pauses (i.e., context["waiting_for_user_input"] is true) but no final_response is produced, we still forward the interrupt_details.prompt to the user. Without this, HIL prompts on non-Discord channels would never be sent. Proposed diff in backend/app/core/orchestration/agent_coordinator.py:
  # Send HIL prompt if waiting for user input and no final_response was produced
  hil_ctx = result_state.context or {}
  if hil_ctx.get("waiting_for_user_input") and not result_state.final_response:
      prompt = (hil_ctx.get("interrupt_details") or {}).get("prompt")
      if prompt:
          await self._send_response_to_platform(message_data, prompt)
  # Send final response back to platform if present
  if result_state.final_response:
      await self._send_response_to_platform(message_data, result_state.final_response)
• The only existing HIL-prompt emitter is the Discord bot (backend/integrations/discord/bot.py), which calls thread.send(content=context["interrupt_details"]["prompt"], …). Adding this in the coordinator would duplicate prompts on Discord. Consider wrapping the new logic in a feature flag or removing the manual send in the bot to avoid duplicates.
• No other integrations or nodes handle waiting_for_user_input today, so this change covers all remaining platforms.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (8)
backend/app/agents/devrel/github/tools/contributor_recommendation.py (2 hunks)
backend/app/agents/devrel/nodes/react_supervisor.py (2 hunks)
backend/app/core/orchestration/agent_coordinator.py (4 hunks)
backend/app/database/weaviate/operations.py (2 hunks)
backend/app/database/weaviate/scripts/create_schemas.py (2 hunks)
backend/app/services/github/issue_processor.py (1 hunks)
backend/integrations/discord/bot.py (6 hunks)
backend/main.py (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-08T13:15:11.074Z
Learnt from: smokeyScraper
PR: AOSSIE-Org/Devr.AI#72
File: backend/app/agents/__init__.py:2-2
Timestamp: 2025-06-08T13:15:11.074Z
Learning: In backend/app/agents/shared/base_agent.py, the BaseAgent class internally imports and re-exports AgentState, making it valid to import AgentState from shared.base_agent in addition to shared.state.
Applied to files:
backend/app/database/weaviate/operations.py
🧬 Code graph analysis (7)
backend/app/services/github/issue_processor.py (1)
backend/app/services/github/user/profiling.py (1)
GitHubUserProfiler
(15-298)
backend/app/agents/devrel/nodes/react_supervisor.py (1)
backend/app/agents/state.py (1)
AgentState
(18-73)
backend/app/database/weaviate/operations.py (2)
backend/app/agents/state.py (1)
AgentState
(18-73)backend/app/database/weaviate/client.py (1)
get_weaviate_client
(19-32)
backend/app/agents/devrel/github/tools/contributor_recommendation.py (1)
backend/app/database/weaviate/operations.py (1)
search_contributors
(367-380)
backend/main.py (2)
backend/app/core/orchestration/agent_coordinator.py (1)
AgentCoordinator
(14-160)backend/integrations/discord/bot.py (1)
DiscordBot
(31-216)
backend/integrations/discord/bot.py (2)
backend/app/core/orchestration/queue_manager.py (4)
AsyncQueueManager
(17-144)QueuePriority
(12-15)stop
(58-71)enqueue
(73-92)backend/app/classification/classification_router.py (1)
ClassificationRouter
(10-59)
backend/app/core/orchestration/agent_coordinator.py (3)
backend/app/database/weaviate/operations.py (2)
WeaviateAgentStateOperations
(381-443)create_or_update_agent_state
(419-443)backend/app/agents/state.py (1)
AgentState
(18-73)backend/app/agents/base_agent.py (1)
run
(22-54)
🪛 Ruff (0.12.2)
backend/integrations/discord/bot.py
209-209: Undefined name datetime
(F821)
backend/app/core/orchestration/agent_coordinator.py
149-149: Redefinition of unused load_agent_state
from line 24
(F811)
158-158: Redefinition of unused save_agent_state
from line 33
(F811)
🔇 Additional comments (3)
backend/app/agents/devrel/github/tools/contributor_recommendation.py (1)
100-101: LGTM on lazy import to reduce import-time coupling.
Deferring the search_contributors import to the call site avoids heavy initialization and circular-dependency risks. Keep it.
backend/main.py (1)
37-40: Duplicate initialization overwrites instances and can register handlers twice. Remove the duplicates.
Re-instantiating AgentCoordinator and DiscordBot immediately after creating them discards the first instances and can cause handler re-registration or subtle wiring bugs.
  self.agent_coordinator = AgentCoordinator(self.queue_manager)
  self.discord_bot = DiscordBot(self.queue_manager)
- self.agent_coordinator = AgentCoordinator(self.queue_manager)
- self.discord_bot = DiscordBot(self.queue_manager)
- self.discord_bot.agent_coordinator = self.agent_coordinator
+ # Link the existing instances
+ self.discord_bot.agent_coordinator = self.agent_coordinator
Likely an incorrect or invalid review comment.
backend/integrations/discord/bot.py (1)
129-149: Thread fallback behavior is acceptable.
Re-using an active thread by user ID and falling back to the channel ID if thread creation fails is reasonable for now.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/core/orchestration/agent_coordinator.py (1)
152-155: Bug: _send_error_response calls _send_response_to_platform with the wrong signature.
The signature now requires session_id and context; this will raise a TypeError on exceptions. Apply this fix:
- async def _send_error_response(self, original_message: Dict[str, Any], error_message: str):
-     """Send error response to platform"""
-     await self._send_response_to_platform(original_message, error_message)
+ async def _send_error_response(self, original_message: Dict[str, Any], error_message: str):
+     """Send error response to platform"""
+     session_id = original_message.get("session_id") or str(uuid.uuid4())
+     await self._send_response_to_platform(
+         original_message=original_message,
+         response=error_message,
+         session_id=session_id,
+         context={}
+     )
♻️ Duplicate comments (2)
backend/app/agents/devrel/nodes/react_supervisor.py (2)
17-35: Nice: repo-context HIL now surfaces a user-visible prompt immediately.
Setting both final_response and requires_human_review at the top level resolves the "silent pause" issue flagged previously. Good integration with AgentCoordinator's response pathway.
83-86: HIL actions still pause silently; return final_response (and requires_human_review) when waiting.
When waiting_for_user_input is True, no final_response is returned, so the platform won't notify the user. This recreates the UX bug we wanted to avoid. Apply this diff to include a user-facing prompt and the review flag on HIL pauses:
- return {
-     "context": updated_context,
-     "current_task": f"supervisor_decided_{decision['action']}" if not waiting_for_user_input else "waiting_for_user_input"
- }
+ return {
+     "context": updated_context,
+     "current_task": (
+         f"supervisor_decided_{decision['action']}"
+         if not waiting_for_user_input else "waiting_for_user_input"
+     ),
+     **(
+         {"final_response": interrupt_details["prompt"], "requires_human_review": True}
+         if waiting_for_user_input else {}
+     )
+ }
🧹 Nitpick comments (4)
backend/app/agents/devrel/nodes/react_supervisor.py (2)
10-11: Deduplicate action lists and use a set for membership.
Allowed actions are duplicated here and in _parse_supervisor_decision. Centralize the list and use a set for O(1) membership and a single source of truth.
- HIL_INTERRUPT_ACTIONS = ["web_search", "faq_handler", "onboarding", "github_toolkit"]
+ HIL_INTERRUPT_ACTIONS = {"web_search", "faq_handler", "onboarding", "github_toolkit"}
+ ALLOWED_ACTIONS = HIL_INTERRUPT_ACTIONS | {"complete"}
  @@
- if action in ["web_search", "faq_handler", "onboarding", "github_toolkit", "complete"]:
+ if action in ALLOWED_ACTIONS:
      decision["action"] = action
Also applies to: 99-100
71-77: Right-size what you persist: store parsed thinking/reasoning; keep raw as optional.
Persisting the entire LLM response under supervisor_thinking can bloat context and increase storage costs. Prefer parsed fields and, if needed, keep the raw under a separate key.
  updated_context = {
      **state.context,
-     "supervisor_thinking": response.content,
+     "supervisor_thinking": decision.get("thinking", ""),
+     "supervisor_reasoning": decision.get("reasoning", ""),
+     "supervisor_raw": response.content,
      "supervisor_decision": decision,
      "iteration_count": iteration_count + 1,
  }
backend/app/core/orchestration/agent_coordinator.py (2)
24-32: Consider a TTL or LRU for active_sessions to avoid unbounded growth.
If the bot runs long-lived, this dict can grow without bound. Optional: evict by last_interaction_time or use an LRU.
131-145: Optional: include requires_human_review in the platform payload.
If the Discord bot expects a top-level flag (rather than reading from context), forward requires_human_review to avoid ambiguity during HIL. If not needed, ignore.
  response_message = {
      "type": "discord_response",
      "thread_id": original_message.get("thread_id"),
      "channel_id": original_message.get("channel_id"),
      "response": response,
      "original_message_id": original_message.get("id"),
      "session_id": session_id,
-     "context": context
+     "context": context,
+     "requires_human_review": bool(context.get("waiting_for_user_input"))
  }
Confirm whether backend/integrations/discord/bot.py reads this flag or only inspects context.waiting_for_user_input.
📒 Files selected for processing (4)
backend/app/agents/devrel/nodes/react_supervisor.py (2 hunks)
backend/app/core/orchestration/agent_coordinator.py (5 hunks)
backend/app/database/weaviate/operations.py (2 hunks)
backend/integrations/discord/bot.py (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/app/database/weaviate/operations.py
- backend/integrations/discord/bot.py
🧰 Additional context used
🧬 Code graph analysis (2)
backend/app/core/orchestration/agent_coordinator.py (3)
backend/app/database/weaviate/operations.py (3)
WeaviateAgentStateOperations
(381-443)load_agent_state
(400-417)create_or_update_agent_state
(419-443)backend/app/agents/state.py (1)
AgentState
(18-73)backend/app/agents/base_agent.py (1)
run
(22-54)
backend/app/agents/devrel/nodes/react_supervisor.py (1)
backend/app/agents/state.py (1)
AgentState
(18-73)
🔇 Additional comments (2)
backend/app/core/orchestration/agent_coordinator.py (2)
22-23: Good call: central Weaviate ops + in-memory cache.
The split between the active_sessions cache and Weaviate persistence is clean and easy to reason about.
79-87: LGTM: Only send when final_response is present, with session_id and context.
This aligns with the HIL UX, but make sure react_supervisor returns final_response on HIL to trigger delivery.
session_id = message_data.get("session_id")  # Try to reuse if provided

# Attempt to load existing state for this session, else create new
agent_state = None
if session_id:
    agent_state = await self.load_agent_state(session_id)

if not agent_state:
    session_id = str(uuid.uuid4())
    agent_state = AgentState(
        session_id=session_id,
        user_id=message_data.get("user_id", ""),
        platform=message_data.get("platform", "discord"),
        thread_id=message_data.get("thread_id"),
        channel_id=message_data.get("channel_id"),
        context={
            "original_message": message_data.get("content", ""),
            "classification": message_data.get("classification", {}),
            "author": message_data.get("author", {})
        }
    )
💡 Verification agent
🧩 Analysis chain
Verify message ingestion on resumed sessions.
When reusing an existing AgentState, the new user message isn’t added here. If the graph (or DevRelAgent) reliably appends the incoming message to state.messages, you’re fine; otherwise replies may lack the latest user input.
Run this to confirm where new user messages are appended:
🏁 Script executed:
#!/bin/bash
# Look for code that appends the current inbound message to AgentState.messages
rg -nP -C3 '\b(messages|state\.messages)\b.*(append|add|extend)' --glob '!**/node_modules/**' --glob '!**/dist/**'
# Inspect DevRel agent graph nodes handling inbound messages
rg -nP -C3 '\boriginal_message\b|\bHumanMessage\b|\badd_message\b|\bmessages\[-1\]' backend/app
Length of output: 26745
Critical missing message ingestion on resumed sessions
Our search didn't find any place where the incoming user message is appended to AgentState.messages, so when reusing an existing session the latest user input never makes it into the conversation history. Without this, downstream nodes will only ever see the original message stored in state.context, not the full message log.
Please add an explicit append of the inbound message immediately after loading or creating the agent_state. For example, in backend/app/core/orchestration/agent_coordinator.py (around lines 46–67):
if session_id:
agent_state = await self.load_agent_state(session_id)
if not agent_state:
session_id = str(uuid.uuid4())
agent_state = AgentState(
session_id=session_id,
user_id=message_data.get("user_id", ""),
platform=message_data.get("platform", "discord"),
thread_id=message_data.get("thread_id"),
channel_id=message_data.get("channel_id"),
context={
"original_message": message_data.get("content", ""),
"classification": message_data.get("classification", {}),
"author": message_data.get("author", {})
}
)
+ # Ingest the current user message into the conversation history
+ from datetime import datetime
+ agent_state.messages.append({
+ "role": "user",
+ "content": message_data.get("content", ""),
+ "timestamp": datetime.utcnow().isoformat()
+ })
- Ensure you import datetime at the top of the file.
- This guarantees every new or resumed session has its latest user input recorded in state.messages before the agent graph runs.
Added the new HIL feature's description to README.md
Actionable comments posted: 0
🧹 Nitpick comments (5)
README.md (5)
332-347
: Polish HIL bullets: sentence case, remove forced line breaks, fix hyphenation.
Tighten wording and casing for consistency with the rest of the README and fix minor grammar. Also drop the trailing two spaces after each bullet title (they force an unintended line break in some Markdown renderers).
- - **Interactive User Input at Critical Decision Points** + - **Interactive user input at critical decision points** Devr.AI now pauses at key steps in the support or troubleshooting process to request specific user input (e.g., asking for the project or repository name before proceeding). This ensures the agent always works with the most relevant and accurate context. - - **Interrupt-Driven Context Gathering** + - **Interrupt-driven context gathering** If crucial information is missing or a decision is ambiguous, the agent interrupts its automated reasoning and prompts the user for clarification via Discord message. The conversation halts and resumes only after receiving the required input. - - **Session State Persistence** + - **Session state persistence** All interruptions and user responses are stored, allowing the agent to pick up exactly where it left off after human feedback is integrated. - - **Seamless Multi-Turn Conversation Flow** + - **Seamless multi-turn conversation flow** User input is woven naturally into the ongoing discussion, keeping troubleshooting and workflows coordinated and responsive. - - **Improved Accuracy & Automation Safety** + - **Improved accuracy and automation safety** - By involving humans in gray-area scenarios, HIL minimizes automation errors, fosters trust, and improves outcome quality for technical support interactions. + By involving humans in gray areas, HIL minimizes automation errors, fosters trust, and improves outcome quality for technical support interactions.
338-340
: Call out privacy/retention for persisted HIL inputs.
Since the section promises persistence of interruptions and replies, add a one-line privacy/retention note with a pointer to the policy and user controls (delete/opt-out). This avoids ambiguity around storage of potentially sensitive inputs.
All interruptions and user responses are stored, allowing the agent to pick up exactly where it left off after human feedback is integrated. + Note: persisted inputs may include user identifiers and repository context. See the project's privacy policy for retention and deletion controls.
I can propose a full "Data and Privacy" subsection if you want it standardized across the README.
347-348
: Clarify buttons vs text replies (possible mismatch with implementation).
The README says replies are text-only. The implementation notes in this PR mention a HumanReviewView with buttons in the Discord bot. If buttons exist but are disabled/behind a flag, clarify that here to avoid confusion.
- The current implementation uses standard Discord message replies for user responses instead of buttons, ensuring stability and broad compatibility.
+ The current implementation uses standard Discord message replies for user responses instead of buttons, ensuring stability and broad compatibility. (An experimental button-based UI exists but is disabled by default.)
If buttons are not shipped at all, keep the original line and confirm there's no dormant UI code that suggests otherwise.
350-356
: Add a concrete Discord exchange so users know how to respond.
A short, copy-pasteable interaction makes the HIL flow unambiguous (e.g., expected "repo:" prefix, any free-form variant supported, etc.).
**Example Workflow:** 1. User asks a technical question in Discord. 2. Agent pauses to ask: “Are you working in a specific project or repository? Please specify.” 3. User replies, providing the needed information. 4. Agent resumes troubleshooting with updated context, repeating pauses for further input as needed. + + Example exchange: + + ```text + User : I'm getting a build error after updating dependencies. + Bot : Interrupt — please confirm the repository context. Reply with: + repo: <owner>/<repo> (for example: repo: AOSSIE-Org/Devr.AI) + User : repo: AOSSIE-Org/Devr.AI + Bot : Thanks. Continuing the troubleshooting with this repository context. + ``` + + Tip: If a specific reply format is required (e.g., `repo:`), the bot will explicitly show it in the interrupt prompt.Please confirm the exact reply format(s) accepted so we can reflect them accurately (free‑form vs. prefixed commands, case sensitivity, etc.). -->
360-362
: Deep-link the integration points and standardize identifiersLinking to the actual modules helps readers navigate. Also, wrap code identifiers in backticks for consistency.
- - Agent workflow layer (ReAct Supervisor and AgentCoordinator) - - Discord bot message handling - - Session and conversation persistence (Weaviate, Supabase) + - Agent workflow layer (`ReAct Supervisor` and `AgentCoordinator`) — see + [react_supervisor.py](backend/app/agents/devrel/nodes/react_supervisor.py) and + [agent_coordinator.py](backend/app/core/orchestration/agent_coordinator.py) + - Discord bot message handling — see + [bot.py](backend/integrations/discord/bot.py) + - Session and conversation persistence (Weaviate, Supabase) — see + [operations.py](backend/app/database/weaviate/operations.py) and + [create_schemas.py](backend/app/database/weaviate/scripts/create_schemas.py)Paths based on the PR description—please confirm they match the repository layout. -->
📒 Files selected for processing (1)
README.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md
[grammar] ~338-~338: There might be a mistake here.
Context: ...ed input. - Session State Persistence All interruptions and user responses are...
(QB_NEW_EN)
[grammar] ~341-~341: There might be a mistake here.
Context: ... Seamless Multi-Turn Conversation Flow User input is woven naturally into the o...
(QB_NEW_EN)
[grammar] ~344-~344: There might be a mistake here.
Context: ... Improved Accuracy & Automation Safety By involving humans in gray-area scenari...
(QB_NEW_EN)
[grammar] ~347-~347: There might be a mistake here.
Context: .... - Text-Based Interactions (Discord) The current implementation uses standard...
(QB_NEW_EN)
[grammar] ~360-~360: There might be a mistake here.
Context: ... (ReAct Supervisor and AgentCoordinator) - Discord bot message handling - Session a...
(QB_NEW_EN)
[grammar] ~361-~361: There might be a mistake here.
Context: ...rdinator) - Discord bot message handling - Session and conversation persistence (We...
(QB_NEW_EN)
Suggested by CodeRabbit
Actionable comments posted: 0
♻️ Duplicate comments (1)
backend/app/core/orchestration/agent_coordinator.py (1)
48-71
: Critical: Inbound user message is not appended to AgentState.messages (resumed sessions drop latest input).
This was flagged earlier and still applies: the current inbound message never enters agent_state.messages before the graph runs, so downstream nodes may miss the latest user input on resumed sessions. Apply this diff after agent_state is created or loaded and before run():
  if not agent_state:
      session_id = str(uuid.uuid4())
      agent_state = AgentState(
  @@
      )
+     # Ingest current user message into the conversation history and bump counters
+     from datetime import datetime, timezone
+     agent_state.messages.append({
+         "role": "user",
+         "content": message_data.get("content", ""),
+         "timestamp": datetime.now(timezone.utc).isoformat(),
+         "message_id": message_data.get("id"),
+         "channel_id": message_data.get("channel_id"),
+         "thread_id": message_data.get("thread_id"),
+     })
+     agent_state.last_interaction_time = datetime.now(timezone.utc)
+     agent_state.interaction_count += 1
+
      logger.info(
          f"Running DevRel agent for session {agent_state.session_id} with memory thread {memory_thread_id}")
      result_state = await self.devrel_agent.run(agent_state, memory_thread_id)
Run this to confirm the only ingestion point is now here (and not duplicated elsewhere):
  #!/bin/bash
  # Expect to see the append inside agent_coordinator.py; verify there aren't conflicting appends elsewhere.
  rg -nP -C2 --type=py '\bmessages\s*\.\s*append\s*\(' backend/app | sed -n '1,200p'
  # And check that _handle_devrel_request appends before the run() call
  rg -nP -C3 --type=py 'def _handle_devrel_request|messages\.append|devrel_agent\.run' backend/app/core/orchestration/agent_coordinator.py
🧹 Nitpick comments (7)
backend/app/core/orchestration/agent_coordinator.py (7)
22-23
: Instantiate persistence ops once per coordinator (LGTM).
Construction cost is paid once and reused. Consider allowing an optional collection name/config injection later if you need multi-tenant isolation, but this is fine for now.
24-36
: Tighten API: add type hints and surface persistence success/failure.
Minor polish that pays off quickly:
- Add precise typing for session_id and AgentState.
- Return a bool from save_agent_state so callers can log/act on persistence failures.
- Optional: guard memory growth by adding a simple TTL/LRU to active_sessions.
Apply this diff to the selected lines:
- async def load_agent_state(self, session_id):
+ async def load_agent_state(self, session_id: str) -> AgentState | None:
  @@
- async def save_agent_state(self, agent_state):
-     self.active_sessions[agent_state.session_id] = agent_state
-     await self.weaviate_agent_state_ops.create_or_update_agent_state(agent_state)
+ async def save_agent_state(self, agent_state: AgentState) -> bool:
+     self.active_sessions[agent_state.session_id] = agent_state
+     ok = await self.weaviate_agent_state_ops.create_or_update_agent_state(agent_state)
+     if not ok:
+         logger.error("Failed to persist agent state for session %s", agent_state.session_id)
+     return ok
Outside the selected lines, add the missing import if you don't already have it:
  from typing import Optional  # if you prefer Optional[AgentState] style
46-47
: Session reuse is correct; tune memory_thread_id fallback to reduce cross-talk.
Right now memory_thread_id falls back to user_id (line 45), which can merge multiple concurrent conversations from the same user across channels. Prefer thread_id or session_id as fallbacks. Apply this diff near the retrieval:
- memory_thread_id = message_data.get("memory_thread_id") or message_data.get("user_id", "")
+ memory_thread_id = (
+     message_data.get("memory_thread_id")
+     or message_data.get("thread_id")
+     or message_data.get("session_id")
+     or f"{message_data.get('platform','discord')}:{message_data.get('user_id','')}"
+ )
72-74
: Don’t ignore persistence failures.
Now that save_agent_state returns a bool, log or branch on failure.
- # Save updated state after agent run
- await self.save_agent_state(result_state)
+ # Save updated state after agent run
+ ok = await self.save_agent_state(result_state)
+ if not ok:
+     logger.warning("Proceeding even though state persistence failed for session %s",
+                    result_state.session_id)
79-86
: Minimize payload context and surface HIL intent explicitly.
To avoid leaking internal context fields to the UI layer and to make HIL status obvious, send a minimal "hil" context with awaiting_user_input and next_action.
- await self._send_response_to_platform(
-     original_message=message_data,
-     response=result_state.final_response,
-     session_id=result_state.session_id,
-     context=result_state.context or {}
- )
+ await self._send_response_to_platform(
+     original_message=message_data,
+     response=result_state.final_response,
+     session_id=result_state.session_id,
+     context={
+         "hil": {
+             "awaiting_user_input": bool(result_state.requires_human_review),
+             "next_action": result_state.next_action
+         }
+     }
+ )
131-145
: Discord payload: add a top-level awaiting_user_input hint.
This helps the Discord worker render "awaiting input" affordances without digging into context.
  if platform == "discord":
      response_message = {
          "type": "discord_response",
          "thread_id": original_message.get("thread_id"),
          "channel_id": original_message.get("channel_id"),
          "response": response,
          "original_message_id": original_message.get("id"),
          "session_id": session_id,
-         "context": context
+         "context": context,
+         "awaiting_user_input": bool((context or {}).get("hil", {}).get("awaiting_user_input", False))
      }
154-160
: Error responses: include stable HIL flag.
Explicitly indicate no user input is awaited on error.
  await self._send_response_to_platform(
      original_message,
      error_message,
      session_id=session_id,
-     context={}
+     context={"hil": {"awaiting_user_input": False}}
  )
📒 Files selected for processing (1)
backend/app/core/orchestration/agent_coordinator.py (6 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: smokeyScraper
PR: AOSSIE-Org/Devr.AI#75
File: backend/app/agents/devrel/agent.py:34-35
Timestamp: 2025-06-13T21:56:19.183Z
Learning: In the Devr.AI backend, the DevRelAgent follows a singleton pattern where only one instance exists for the entire application lifetime, using InMemorySaver with thread-based conversation management to persist user conversations across sessions.
🧬 Code graph analysis (1)
backend/app/core/orchestration/agent_coordinator.py (3)
backend/app/database/weaviate/operations.py (3)
WeaviateAgentStateOperations
(381-443)load_agent_state
(400-417)create_or_update_agent_state
(419-443)backend/app/agents/state.py (1)
AgentState
(18-73)backend/app/agents/base_agent.py (1)
run
(22-54)
🔇 Additional comments (1)
backend/app/core/orchestration/agent_coordinator.py (1)
9-10
: Good addition: Weaviate persistence import is scoped and clear.
No issues. Importing the ops class here keeps persistence concerns localized to the coordinator.
Restored some changes that were made from CodeRabbit suggestions, as they were unnecessary and interrupted the bot's flow
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/app/core/orchestration/agent_coordinator.py (1)
126-141: Bug: _send_error_response passes kwargs that _send_response_to_platform doesn't accept (runtime TypeError).
_send_error_response calls _send_response_to_platform(..., session_id=..., context=...), but the callee only takes two args. Update the callee to accept these optional parameters and forward them in the payload.
- async def _send_response_to_platform(self, original_message: Dict[str, Any], response: str):
+ async def _send_response_to_platform(
+     self,
+     original_message: Dict[str, Any],
+     response: str,
+     session_id: "str | None" = None,
+     context: "Dict[str, Any] | None" = None,
+ ):
      """Send agent response back to the originating platform"""
      try:
          platform = original_message.get("platform", "discord")
          if platform == "discord":
              response_message = {
                  "type": "discord_response",
                  "thread_id": original_message.get("thread_id"),
                  "channel_id": original_message.get("channel_id"),
                  "response": response,
-                 "original_message_id": original_message.get("id"),
+                 "original_message_id": original_message.get("id"),
+                 "session_id": session_id or original_message.get("session_id"),
+                 "context": context or {},
              }
              await self.queue_manager.enqueue(response_message)
Outside this hunk, if you prefer full typing, add:
  from typing import Dict, Any  # already present
  # add only if you switch to Optional[...] instead of | None:
  # from typing import Optional
Note: With this change, the existing error path call becomes valid, and (optionally) the normal path can pass session/context as suggested above.
Also applies to: 145-153
♻️ Duplicate comments (2)
backend/app/agents/devrel/nodes/react_supervisor.py (2)
54-67: HIL action pause logic looks good; logging is clear.
You set waiting_for_user_input and capture a concise interrupt_details prompt. Combined with the final return (lines 80–89), this will notify the user.
17-33: Set final_response (and requires_human_review) when interrupting for repository context.
As-is, the early return does not surface a user-visible message, so Discord may show nothing until a later node replies. Mirror the pattern you use below for HIL actions and include final_response and requires_human_review.
- return {
-     "context": updated_context,
-     "current_task": "waiting_for_user_input_repo"
- }
+ return {
+     "context": updated_context,
+     "current_task": "waiting_for_user_input_repo",
+     "final_response": interrupt_details["prompt"],
+     "requires_human_review": True
+ }
🧹 Nitpick comments (3)
backend/app/agents/devrel/nodes/react_supervisor.py (1)
10-10: Prefer an immutable set and a single source of truth for allowed actions.
Using a frozenset avoids accidental mutation and is marginally faster for membership checks. Also, DRY the allowed-action list used in _parse_supervisor_decision.
Apply within this hunk:
- HIL_INTERRUPT_ACTIONS = ["web_search", "faq_handler", "onboarding", "github_toolkit"]
+ HIL_INTERRUPT_ACTIONS = frozenset({"web_search", "faq_handler", "onboarding", "github_toolkit"})
Outside this hunk (for _parse_supervisor_decision), consider:
  # at function top
  allowed_actions = HIL_INTERRUPT_ACTIONS | {"complete"}
  # replace the list literal
  if action in allowed_actions:
      decision["action"] = action
backend/app/core/orchestration/agent_coordinator.py (2)
24-35: Cache + persistence layering is sensible; add type hints and tiny docstrings.
Small readability win: annotate return types and briefly document the cache behavior. Optional TTL eviction can come later if needed.
- async def load_agent_state(self, session_id):
+ async def load_agent_state(self, session_id: str) -> "AgentState | None":
      state = self.active_sessions.get(session_id)
      if state:
          return state
      state = await self.weaviate_agent_state_ops.load_agent_state(session_id)
      if state:
          self.active_sessions[session_id] = state
      return state
- async def save_agent_state(self, agent_state):
+ async def save_agent_state(self, agent_state: "AgentState") -> None:
      self.active_sessions[agent_state.session_id] = agent_state
      await self.weaviate_agent_state_ops.create_or_update_agent_state(agent_state)
Additionally, at the top:
  from typing import Dict, Any  # add | None in quotes above to avoid importing Optional
79-81: Optional: include session metadata when sending normal responses.
This helps the platform keep threads aligned with persisted sessions, especially for HIL resumes.
- if result_state.final_response:
-     await self._send_response_to_platform(message_data, result_state.final_response)
+ if result_state.final_response:
+     await self._send_response_to_platform(
+         message_data,
+         result_state.final_response,
+         session_id=result_state.session_id,
+         context=result_state.context,
+     )
📒 Files selected for processing (2)
backend/app/agents/devrel/nodes/react_supervisor.py (2 hunks)
backend/app/core/orchestration/agent_coordinator.py (5 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: smokeyScraper
PR: AOSSIE-Org/Devr.AI#75
File: backend/app/agents/devrel/agent.py:34-35
Timestamp: 2025-06-13T21:56:19.183Z
Learning: In the Devr.AI backend, the DevRelAgent follows a singleton pattern where only one instance exists for the entire application lifetime, using InMemorySaver with thread-based conversation management to persist user conversations across sessions.
🧬 Code graph analysis (2)
backend/app/agents/devrel/nodes/react_supervisor.py (1)
backend/app/agents/state.py (1)
AgentState
(18-73)
backend/app/core/orchestration/agent_coordinator.py (2)
backend/app/database/weaviate/operations.py (3)
WeaviateAgentStateOperations
(381-443)load_agent_state
(400-417)create_or_update_agent_state
(419-443)backend/app/agents/state.py (1)
AgentState
(18-73)
🔇 Additional comments (2)
backend/app/agents/devrel/nodes/react_supervisor.py (2)
68-76: Supervisor introspection persisted; iteration guard wired. LGTM.

Storing supervisor_thinking, supervisor_decision, and bumping iteration_count will aid traceability and routing.

80-89: Good: user notification on HIL via final_response.

Thanks for ensuring the user gets a message whenever waiting_for_user_input is true. Please apply the same to the early repository branch (Lines 17-33), as noted above.
        # Attempt to load existing state for this session, else create new
        agent_state = None
        if session_id:
            agent_state = await self.load_agent_state(session_id)

        if not agent_state:
            session_id = str(uuid.uuid4())
            agent_state = AgentState(
                session_id=session_id,
                user_id=message_data.get("user_id", ""),
                platform=message_data.get("platform", "discord"),
                thread_id=message_data.get("thread_id"),
                channel_id=message_data.get("channel_id"),
                context={
                    "original_message": message_data.get("content", ""),
                    "classification": message_data.get("classification", {}),
                    "author": message_data.get("author", {})
                }
            )
Critical: latest inbound user message is never appended to AgentState.messages.
On resumed sessions, downstream nodes will keep seeing the stale context["original_message"]
instead of the new user input. This breaks conversation continuity and HIL prompts.
Insert ingestion just after the new-or-loaded state is established (right before the log at Line 68):
 if not agent_state:
     session_id = str(uuid.uuid4())
     agent_state = AgentState(
         session_id=session_id,
         user_id=message_data.get("user_id", ""),
         platform=message_data.get("platform", "discord"),
         thread_id=message_data.get("thread_id"),
         channel_id=message_data.get("channel_id"),
         context={
             "original_message": message_data.get("content", ""),
             "classification": message_data.get("classification", {}),
             "author": message_data.get("author", {})
         }
     )
+# Ingest the current user message into the conversation history
+agent_state.messages.append({
+    "role": "user",
+    "content": message_data.get("content", ""),
+    "timestamp": datetime.utcnow().isoformat()
+})
+agent_state.interaction_count += 1
+agent_state.last_interaction_time = datetime.utcnow()
Outside this hunk, add the import:
from datetime import datetime
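As a side note (not part of the original suggestion): datetime.utcnow() is deprecated in recent Python releases, so a timezone-aware variant of the same ingestion could be used instead; the field names below simply mirror the diff above:

    from datetime import datetime, timezone

    now = datetime.now(timezone.utc)
    agent_state.messages.append({
        "role": "user",
        "content": message_data.get("content", ""),
        "timestamp": now.isoformat(),
    })
    agent_state.interaction_count += 1
    agent_state.last_interaction_time = now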
        logger.info(
            f"Running DevRel agent for session {agent_state.session_id} with memory thread {memory_thread_id}")
        result_state = await self.devrel_agent.run(agent_state, memory_thread_id)
💡 Verification agent
🧩 Analysis chain
Verify alignment with DevRelAgent’s InMemorySaver memory.
Per your prior design (learned context: DevRelAgent uses an InMemorySaver with thread-based conversation management), ensure the ingested message also flows into the DevRelAgent's thread memory keyed by memory_thread_id. If DevRelAgent.run(...) does not push this message into its memory, you may see divergence between persisted state and in-memory tool memory.
🏁 Script executed:
#!/bin/bash
# 1) Confirm where DevRelAgent persists/reads conversation memory for a thread
rg -nP -C3 '\bInMemorySaver\b|\bConversationBuffer\b|\bmemory\b|\bthread\b' backend/app
# 2) Confirm whether the inbound message is appended to DevRelAgent memory on each request
rg -nP -C3 '\b(messages|state\.messages)\b.*(append|add|extend)|add_message|HumanMessage' backend/app
# 3) Verify how memory_thread_id is used to read/write memory
rg -nP -C3 '\bmemory_thread_id\b' backend/app
Length of output: 45649
Ensure incoming messages are saved to DevRelAgent’s thread memory
Our grep of the codebase shows that while DevRelAgent instantiates an InMemorySaver (self.checkpointer = InMemorySaver()), there are no calls to push new messages into that memory before invoking the agent graph. Likewise, in agent_coordinator, we did not find evidence of appending the inbound message to the state passed into DevRelAgent.run(). Without explicitly saving each incoming human message into the InMemorySaver (or otherwise wiring it into the StateGraph), the in-memory thread history will diverge from what's persisted.

• In backend/app/agents/devrel/agent.py, verify that _build_graph() registers self.checkpointer with the graph and that run() or its pre-invoke logic calls into self.checkpointer (e.g., saving the updated message list).
• In backend/app/core/orchestration/agent_coordinator.py, confirm that you append the latest user message to agent_state.messages (or equivalent) before calling self.devrel_agent.run(...), so that the graph sees the new input.
• Add or update code to explicitly save or push the inbound message into the InMemorySaver-backed graph state (for example, via a checkpointer.save_state(thread_id, agent_state.model_dump()) or using graph-provided memory nodes) before ainvoke.
Tagging as critical: until message ingestion into thread memory is implemented, the DevRelAgent will not maintain a consistent conversation history.
🤖 Prompt for AI Agents
In backend/app/core/orchestration/agent_coordinator.py around lines 68-71, the
inbound user message is not being appended to the agent_state or saved into the
DevRelAgent's InMemorySaver before invoking the agent; update the coordinator to
append the latest human message to agent_state.messages (or the equivalent
field) and explicitly persist it to the DevRelAgent's thread memory (e.g., call
the agent's checkpointer.save_state(thread_id, agent_state.model_dump()) or use
the graph's memory node API) immediately before calling await
self.devrel_agent.run(agent_state, memory_thread_id); also verify in
backend/app/agents/devrel/agent.py that _build_graph registers self.checkpointer
with the graph and that run() (or its pre-invoke hook) either saves the incoming
message list to the checkpointer or reads from the saved state so in-memory
thread history and persisted state remain consistent.
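For reference, a minimal self-contained sketch of the checkpointer wiring LangGraph expects when using an InMemorySaver with thread-keyed config; the state schema and node below are illustrative placeholders, not the project's actual graph:

    from typing import TypedDict
    from langgraph.graph import StateGraph, START, END
    from langgraph.checkpoint.memory import InMemorySaver

    class ChatState(TypedDict):  # hypothetical minimal state schema
        messages: list

    def supervisor(state: ChatState) -> ChatState:
        # Placeholder node; the real supervisor would call the LLM here.
        return state

    builder = StateGraph(ChatState)
    builder.add_node("supervisor", supervisor)
    builder.add_edge(START, "supervisor")
    builder.add_edge("supervisor", END)

    graph = builder.compile(checkpointer=InMemorySaver())

    # Passing a thread_id in the config is what lets the checkpointer persist
    # and restore conversation state for that thread across invocations.
    config = {"configurable": {"thread_id": "memory-thread-123"}}
    graph.invoke({"messages": [{"role": "user", "content": "hi"}]}, config=config)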
Hey @Code-Builder-io, if possible, please try attaching a few interactions with the agent. This helps a lot in reviewing and in understanding what domain the preferred responses and questions revolve around. Thanks!
Hey @Code-Builder-io, this isn't exactly the way we were planning this handler: most of the problem is that the HIL logic sits on the ReAct supervisor itself, whereas it is supposed to be separated out as its own handler.
The human review and button implementation also don't seem to work. Please go through the documentation once (Doc); it says first priority should go to FalkorDB, which isn't implemented in the codebase yet, so for now it's better to treat that whole side as a pass. For the remaining priority levels, either follow them as discussed in the document, or, better, put them all at the same level by giving the HIL support node options, so that based on the user query it can choose web_search, falkor_db, and so on.
Let's connect on a short call so that I can walk you through how the workflow needs to be implemented.
repo = state.context.get("repository") | ||
if not repo: | ||
waiting_for_user_input = True | ||
interrupt_details = { | ||
"prompt": "Before we proceed, could you please specify the project or repository you are working on?" | ||
} | ||
logger.info(f"Human-in-the-Loop interrupt: asking for repository context in session {state.session_id}") | ||
|
||
updated_context = { | ||
**state.context, | ||
"waiting_for_user_input": True, | ||
"interrupt_details": interrupt_details | ||
} | ||
return { | ||
"context": updated_context, | ||
"current_task": "waiting_for_user_input_repo" | ||
} |
The HIL workflow is intended to be a separate handler like the other nodes (faq_tool, web_search, ...). No need for HIL support in the decision router itself.
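Purely as an illustration of the shape such a standalone handler node could take (the node name, import path, and state fields are assumptions inferred from the snippets in this PR, not an agreed design):

    from app.agents.state import AgentState  # import path assumed from the repo layout

    async def hil_support_node(state: AgentState) -> dict:
        """Standalone Human-in-the-Loop handler, routed to like faq_handler or web_search."""
        # Build the user prompt from whatever the supervisor decided needs confirmation.
        pending_action = state.context.get("supervisor_decision", {}).get("action", "unknown")
        prompt = (
            f"The agent wants to run '{pending_action}'. "
            "Reply 'yes' to continue, 'no' to cancel, or describe what you'd like instead."
        )
        return {
            "context": {
                **state.context,
                "waiting_for_user_input": True,
                "interrupt_details": {"prompt": prompt, "pending_action": pending_action},
            },
            "current_task": "waiting_for_user_input",
            "final_response": prompt,
            "requires_human_review": True,
        }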
    waiting_for_user_input = False
    interrupt_details = {}

    if decision["action"] in HIL_INTERRUPT_ACTIONS:
        # Here you can add logic to decide if user input is needed
        # For example, if decision thinking contains uncertainty or multiple options
        # For demo, we just always pause at these actions to ask the user
        waiting_for_user_input = True
        interrupt_details = {
            "prompt": f"The agent wants to execute the action: {decision['action']}. Please confirm or provide input."
        }
        logger.info(
            f"Human-in-the-Loop interrupt triggered for action {decision['action']} in session {state.session_id}")

    # Update state with supervisor's thinking and interrupt flag if needed
    updated_context = {
        **state.context,
        "supervisor_thinking": response.content,
        "supervisor_decision": decision,
        "iteration_count": iteration_count + 1,
    }

    if waiting_for_user_input:
        updated_context["waiting_for_user_input"] = True
        updated_context["interrupt_details"] = interrupt_details

    return {
-       "context": {
-           **state.context,
-           "supervisor_thinking": response.content,
-           "supervisor_decision": decision,
-           "iteration_count": iteration_count + 1
-       },
-       "current_task": f"supervisor_decided_{decision['action']}"
+       "context": updated_context,
+       "current_task": (
+           f"supervisor_decided_{decision['action']}"
+           if not waiting_for_user_input else "waiting_for_user_input"
+       ),
+       **(
+           {"final_response": interrupt_details["prompt"], "requires_human_review": True}
+           if waiting_for_user_input else {}
+       )
    }
The HIL workflow is intended to be a separate handler like the other nodes (faq_tool, web_search, ...). No need for HIL support in the decision router itself.
Hey @smokeyScraper, sorry, I was relocating so I couldn't catch up with you. Please ping me on Discord whenever you're free; in the meantime I'll make the requested changes.
Closes #118
📝 Description
This pull request implements a Human-in-the-Loop interactive support workflow for the Discord bot. It adds the ability for the bot to pause and request user input at critical decision points during technical troubleshooting sessions. Key features include repository context gathering interrupts, user confirmation prompts, and enhanced session state persistence using Weaviate. Button interactions were deferred in favor of text-based replies to keep the flow stable.
🔧 Changes Made
Added Human-in-the-Loop interrupt logic in react_supervisor.py to pause and ask for repository/project context.
Enhanced agent_coordinator.py to handle state updates and continuation messages for HIL workflows.
Implemented persistent agent state management in Weaviate via WeaviateAgentStateOperations.
Updated Discord bot message handling in bot.py to support HIL session state and text-based user input.
Deferred Discord button interaction implementation due to stability issues.
Added detailed logging throughout the agent and bot for improved traceability.
📷 Screenshots or Visual Changes (if applicable)
N/A
✅ I have read the contributing guidelines.
Summary by CodeRabbit
New Features
Improvements
Documentation