chore(aiagent): store memory in db #7053
Conversation
Deploying windmill with Cloudflare Pages

| Latest commit | 57fc166 |
| --- | --- |
| Status | ✅ Deploy successful! |
| Preview URL | https://4a890de5.windmill.pages.dev |
| Branch Preview URL | https://fg-memory32.windmill.pages.dev |
/updatesqlx

Starting sqlx update...

❌ SQLx update failed. Please check the workflow logs for details.
Claude finished @centdix's task - View job

PR Review: AI Agent Memory Storage

✅ Review Complete

Overview

This PR implements database storage for AI agent memory as a fallback when S3 is not configured. The implementation adds a new `ai_agent_memory` table.

Database Migration Review

✅ Migration Structure (backend/migrations/20251104135208_ai_agent_memory.up.sql:4-12)

The table structure looks solid:
```rust
    Ok(())
    Ok(result)
}
```
Performance: O(n²) truncation algorithm
The current truncation approach serializes the entire message array on each iteration, resulting in O(n²) complexity. For large message arrays, consider a binary search approach:
```rust
fn truncate_messages(
    messages: &[OpenAIMessage],
    max_size_bytes: usize,
) -> Result<Vec<OpenAIMessage>, Error> {
    if messages.is_empty() {
        return Ok(vec![]);
    }

    // Binary search for the maximum number of recent messages that fit
    let mut left = 0;
    let mut right = messages.len();
    while left < right {
        let mid = (left + right + 1) / 2;
        let test_messages = &messages[messages.len() - mid..];
        let test_json = serde_json::to_value(test_messages)?;
        if test_json.to_string().len() <= max_size_bytes {
            left = mid;
        } else {
            right = mid - 1;
        }
    }

    Ok(messages[messages.len() - left..].to_vec())
}
```

This reduces complexity from O(n²) to O(n log n).
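The binary-search bound can be sanity-checked with a simplified, dependency-free sketch that uses plain strings in place of `OpenAIMessage` and joined byte length in place of JSON serialization (the function and variable names here are illustrative, not from the PR):

```rust
// Simplified sketch of the binary-search truncation: keep the longest
// suffix of recent messages whose total size fits the budget.
fn truncate_tail(messages: &[String], max_size_bytes: usize) -> Vec<String> {
    if messages.is_empty() {
        return vec![];
    }
    // Total size of the suffix containing the `n` most recent messages.
    let size_of_suffix = |n: usize| -> usize {
        messages[messages.len() - n..].iter().map(|m| m.len()).sum()
    };
    let (mut left, mut right) = (0usize, messages.len());
    while left < right {
        let mid = (left + right + 1) / 2;
        if size_of_suffix(mid) <= max_size_bytes {
            left = mid; // suffix of `mid` messages fits; try a larger one
        } else {
            right = mid - 1;
        }
    }
    messages[messages.len() - left..].to_vec()
}

fn main() {
    let msgs: Vec<String> = vec!["aaaa", "bbbb", "cc", "d"]
        .into_iter().map(String::from).collect();
    // A 7-byte budget keeps the 3 most recent messages (4 + 2 + 1 = 7).
    let kept = truncate_tail(&msgs, 7);
    assert_eq!(kept, vec!["bbbb", "cc", "d"]);
    println!("kept {} messages", kept.len());
}
```

Since suffix size grows monotonically with suffix length, the binary search is valid, and each probe does one linear pass, giving the O(n log n) total.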
```sql
    created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    PRIMARY KEY (workspace_id, conversation_id, step_id)
);
```
Suggestion: Add index for deletion queries
The delete_conversation_from_db function in memory_common.rs queries by (workspace_id, conversation_id) without the step_id. While the primary key covers the full tuple, an additional index would optimize conversation-wide deletions:
```sql
CREATE INDEX idx_ai_agent_memory_conversation
    ON ai_agent_memory (workspace_id, conversation_id);
```

This matters most if conversations can have many steps. Note, however, that `(workspace_id, conversation_id)` is the leading prefix of the composite primary key, so Postgres can already serve these deletions from the primary-key index rather than a full scan; benchmark before adding a second index that must be maintained on every write.
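The prefix-coverage point can be illustrated without a database: a B-tree ordered on `(workspace_id, conversation_id, step_id)` keeps every row of one conversation contiguous, so a prefix lookup narrows to that run with two binary searches. A stdlib-only sketch (the row tuples and function name are hypothetical):

```rust
// Rows ordered as a composite B-tree would order them:
// (workspace_id, conversation_id, step_id).
fn rows_for_conversation<'a>(
    rows: &'a [(&'a str, i64, i64)],
    ws: &str,
    conv: i64,
) -> &'a [(&'a str, i64, i64)] {
    // Two binary searches bound the contiguous run sharing the key prefix.
    let start = rows.partition_point(|r| (r.0, r.1) < (ws, conv));
    let end = rows.partition_point(|r| (r.0, r.1) <= (ws, conv));
    &rows[start..end]
}

fn main() {
    let rows = [
        ("w1", 1, 1), ("w1", 1, 2), ("w1", 2, 1), ("w2", 1, 1),
    ];
    let hits = rows_for_conversation(&rows, "w1", 1);
    assert_eq!(hits.len(), 2); // both steps of conversation ("w1", 1)
    println!("{} rows share the (workspace, conversation) prefix", hits.len());
}
```

The same locality is why a leading-prefix lookup on a composite index never needs to scan rows outside the matching run.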
```rust
        Ok(Some(messages))
    match result {
        Some(row) => {
            let messages: Vec<OpenAIMessage> = serde_json::from_value(row.messages)?;
```
Suggestion: Add error context
Deserialization errors could be more descriptive to help debugging:
```rust
let messages: Vec<OpenAIMessage> = serde_json::from_value(row.messages)
    .map_err(|e| Error::internal_err(format!("Failed to deserialize AI agent messages: {}", e)))?;
```

This helps identify whether corruption occurred in the database or if there's a schema mismatch.
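The context-attaching pattern generalizes to any fallible conversion; a self-contained sketch using std integer parsing instead of serde (the function name and message text are illustrative):

```rust
// Illustrates the map_err-with-context pattern, using std parsing so the
// example runs without serde or sqlx.
fn parse_count(raw: &str) -> Result<u32, String> {
    raw.trim()
        .parse::<u32>()
        // Attach context so logs show *what* failed to parse, not just why.
        .map_err(|e| format!("Failed to deserialize step count '{}': {}", raw, e))
}

fn main() {
    assert_eq!(parse_count(" 42 "), Ok(42));
    let err = parse_count("oops").unwrap_err();
    assert!(err.starts_with("Failed to deserialize step count"));
    println!("{}", err);
}
```

The bare `?` loses this context: the caller sees only the low-level parse or serde error, with no hint of which value was being converted.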
/eeref

Starting ee ref update...

Successfully updated ee-repo-ref.txt
No description provided.