security: phase-1 auth + crud context hardening (#101)
## Reviewer's Guide

Implements phase-1 security and platform hardening by normalizing AI API auth to shared helpers, enforcing client-only CRUD context with runtime guards and tests, securing and batching the import and task sync APIs with user scoping and validation, standardizing guest user handling via shared constants, adding platform detection helpers, and enhancing editor/tasks UX and the audit dashboard to track this work.

### Sequence diagram for the secured import API flow

```mermaid
sequenceDiagram
    actor User
    participant Browser
    participant ImportRoute as API_Import_POST
    participant Auth as requireMutation
    participant DB as Database
    User->>Browser: Trigger import (upload file)
    Browser->>ImportRoute: POST /api/import { items }
    ImportRoute->>Auth: requireMutation()
    Auth-->>ImportRoute: { authenticated, userId } or { response }
    alt Not authenticated
        ImportRoute-->>Browser: auth.response (401/403)
    else Authenticated
        ImportRoute->>ImportRoute: Check Content-Length <= 5MB
        alt Payload too large
            ImportRoute-->>Browser: 413 Payload too large
        else Size ok
            ImportRoute->>ImportRoute: Parse JSON body
            ImportRoute->>ImportRoute: ImportPayloadSchema.safeParse(body)
            alt Invalid payload
                ImportRoute-->>Browser: 400 Invalid payload (Zod details)
            else Valid payload
                ImportRoute->>ImportRoute: processItem() for each root item
                ImportRoute->>ImportRoute: Build foldersToInsert, notesToInsert (inject userId, deletedAt null)
                ImportRoute->>DB: transaction
                loop Folders chunks
                    DB-->>DB: insert folders
                    DB-->>DB: onConflictDoUpdate(id)
                end
                loop Notes chunks
                    DB-->>DB: insert notes
                    DB-->>DB: onConflictDoUpdate(id)
                end
                DB-->>ImportRoute: Transaction committed
                ImportRoute-->>Browser: 200 { success, stats }
            end
        end
    end
```
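The chunked insert loop in the diagram above can be sketched as a small pure helper. The `chunk` helper and the 1000-row batch size mirror the route's described behavior; the Drizzle call in the trailing comment is an assumed shape, not the exact implementation.

```typescript
// Split rows into fixed-size batches so each INSERT statement stays within
// driver/parameter limits; the import route batches 1000 rows at a time.
function chunk<T>(rows: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size))
  }
  return batches
}

// Assumed usage inside the transaction (sketch, not the real route code):
// for (const batch of chunk(foldersToInsert, 1000)) {
//   await tx.insert(folders).values(batch)
//     .onConflictDoUpdate({ target: folders.id, set: { /* ... */ } })
// }
```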
### Sequence diagram for the secured task sync API flow

```mermaid
sequenceDiagram
    actor User
    participant Browser
    participant SyncRoute as API_Tasks_Sync_POST
    participant Auth as requireMutation
    participant DB as Database
    User->>Browser: Edit note tasks
    Browser->>SyncRoute: POST /api/tasks/sync { noteId, tasks }
    SyncRoute->>Auth: requireMutation()
    Auth-->>SyncRoute: { authenticated, userId } or { response }
    alt Not authenticated
        SyncRoute-->>Browser: auth.response (401/403)
    else Authenticated
        alt Missing noteId
            SyncRoute-->>Browser: 400 { error: noteId is required }
        else noteId present
            SyncRoute->>DB: select notes.id where id = noteId and userId = userId
            DB-->>SyncRoute: note row or empty
            alt Note not found or not owned by user
                SyncRoute-->>Browser: 404 { error: Note not found }
            else Note owned by user
                SyncRoute->>DB: select * from tasks where noteId = noteId and userId = userId
                DB-->>SyncRoute: existingTasks
                SyncRoute->>SyncRoute: Build taskIdByBlockId map
                loop For each incoming task
                    SyncRoute->>SyncRoute: Ensure stable id for blockId
                    SyncRoute->>SyncRoute: Map parentTaskId via taskIdByBlockId
                    SyncRoute->>SyncRoute: Build row with userId, timestamps
                end
                SyncRoute->>DB: delete from tasks where noteId = noteId and userId = userId
                alt rows.length > 0
                    SyncRoute->>DB: insert into tasks values(rows)
                end
                DB-->>SyncRoute: OK
                SyncRoute-->>Browser: 200 { success: true }
            end
        end
    end
```
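The stable-id mapping in the loop above can be sketched as a pure function. The field names (`blockId`, `parentBlockId`) and the `newId` factory are illustrative assumptions; the real route also injects `userId` and timestamps into each row.

```typescript
interface IncomingTask {
  blockId: string
  parentBlockId?: string
  content: string
}

// Reuse existing task ids keyed by blockId so re-syncing a note keeps
// stable primary keys, and resolve parent links through the same map.
function buildTaskRows(
  incoming: IncomingTask[],
  existingIdByBlockId: Map<string, string>,
  newId: () => string,
) {
  const taskIdByBlockId = new Map(existingIdByBlockId)
  for (const task of incoming) {
    if (!taskIdByBlockId.has(task.blockId)) {
      taskIdByBlockId.set(task.blockId, newId())
    }
  }
  return incoming.map((task, position) => ({
    id: taskIdByBlockId.get(task.blockId)!,
    blockId: task.blockId,
    parentTaskId: task.parentBlockId
      ? taskIdByBlockId.get(task.parentBlockId) ?? null
      : null,
    content: task.content,
    position,
  }))
}
```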
### Entity relationship diagram for secured notes, folders, and tasks

```mermaid
erDiagram
    users {
        string id PK
    }
    folders {
        string id PK
        string userId FK
        string name
        string parentFolderId
        boolean pinned
        bigint pinnedAt
        bigint createdAt
        bigint updatedAt
        bigint deletedAt
        string type
    }
    notes {
        string id PK
        string userId FK
        string name
        text content
        string parentFolderId
        boolean pinned
        bigint pinnedAt
        boolean favorite
        bigint createdAt
        bigint updatedAt
        bigint deletedAt
        string type
    }
    tasks {
        string id PK
        string userId FK
        string noteId FK
        string blockId
        text content
        text description
        boolean checked
        bigint dueDate
        string parentTaskId FK
        int position
        bigint createdAt
        bigint updatedAt
    }
    users ||--o{ folders : owns
    users ||--o{ notes : owns
    users ||--o{ tasks : owns
    folders ||--o{ folders : parent_of
    folders ||--o{ notes : contains
    notes ||--o{ tasks : has
    tasks ||--o{ tasks : parent_of
```
### Class diagram for auth constants, platform helpers, and CRUD context guard

```mermaid
classDiagram
    class AuthConstants {
        <<module>>
        +const GUEST_USER_ID : string
        +const LEGACY_GUEST_USER_ID : string
        +const GUEST_USER_IDS : readonly string[]
        +isGuestUserId(userId string) bool
    }
    class PlatformHelpers {
        <<module>>
        +isTauri() bool
        +isExpo() bool
        +isWeb() bool
        +isServer() bool
    }
    class Platform {
        <<object>>
        +isWeb() bool
        +isTauri() bool
        +isExpo() bool
        +isServer() bool
        +web : bool
        +tauri : bool
        +expo : bool
        +server : bool
    }
    class CrudContext {
        <<module>>
        -currentContext : UserContext
        +setUserContext(ctx UserContext) void
        +getUserContext() UserContext
        +clearUserContext() void
        +getCrudUserId() string
        +withUser(userId string, fn PromiseFunction~T~) Promise~T~
        +withUserSync(userId string, fn Function~T~) T
        +createScopedContext(ctx UserContext) ScopedContext
        -isServerRuntime() bool
        -assertClientRuntime(api string) void
    }
    class UserContext {
        <<type>>
        +userId : string
        +other metadata
    }
    class ScopedContext {
        <<type>>
        +getUserId() string
        +getContext() UserContext
    }
    AuthConstants <.. PlatformHelpers : shared utilities
    PlatformHelpers <.. Platform : uses
    UserContext <.. CrudContext : manages
    ScopedContext <.. CrudContext : creates
    note for CrudContext "assertClientRuntime throws if window is undefined, preventing use of CRUD context on the server runtime"
```
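A minimal sketch of the client-only guard the note describes, in plain TypeScript. The real module's exact shape may differ; only the `typeof window` check and the method names come from the diagram.

```typescript
// `window` is undeclared in server runtimes; declaring it as unknown keeps
// the typeof check type-safe without pulling in DOM type definitions.
declare const window: unknown

type UserContext = { userId: string }

let currentContext: UserContext | null = null

function isServerRuntime(): boolean {
  // `window` only exists in browser runtimes, so this typeof check is a
  // cheap client/server discriminator.
  return typeof window === 'undefined'
}

function assertClientRuntime(api: string): void {
  if (isServerRuntime()) {
    throw new Error(`${api} is client-only; derive the user from requireAuth/requireMutation on the server`)
  }
}

function setUserContext(ctx: UserContext): void {
  assertClientRuntime('setUserContext')
  currentContext = ctx
}

function getCrudUserId(): string {
  assertClientRuntime('getCrudUserId')
  if (!currentContext) throw new Error('No user context set')
  return currentContext.userId
}

async function withUser<T>(userId: string, fn: () => Promise<T>): Promise<T> {
  assertClientRuntime('withUser')
  const previous = currentContext
  currentContext = { userId }
  try {
    return await fn()
  } finally {
    currentContext = previous // restore outer scope so nested calls compose
  }
}
```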
Caution: review failed; the pull request is closed.

## Walkthrough

This PR centralizes authentication patterns across multiple API routes using shared helpers.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Hey - I've found 4 issues, and left some high level feedback:
- In `apps/web/app/api/import/route.ts`, the `onConflictDoUpdate` handlers for folders and notes only target `id` and will happily update rows regardless of `userId`; in a multi-tenant DB this could let imported data overwrite other users' records on ID collision, so consider scoping conflicts (and/or update conditions) by both `id` and `userId`.
- In `apps/web/app/api/tasks/sync/route.ts`, the delete-then-insert replacement of tasks for a note is not wrapped in a transaction, which can leave a note with no tasks or a partially written set if the insert fails; consider using a transaction (or UPSERT pattern) around the delete/insert block for better atomicity.
- The updated `insertTemplate` implementation in `NoteFooterBar` uses `editor.insertBlocks(blocks as any, blocks[0] as any, 'after')` when the document is empty, which passes a block from the new content as the anchor instead of an existing document block; this is likely incompatible with the editor API and could no-op or throw, so it would be safer to use the current first document block (or a dedicated insertion API) as the anchor.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `apps/web/app/api/import/route.ts`, the `onConflictDoUpdate` handlers for folders and notes only target `id` and will happily update rows regardless of `userId`; in a multi-tenant DB this could let imported data overwrite other users’ records on ID collision, so consider scoping conflicts (and/or update conditions) by both `id` and `userId`.
- In `apps/web/app/api/tasks/sync/route.ts`, the delete-then-insert replacement of tasks for a note is not wrapped in a transaction, which can leave a note with no tasks or a partially written set if the insert fails; consider using a transaction (or UPSERT pattern) around the delete/insert block for better atomicity.
- The updated `insertTemplate` implementation in `NoteFooterBar` uses `editor.insertBlocks(blocks as any, blocks[0] as any, 'after')` when the document is empty, which passes a block from the new content as the anchor instead of an existing document block; this is likely incompatible with the editor API and could no-op or throw, so it would be safer to use the current first document block (or a dedicated insertion API) as the anchor.
## Individual Comments
### Comment 1
<location> `apps/web/features/editor/components/note-footer-bar.tsx:71-73` </location>
<code_context>
- if (!anchor) return
- editor.insertBlocks(blocks as any, anchor, 'after')
- editor.setTextCursorPosition(anchor, 'end')
+ if (editor.document?.length) {
+ editor.replaceBlocks(editor.document, blocks as any)
+ } else {
+ editor.insertBlocks(blocks as any, blocks[0] as any, 'after')
+ }
</code_context>
<issue_to_address>
**issue (bug_risk):** Anchor passed to insertBlocks for an empty document looks incorrect and may no-op or misplace the template.
In the `else` branch, `insertBlocks(blocks as any, blocks[0] as any, 'after')` uses `blocks[0]` as the anchor even though it isn’t in `editor.document`, so the insertion may no-op or land in an unexpected place. Instead, anchor to an existing root node (e.g. `editor.document?.[0]`) or use an API that inserts at the start of an empty document to preserve the previous behavior.
</issue_to_address>
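One way to realize the suggested fix, sketched against a stubbed editor interface. The `Block` and `TemplateEditor` shapes below are stand-ins for BlockNote's API, not its real types.

```typescript
interface Block { id: string; text: string }

interface TemplateEditor {
  document: Block[]
  replaceBlocks(target: Block[], blocks: Block[]): void
  insertBlocks(blocks: Block[], anchor: Block, placement: 'before' | 'after'): void
}

// Always anchor to a block that is actually in the document; never to
// blocks[0], which belongs to the incoming template, not the document.
function insertTemplate(editor: TemplateEditor, blocks: Block[]): void {
  const anchor = editor.document[0]
  if (anchor) {
    // Document has at least one block: replace existing content wholesale.
    editor.replaceBlocks(editor.document, blocks)
  }
  // A truly empty document has no valid anchor; a dedicated "insert at
  // start" API would be needed there instead of insertBlocks.
}
```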
### Comment 2
<location> `apps/web/app/api/import/route.ts:117-123` </location>
<code_context>
- return NextResponse.json({ success: true })
+ // Execute Batched Inserts in Transaction
+ await db.transaction(async (tx) => {
+ // 1. Insert Folders
+ if (foldersToInsert.length > 0) {
+ const chunkSize = 1000
+ for (let i = 0; i < foldersToInsert.length; i += chunkSize) {
+ const chunk = foldersToInsert.slice(i, i + chunkSize)
+ await tx.insert(folders)
+ .values(chunk)
+ .onConflictDoUpdate({
</code_context>
<issue_to_address>
**issue (bug_risk):** Import upserts can overwrite other users’ data because conflicts are resolved only on id, not (userId, id).
The upserts for folders and notes use `onConflictDoUpdate({ target: folders.id })` / `target: notes.id` while also setting `userId` in the values. If two users import records with the same `id`, the later import will overwrite the other user’s row instead of creating a separate record.
To avoid cross-user data clobbering, either:
- Use a composite unique key `(userId, id)` as the conflict target, or
- When a conflict is detected with a different `userId`, skip the upsert or assign a new ID for the importing user.
As written, a malicious or buggy client can overwrite another user’s data by choosing colliding IDs.
</issue_to_address>
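The clobbering risk can be demonstrated with an in-memory upsert keyed both ways (a pure simulation; in Drizzle the fix would be a composite conflict target such as `target: [folders.userId, folders.id]`, which assumes a matching composite unique index exists).

```typescript
interface ImportRow { id: string; userId: string; name: string }

// Conflict resolved on id alone: a second user's import with a colliding
// id silently overwrites the first user's row.
function upsertById(store: Map<string, ImportRow>, row: ImportRow): void {
  store.set(row.id, row)
}

// Conflict resolved on (userId, id): each user keeps an independent row,
// so colliding ids cannot clobber another tenant's data.
function upsertByUserAndId(store: Map<string, ImportRow>, row: ImportRow): void {
  store.set(`${row.userId}:${row.id}`, row)
}
```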
### Comment 3
<location> `apps/web/app/api/tasks/sync/route.ts:74-78` </location>
<code_context>
+ }
+ })
+
+ // Replace note-scoped task rows atomically enough for current usage.
+ await db.delete(tasks).where(and(eq(tasks.noteId, body.noteId), eq(tasks.userId, userId)))
+ if (rows.length > 0) {
+ await db.insert(tasks).values(rows)
}
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Deleting and reinserting tasks without an explicit transaction can cause inconsistent state under concurrent calls.
The current sequence:
1) delete all tasks for the note/user
2) insert the rebuilt rows
can race under concurrent syncs: a later delete can remove rows inserted by an earlier sync, and any error after the delete leaves the note with no tasks. Please wrap the delete + insert in a single transaction (if `getDatabase` supports it) so each sync is all-or-nothing and not affected by concurrent calls.
```suggestion
// Replace note-scoped task rows atomically within a transaction to avoid
// inconsistencies under concurrent syncs.
await db.transaction(async (tx) => {
await tx
.delete(tasks)
.where(and(eq(tasks.noteId, body.noteId), eq(tasks.userId, userId)))
if (rows.length > 0) {
await tx.insert(tasks).values(rows)
}
})
```
</issue_to_address>
### Comment 4
<location> `apps/web/__tests__/api/ai-auth-routes.test.ts:126-110` </location>
<code_context>
+describe('AI auth route normalization', () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Broaden AI auth route tests to cover successful flows and rate-limit errors for stronger regression protection
The existing tests cover unauthenticated delegation to `requireAuth` and `userId` scoping, but they miss the new success and error paths on these sensitive routes. To align with the "AUTH-001 verification" goal, consider adding:
- `POST /api/ai/prompt` success: mock `requireAuth` as authenticated, `checkRateLimit` as allowed, and `sendPrompt` to return a response; assert the response shape (e.g. `text`, `tokensUsed`) and that `aiPromptLog.insert` receives the correct `userId` and provider/model.
- `POST /api/ai/prompt` rate-limited: mock `checkRateLimit` to return `allowed: false` and assert a 429 with the expected message.
- `GET /api/ai/config` with existing config: set `_setResults` to a single row and assert the decrypted response matches the DB data for the authenticated `userId`.
- `GET /api/ai/usage` unauthenticated: mirror your prompt test to ensure the unauthenticated `requireAuth` result is returned directly and the DB is not queried.
Suggested implementation:
```typescript
mockDb.set.mockClear()
})
describe('AI auth route normalization', () => {
it('POST /api/ai/prompt returns prompt response and logs usage when authenticated and not rate-limited', async () => {
const userId = 'user_123'
const provider = 'openai'
const model = 'gpt-4'
const prompt = 'Hello, AI'
// authenticated user
mockRequireAuth.mockResolvedValue({
authenticated: true,
userId
})
// rate limit allows the request
mockCheckRateLimit.mockResolvedValue({
allowed: true,
limit: 1000,
remaining: 999,
reset: new Date().toISOString()
})
// AI provider returns a response
const promptResult = {
text: 'Hi there!',
tokensUsed: 42,
provider,
model
}
mockSendPrompt.mockResolvedValue(promptResult)
const request = new Request('http://localhost/api/ai/prompt', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ prompt })
})
const response = await promptPost(request)
expect(response.status).toBe(200)
const body = await response.json()
expect(body).toEqual(
expect.objectContaining({
text: promptResult.text,
tokensUsed: promptResult.tokensUsed,
provider,
model
})
)
// ensure we log usage with the correct user and provider/model
expect(mockDb.insert).toHaveBeenCalled()
expect(mockDb.values).toHaveBeenCalledWith(
expect.objectContaining({
userId,
provider,
model
})
)
})
it('POST /api/ai/prompt returns 429 when rate-limited', async () => {
const userId = 'user_456'
// authenticated user
mockRequireAuth.mockResolvedValue({
authenticated: true,
userId
})
// rate limit denies the request
mockCheckRateLimit.mockResolvedValue({
allowed: false,
limit: 1000,
remaining: 0,
reset: new Date().toISOString()
})
const request = new Request('http://localhost/api/ai/prompt', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ prompt: 'This should be rate limited' })
})
const response = await promptPost(request)
expect(response.status).toBe(429)
const body = await response.json()
expect(body).toEqual(
expect.objectContaining({
error: expect.any(String)
})
)
})
it('GET /api/ai/config returns decrypted config for authenticated user', async () => {
const userId = 'user_789'
const provider = 'openai'
const model = 'gpt-4o'
const baseUrl = 'https://api.openai.com/v1'
const encryptedApiKey = 'encrypted-key'
const decryptedApiKey = 'sk-test-decrypted'
mockRequireAuth.mockResolvedValue({
authenticated: true,
userId
})
// DB returns a single config row for this user
mockDb._setResults([
{
userId,
provider,
model,
baseUrl,
apiKeyEncrypted: encryptedApiKey
}
])
// decrypt returns a usable API key
if (typeof mockDecrypt === 'function') {
mockDecrypt.mockResolvedValue(decryptedApiKey)
}
const request = new Request('http://localhost/api/ai/config', {
method: 'GET'
})
const response = await configGet(request)
expect(response.status).toBe(200)
const body = await response.json()
expect(body).toEqual(
expect.objectContaining({
provider,
model,
baseUrl
})
)
// if we have a decrypt mock, ensure the decrypted key is surfaced
if (typeof mockDecrypt === 'function') {
expect(body).toEqual(
expect.objectContaining({
apiKey: decryptedApiKey
})
)
}
})
it('GET /api/ai/usage returns auth helper response when unauthenticated and does not hit the DB', async () => {
const unauthResponse = new Response(JSON.stringify({ error: 'Unauthorized' }), {
status: 401,
headers: { 'content-type': 'application/json' }
})
mockRequireAuth.mockResolvedValue({
authenticated: false,
response: unauthResponse
})
const request = new Request('http://localhost/api/ai/usage', {
method: 'GET'
})
const response = await usageGet(request)
// returns the exact response from the auth helper
expect(response).toBe(unauthResponse)
// ensure we did not query usage stats while unauthenticated
expect(mockDb.select).not.toHaveBeenCalled()
expect(mockDb.from).not.toHaveBeenCalled()
})
})
describe('AI auth route normalization', () => {
```
These edits assume the following already exist in this test file (they likely do, based on the existing tests and mocks):
1. Handlers:
- `promptPost` for `POST /api/ai/prompt`
- `configGet` for `GET /api/ai/config`
- `usageGet` for `GET /api/ai/usage`
2. Mocks:
- `mockRequireAuth` for the authentication helper
- `mockCheckRateLimit` for the rate-limit helper used by the prompt route
- `mockSendPrompt` (or similarly named) for the AI provider call used in the prompt route
- `mockDb` with `_setResults`, `select`, `from`, `insert`, and `values` mocks
- An optional `mockDecrypt` (or similar) for decrypting the stored API key
If the actual names differ (e.g. `promptPOST` instead of `promptPost`, `getConfig` instead of `configGet`, `decrypt` instead of `mockDecrypt`), update the new tests to match the real imports/mocks. Also, if the config handler returns a slightly different shape (e.g. nested under `config`), adjust the `expect(body).toEqual(expect.objectContaining(...))` assertions accordingly. The new describe block uses the same "AI auth route normalization" label; if you prefer a unique name, you can rename it to something like `"AI auth route normalization (success and error flows)"` without changing the test logic.
</issue_to_address>
## Summary

- normalize AI auth routes to shared requireAuth pattern
- harden CRUD context with server runtime guard
- add CRUD context guard tests and API auth route tests
- normalize guest-user constants usage and update audit dashboard progress

## Dashboard

- completes CRITICAL-003 guard/test path and AUTH-001 verification
- updates docs/audit-dashboard.html statuses accordingly
Summary by Sourcery
Harden API authentication and CRUD context usage, normalize guest user handling, and update editor, tasks, and AI flows along with adding an audit dashboard and tests.