feat(core): phase 2 shared schemas and API payload validation (#102)
remcostoeten merged 1 commit into daddy
Conversation
Reviewer's Guide

Introduces a new @skriuw/core package that centralizes Zod schemas, shared types, and business rules, then wires those schemas into web app API routes for notes, settings, tasks, AI config, and import, adding consistent 400 responses on invalid payloads plus tests and dashboard updates for Phase 2 validation work.

Sequence diagram for API request validation with @skriuw/core schemas

sequenceDiagram
actor Client
participant Route as NotesRoute_POST
participant Core as Core_CreateNoteSchema
participant DB as Database
Client->>Route: HTTP POST /api/notes with JSON body
Route->>Route: requireMutation
Route->>Route: request.json()
Route->>Core: CreateNoteSchema.safeParse(body)
alt payload invalid
Core-->>Route: success false, error
Route-->>Client: 400 Invalid payload with error.flatten
else payload valid
Core-->>Route: success true, data
Route->>Route: normalize type, timestamps
Route->>DB: insert into folders or notes
DB-->>Route: inserted record
Route-->>Client: 200 JSON note or folder response
end
Class diagram for @skriuw/core schemas, types, and rules

classDiagram
class Rules {
<<module>>
MAX_NOTE_NAME_LENGTH
MAX_NOTE_CONTENT_BYTES
MAX_TASK_CONTENT_LENGTH
MAX_TASK_DESCRIPTION_LENGTH
MAX_SHORTCUT_KEYS_PER_COMBO
MAX_SHORTCUT_COMBOS
AI_DEFAULT_TEMPERATURE
AI_MIN_TEMPERATURE
AI_MAX_TEMPERATURE
AI_DAILY_PROMPT_LIMIT
IMPORT_MAX_PAYLOAD_BYTES
IMPORT_MAX_ITEMS
RATE_LIMITS
}
class CreateNoteSchema {
+string name
+string type
+unknown content
+string parentFolderId
+string icon
+string coverImage
+string[] tags
+boolean pinned
+boolean favorite
+boolean isPublic
}
class UpdateNoteSchema {
+string id
+string name
+unknown content
+string parentFolderId
+string icon
+string coverImage
+string[] tags
+boolean pinned
+number pinnedAt
+boolean favorite
+boolean isPublic
+string publicId
+number publicViews
+number updatedAt
}
class NoteResponseSchema {
+string id
+string type
+string name
+unknown content
+string parentFolderId
+string icon
+string coverImage
+string[] tags
+boolean pinned
+number pinnedAt
+boolean favorite
+boolean isPublic
+string publicId
+number publicViews
+string userId
+number createdAt
+number updatedAt
}
class FolderResponseSchema {
+string id
+string type
+string name
+string parentFolderId
+boolean pinned
+number pinnedAt
+string userId
+number createdAt
+number updatedAt
+unknown[] children
}
class TaskSchema {
+string id
+string noteId
+string noteName
+string blockId
+string content
+string description
+number checked
+number dueDate
+string parentTaskId
+number position
+number createdAt
+number updatedAt
}
class TaskCreateSchema {
+string noteId
+string blockId
+string content
+string description
+boolean checked
+number dueDate
+string parentTaskId
+number position
}
class TaskUpdateSchema {
+string content
+string description
+boolean checked
+number dueDate
+number updatedAt
}
class TaskSyncItemSchema {
+string blockId
+string content
+boolean checked
+string parentTaskId
+number position
}
class TaskSyncPayloadSchema {
+string noteId
+TaskSyncItemSchema[] tasks
}
class AIConfigCreateSchema {
+string provider
+string model
+string basePrompt
+number temperature
}
class AIConfigPatchSchema {
+string provider
+string model
+string basePrompt
+number temperature
}
class AIConfigResponseSchema {
+string id
+string provider
+string model
+string basePrompt
+number temperature
+boolean isActive
}
class AIPromptRequestSchema {
+string prompt
+boolean isTest
+string configId
}
class AIPromptResponseSchema {
+string content
+number tokensUsed
+string provider
+string model
}
class AIUsageResponseSchema {
+number promptsUsedToday
+number promptsRemaining
+number resetAt
}
class SettingsUpsertSchema {
+record settings
}
class SettingsRecordSchema {
+string id
+string key
+record value
+string userId
+number createdAt
+number updatedAt
}
class ImportItemSchema {
+string id
+string name
+string type
+boolean pinned
+number pinnedAt
+number createdAt
+number updatedAt
+string parentFolderId
+unknown content
+unknown favorite
+ImportItemSchema[] children
}
class ImportPayloadSchema {
+ImportItemSchema[] items
}
class ShortcutUpsertSchema {
+string id
+string[][] keys
+number customizedAt
}
class ShortcutResponseSchema {
+string id
+string[][] keys
+string userId
+number customizedAt
+number createdAt
+number updatedAt
}
class CoreTypes {
<<module>>
+type Id
+type Timestamp
+type NoteType
+type CreateNoteInput
+type UpdateNoteInput
+type NoteResponse
+type FolderResponse
+type NoteOrFolderResponse
+type SettingsUpsertInput
+type SettingsRecord
+type Task
+type TaskCreateInput
+type TaskUpdateInput
+type TaskSyncItem
+type TaskSyncPayload
+type KeyCombo
+type ShortcutUpsertInput
+type ShortcutResponse
+type AIConfigCreateInput
+type AIConfigPatchInput
+type AIConfigResponse
+type AIPromptRequest
+type AIPromptResponse
+type AIUsageResponse
+type ImportItem
+type ImportPayload
}
Rules <.. CreateNoteSchema
Rules <.. UpdateNoteSchema
Rules <.. TaskSchema
Rules <.. TaskCreateSchema
Rules <.. TaskUpdateSchema
Rules <.. TaskSyncItemSchema
Rules <.. AIConfigCreateSchema
Rules <.. AIConfigPatchSchema
Rules <.. ImportPayloadSchema
Rules <.. ShortcutUpsertSchema
CreateNoteSchema <|-- CreateNoteInput
UpdateNoteSchema <|-- UpdateNoteInput
NoteResponseSchema <|-- NoteResponse
FolderResponseSchema <|-- FolderResponse
TaskSchema <|-- Task
TaskCreateSchema <|-- TaskCreateInput
TaskUpdateSchema <|-- TaskUpdateInput
TaskSyncItemSchema <|-- TaskSyncItem
TaskSyncPayloadSchema <|-- TaskSyncPayload
AIConfigCreateSchema <|-- AIConfigCreateInput
AIConfigPatchSchema <|-- AIConfigPatchInput
AIConfigResponseSchema <|-- AIConfigResponse
AIPromptRequestSchema <|-- AIPromptRequest
AIPromptResponseSchema <|-- AIPromptResponse
AIUsageResponseSchema <|-- AIUsageResponse
SettingsUpsertSchema <|-- SettingsUpsertInput
SettingsRecordSchema <|-- SettingsRecord
ImportItemSchema <|-- ImportItem
ImportPayloadSchema <|-- ImportPayload
ShortcutUpsertSchema <|-- ShortcutUpsertInput
ShortcutResponseSchema <|-- ShortcutResponse
TaskSyncPayloadSchema *-- TaskSyncItemSchema
ImportPayloadSchema *-- ImportItemSchema
ImportItemSchema *-- ImportItemSchema
ShortcutResponseSchema o-- ShortcutUpsertSchema
CoreTypes ..> CreateNoteSchema
CoreTypes ..> UpdateNoteSchema
CoreTypes ..> NoteResponseSchema
CoreTypes ..> FolderResponseSchema
CoreTypes ..> TaskSchema
CoreTypes ..> TaskCreateSchema
CoreTypes ..> TaskUpdateSchema
CoreTypes ..> TaskSyncItemSchema
CoreTypes ..> TaskSyncPayloadSchema
CoreTypes ..> AIConfigCreateSchema
CoreTypes ..> AIConfigPatchSchema
CoreTypes ..> AIConfigResponseSchema
CoreTypes ..> AIPromptRequestSchema
CoreTypes ..> AIPromptResponseSchema
CoreTypes ..> AIUsageResponseSchema
CoreTypes ..> SettingsUpsertSchema
CoreTypes ..> SettingsRecordSchema
CoreTypes ..> ImportItemSchema
CoreTypes ..> ImportPayloadSchema
CoreTypes ..> ShortcutUpsertSchema
CoreTypes ..> ShortcutResponseSchema
File-Level Changes
Hey - I've found 5 issues, and left some high level feedback:

- The `invalidPayload` helper is duplicated across several API routes; consider extracting a shared utility (e.g., in `apps/web/lib/api-validation.ts`) to centralize error formatting and status handling.
- Each schema file redefines `TimestampSchema` as `z.number().int().nonnegative()`; you could export a shared primitive (e.g., from a `schemas/primitives.ts`) to avoid duplication and keep time-related validation consistent.
- In `ImportItemSchema` you type it as `z.ZodType<any>`, which loses static type safety; consider defining the union shape explicitly with proper generics so the inferred `ImportItem` type is more precise and less `any`-driven.
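The second bullet's shared-primitive idea can be sketched as follows. This is a hand-rolled guard so the sketch stays self-contained; in the repo it would instead be a single exported Zod schema (`z.number().int().nonnegative()`) reused by every schema file.

```typescript
// Sketch of a shared timestamp primitive: one definition of the rule
// (non-negative integer, e.g. epoch milliseconds) instead of one per file.
function isTimestamp(value: unknown): value is number {
  return typeof value === 'number' && Number.isInteger(value) && value >= 0
}

// Any schema needing a timestamp reuses the one definition.
const createdAt: unknown = Date.now()
const valid = isTimestamp(createdAt)
```

Centralizing the primitive means a future change (say, rejecting timestamps before some epoch floor) happens in exactly one place.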
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `invalidPayload` helper is duplicated across several API routes; consider extracting a shared utility (e.g., in `apps/web/lib/api-validation.ts`) to centralize error formatting and status handling.
- Each schema file redefines `TimestampSchema` as `z.number().int().nonnegative()`; you could export a shared primitive (e.g., from a `schemas/primitives.ts`) to avoid duplication and keep time-related validation consistent.
- In `ImportItemSchema` you type it as `z.ZodType<any>`, which loses static type safety; consider defining the union shape explicitly with proper generics so the inferred `ImportItem` type is more precise and less `any`-driven.
## Individual Comments
### Comment 1
<location> `apps/web/app/api/ai/config/route.ts:8-14` </location>
<code_context>
import { encryptPrompt, decryptPrompt } from '@/features/ai/utilities'
+import { z } from 'zod'
+
+function invalidPayload(error: z.ZodError) {
+ return NextResponse.json(
+ {
+ error: 'Invalid payload',
+ details: error.flatten()
+ },
+ { status: 400 }
+ )
+}
</code_context>
<issue_to_address>
**suggestion:** Consider centralizing the `invalidPayload` helper instead of duplicating it across multiple route files.
The same helper is defined in multiple route modules. Extracting it to a shared utility (for example under `lib/http` or `lib/validation`) would reduce duplication, keep the error response consistent, and make future changes easier to apply.
Suggested implementation:
```typescript
import { requireAuth } from '@/lib/api-auth'
import { AIConfigCreateSchema, AIConfigPatchSchema } from '@skriuw/core'
import { getDatabase, aiProviderConfig, eq, and } from '@skriuw/db'
import { NextResponse } from 'next/server'
import { encryptPrompt, decryptPrompt } from '@/features/ai/utilities'
import { invalidPayload } from '@/lib/http/invalid-payload'
```
```typescript
export async function GET() {
```
1. Create a shared helper at `apps/web/lib/http/invalid-payload.ts` (or equivalent alias path for `@/lib/http/invalid-payload`) with something like:
```ts
import { NextResponse } from 'next/server'
import { ZodError } from 'zod'
export function invalidPayload(error: ZodError) {
return NextResponse.json(
{
error: 'Invalid payload',
details: error.flatten(),
},
{ status: 400 },
)
}
```
2. Update all other route files that currently define their own `invalidPayload` helper to remove the local function and `z` import, and instead import `invalidPayload` from `@/lib/http/invalid-payload`.
3. If `NextResponse` is not used elsewhere in this route file beyond the `invalidPayload` helper, you can safely remove the `NextResponse` import from this file as well.
</issue_to_address>
### Comment 2
<location> `apps/web/app/api/tasks/item/[taskId]/route.ts:146-157` </location>
<code_context>
}
- if (typeof body.checked === 'number' || typeof body.checked === 'boolean') {
- updateData.checked = body.checked ? 1 : 0
+ if (typeof payload.checked === 'number' || typeof payload.checked === 'boolean') {
+ updateData.checked = payload.checked ? 1 : 0
}
- if (body.dueDate !== undefined) {
</code_context>
<issue_to_address>
**suggestion:** Align the runtime type checks with the `TaskUpdateSchema` to avoid redundant or slightly divergent validation logic.
Because `TaskUpdateSchema.checked` already restricts the value to `boolean | 0 | 1`, this `typeof` check duplicates that validation and may drift if the schema changes. Instead, rely on the schema by checking for key presence (e.g. `'checked' in payload`) and then normalizing the value, keeping all validation rules in one place.
```suggestion
if (typeof payload.content === 'string') {
updateData.content = payload.content
}
if (payload.description !== undefined) {
updateData.description = payload.description
}
if ('checked' in payload) {
updateData.checked = payload.checked ? 1 : 0
}
if (payload.dueDate !== undefined) {
updateData.dueDate = payload.dueDate
}
```
</issue_to_address>
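The key-presence pattern the suggestion describes can be shown in isolation. A minimal sketch, assuming the payload type has already been narrowed by the schema (the `TaskUpdate` shape here is illustrative, not the full `TaskUpdateSchema`):

```typescript
// Normalize an optional boolean|0|1 field by key presence rather than by
// re-checking its runtime type, keeping validation rules in the schema alone.
type TaskUpdate = { checked?: boolean | 0 | 1; dueDate?: number }

function buildUpdate(payload: TaskUpdate): { checked?: number; dueDate?: number } {
  const updateData: { checked?: number; dueDate?: number } = {}
  if ('checked' in payload) {
    // Truthiness collapses true/1 to 1 and false/0 to 0.
    updateData.checked = payload.checked ? 1 : 0
  }
  if (payload.dueDate !== undefined) {
    updateData.dueDate = payload.dueDate
  }
  return updateData
}
```

If the schema later widens or narrows `checked`, this handler needs no change, which is the drift the suggestion is guarding against.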
### Comment 3
<location> `packages/core/src/schemas/import.ts:16` </location>
<code_context>
+ parentFolderId: z.string().nullable().optional()
+})
+
+export const ImportItemSchema: z.ZodType<any> = z.lazy(() =>
+ z.union([
+ ImportItemBaseSchema.extend({
+ type: z.literal('note'),
+ content: z.unknown().optional(),
+ favorite: z.union([z.boolean(), z.number(), z.string()]).optional()
+ }),
+ ImportItemBaseSchema.extend({
+ type: z.literal('folder'),
+ children: z.array(ImportItemSchema).optional()
+ })
+ ])
+)
</code_context>
<issue_to_address>
**suggestion:** Using `ZodType<any>` for `ImportItemSchema` weakens type safety; you can tighten this by inferring the type or using `unknown`.
Using `any` here disables static checking for `ImportItemSchema`. Instead, consider defining the recursive schema with `z.lazy`, then inferring the type via `z.infer<typeof ImportItemSchema>`, or annotating it as `ZodType<ImportItem>` once `ImportItem` exists. If that’s not feasible yet, `ZodType<unknown>` is still safer than `any` and keeps the core package more robustly typed.
```suggestion
export const ImportItemSchema = z.lazy(() =>
```
</issue_to_address>
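The recursive union the comment wants pinned down can be written out explicitly. This is a hand-rolled type guard mirroring the note/folder fields shown in the review, not the actual Zod schema; with Zod the same shape would annotate the schema as `z.ZodType<ImportItem>`.

```typescript
// Explicit recursive union the inferred ImportItem type should match.
type ImportItem =
  | { type: 'note'; id: string; name: string; content?: unknown }
  | { type: 'folder'; id: string; name: string; children?: ImportItem[] }

// Self-referential guard: folders validate their children recursively.
function isImportItem(value: unknown): value is ImportItem {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  if (typeof v.id !== 'string' || typeof v.name !== 'string') return false
  if (v.type === 'note') return true
  if (v.type === 'folder') {
    return (
      v.children === undefined ||
      (Array.isArray(v.children) && v.children.every(isImportItem))
    )
  }
  return false
}
```

Because the union is stated up front, a caller gets discriminated-union narrowing on `item.type` instead of the `any` that `z.ZodType<any>` produces.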
### Comment 4
<location> `apps/web/__tests__/api/validation-payloads.test.ts:116` </location>
<code_context>
+ mockSqlDb.transaction.mockClear()
+})
+
+describe('API payload validation', () => {
+ it('POST /api/notes returns 400 with validation details', async () => {
+ const response = await notesPost(
</code_context>
<issue_to_address>
**suggestion (testing):** Add happy-path tests to complement the invalid-payload cases for each endpoint
The current tests verify that invalid payloads are rejected before DB access, but they don’t confirm that valid payloads pass through correctly. Please add at least one valid-payload test per endpoint (notes POST/PUT, settings POST, tasks sync POST, import POST, ai/config POST) that asserts: (1) a 2xx response, (2) no validation error structure in the response body, and (3) the mocked DB method is called with the expected arguments (e.g., settings upsert receives an object, tasks sync uses the validated `noteId` and `tasks`). This will confirm that schema-compliant requests reach the handlers as intended.
Suggested implementation:
```typescript
mockServerDb.create.mockClear()
mockServerDb.update.mockClear()
mockServerDb.upsert.mockClear()
mockSqlDb._setResults([])
mockSqlDb.select.mockClear()
mockSqlDb.transaction.mockClear()
})
describe('API payload validation', () => {
it('POST /api/notes returns 400 with validation details', async () => {
const response = await notesPost(
new NextRequest('http://localhost/api/notes', {
method: 'POST',
body: JSON.stringify({ type: 'note' })
})
)
expect(response.status).toBe(400)
const json = await response.json()
expect(json.error).toBe('Invalid payload')
expect(json.details.fieldErrors.name).toBeDefined()
})
it('POST /api/notes accepts a valid payload and calls create', async () => {
const validPayload = {
type: 'note',
name: 'My note',
content: 'Some content'
}
const response = await notesPost(
new NextRequest('http://localhost/api/notes', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
expect(mockServerDb.create).toHaveBeenCalledTimes(1)
expect(mockServerDb.create).toHaveBeenCalledWith(
expect.objectContaining({
type: 'note',
name: validPayload.name
})
)
})
it('PUT /api/notes/[id] accepts a valid payload and calls update', async () => {
const noteId = 'note-123'
const validPayload = {
type: 'note',
name: 'Updated note',
content: 'Updated content'
}
const response = await notesPut(
new NextRequest(`http://localhost/api/notes/${noteId}`, {
method: 'PUT',
body: JSON.stringify(validPayload)
}),
{ params: { id: noteId } } as any
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
expect(mockServerDb.update).toHaveBeenCalledTimes(1)
expect(mockServerDb.update).toHaveBeenCalledWith(
expect.objectContaining({
id: noteId,
type: 'note',
name: validPayload.name
})
)
})
it('POST /api/settings accepts a valid payload and calls upsert', async () => {
const validPayload = {
theme: 'dark',
editor: { fontSize: 14 }
}
const response = await settingsPost(
new NextRequest('http://localhost/api/settings', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
expect(mockServerDb.upsert).toHaveBeenCalledTimes(1)
expect(mockServerDb.upsert).toHaveBeenCalledWith(
expect.objectContaining(validPayload)
)
})
it('POST /api/tasks/sync accepts a valid payload and uses validated noteId and tasks', async () => {
const validPayload = {
noteId: 'note-123',
tasks: [
{
id: 'task-1',
title: 'First task',
done: false
}
]
}
const response = await tasksSyncPost(
new NextRequest('http://localhost/api/tasks/sync', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
expect(mockSqlDb.transaction).toHaveBeenCalledTimes(1)
// Ensure the transaction callback receives the validated payload
const [transactionCallback] = mockSqlDb.transaction.mock.calls[0]
const fakeDb = {
insert: jest.fn()
}
await transactionCallback(fakeDb)
expect(fakeDb.insert).toHaveBeenCalledWith(
expect.objectContaining({
noteId: validPayload.noteId,
tasks: validPayload.tasks
})
)
})
it('POST /api/import accepts a valid payload and calls the appropriate DB logic', async () => {
const validPayload = {
source: 'markdown',
data: '# My imported note'
}
const response = await importPost(
new NextRequest('http://localhost/api/import', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
// We only assert that some DB write was triggered; the exact method may vary.
expect(
mockServerDb.create.mock.calls.length +
mockServerDb.update.mock.calls.length +
mockServerDb.upsert.mock.calls.length
).toBeGreaterThan(0)
})
it('POST /api/ai/config accepts a valid payload and persists configuration', async () => {
const validPayload = {
model: 'gpt-4.1-mini',
maxTokens: 256,
systemPrompt: 'You are a helpful assistant.'
}
const response = await aiConfigPost(
new NextRequest('http://localhost/api/ai/config', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
expect(mockServerDb.upsert).toHaveBeenCalled()
expect(mockServerDb.upsert).toHaveBeenCalledWith(
expect.objectContaining(validPayload)
)
  })
})
```
1. Ensure the helper functions used in these tests are correctly imported or available in the test scope:
- `notesPut`
- `settingsPost`
- `tasksSyncPost`
- `importPost`
- `aiConfigPost`
2. Adjust the expected status codes if your endpoints return specific values (for example, `201` for create, `204` for updates without bodies). If an endpoint returns `204`, replace the `response.json()` calls with assertions on `response.status` only.
3. Align `validPayload` shapes with your existing Zod (or other) schemas for each endpoint. The properties I used (`type`, `name`, `content`, `theme`, `editor`, `noteId`, `tasks`, etc.) may need to be updated to match the actual validation rules.
4. If `mockSqlDb.transaction` does not accept a `(db) => {}` callback with an `insert` method, adjust the tasks sync test to assert against whatever mocked repository/DB function is actually called inside the transaction, and remove the `fakeDb`/`insert` logic accordingly.
5. If your DB abstraction for imports or AI config uses a specific method (e.g., `mockServerDb.import` or a dedicated repository mock), replace the generic DB assertions with calls to the appropriate mock and assert the payload/arguments as needed.
</issue_to_address>
### Comment 5
<location> `apps/web/__tests__/api/validation-payloads.test.ts:180-189` </location>
<code_context>
+ expect(mockSqlDb.select).not.toHaveBeenCalled()
+ })
+
+ it('POST /api/import returns 400 with validation details', async () => {
+ const response = await importPost(
+ new NextRequest('http://localhost/api/import', {
+ method: 'POST',
+ body: JSON.stringify({ wrong: 'schema' })
+ })
+ )
+
+ expect(response.status).toBe(400)
+ const json = await response.json()
+ expect(json.error).toBe('Invalid payload')
+ expect(json.details).toBeDefined()
+ expect(mockSqlDb.transaction).not.toHaveBeenCalled()
+ })
+
</code_context>
<issue_to_address>
**suggestion (testing):** Tighten import payload assertions by checking field-level validation errors and edge cases
Here we only check that `json.details` exists. Since `ImportPayloadSchema` enforces a structured shape (required `items`, note/folder unions, `IMPORT_MAX_ITEMS`, etc.), it’d be more valuable to assert specific paths like `details.fieldErrors.items` (or the equivalent) for invalid payloads, and to add at least one schema‑driven edge case (e.g., more than `IMPORT_MAX_ITEMS` items, or an item missing `type`) to confirm those constraints show up as structured `fieldErrors`.
Suggested implementation:
```typescript
it('POST /api/import returns 400 with validation details', async () => {
const response = await importPost(
new NextRequest('http://localhost/api/import', {
method: 'POST',
body: JSON.stringify({ wrong: 'schema' })
})
)
expect(response.status).toBe(400)
const json = await response.json()
expect(json.error).toBe('Invalid payload')
// Tightened assertions: ensure field-level validation information is present
expect(json.details).toBeDefined()
expect(json.details.fieldErrors).toBeDefined()
expect(json.details.fieldErrors.items).toBeDefined()
expect(Array.isArray(json.details.fieldErrors.items)).toBe(true)
expect(json.details.fieldErrors.items.length).toBeGreaterThan(0)
expect(mockSqlDb.transaction).not.toHaveBeenCalled()
})
it('POST /api/import returns 400 with fieldErrors for invalid items', async () => {
// Edge case: items array present but contains an invalid entry (e.g. missing "type")
const response = await importPost(
new NextRequest('http://localhost/api/import', {
method: 'POST',
body: JSON.stringify({
items: [
{
// deliberately missing required fields like "type" to trigger item-level validation
}
]
})
})
)
expect(response.status).toBe(400)
const json = await response.json()
expect(json.error).toBe('Invalid payload')
expect(json.details).toBeDefined()
expect(json.details.fieldErrors).toBeDefined()
// Be tolerant to how array field paths are encoded in fieldErrors:
// either as "items.0.type" or nested inside items[0].type
const itemTypeError =
json.details.fieldErrors['items.0.type'] ??
json.details.fieldErrors.items?.[0]?.type
expect(itemTypeError).toBeDefined()
expect(mockSqlDb.transaction).not.toHaveBeenCalled()
})
```
If the `fieldErrors` shape differs (for example, if arrays are keyed as `items._errors` or with a different pathing convention), adjust the assertions in both tests to match your actual `ImportPayloadSchema` error format. If `IMPORT_MAX_ITEMS` is exported from your import module and you want to assert that limit explicitly, you can add another test similar to the second one, building an `items` array of length `IMPORT_MAX_ITEMS + 1` and asserting that `fieldErrors.items` (or the appropriate path) contains a message indicating the max-item constraint.
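For writing those structured assertions, it helps to model the shape `ZodError.flatten()` produces. A minimal sketch under that assumption (top-level `formErrors` plus per-field message arrays; verify against the Zod version in use):

```typescript
// Assumed shape of ZodError.flatten() output.
type Flattened = {
  formErrors: string[]
  fieldErrors: Record<string, string[]>
}

// Helper a test could use instead of asserting mere existence of details.
function fieldError(details: Flattened, field: string): string[] {
  return details.fieldErrors[field] ?? []
}

// Example response body a failed import validation might carry.
const details: Flattened = { formErrors: [], fieldErrors: { items: ['Required'] } }
```

A test can then assert `fieldError(details, 'items').length > 0` rather than only `details` being defined.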
</issue_to_address>
const fakeDb = {
insert: jest.fn()
}
await transactionCallback(fakeDb)
expect(fakeDb.insert).toHaveBeenCalledWith(
expect.objectContaining({
noteId: validPayload.noteId,
tasks: validPayload.tasks
})
)
})
it('POST /api/import accepts a valid payload and calls the appropriate DB logic', async () => {
const validPayload = {
source: 'markdown',
data: '# My imported note'
}
const response = await importPost(
new NextRequest('http://localhost/api/import', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
// We only assert that some DB write was triggered; the exact method may vary.
expect(
mockServerDb.create.mock.calls.length +
mockServerDb.update.mock.calls.length +
mockServerDb.upsert.mock.calls.length
).toBeGreaterThan(0)
})
it('POST /api/ai/config accepts a valid payload and persists configuration', async () => {
const validPayload = {
model: 'gpt-4.1-mini',
maxTokens: 256,
systemPrompt: 'You are a helpful assistant.'
}
const response = await aiConfigPost(
new NextRequest('http://localhost/api/ai/config', {
method: 'POST',
body: JSON.stringify(validPayload)
})
)
expect(response.status).toBeGreaterThanOrEqual(200)
expect(response.status).toBeLessThan(300)
const json = await response.json()
expect(json.error).toBeUndefined()
expect(json.details).toBeUndefined()
expect(mockServerDb.upsert).toHaveBeenCalled()
expect(mockServerDb.upsert).toHaveBeenCalledWith(
expect.objectContaining(validPayload)
)- Ensure the helper functions used in these tests are correctly imported or available in the test scope:
notesPutsettingsPosttasksSyncPostimportPostaiConfigPost
- Adjust the expected status codes if your endpoints return specific values (for example,
201for create,204for updates without bodies). If an endpoint returns204, replace theresponse.json()calls with assertions onresponse.statusonly. - Align
validPayloadshapes with your existing Zod (or other) schemas for each endpoint. The properties I used (type,name,content,theme,editor,noteId,tasks, etc.) may need to be updated to match the actual validation rules. - If
mockSqlDb.transactiondoes not accept a(db) => {}callback with aninsertmethod, adjust the tasks sync test to assert against whatever mocked repository/DB function is actually called inside the transaction, and remove thefakeDb/insertlogic accordingly. - If your DB abstraction for imports or AI config uses a specific method (e.g.,
mockServerDb.importor a dedicated repository mock), replace the generic DB assertions with calls to the appropriate mock and assert the payload/arguments as needed.
it('POST /api/import returns 400 with validation details', async () => {
  const response = await importPost(
    new NextRequest('http://localhost/api/import', {
      method: 'POST',
      body: JSON.stringify({ wrong: 'schema' })
    })
  )

  expect(response.status).toBe(400)
  const json = await response.json()
suggestion (testing): Tighten import payload assertions by checking field-level validation errors and edge cases
Here we only check that json.details exists. Since ImportPayloadSchema enforces a structured shape (required items, note/folder unions, IMPORT_MAX_ITEMS, etc.), it’d be more valuable to assert specific paths like details.fieldErrors.items (or the equivalent) for invalid payloads, and to add at least one schema‑driven edge case (e.g., > IMPORT_MAX_ITEMS items or an item missing type) to confirm those constraints show up as structured fieldErrors.
Suggested implementation:
it('POST /api/import returns 400 with validation details', async () => {
const response = await importPost(
new NextRequest('http://localhost/api/import', {
method: 'POST',
body: JSON.stringify({ wrong: 'schema' })
})
)
expect(response.status).toBe(400)
const json = await response.json()
expect(json.error).toBe('Invalid payload')
// Tightened assertions: ensure field-level validation information is present
expect(json.details).toBeDefined()
expect(json.details.fieldErrors).toBeDefined()
expect(json.details.fieldErrors.items).toBeDefined()
expect(Array.isArray(json.details.fieldErrors.items)).toBe(true)
expect(json.details.fieldErrors.items.length).toBeGreaterThan(0)
expect(mockSqlDb.transaction).not.toHaveBeenCalled()
})
it('POST /api/import returns 400 with fieldErrors for invalid items', async () => {
// Edge case: items array present but contains an invalid entry (e.g. missing "type")
const response = await importPost(
new NextRequest('http://localhost/api/import', {
method: 'POST',
body: JSON.stringify({
items: [
{
// deliberately missing required fields like "type" to trigger item-level validation
}
]
})
})
)
expect(response.status).toBe(400)
const json = await response.json()
expect(json.error).toBe('Invalid payload')
expect(json.details).toBeDefined()
expect(json.details.fieldErrors).toBeDefined()
// Be tolerant to how array field paths are encoded in fieldErrors:
// either as "items.0.type" or nested inside items[0].type
const itemTypeError =
json.details.fieldErrors['items.0.type'] ??
json.details.fieldErrors.items?.[0]?.type
expect(itemTypeError).toBeDefined()
expect(mockSqlDb.transaction).not.toHaveBeenCalled()
})

If the `fieldErrors` shape differs (for example, if arrays are keyed as `items._errors` or with a different pathing convention), adjust the assertions in both tests to match your actual `ImportPayloadSchema` error format. If `IMPORT_MAX_ITEMS` is exported from your import module and you want to assert that limit explicitly, you can add another test similar to the second one, building an `items` array of length `IMPORT_MAX_ITEMS + 1` and asserting that `fieldErrors.items` (or the appropriate path) contains a message indicating the max-item constraint.
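The `Invalid payload` / `details` shape these tests assert comes from a small helper repeated in each route. A framework-agnostic sketch of what a shared version could look like follows; the exact response shape is taken from the tests, but the plain-object return is an assumption made to keep the sketch dependency-free (the real routes would wrap this in `NextResponse.json(...)`):

```typescript
// Hypothetical shared 400-response helper for the API routes.
// Returns a plain object here instead of a NextResponse so the
// sketch runs standalone.
type Flattened = { formErrors: string[]; fieldErrors: Record<string, string[]> }

function invalidPayload(error: { flatten(): Flattened }) {
  return {
    status: 400,
    body: { error: 'Invalid payload', details: error.flatten() }
  }
}

// A ZodError exposes flatten(); stubbed here for illustration.
const stubError = {
  flatten: (): Flattened => ({ formErrors: [], fieldErrors: { items: ['Required'] } })
}

const res = invalidPayload(stubError)
console.log(res.status, res.body.details.fieldErrors.items)
// → 400 [ 'Required' ]
```

Centralizing this keeps the `details.fieldErrors` paths the tests assert on identical across every endpoint.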
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/web/app/api/import/route.ts (1)
92-130: ⚠️ Potential issue | 🔴 Critical

Upsert on conflict can overwrite another user's data (IDOR).

The `onConflictDoUpdate` targets `folders.id`/`notes.id` without scoping the update to the current `userId`. If an attacker crafts an import payload with an existing ID belonging to a different user, the conflict will fire and silently overwrite that user's folder name, parent, or note content.

Add a `where` clause scoping the update to the current user's records:

🔒 Example fix for folders (apply similarly to notes)

  await tx.insert(folders)
    .values(chunk)
    .onConflictDoUpdate({
      target: folders.id,
      set: {
        name: sql`excluded.name`,
        parentFolderId: sql`excluded.parent_folder_id`,
        updatedAt: sql`excluded.updated_at`
-     }
+     },
+     where: eq(folders.userId, userId)
    })

apps/web/app/api/settings/route.ts (2)
110-127: ⚠️ Potential issue | 🟠 Major

PUT handler lacks payload validation.

POST validates with `SettingsUpsertSchema.safeParse(body)` (lines 81-82), but PUT on lines 117-121 still uses raw `body?.settings ?? {}` without validation. This inconsistency means PUT accepts arbitrary input that bypasses the schema.

🐛 Proposed fix — apply the same validation as POST

  const body = await request.json()
+ const parsed = SettingsUpsertSchema.safeParse(body)
+ if (!parsed.success) return invalidPayload(parsed.error)
  const now = getSafeTimestamp()

  // Encrypt storage connectors before persistence
- const rawSettings = body?.settings ?? {}
+ const rawSettings = parsed.data.settings
73-76: ⚠️ Potential issue | 🟠 Major

Mutation handlers use `requireAuth()` instead of `requireMutation()`.

POST, PUT, and DELETE are mutation endpoints but use `requireAuth()`. Based on learnings, mutations should use `requireMutation()` unless the endpoint is intentionally public. The tasks/sync and notes routes already follow this pattern correctly.

Also applies to: 110-113, 145-148
apps/web/app/api/ai/config/route.ts (1)
48-51: ⚠️ Potential issue | 🟠 Major

POST and PATCH use `requireAuth()` instead of `requireMutation()`.

Same issue as the settings route — these are mutation endpoints and should use `requireMutation()` per the project's auth convention. Based on learnings: "mutations require requireMutation() unless endpoint is intentionally public."

Also applies to: 107-110
🤖 Fix all issues with AI agents
In `@apps/web/app/api/tasks/sync/route.ts`:
- Around line 88-94: The delete-then-insert sequence using
db.delete(...).where(and(eq(tasks.noteId, payload.noteId), eq(tasks.userId,
userId))) followed by a conditional db.insert(tasks).values(rows) is not atomic
and can cause data loss if the insert fails; wrap both operations inside a
single db.transaction(...) so the delete and insert execute as one transaction
(perform the delete and, if rows.length > 0, perform the insert inside the same
transaction callback) to ensure rollback on failure.
In `@packages/core/src/schemas/index.ts`:
- Around line 1-6: The file begins with a UTF-8 BOM character before the first
export which can break tooling; remove the leading BOM so the file starts
directly with "export" (affecting the module that re-exports symbols from
'./notes', './settings', './tasks', './shortcuts', './ai', and
'./import')—ensure no hidden characters remain at the top of
packages/core/src/schemas/index.ts by saving the file without BOM (e.g., use
UTF-8 without BOM) so the first character is the letter 'e' of "export".
In `@packages/core/src/schemas/settings.ts`:
- Line 1: The file begins with a UTF-8 BOM causing a hidden character before the
first token (see the leading "import { z } from 'zod'" statement); remove the
BOM so the file starts exactly with the import statement (no invisible
characters) and save the file as UTF-8 without BOM; repeat the same removal for
any sibling file that showed the same issue (e.g., the other schemas file that
also begins with an import).
In `@packages/core/src/schemas/tasks.ts`:
- Around line 42-48: TaskSyncItemSchema's checked field accepts only boolean
while TaskCreateSchema allows 0 | 1; update TaskSyncItemSchema (the checked
property in the TaskSyncItemSchema object) to accept the same types as
TaskCreateSchema (either allow numeric 0/1 as well or preprocess 0/1 into
boolean) so sync payloads with checked: 0 or checked: 1 validate the same as
create payloads; modify the zod schema for TaskSyncItemSchema.checked to mirror
the TaskCreateSchema approach (either use a union including z.literal(0) and
z.literal(1) or a z.preprocess that converts numeric 0/1 to booleans).
🧹 Nitpick comments (13)
docs/audit-dashboard.html (1)
871-895: Optional: Sanitize `task.note` and `task.t` before inserting via `innerHTML`.

All task data is currently hardcoded, so this isn't exploitable today. However, the render function injects `task.note` (line 891) and `task.t` (line 878) directly into `innerHTML` without escaping. If this dashboard ever consumes external data (e.g., from an API or URL params), this becomes an XSS vector. A lightweight `escapeHtml()` helper would future-proof it.

apps/web/lib/api-auth.ts (1)
4-4: Consider a direct re-export to avoid the intermediate alias.

The alias `SHARED_GUEST_USER_ID` only exists to be re-assigned on line 191. A single re-export is cleaner:

♻️ Suggested simplification

At line 4, replace:

- import { GUEST_USER_ID as SHARED_GUEST_USER_ID } from '@skriuw/shared'

At lines 191-192, replace:

- export const GUEST_USER_ID = SHARED_GUEST_USER_ID
+ export { GUEST_USER_ID } from '@skriuw/shared'

Also applies to: 191-192
apps/web/app/api/tasks/item/[taskId]/route.ts (1)
18-26: Consider extracting `invalidPayload` into a shared API utility.

This helper duplicates the same pattern used inline in `apps/web/app/api/import/route.ts` (and likely other routes). A single shared helper (e.g., in `@/lib/api-auth` or a new `@/lib/api-validation` module) would reduce duplication and guarantee a consistent 400 response shape across all routes.

packages/core/package.json (1)
26-30: Pin `tsup` and `typescript` versions instead of using `"latest"`.

`"latest"` is resolved at install time and varies across environments, which can cause non-reproducible builds or surprise breakages when a new major version ships. Pin to specific semver ranges matching the versions used elsewhere in the monorepo.

♻️ Suggested change

  "devDependencies": {
    "@skriuw/config": "workspace:*",
-   "tsup": "latest",
-   "typescript": "latest"
+   "tsup": "^8.0.0",
+   "typescript": "^5.9.3"
  }

apps/web/app/api/tasks/sync/route.ts (2)
10-13: `SyncPayload` type is now redundant.

The local `SyncPayload` type (lines 10-13) duplicates what `TaskSyncPayloadSchema` already describes, and the cast on line 31 (`body: SyncPayload`) is misleading since the actual shape is validated by the schema on line 32. Consider removing the type and using `unknown` or just `await request.json()` without the cast.

♻️ Proposed cleanup

- type SyncPayload = {
-   noteId: string
-   tasks: ExtractedTask[]
- }
  ...
- const body: SyncPayload = await request.json()
+ const body = await request.json()
  const parsed = TaskSyncPayloadSchema.safeParse(body)

Also applies to: 31-31
15-23: `invalidPayload` helper is duplicated across 4 route files.

This exact function appears in `tasks/sync/route.ts`, `settings/route.ts`, `notes/route.ts`, and `ai/config/route.ts`. Extract it into a shared utility (e.g., `@/lib/api-validation` or similar) to keep things DRY.

apps/web/__tests__/api/validation-payloads.test.ts (2)
38-63: The `then` property on the chainable mock makes it a thenable, which is fragile.

The Biome lint warning is valid here. Because `chain` has a `then` method, any `await chain.select().from(...)` resolves immediately via the custom `then` rather than being a real promise. This works for the current tests but is brittle — any intermediate `await` in production code on a partial chain will silently resolve with unexpected data. Consider using `mockResolvedValue` on terminal methods instead, or wrapping the resolution in an explicit async pattern.

That said, for a test mock this is a known pattern. Flagging for awareness rather than as a blocker.
195-207: Inconsistent: uses `new Request` instead of `new NextRequest`.

All other tests use `new NextRequest(...)`, but the AI config test uses `new Request(...)`. While this may work since `NextRequest` extends `Request`, it's inconsistent and the AI config route's `POST` handler accepts `Request` (not `NextRequest`). Consider aligning for consistency.

packages/core/src/schemas/settings.ts (1)
3-3: Extract `TimestampSchema` to a shared primitives file.

`TimestampSchema` is defined identically across 5 schema files (`settings.ts`, `notes.ts`, `shortcuts.ts`, `tasks.ts`, `import.ts`). Create a `primitives.ts` or `common.ts` in `packages/core/src/schemas/` and import it everywhere to eliminate duplication and improve maintainability.

packages/core/src/schemas/import.ts (2)
4-4: `TimestampSchema` is duplicated in four schema files — extract to a shared internal module.

The identical `z.number().int().nonnegative()` definition appears in `import.ts`, `shortcuts.ts`, `tasks.ts`, and `notes.ts`. Consider extracting it to a shared file (e.g., `packages/core/src/schemas/_shared.ts`) and importing it.
16-28: `z.ZodType<any>` annotation erases the inferred type — acceptable trade-off for recursive schemas, but worth documenting.

This is a known Zod limitation with `z.lazy()` recursive types. The exported `ImportItem` type on line 34 will resolve to `any`. If downstream consumers rely on type-checking import items, they won't get compile-time safety.

Consider adding a brief comment explaining why the `any` annotation is needed, so future maintainers don't try to "fix" it.

packages/core/src/types/index.ts (1)
30-30: `NoteType` duplicates the enum values already defined in `NoteTypeSchema`.

`NoteTypeSchema` in `notes.ts` is `z.enum(['note', 'folder'])`. Deriving the type from the schema keeps a single source of truth:

Suggested change

- export type NoteType = 'note' | 'folder'
+ export type NoteType = z.infer<typeof NoteTypeSchema>

(with the corresponding import of `NoteTypeSchema` from `../schemas`)

packages/core/src/schemas/notes.ts (1)
8-19: `content` has no size validation despite `MAX_NOTE_CONTENT_BYTES` being defined in rules.

`z.unknown()` can't enforce byte-size limits directly, but consider adding a comment or a note that content size must be enforced at the route/middleware level using `MAX_NOTE_CONTENT_BYTES`. This avoids future confusion about why the constant exists but isn't referenced in the note schemas.
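Since the schema can't measure bytes, the route-level guard this comment asks for could look like the following sketch. The 1 MiB value is a placeholder, not the actual `MAX_NOTE_CONTENT_BYTES` constant from `@skriuw/core`:

```typescript
// Serialize once and measure UTF-8 bytes rather than JS string length,
// so multi-byte characters are counted correctly.
const MAX_NOTE_CONTENT_BYTES = 1_048_576 // placeholder limit, see @skriuw/core rules

function contentWithinLimit(content: unknown): boolean {
  const serialized = JSON.stringify(content) ?? ''
  return new TextEncoder().encode(serialized).length <= MAX_NOTE_CONTENT_BYTES
}

console.log(contentWithinLimit({ text: 'hello' })) // → true
console.log(contentWithinLimit('a'.repeat(2_000_000))) // → false
```

A route would call this after `request.json()` and before `safeParse`, returning the same 400 shape on failure.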
  // Replace note-scoped task rows atomically enough for current usage.
- await db.delete(tasks).where(and(eq(tasks.noteId, body.noteId), eq(tasks.userId, userId)))
+ await db
+   .delete(tasks)
+   .where(and(eq(tasks.noteId, payload.noteId), eq(tasks.userId, userId)))
  if (rows.length > 0) {
    await db.insert(tasks).values(rows)
  }
Delete-then-insert is not wrapped in a transaction.
The delete on lines 89-91 and the conditional insert on lines 92-94 are separate operations. If the insert fails, the existing tasks are already deleted, resulting in data loss. Consider wrapping both in a db.transaction(...) block.
🐛 Proposed fix
- await db
- .delete(tasks)
- .where(and(eq(tasks.noteId, payload.noteId), eq(tasks.userId, userId)))
- if (rows.length > 0) {
- await db.insert(tasks).values(rows)
- }
+ await db.transaction(async (tx) => {
+ await tx
+ .delete(tasks)
+ .where(and(eq(tasks.noteId, payload.noteId), eq(tasks.userId, userId)))
+ if (rows.length > 0) {
+ await tx.insert(tasks).values(rows)
+ }
+ })📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Replace note-scoped task rows atomically enough for current usage.
await db.transaction(async (tx) => {
  await tx
    .delete(tasks)
    .where(and(eq(tasks.noteId, payload.noteId), eq(tasks.userId, userId)))
  if (rows.length > 0) {
    await tx.insert(tasks).values(rows)
  }
})
🤖 Prompt for AI Agents
In `@apps/web/app/api/tasks/sync/route.ts` around lines 88 - 94, The
delete-then-insert sequence using db.delete(...).where(and(eq(tasks.noteId,
payload.noteId), eq(tasks.userId, userId))) followed by a conditional
db.insert(tasks).values(rows) is not atomic and can cause data loss if the
insert fails; wrap both operations inside a single db.transaction(...) so the
delete and insert execute as one transaction (perform the delete and, if
rows.length > 0, perform the insert inside the same transaction callback) to
ensure rollback on failure.
export * from './notes'
export * from './settings'
export * from './tasks'
export * from './shortcuts'
export * from './ai'
export * from './import'
BOM character at the start of the file.
Line 1 starts with a UTF-8 BOM (U+FEFF). This can cause subtle issues with tooling and bundlers. Remove it so the file starts cleanly with `export`.
🤖 Prompt for AI Agents
In `@packages/core/src/schemas/index.ts` around lines 1 - 6, The file begins with
a UTF-8 BOM character before the first export which can break tooling; remove
the leading BOM so the file starts directly with "export" (affecting the module
that re-exports symbols from './notes', './settings', './tasks', './shortcuts',
'./ai', and './import')—ensure no hidden characters remain at the top of
packages/core/src/schemas/index.ts by saving the file without BOM (e.g., use
UTF-8 without BOM) so the first character is the letter 'e' of "export".
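One way to apply that fix programmatically is the sketch below. It relies on the fact that when Node reads a BOM-prefixed file as UTF-8, the BOM surfaces as the single character U+FEFF at index 0:

```typescript
// Strip a leading UTF-8 BOM from file contents that were read as a string.
function stripBom(source: string): string {
  return source.charCodeAt(0) === 0xfeff ? source.slice(1) : source
}

// Applied to a file it would be e.g.:
//   fs.writeFileSync(path, stripBom(fs.readFileSync(path, 'utf8')))
const withBom = '\uFEFF' + "export * from './notes'"
console.log(stripBom(withBom).startsWith('export')) // → true
```

Running this over `packages/core/src/schemas/*.ts` would clean all the affected files in one pass.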
@@ -0,0 +1,22 @@
import { z } from 'zod'
BOM character at the start of the file.
Same as schemas/index.ts — remove the UTF-8 BOM.
🤖 Prompt for AI Agents
In `@packages/core/src/schemas/settings.ts` at line 1, The file begins with a
UTF-8 BOM causing a hidden character before the first token (see the leading
"import { z } from 'zod'" statement); remove the BOM so the file starts exactly
with the import statement (no invisible characters) and save the file as UTF-8
without BOM; repeat the same removal for any sibling file that showed the same
issue (e.g., the other schemas file that also begins with an import).
export const TaskSyncItemSchema = z.object({
  blockId: z.string().min(1),
  content: z.string().max(MAX_TASK_CONTENT_LENGTH),
  checked: z.boolean(),
  parentTaskId: z.string().nullable(),
  position: z.number().int().optional()
})
TaskSyncItemSchema.checked only accepts boolean, while TaskCreateSchema.checked also accepts 0 | 1.
If sync payloads originate from the same client/editor that sends task-create payloads, a checked: 1 value would pass TaskCreateSchema but fail TaskSyncItemSchema. Consider aligning the checked type:
Suggested fix
export const TaskSyncItemSchema = z.object({
blockId: z.string().min(1),
content: z.string().max(MAX_TASK_CONTENT_LENGTH),
- checked: z.boolean(),
+ checked: z.union([z.boolean(), TaskCheckedSchema]),
parentTaskId: z.string().nullable(),
position: z.number().int().optional()
})📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export const TaskSyncItemSchema = z.object({
  blockId: z.string().min(1),
  content: z.string().max(MAX_TASK_CONTENT_LENGTH),
  checked: z.union([z.boolean(), TaskCheckedSchema]),
  parentTaskId: z.string().nullable(),
  position: z.number().int().optional()
})
🤖 Prompt for AI Agents
In `@packages/core/src/schemas/tasks.ts` around lines 42 - 48,
TaskSyncItemSchema's checked field accepts only boolean while TaskCreateSchema
allows 0 | 1; update TaskSyncItemSchema (the checked property in the
TaskSyncItemSchema object) to accept the same types as TaskCreateSchema (either
allow numeric 0/1 as well or preprocess 0/1 into boolean) so sync payloads with
checked: 0 or checked: 1 validate the same as create payloads; modify the zod
schema for TaskSyncItemSchema.checked to mirror the TaskCreateSchema approach
(either use a union including z.literal(0) and z.literal(1) or a z.preprocess
that converts numeric 0/1 to booleans).
Force-pushed from dc6a49e to bcc4809
## Summary

- scaffold @skriuw/core package (schemas/types/rules + exports)
- add zod validation to notes/settings/tasks mutation endpoints, ai config, and import
- add schema unit tests and API invalid-payload integration tests
- update dashboard progress for Phase 2 tasks

## Key validation

- packages/core schema tests passing
- apps/web API validation payload tests passing
- apps/web AI auth-route tests passing
Summary by Sourcery
Introduce a shared @skriuw/core package with Zod schemas, types, and business rules, and adopt these schemas for API payload validation across key web app endpoints.
New Features:
Enhancements:
Build:
Tests:
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation