
Implement chunked storage for IndexedDB to optimize memory usage#116

Open
Amanmahe wants to merge 2 commits into upsidedownlabs:main from Amanmahe:main

Conversation

@Amanmahe
Member

@Amanmahe Amanmahe commented Feb 11, 2026

Problem

The current IndexedDB implementation rewrites the entire file on each append, causing:

  • O(n) memory usage growth
  • 500 ms+ per write operation
  • Inability to handle recordings longer than 1 hour at 500 Hz

Solution

Implement a chunked storage architecture (see the sketch after this list):

  • Split data into chunks of 1,000 records each
  • Append only to the last (partial) chunk
  • Track file metadata separately
  • Memory usage per append: O(1) instead of O(n)
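
A rough sketch of the bookkeeping this implies, with record shapes mirroring the worker code in this PR (the helper name chunksTouchedByAppend is illustrative):

// Record shapes for the two stores (fields follow the worker's metadata/chunk objects).
interface FileMetadata {
  filename: string;      // keyPath of the FileMetadata store
  totalRecords: number;  // rows appended so far
  totalChunks: number;   // Math.ceil(totalRecords / CHUNK_SIZE)
  lastUpdated: Date;
}

interface DataChunk {
  filename: string;      // first half of the composite key
  chunkIndex: number;    // second half of the composite key
  data: number[][];      // at most CHUNK_SIZE rows
}

const CHUNK_SIZE = 1000;

// For an append of batchLength rows (batchLength > 0), only this range of chunks
// is touched, so memory per append stays bounded by CHUNK_SIZE regardless of file size.
function chunksTouchedByAppend(meta: FileMetadata, batchLength: number): [number, number] {
  const startIndex = meta.totalRecords;
  const endIndex = startIndex + batchLength;
  const startChunk = Math.floor(startIndex / CHUNK_SIZE);
  const endChunk = Math.floor((endIndex - 1) / CHUNK_SIZE);
  return [startChunk, endChunk];
}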

Changes

  • Added FileMetadata and DataChunks stores
  • Composite key: [filename, chunkIndex]
  • Updated writeToIndexedDB for chunked append
  • Maintained all existing APIs
  • DB version: 2 → 3 with auto-migration

Performance Impact

Duration   Before   After
1 min      30 MB    2 KB
1 hour     1.8 GB   2 KB

@lorforlinux, please review and merge.

Summary by CodeRabbit

  • New Features

    • Per-action loading indicators and disabled states for recording, downloading, ZIP creation, and delete actions.
    • Inline status indicator in file actions to show ongoing saves.
  • Improvements

    • More reliable local storage with faster listing, download and bulk-export (ZIP) flows.
    • Updated NPG-Lite Flasher Web link and clarified battery/firmware messaging.

- Replace single-file storage with chunk-based architecture
- Add FileMetadata store for tracking file information
- Implement DataChunks store with composite key [filename, chunkIndex]
- Update write operations to append only to relevant chunks
- Maintain backward compatibility with existing API
- Improve performance for 500Hz continuous data streams
- Reduce memory usage from O(n) to O(1) for append operations

BREAKING CHANGE: Database version upgraded to 3. Existing data will be migrated automatically.
@coderabbitai

coderabbitai bot commented Feb 11, 2026

📝 Walkthrough

Walkthrough

Refactors IndexedDB storage to a chunked, metadata-driven model and wires a background worker into the UI for chunked writes/reads, ZIP/export, and deletion; also adds per-action processing/loading state and updates NPG‑Lite firmware link text/URL in the page UI.

Changes

Cohort / File(s) | Summary

NPG‑Lite UI (src/app/npg-lite/page.tsx)
Added processing/loading state flags for recording, download, delete, and ZIP flows; integrated worker messaging for save/list/download/delete; updated recording pipeline to await worker acknowledgements; replaced firmware link URL/text; wired per-item and global loaders.

IndexedDB Worker: chunked storage (workers/indexedDBWorker.ts)
Replaced single-record model with chunked FileMetadata/DataChunks stores (schema v3); added openIndexedDB, get/update metadata, writeToIndexedDB (chunking), readFileData (reassemble), saveDataByFilename, saveAllDataAsZip, getFileCountFromIndexedDB, deleteFilesByFilename, deleteAllDataFromIndexedDB; enhanced action payloads and error handling.
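
For orientation, the request/response shapes implied by these changes can be sketched as follows (assembled from the actions referenced in this PR; exact action strings and payload fields may differ):

// Illustrative only; the real payloads live in workers/indexedDBWorker.ts.
type WorkerRequest =
  | { action: 'write'; filename: string; data: number[][]; canvasCount: number; selectChannel: number[] }
  | { action: 'getFileCountFromIndexedDB' }
  | { action: 'saveDataByFilename'; filename: string }
  | { action: 'saveAllAsZip' }
  | { action: 'deleteFile'; filename: string }
  | { action: 'deleteAll' };

type WorkerResponse =
  | { action: 'writeComplete'; filename: string; success: boolean }
  | { action: 'getFileCountFromIndexedDB'; allData: unknown[] }
  | { action: 'saveDataByFilename'; filename: string; blob?: Blob; error?: string }
  | { action: 'saveAsZip'; blob?: Blob; error?: string }
  | { action: 'deleteFile'; filename: string; success: boolean }
  | { action: 'deleteAll'; success: boolean }
  | { error: string }; // handleError currently posts errors without an action (see comments below)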

Sequence Diagram(s)

sequenceDiagram
  participant UI as Client/UI
  participant W as IndexedDB Worker
  participant DB as IndexedDB

  UI->>W: postMessage { action: "save", filename, data }
  W->>DB: openIndexedDB() / start transaction
  W->>DB: write DataChunks + upsert FileMetadata
  DB-->>W: acknowledge commit
  W-->>UI: postMessage { type: "writeComplete", filename, success }
sequenceDiagram
  participant UI as Client/UI
  participant W as IndexedDB Worker
  participant DB as IndexedDB
  participant ZIP as ZipGenerator

  UI->>W: postMessage { action: "saveAllAsZip" }
  W->>DB: openIndexedDB() / read FileMetadata list
  loop for each file
    W->>DB: read DataChunks for filename
    DB-->>W: return chunks
    W->>ZIP: convert chunks -> CSV blob
    ZIP-->>W: returns CSV blob
  end
  W->>UI: postMessage { type: "saveAsZip", blobUrl }

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

🐰 In burrows of bytes I hop and hum,
Chunks tuck tight where data come from,
Metadata maps my secret trail,
Zip and download — I wag my tail,
Hooray — saved, snacked, and on I run! 🥕

🚥 Pre-merge checks | ✅ 3 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Merge Conflict Detection ⚠️ Warning ❌ Merge conflicts detected (2 files):

⚔️ src/app/npg-lite/page.tsx (content)
⚔️ workers/indexedDBWorker.ts (content)

These conflicts must be resolved before merging into main.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'Implement chunked storage for IndexedDB to optimize memory usage' clearly and concisely summarizes the main objective of the PR—introducing chunked storage to optimize memory usage for IndexedDB operations.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
workers/indexedDBWorker.ts (2)

75-86: ⚠️ Potential issue | 🟠 Major

deleteFile and deleteAll cases lack error handling.

Unlike other action handlers (write, saveAsZip, etc.), these two cases have no try/catch. If deleteFilesByFilename or deleteAllDataFromIndexedDB throws, it becomes an unhandled rejection in the worker — the caller never receives a response and will hang on its Promise.

🛡️ Add error handling
     case 'deleteFile':
+      try {
         if (!filename) {
           throw new Error('Filename is required for deleteFile action.');
         }
         await deleteFilesByFilename(filename);
         handlePostMessage({ success: true, action: 'deleteFile' });
+      } catch (error) {
+        handlePostMessage({ success: false, action: 'deleteFile', error: String(error) });
+      }
       break;

     case 'deleteAll':
+      try {
         await deleteAllDataFromIndexedDB();
         handlePostMessage({ success: true, action: 'deleteAll' });
+      } catch (error) {
+        handlePostMessage({ success: false, action: 'deleteAll', error: String(error) });
+      }
       break;

22-90: ⚠️ Potential issue | 🟠 Major

Missing handler for checkExistence action — regression from refactor.

Both src/components/Connection.tsx (line 410) and src/app/npg-lite/page.tsx (line 789) send { action: 'checkExistence', ... } to this worker, but the switch statement has no corresponding case. This falls through to the default case, which responds with { error: 'Invalid action' } back to the caller. While the new chunked architecture handles file existence implicitly via getFileMetadata(), the stale caller code was not removed during the refactor. Remove the checkExistence calls from both components, or add a handler to acknowledge the action.
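
If the acknowledgement route is taken, a minimal case for the worker's switch could look like the following sketch (the exists reply field is an assumption, not an existing payload):

      // Illustrative stop-gap: acknowledge the legacy action so stale callers get a
      // reply instead of { error: 'Invalid action' } while they are being removed.
      case 'checkExistence':
        handlePostMessage({ action: 'checkExistence', filename, exists: true });
        break;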

🤖 Fix all issues with AI agents
In `@workers/indexedDBWorker.ts`:
- Around line 500-501: Both deleteFilesByFilename and deleteAllDataFromIndexedDB
currently call indexedDB.open("ChordsRecordings", 3) again; change both
functions to accept an existing IDBDatabase parameter (e.g., db: IDBDatabase)
instead of opening a new connection, update their internal logic to use that db,
and update the caller in self.onmessage to pass the already-open db handle
(consistent with writeToIndexedDB and readFileData); also update any other call
sites (lines around 547–548) to pass the db parameter to avoid redundant opens
and version-mismatch risks.
- Around line 580-595: Duplicate schema-creation logic is implemented in the
dbRequest.onupgradeneeded handler here, which diverges from the openIndexedDB
implementation (openIndexedDB) and risks getting out of sync for stores
"FileMetadata" and "DataChunks"; remove this duplicated onupgradeneeded block
and instead reuse the centralized upgrade logic in openIndexedDB (or have
dbRequest use the same helper that creates/updates the FileMetadata and
DataChunks stores) so schema creation lives in one place and both callers invoke
that single function.
- Around line 198-201: Add an early guard that returns when data.length === 0 to
avoid computing negative endChunk and performing a needless metadata
transaction; specifically, before computing
startIndex/endIndex/startChunk/endChunk (and before the loop that writes chunks
and the metadata update that uses metadata.totalRecords), check if data.length
=== 0 and return early so no chunk math or metadata += 0 update occurs. Ensure
the guard references the same variables (data, metadata, CHUNK_SIZE) and
prevents running the for-loop and subsequent metadata update.
- Around line 124-127: The empty migration block in the onupgradeneeded handler
(checking event.oldVersion < 2) will drop data from the old "ChordsRecordings"
store; implement a real migration: open a transaction that reads all entries
from the "ChordsRecordings" object store, transform each recording into the new
chunked representation (split audio/data into chunks and create corresponding
metadata), and write those chunks and metadata into the new stores within the
same upgrade transaction, or if you prefer destructive behavior, explicitly
delete the old store and emit a clear log/notification indicating old data was
removed; update the onupgradeneeded handler to perform one of these paths and
ensure it references event.oldVersion, the "ChordsRecordings" store, and the new
chunk/metadata stores so the migration happens atomically during upgrade.
- Around line 197-268: The current writeToIndexedDB logic uses
performIndexDBTransaction per chunk (reads and writes), causing non-atomic
partial commits and poor performance; refactor to open a single readwrite
transaction over both stores ("FileMetadata" and "DataChunks") for the whole
operation (from calculating startChunk..endChunk through all get/put operations)
so all chunk reads, chunk writes (existingChunk and newChunk handling) and the
metadata update occur atomically, and change the chunk read (store.get) to use
readonly when you perform isolated reads outside that batch; locate usages of
performIndexDBTransaction, the loop variables (startChunk, endChunk,
chunkIndex), existingChunk/newChunk logic and metadata.totalRecords to implement
the single multi-store transaction and update metadata.totalRecords only once
before committing.
🧹 Nitpick comments (4)
workers/indexedDBWorker.ts (4)

111-114: Redundant index on the keyPath field.

The FileMetadata store uses filename as its keyPath (line 112), which already provides unique lookups by filename. Creating an additional index on filename (line 113) is unnecessary and wastes storage.

♻️ Remove redundant index
       if (!db.objectStoreNames.contains("FileMetadata")) {
         const metadataStore = db.createObjectStore("FileMetadata", { keyPath: "filename" });
-        metadataStore.createIndex("filename", "filename", { unique: true });
       }

284-311: Sequential single-chunk transactions are inefficient for reads.

readFileData opens a separate readonly transaction for every chunk. A single transaction with sequential get calls (or a cursor on the byFilename index) would be both faster and guarantee a consistent snapshot.

♻️ Use a single transaction
 const readFileData = async (db: IDBDatabase, filename: string): Promise<number[][]> => {
   const metadata = await getFileMetadata(db, filename);
   const allData: number[][] = [];
-  
-  for (let chunkIndex = 0; chunkIndex < metadata.totalChunks; chunkIndex++) {
-    const chunk = await performIndexDBTransaction(
-      db,
-      "DataChunks",
-      "readonly",
-      (store) => {
-        return new Promise<any>((resolve, reject) => {
-          const key = [filename, chunkIndex];
-          const request = store.get(key);
-          request.onsuccess = () => resolve(request.result);
-          request.onerror = () => reject(request.error);
-        });
-      }
-    );
-    
-    if (chunk && chunk.data) {
-      allData.push(...chunk.data);
-    }
-  }
+
+  const tx = db.transaction("DataChunks", "readonly");
+  const store = tx.objectStore("DataChunks");
+
+  for (let chunkIndex = 0; chunkIndex < metadata.totalChunks; chunkIndex++) {
+    const chunk = await new Promise<any>((resolve, reject) => {
+      const request = store.get([filename, chunkIndex]);
+      request.onsuccess = () => resolve(request.result);
+      request.onerror = () => reject(request.error);
+    });
+    if (chunk?.data) {
+      allData.push(...chunk.data);
+    }
+  }
   
   return allData;
 };

47-55: Dead branch: action === 'getAllData' is always false here.

This code is inside the case 'getFileCountFromIndexedDB' block, so the ternary on line 49 always resolves to getFileCountFromIndexedDB. There is no separate 'getAllData' case in the switch. If getAllDataFromIndexedDB is still needed, add a dedicated case for it; otherwise, remove the dead ternary.

♻️ Simplify
     case 'getFileCountFromIndexedDB':
       try {
-        const dataMethod = action === 'getAllData' ? getAllDataFromIndexedDB : getFileCountFromIndexedDB;
-        const allData = await dataMethod(db);
+        const allData = await getFileCountFromIndexedDB(db);
         handlePostMessage({ allData });
       } catch (error) {

384-401: saveAllDataAsZip and saveDataByFilename open their own DB connections instead of accepting the one from the message handler.

Same issue as with the delete functions — the db opened at line 12 in self.onmessage is available but not passed to these functions. Consider accepting db as a parameter for consistency and to avoid redundant connections.

Also applies to: 427-447

Comment on lines 124 to 127
      // Migrate existing data from old schema if needed
      if (event.oldVersion < 2) {
        // Handle migration from old schema if necessary
      }

⚠️ Potential issue | 🟠 Major

Empty migration placeholder will silently discard existing user data.

The onupgradeneeded handler creates the new stores but does nothing with data in the old ChordsRecordings store. Users upgrading from version 2 will lose all their existing recordings with no warning. Since the commit message flags this as a BREAKING CHANGE, at minimum:

  1. Migrate data from the old store into the new chunked format, or
  2. Explicitly delete the old store and log/notify the user that old data was cleared.

Leaving a no-op if block is misleading and risks data loss going unnoticed.

Suggested direction for migration or cleanup
       // Migrate existing data from old schema if needed
-      if (event.oldVersion < 2) {
-        // Handle migration from old schema if necessary
+      if (event.oldVersion < 3 && db.objectStoreNames.contains("ChordsRecordings")) {
+        // Option A: Delete old store (explicit cleanup)
+        db.deleteObjectStore("ChordsRecordings");
+        console.warn("Old ChordsRecordings store removed during migration to v3.");
+        // Option B: Or migrate data (requires reading from old store 
+        // within versionchange transaction)
       }

Comment on lines 197 to 268
    // Calculate which chunks we need to write
    const startIndex = metadata.totalRecords;
    const endIndex = startIndex + data.length;
    const startChunk = Math.floor(startIndex / CHUNK_SIZE);
    const endChunk = Math.floor((endIndex - 1) / CHUNK_SIZE);

    // Process each chunk
    for (let chunkIndex = startChunk; chunkIndex <= endChunk; chunkIndex++) {
      const chunkStart = chunkIndex * CHUNK_SIZE;
      const chunkEnd = chunkStart + CHUNK_SIZE;

      // Calculate what portion of data goes into this chunk
      const dataStart = Math.max(0, chunkStart - startIndex);
      const dataEnd = Math.min(data.length, chunkEnd - startIndex);

      if (dataStart >= dataEnd) continue;

      const chunkData = data.slice(dataStart, dataEnd);

      // Get existing chunk or create new
      const existingChunk = await performIndexDBTransaction(
        db,
        "DataChunks",
        "readwrite",
        (store) => {
          return new Promise<any>((resolve, reject) => {
            const key = [filename, chunkIndex];
            const request = store.get(key);
            request.onsuccess = () => resolve(request.result);
            request.onerror = () => reject(request.error);
          });
        }
      );

      if (existingChunk) {
        // Append to existing chunk
        existingChunk.data.push(...chunkData);

        await performIndexDBTransaction(
          db,
          "DataChunks",
          "readwrite",
          (store) => {
            return new Promise<void>((resolve, reject) => {
              const request = store.put(existingChunk);
              request.onsuccess = () => resolve();
              request.onerror = () => reject(request.error);
            });
          }
        );
      } else {
        // Create new chunk
        const newChunk = {
          filename,
          chunkIndex,
          data: chunkData
        };

        await performIndexDBTransaction(
          db,
          "DataChunks",
          "readwrite",
          (store) => {
            return new Promise<void>((resolve, reject) => {
              const request = store.put(newChunk);
              request.onsuccess = () => resolve();
              request.onerror = () => reject(request.error);
            });
          }
        );
      }
    }

⚠️ Potential issue | 🟠 Major

Each chunk uses a separate transaction — no atomicity and poor efficiency.

writeToIndexedDB creates a new transaction for every chunk read and every chunk write (potentially 2 × N transactions for N affected chunks). This has two consequences:

  1. Partial writes on failure: if a later chunk write fails, earlier chunks are already committed — metadata and data become inconsistent.
  2. Performance: each transaction has overhead; batching all reads/writes into a single readwrite transaction on ["FileMetadata", "DataChunks"] would be significantly faster.

Additionally, line 220 opens the get request in "readwrite" mode when "readonly" suffices.

♻️ Suggested approach: single multi-store transaction
 const writeToIndexedDB = async (
   db: IDBDatabase,
   data: number[][],
   filename: string
 ): Promise<boolean> => {
   try {
     const metadata = await getFileMetadata(db, filename);
     const startIndex = metadata.totalRecords;
     const endIndex = startIndex + data.length;
     const startChunk = Math.floor(startIndex / CHUNK_SIZE);
     const endChunk = Math.floor((endIndex - 1) / CHUNK_SIZE);
 
-    for (let chunkIndex = startChunk; chunkIndex <= endChunk; chunkIndex++) {
-      // ... separate transactions per chunk ...
-    }
-    
-    metadata.totalRecords += data.length;
-    metadata.totalChunks = Math.ceil(metadata.totalRecords / CHUNK_SIZE);
-    metadata.lastUpdated = new Date();
-    await updateFileMetadata(db, metadata);
+    // Single transaction covering both stores
+    const tx = db.transaction(["DataChunks", "FileMetadata"], "readwrite");
+    const chunksStore = tx.objectStore("DataChunks");
+    const metaStore = tx.objectStore("FileMetadata");
+
+    for (let chunkIndex = startChunk; chunkIndex <= endChunk; chunkIndex++) {
+      const chunkStart = chunkIndex * CHUNK_SIZE;
+      const chunkEnd = chunkStart + CHUNK_SIZE;
+      const dataStart = Math.max(0, chunkStart - startIndex);
+      const dataEnd = Math.min(data.length, chunkEnd - startIndex);
+      if (dataStart >= dataEnd) continue;
+      const chunkData = data.slice(dataStart, dataEnd);
+
+      const existing = await new Promise<any>((res, rej) => {
+        const req = chunksStore.get([filename, chunkIndex]);
+        req.onsuccess = () => res(req.result);
+        req.onerror = () => rej(req.error);
+      });
+
+      if (existing) {
+        existing.data.push(...chunkData);
+        chunksStore.put(existing);
+      } else {
+        chunksStore.put({ filename, chunkIndex, data: chunkData });
+      }
+    }
+
+    metadata.totalRecords += data.length;
+    metadata.totalChunks = Math.ceil(metadata.totalRecords / CHUNK_SIZE);
+    metadata.lastUpdated = new Date();
+    metaStore.put(metadata);
+
+    await new Promise<void>((res, rej) => {
+      tx.oncomplete = () => res();
+      tx.onerror = () => rej(tx.error);
+    });

     return true;
   } catch (error) {
     console.error("Error writing to IndexedDB:", error);
     return false;
   }
 };

Comment on lines 198 to 201
const startIndex = metadata.totalRecords;
const endIndex = startIndex + data.length;
const startChunk = Math.floor(startIndex / CHUNK_SIZE);
const endChunk = Math.floor((endIndex - 1) / CHUNK_SIZE);

⚠️ Potential issue | 🟡 Minor

Edge case: empty data array produces a negative endChunk.

When data.length === 0, endIndex === startIndex, so endIndex - 1 < startIndex. Math.floor((startIndex - 1) / CHUNK_SIZE) can be -1 (when startIndex === 0). The for loop won't execute, but the metadata update at lines 271–275 will still run with += 0, writing a needless transaction. Guard early:

🛡️ Add early return for empty data
 const writeToIndexedDB = async (
   db: IDBDatabase,
   data: number[][],
   filename: string
 ): Promise<boolean> => {
   try {
+    if (data.length === 0) return true;
+
     const metadata = await getFileMetadata(db, filename);

Comment on lines 500 to 501
 const deleteFilesByFilename = async (filename: string) => {
-  const dbRequest = indexedDB.open("ChordsRecordings");
+  const dbRequest = indexedDB.open("ChordsRecordings", 3);

🛠️ Refactor suggestion | 🟠 Major

deleteFilesByFilename and deleteAllDataFromIndexedDB open redundant DB connections.

Both functions call indexedDB.open("ChordsRecordings", 3) directly, yet the caller in self.onmessage (line 12) already has an open db handle. This is wasteful and risks version-mismatch issues if the version constant is updated in one place but not the other.

Pass the existing db as a parameter, consistent with writeToIndexedDB, readFileData, etc.

♻️ Accept `db` parameter
-const deleteFilesByFilename = async (filename: string) => {
-  const dbRequest = indexedDB.open("ChordsRecordings", 3);
-  return new Promise<void>((resolve, reject) => {
-    dbRequest.onsuccess = async (event) => {
-      const db = (event.target as IDBOpenDBRequest).result;
+const deleteFilesByFilename = async (db: IDBDatabase, filename: string) => {
+  try {
       // Delete metadata
       await performIndexDBTransaction(db, "FileMetadata", "readwrite", (store) => { ... });
       // Delete all chunks for this file
       await performIndexDBTransaction(db, "DataChunks", "readwrite", (store) => { ... });
-    };
-    dbRequest.onerror = () => reject(...);
-  });
+  } catch (error) {
+    throw error;
+  }
 };

Then update the call site in self.onmessage:

-      await deleteFilesByFilename(filename);
+      await deleteFilesByFilename(db, filename);
       ...
-      await deleteAllDataFromIndexedDB();
+      await deleteAllDataFromIndexedDB(db);

Also applies to: 547-548


Comment on lines 580 to 595
   dbRequest.onupgradeneeded = (event) => {
     const db = (event.target as IDBOpenDBRequest).result;

-    if (!db.objectStoreNames.contains("ChordsRecordings")) {
-      const store = db.createObjectStore("ChordsRecordings", { keyPath: "filename" });
-      store.createIndex("filename", "filename", { unique: false });
+    // Create stores if they don't exist
+    if (!db.objectStoreNames.contains("FileMetadata")) {
+      const metadataStore = db.createObjectStore("FileMetadata", { keyPath: "filename" });
+      metadataStore.createIndex("filename", "filename", { unique: true });
     }

+    if (!db.objectStoreNames.contains("DataChunks")) {
+      const chunksStore = db.createObjectStore("DataChunks", {
+        keyPath: ["filename", "chunkIndex"]
+      });
+      chunksStore.createIndex("byFilename", "filename", { unique: false });
+    }
   };

🛠️ Refactor suggestion | 🟠 Major

Duplicated onupgradeneeded handler — divergence risk.

This block repeats the store-creation logic from openIndexedDB (lines 107–128). If the schema evolves, these two locations can easily get out of sync. Eliminating the separate DB open (per the previous comment) also removes this duplication.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/app/npg-lite/page.tsx (1)

88-90: ⚠️ Potential issue | 🟠 Major

recordingBuffers is recreated on every render — data written to it may be lost.

recordingBuffers is declared as a plain local variable (not a useRef), so it's re-initialized to empty arrays on every re-render. Since processSample captures it via closure (and processSample is recreated via useCallback with dependencies), buffer data can be lost when the component re-renders mid-recording.

This is likely a pre-existing issue, but the new stopRecording flow that flushes remaining buffer data (lines 972–979) is particularly sensitive to it — it may flush an empty, freshly-allocated buffer.

Suggested direction
-    const NUM_BUFFERS = 4;
-    const recordingBuffers = Array(NUM_BUFFERS)
-        .fill(null)
-        .map(() => [] as number[][]);
+    const NUM_BUFFERS = 4;
+    const recordingBuffersRef = useRef<number[][][]>(
+        Array(NUM_BUFFERS).fill(null).map(() => [] as number[][])
+    );
+    const recordingBuffers = recordingBuffersRef.current;
🤖 Fix all issues with AI agents
In `@src/app/npg-lite/page.tsx`:
- Around line 986-989: The await is ineffective because
getFileCountFromIndexedDB posts a message and returns void; change it to return
a Promise that resolves when the worker responds (or create a Promise wrapper at
the call site) so the caller actually awaits dataset refresh before clearing
isProcessingRecording; update getFileCountFromIndexedDB (or add a new
getFileCountFromIndexedDBAsync) to attach a one-off response listener or use a
request/response id, resolve the Promise when the worker's response arrives, and
then await that Promise before calling setIsProcessingRecording(false).
- Line 982: The linter flags the arrow callback in recordingBuffers.forEach
because the assignment expression "buffer.length = 0" returns a value; change
the callback to use a block body and a statement so it doesn't return anything
(e.g., replace the current inline arrow with a block arrow that sets
buffer.length = 0 inside braces), or alternatively replace the forEach call with
an explicit for...of loop; locate the call to recordingBuffers.forEach and
update it accordingly.
- Around line 761-816: The onmessage handler currently ignores messages that
lack an action (worker posts {error: msg}), so add a fallback branch in
workerRef.current.onmessage that checks for event.data.error when action is
falsy or unmatched and then: call toast.error with the error text, reset
relevant loading states (call setIsDownloadingFile for the filename if present,
setIsDownloadingAll(false), setIsDeletingFile for filename,
setIsDeletingAll(false) as appropriate), and refresh datasets when a delete
failure affects state (call getFileCountFromIndexedDB() or setDatasets([]) as
needed). Update the default/fallback to reference the existing symbols used in
the switch (saveAsZip, deleteFile, deleteAll, setIsDownloadingFile,
setIsDownloadingAll, setIsDeletingFile, setIsDeletingAll, toast.error,
getFileCountFromIndexedDB, setDatasets) so errors are handled even when the
worker omits action.
- Around line 836-872: The processBuffer promise currently only resolves on any
writeComplete message and never rejects or times out; update processBuffer to
reject when the worker responds with success: false (inspect msgSuccess in the
handleMessage for action 'write'/'writeComplete') and to implement a
configurable timeout (e.g., using setTimeout) that rejects if no response
arrives within a reasonable interval; ensure you remove the message listener and
clear the timeout on both success and failure paths and when timing out, so
workerRef event listeners are always cleaned up and stopRecording can handle
failures instead of hanging.

In `@workers/indexedDBWorker.ts`:
- Around line 370-394: The bug is that the truthy check "if (channel && item[i +
1] !== undefined)" treats a valid channel value of 0 as falsy and drops its
data; update the condition in the selectedChannels.map callback (the block that
builds filteredRow) to explicitly check for presence (e.g., channel !==
undefined && channel !== null) before using item[i + 1], keep the item[i + 1]
!== undefined check, and preserve the existing warning fallback when the value
is missing.
- Around line 88-98: The deleteFile and deleteAll branches inside the async
self.onmessage handler can throw (e.g., deleteFilesByFilename,
deleteAllDataFromIndexedDB) which causes unhandled promise rejections and kills
the worker; wrap the logic in the 'deleteFile' and 'deleteAll' cases in
try/catch blocks, call the existing handlePostMessage to return an error
response (e.g., { success: false, action: 'deleteFile'|'deleteAll', error:
err.message }) instead of rethrowing, and ensure any early validation (like
missing filename) also results in a handled error response rather than throwing;
reference deleteFilesByFilename, deleteAllDataFromIndexedDB, self.onmessage, and
handlePostMessage when implementing the fix.
🧹 Nitpick comments (2)
workers/indexedDBWorker.ts (2)

8-12: DB is opened for every worker message, even non-DB actions.

openIndexedDB() is called unconditionally at line 12 for every incoming message, including setCanvasCount and setSelectedChannels which don't need a database connection. Additionally, saveAllDataAsZip (line 403) and saveDataByFilename (line 449) open their own separate DB connections, ignoring the db handle from line 12.

Consider opening the DB lazily (once, cached) or only for actions that require it.
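
A minimal sketch of the lazy, cached open (illustrative; it reuses the database name and version from this PR and elides the upgrade logic):

// Cache the open promise so concurrent messages share one connection and
// actions like setCanvasCount never pay the open cost.
let dbPromise: Promise<IDBDatabase> | null = null;

const getDb = (): Promise<IDBDatabase> => {
  if (!dbPromise) {
    dbPromise = new Promise((resolve, reject) => {
      const req = indexedDB.open("ChordsRecordings", 3);
      req.onupgradeneeded = () => {
        // create FileMetadata / DataChunks here, same logic as openIndexedDB
      };
      req.onsuccess = () => resolve(req.result);
      req.onerror = () => reject(req.error);
    });
  }
  return dbPromise;
};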


299-361: Tree-style merge still materializes all data into memory via spread operators.

The mergeArrays helper uses [...a, ...b] (line 304), which copies all elements. For a 1-hour recording at 500 Hz (1.8M rows), the tree merge still allocates ~O(n log n) temporary arrays. A simpler Array.prototype.concat or a single flat pass would be more efficient:

// After loading all chunks, just flatten once:
return chunkBuffers.flat();

This avoids all intermediate array copies.

Comment on lines +761 to +816
workerRef.current.onmessage = (event) => {
    const { action, filename, success, blob, error, allData } = event.data;

    switch (action) {
        case 'writeComplete':
            console.log(`Write completed for ${filename}: ${success}`);
            break;

        case 'getFileCountFromIndexedDB':
            if (allData) {
                setDatasets(allData);
            }
            break;

        case 'saveDataByFilename':
            if (blob) {
                saveAs(blob, filename);
                toast.success(`File "${filename}" downloaded successfully.`);
                setIsDownloadingFile(prev => ({ ...prev, [filename]: false }));
            } else if (error) {
                console.error("Download error:", error);
                toast.error(`Error downloading file: ${error}`);
                setIsDownloadingFile(prev => ({ ...prev, [filename]: false }));
            }
            break;

        case 'saveAsZip':
            if (blob) {
                saveAs(blob, 'ChordsWeb.zip');
                toast.success("All files downloaded successfully as ZIP.");
                setIsDownloadingAll(false);
            } else if (error) {
                console.error("ZIP creation error:", error);
                toast.error(`Error creating ZIP file: ${error}`);
                setIsDownloadingAll(false);
            }
            break;

        case 'deleteFile':
            if (success) {
                toast.success(`File '${filename}' deleted successfully.`);
                setIsDeletingFile(prev => ({ ...prev, [filename]: false }));
                // Refresh datasets after deletion
                getFileCountFromIndexedDB();
            }
            break;

        case 'deleteAll':
            if (success) {
                toast.success(`All files deleted successfully.`);
                setIsDeletingAll(false);
                setDatasets([]);
            }
            break;
    }
};

⚠️ Potential issue | 🟠 Major

Worker error messages are silently dropped — no action field is set.

When the worker calls handleError(msg), it posts { error: msg } with no action property. The onmessage handler here switches on action, so error messages match no case and are silently ignored. The user gets no feedback (no toast, no loading state reset) when operations like saveAsZip, deleteFile, or deleteAll fail.

Either include the action in the worker's error responses, or add a default/fallback handler for messages containing error:

Proposed fix (page-side fallback)
                     case 'deleteAll':
                         if (success) {
                             toast.success(`All files deleted successfully.`);
                             setIsDeletingAll(false);
                             setDatasets([]);
                         }
                         break;
+
+                    default:
+                        if (error) {
+                            console.error("Worker error:", error);
+                            toast.error(error);
+                            // Reset all loading states
+                            setIsDownloadingAll(false);
+                            setIsDeletingAll(false);
+                            setIsDownloadingFile({});
+                            setIsDeletingFile({});
+                        }
+                        break;
                 }

Ideally, also fix the worker side to always include action in error posts.


Comment on lines +836 to 872
 const processBuffer = async (bufferIndex: number, canvasCount: number, selectChannel: number[]): Promise<void> => {
     return new Promise((resolve) => {
         if (!workerRef.current) {
             initializeWorker();
         }

-        // If the buffer is empty, return early
-        if (recordingBuffers[bufferIndex].length === 0) return;
+        // If the buffer is empty, return early
+        if (recordingBuffers[bufferIndex].length === 0) {
+            resolve();
+            return;
+        }

-        const data = recordingBuffers[bufferIndex];
-        const filename = currentFileNameRef.current;
+        const data = recordingBuffers[bufferIndex];
+        const filename = currentFileNameRef.current;

-        if (filename) {
-            // Check if the record already exists
-            workerRef.current?.postMessage({ action: 'checkExistence', filename, canvasCount, selectChannel });
-            writeToIndexedDB(data, filename, canvasCount, selectChannel);
-        }
-    };
+        if (filename) {
+            const handleMessage = (event: MessageEvent) => {
+                const { action: msgAction, success: msgSuccess, filename: completedFilename } = event.data;
+                if (msgAction === 'writeComplete' && completedFilename === filename) {
+                    workerRef.current?.removeEventListener('message', handleMessage);
+                    resolve();
+                }
+            };

-    const writeToIndexedDB = (data: number[][], filename: string, canvasCount: number, selectChannel: number[]) => {
-        workerRef.current?.postMessage({ action: 'write', data, filename, canvasCount, selectChannel });
+            workerRef.current?.addEventListener('message', handleMessage);
+            workerRef.current?.postMessage({
+                action: 'write',
+                data,
+                filename,
+                canvasCount,
+                selectChannel
+            });
+        } else {
+            resolve();
+        }
+    });
+};

⚠️ Potential issue | 🟠 Major

processBuffer resolves on success but never rejects on failure.

Line 854 checks for writeComplete but resolves regardless of msgSuccess. If the worker reports success: false, the promise still resolves silently, and the caller (stopRecording) proceeds as if the write succeeded.

Additionally, there's no timeout — if the worker never responds (e.g., it crashes), the promise hangs forever, leaving the UI stuck in the "saving" state.

Proposed direction
                 const handleMessage = (event: MessageEvent) => {
                     const { action: msgAction, success: msgSuccess, filename: completedFilename } = event.data;
                     if (msgAction === 'writeComplete' && completedFilename === filename) {
                         workerRef.current?.removeEventListener('message', handleMessage);
-                        resolve();
+                        if (msgSuccess) {
+                            resolve();
+                        } else {
+                            reject(new Error(`Write failed for ${filename}`));
+                        }
                     }
                 };
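
A hedged sketch of the timeout-guarded version (WRITE_TIMEOUT_MS and the helper name are illustrative, not part of the PR):

const WRITE_TIMEOUT_MS = 10_000; // assumed value, tune as needed

const awaitWriteComplete = (worker: Worker, filename: string): Promise<void> =>
    new Promise((resolve, reject) => {
        let timer: ReturnType<typeof setTimeout>;

        const handleMessage = (event: MessageEvent) => {
            const { action, success, filename: completedFilename } = event.data;
            if (action === 'writeComplete' && completedFilename === filename) {
                clearTimeout(timer);
                worker.removeEventListener('message', handleMessage);
                if (success) {
                    resolve();
                } else {
                    reject(new Error(`Write failed for ${filename}`));
                }
            }
        };

        // Reject on silence so stopRecording cannot hang in the "saving" state forever.
        timer = setTimeout(() => {
            worker.removeEventListener('message', handleMessage);
            reject(new Error(`Timed out waiting for writeComplete for ${filename}`));
        }, WRITE_TIMEOUT_MS);

        worker.addEventListener('message', handleMessage);
    });

processBuffer could attach this before posting the 'write' message and let stopRecording catch the rejection.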

}

// Clear buffers after processing
recordingBuffers.forEach(buffer => buffer.length = 0);

⚠️ Potential issue | 🟡 Minor

Static analysis: forEach callback should not return a value.

buffer.length = 0 is an assignment expression that returns 0. While harmless at runtime, it triggers the linter rule useIterableCallbackReturn. Use a block body:

Proposed fix
-        recordingBuffers.forEach(buffer => buffer.length = 0);
+        recordingBuffers.forEach(buffer => { buffer.length = 0; });
🧰 Tools
🪛 Biome (2.3.14)

[error] 982-982: This callback passed to forEach() iterable method should not return a value.

Either remove this return or remove the returned value.

(lint/suspicious/useIterableCallbackReturn)


Comment on lines +986 to +989
// Fetch updated datasets
await getFileCountFromIndexedDB();

setIsProcessingRecording(false);

⚠️ Potential issue | 🟡 Minor

await getFileCountFromIndexedDB() doesn't actually wait for the result.

getFileCountFromIndexedDB (line 993) just posts a message to the worker and returns void — there's no promise that resolves when the response arrives. The await on line 987 is a no-op, meaning setIsProcessingRecording(false) on line 989 fires before datasets are actually refreshed.
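
One possible shape for that wrapper (a sketch; requestFileList is a hypothetical helper that leans on the existing 'getFileCountFromIndexedDB' reply action):

// Sketch: turn the fire-and-forget postMessage into an awaitable request.
const requestFileList = (worker: Worker): Promise<any[]> =>
    new Promise((resolve) => {
        const handleMessage = (event: MessageEvent) => {
            const { action, allData } = event.data;
            if (action === 'getFileCountFromIndexedDB') {
                worker.removeEventListener('message', handleMessage);
                resolve(allData ?? []);
            }
        };
        worker.addEventListener('message', handleMessage);
        worker.postMessage({ action: 'getFileCountFromIndexedDB' });
    });

// In stopRecording, roughly:
//   setDatasets(await requestFileList(workerRef.current!));
//   setIsProcessingRecording(false);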


Comment on lines +88 to +98
        case 'deleteFile':
            if (!filename) {
                throw new Error('Filename is required for deleteFile action.');
            }
            await deleteFilesByFilename(filename);
            handlePostMessage({
                success: true,
                action: 'deleteFile',
                filename
            });
            break;

⚠️ Potential issue | 🟠 Major

Unhandled throw will crash the worker as an unhandled rejection.

Line 90 throws an error, but the deleteFile case (and the surrounding self.onmessage) has no try/catch. Since self.onmessage is async, this becomes an unhandled promise rejection, silently killing the operation and never posting a response back to the main thread. The same issue applies to lines 100–106 (deleteAll), where errors from deleteAllDataFromIndexedDB or deleteFilesByFilename are also uncaught.

Proposed fix: wrap delete cases in try/catch
         case 'deleteFile':
+            try {
             if (!filename) {
-                throw new Error('Filename is required for deleteFile action.');
+                handleError('Filename is required for deleteFile action.');
+                break;
             }
             await deleteFilesByFilename(filename);
             handlePostMessage({ 
                 success: true, 
                 action: 'deleteFile',
                 filename 
             });
+            } catch (error) {
+                handleError('Failed to delete file');
+            }
             break;

         case 'deleteAll':
+            try {
             await deleteAllDataFromIndexedDB();
             handlePostMessage({ 
                 success: true, 
                 action: 'deleteAll' 
             });
+            } catch (error) {
+                handleError('Failed to delete all data');
+            }
             break;

Comment on lines 370 to +394
 const rows = data
-  .filter((item, index) => {
-    // Ensure each item is an array and has valid data
-    if (!item || !Array.isArray(item) || item.length === 0) {
-      console.warn(`Skipping invalid data at index ${index}:`, item);
-      return false;
-    }
-    return true;
-  })
-  .map((item, index) => {
-    // Generate filtered row with Counter and selected channel data
-    const filteredRow = [
-      item[0], // Counter
-      ...selectedChannels.map((channel, i) => {
-        if (channel) {
-          return item[i + 1];
-        } else {
-          console.warn(`Missing data for channel ${channel} in item ${index}:`, item);
-          return ""; // Default empty value for missing data
-        }
-      }),
-    ];
-
-    return filteredRow
-      .map((field) => (field !== undefined && field !== null ? JSON.stringify(field) : "")) // Ensure proper formatting
-      .join(",");
-  });
-
-// Combine header and rows into a CSV format
-const csvContent = [header.join(","), ...rows].join("\n");
+  .filter((item, index) => {
+    if (!item || !Array.isArray(item) || item.length === 0) {
+      console.warn(`Skipping invalid data at index ${index}:`, item);
+      return false;
+    }
+    return true;
+  })
+  .map((item, index) => {
+    const filteredRow = [
+      item[0], // Counter
+      ...selectedChannels.map((channel, i) => {
+        if (channel && item[i + 1] !== undefined) {
+          return item[i + 1];
+        } else {
+          console.warn(`Missing data for channel ${channel} in item ${index}:`, item);
+          return "";
+        }
+      }),
+    ];
+
+    return filteredRow
+      .map((field) => (field !== undefined && field !== null ? JSON.stringify(field) : ""))
+      .join(",");
+  });

⚠️ Potential issue | 🔴 Critical

Bug: channel value of 0 is falsy — first channel data will always be empty in CSV.

At line 382, if (channel && item[i + 1] !== undefined) will evaluate to false when channel === 0 because 0 is falsy in JavaScript. This means the first selected channel's data will always be replaced with "" in the CSV output.

Proposed fix
-                    if (channel && item[i + 1] !== undefined) {
+                    if (channel !== undefined && item[i + 1] !== undefined) {
