Benchmarks, Fabric, and WebRTC #26
Conversation
Webpack DefinePlugin previously inlined JSON.stringify(process.env), leaking host secrets into assets/bundles/browser.min.js. Rebuild bundle with safe env. Co-authored-by: Cursor <cursoragent@cursor.com>
⚠️ Warning: Rate limit exceeded

ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:

📒 Files selected for processing (20)
📝 Walkthrough

This PR introduces streaming chat completions, an active worker queue for background tasks, a goals management system, expanded memory/vector store capabilities, goal/worker route handlers, improved Discord integrations, enhanced peer/federation networking, email service refactoring, configuration/build tooling updates, and extensive test coverage. Multiple components are refactored for access control and state management consistency.

Changes

Streaming Chat & Message Processing
Active Worker Queue & Background Task Processing
Goals Management System
Memory & Vector Store Enhancements
Discord & Network Integration
Session & Authorization Refactoring
Email Service & Route Consolidations
Peer & Document Access Control
Configuration & Build System
Frontend UI & Component Updates
Data & Pool Management
API & HTTP Routes
Test Expansion & Documentation
Assets & Browser Bundle
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (Browser)
    participant ChatBox as ChatBox Component
    participant Redux as Redux Store
    participant API as API Endpoint
    participant Pool as Agent Pool
    participant SSE as SSE Stream
    User->>ChatBox: Submit message
    ChatBox->>Redux: dispatch submitMessage()
    Redux->>API: POST /v1/chat/completions
    API->>Pool: pool.query(request)
    alt Streaming Enabled
        Pool->>SSE: Stream start
        Pool->>API: Send chunks
        API->>SSE: Write SSE chunk
        SSE->>Redux: dispatch CHAT_STREAM_CHUNK
        Redux->>ChatBox: Update streamBuffer
        ChatBox->>User: Render live preview
        Pool->>SSE: Send [DONE]
    else Non-Streaming
        Pool->>API: Return complete response
        API->>Redux: dispatch submitMessage success
        Redux->>ChatBox: Update messages
    end
    ChatBox->>User: Display final response
```
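The streaming branch in the diagram writes server-sent events ending in a `[DONE]` sentinel; a minimal sketch of that framing (`formatSSEChunk` and `DONE_FRAME` are illustrative names, not the PR's actual API):

```javascript
// Sketch: frame OpenAI-style streaming chunks as server-sent events.
// `formatSSEChunk` and `DONE_FRAME` are illustrative names, not the PR's API.
function formatSSEChunk (payload) {
  // Each SSE event is a `data:` line terminated by a blank line.
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// The stream ends with a literal [DONE] sentinel, as in the diagram.
const DONE_FRAME = 'data: [DONE]\n\n';

// Example: one content delta followed by the terminator.
const frames = [
  formatSSEChunk({ choices: [{ delta: { content: 'Hello' } }] }),
  DONE_FRAME
];
```

On the client side, each received frame maps to one `CHAT_STREAM_CHUNK` dispatch until the `[DONE]` frame arrives.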
```mermaid
sequenceDiagram
    participant Client as HTTP Client
    participant Route as Message Route
    participant Queue as ActiveWorkerQueue
    participant Worker as Worker Loop
    participant Handler as Conversation Handler
    Client->>Route: POST /messages (conversation_id)
    Route->>Route: Authenticate & authorize
    Route->>Route: Insert assistant placeholder (queued)
    Route->>Queue: enqueue(conversation_turn job)
    Route->>Client: 202 Accepted
    Queue->>Queue: _tick() checks for work
    Queue->>Worker: Dequeue job
    Worker->>Handler: _processConversationTurnJob()
    Handler->>Handler: Run conversation pipeline
    Handler->>Handler: Mark assistant ready/error
    Worker->>Queue: Job complete
    Queue->>Queue: Update lastFinishedAt
```
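The enqueue → tick → dequeue flow above can be sketched with a minimal in-memory queue; class and field names here are illustrative, not the actual ActiveWorkerQueue API:

```javascript
// Minimal sketch of the enqueue → tick → dequeue flow in the diagram.
// `TinyWorkerQueue` is an illustrative stand-in, not the PR's ActiveWorkerQueue.
class TinyWorkerQueue {
  constructor (handler) {
    this.handler = handler;      // async job processor
    this.jobs = [];              // pending jobs
    this.lastFinishedAt = null;  // bookkeeping, as in the diagram
  }

  enqueue (job) {
    this.jobs.push(job);
  }

  // One tick: dequeue a single job and run it to completion.
  async _tick () {
    const job = this.jobs.shift();
    if (!job) return false;
    await this.handler(job);
    this.lastFinishedAt = Date.now();
    return true;
  }

  // Drain everything that is currently queued.
  async drain () {
    while (await this._tick());
  }
}
```

A route following the diagram would enqueue a job and return 202 immediately, while the tick loop processes jobs in the background.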
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Not up to standards ⛔

🔴 Issues
| Category | Results |
|---|---|
| ErrorProne | 1 medium, 1 high |
| Security | 6 critical, 1 high |
🟢 Metrics

| Metric | Results |
|---|---|
| Complexity | 671 |
| Duplication | 69 |
Actionable comments posted: 6
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (13)
README.md (1)
131-132: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Remove unused link reference.

The `[fabric-hub]` link reference is defined but never used in the document. Only `[fabric-pub]` is referenced (line 108).

🧹 Proposed fix

```diff
 [fabric-pub]: https://fabric.pub
-[fabric-hub]: https://hub.fabric.pub
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@README.md` around lines 131-132: the README defines an unused link reference `[fabric-hub]` which should be removed; locate the link reference block containing `[fabric-pub]` and `[fabric-hub]` and delete the `[fabric-hub]: https://hub.fabric.pub` line (or remove the entire unused reference entry) while keeping the `[fabric-pub]` reference intact and verifying no other occurrences of `[fabric-hub]` remain in the document.

routes/users/create_user.js (1)
22-24: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Replace synchronous bcrypt operations with the async API.

`genSaltSync` and `hashSync` block the Node.js event loop and stall all concurrent requests, even when called inside an `async` handler (~100 ms per request at `cost=10`). Synchronous operations cannot yield the thread regardless of async context. Replace with `await bcrypt.hash(password, saltRounds)`, which handles salt generation internally. Note: the separate `salt` column being stored (line 30) is not needed; bcrypt embeds the salt in the hash output used by `compareSync` during login verification.

Proposed fix

```diff
-const { hashSync, genSaltSync } = require('bcrypt');
+const bcrypt = require('bcrypt');

-  // Generate a salt and hash the password
-  const saltRounds = 10;
-  const salt = genSaltSync(saltRounds);
-  const hashedPassword = hashSync(password, salt);
+  // Hash the password (bcrypt generates the salt internally)
+  const hashedPassword = await bcrypt.hash(password, 10);

   // Insert the new user into the database
   const newUser = await this.db('users').insert({
     username: username,
     password: hashedPassword,
-    salt: salt
   });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/users/create_user.js` around lines 22-24: the code currently uses synchronous bcrypt calls (`genSaltSync`, `hashSync`) to derive `hashedPassword`; replace them by awaiting the async API: remove `genSaltSync` and `hashSync` usage and call `await bcrypt.hash(password, saltRounds)` (keep the `saltRounds` constant) to produce `hashedPassword`; stop generating or storing a separate salt value (remove writing the `salt` column) since bcrypt embeds the salt in the hash; ensure the surrounding handler remains async and that login verification continues to use `bcrypt.compare` or `bcrypt.compareSync` against the stored hash.

routes/users/edit_username.js (1)
11-22: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Missing null check: `user` may be undefined if the ID doesn't exist.

If the `id` from `req.body` doesn't match any user, the query returns `undefined`. The code then accesses `user.id` on line 19, which will throw a TypeError.

🐛 Proposed fix

```diff
 const user = await this.db('users').where('id', id).first();
+if (!user) {
+  return res.status(404).json({ message: 'User not found.' });
+}

 // Check if the username already exists
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/users/edit_username.js` around lines 11-22: the code doesn't check that `user` (from `const user = await this.db('users').where('id', id).first()`) exists before accessing `user.id`, which can throw; add a null check immediately after that query (e.g., `if (!user) return res.status(404).json({ message: 'User not found.' })`), and when calling update use the original `id` (or `user.id` only after the null check) so the update step and subsequent logic are safe; update references: `user`, `existingUser`, and the `this.db('users').where(...).update` call.

routes/users/create_full_user.js (1)
6-6: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Unused destructured variables: `firmName` and `firmSize` are accepted but never used.

These values are extracted from `req.body` on line 6 but are not included in the database insert (lines 38–45). The database schema has no corresponding columns for these fields. The frontend does send these values (authActions.js:201), so this appears to be either an incomplete implementation or a placeholder for future functionality.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/users/create_full_user.js` at line 6: the variables `firmName` and `firmSize` are destructured from `req.body` but never used; either remove them from the destructuring to stop accepting unused input, or persist them by adding the fields to the user insert/update logic and to the DB schema (or store them in a separate firm table). Locate the destructuring line that extracts `{ username, password, email, firstName, lastName, firmName, firmSize }` from `req.body` and either delete `firmName` and `firmSize` from that list, or add them to the values passed into the existing database insert (the INSERT statement block that creates the user) and ensure the database schema has corresponding columns, or a related table and migration, to accept those fields.

types/queue.js (1)
63-65: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Remove the self-referential getter or fix the naming mismatch with the route.

The getter `get _clearQueue` at lines 63–65 returns `this._clearQueue.bind(this)`, which recursively invokes itself and causes a stack overflow. Additionally, the route at `routes/redis/clear_queue.js:5` expects `this.queue.clearQueue` (public API, no underscore), but the Queue class defines only `_clearQueue` (private, with underscore), so the route receives `undefined`.

Rename the getter to `clearQueue` to match the route's expectation, and update it to return a bound reference to the actual async method without causing recursion:

Proposed fix

```diff
-  get _clearQueue () {
-    return this._clearQueue.bind(this);
+  get clearQueue () {
+    return this._clearQueueImpl.bind(this);
   }

-  async _clearQueue () {
+  async _clearQueueImpl () {
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@types/queue.js` around lines 63-65: the getter `get _clearQueue` is self-referential, causing recursion, and also doesn't match the route expectation `this.queue.clearQueue`; rename the getter to `clearQueue` and have it return a bound reference to the real implementation method (e.g., bind `this._clearQueueImpl` or whatever the actual async implementation is named) so `routes/redis/clear_queue.js` can call `this.queue.clearQueue` without hitting recursion or `undefined`. Ensure the new getter name is `clearQueue` and it returns `this.<actualImplementation>.bind(this)`.

routes/conversations/list_conversations.js (1)
23-30: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Bug: update query uses the wrong identifier.

The query aliases `id as dbid` and `fabric_id as id` (lines 15, 17), so `conversation.id` is actually the `fabric_id`. However, line 28 uses `where({ id: conversation.id })`, which should be `where({ id: conversation.dbid })` to match the database primary key.

🐛 Proposed fix

```diff
 for (let i = 0; i < results.length; i++) {
   const conversation = results[i];
   if (!conversation.fabric_id) {
     // TODO: ensure no LocalConversation is shared externally
     const actor = new Actor({
       type: 'LocalConversation',
       name: `sensemaker/conversations/${conversation.dbid}`,
       created: conversation.created_at
     });
-    await this.db('conversations').update({ fabric_id: actor.id }).where({ id: conversation.id });
+    await this.db('conversations').update({ fabric_id: actor.id }).where({ id: conversation.dbid });
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/conversations/list_conversations.js` around lines 23-30: the update is using the aliased `conversation.id` (which holds `fabric_id`) instead of the primary key; in the loop inside `list_conversations`, change the where clause in `this.db('conversations').update({ fabric_id: actor.id }).where({ id: conversation.id })` to use the primary key alias: `where({ id: conversation.dbid })`. Keep the Actor creation and `fabric_id` check as-is.

routes/conversations/view_conversation.js (1)
16-26: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Redundant nested `res.format` and incorrect HTML response.

The 404 handler uses a nested `res.format` inside the outer JSON handler, but both branches return JSON. The HTML branch (line 23) incorrectly returns JSON instead of HTML. Since this code path is already within the `json:` handler, the nested format is unnecessary.

🔧 Proposed simplification

```diff
 // 404 Conversation Not Found
 if (!conversation) {
-  return res.format({
-    json: function () {
-      res.status(404).json({ message: 'Conversation not found.' });
-    },
-    html: function () {
-      res.status(404).json({ message: 'Conversation not found.' });
-    }
-  });
+  return res.status(404).json({ message: 'Conversation not found.' });
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/conversations/view_conversation.js` around lines 16-26: the 404 handler currently nests a second `res.format` inside the json branch and returns JSON for both branches; remove the nested `res.format` and instead use a single `res.format` call for the conversation-not-found case: provide a json handler that sends `res.status(404).json({ message: 'Conversation not found.' })` and an html handler that returns appropriate HTML (e.g., `res.status(404).render(...)` or `res.status(404).send(...)` with an HTML message). Update the block around the conversation existence check to eliminate the redundant nesting and ensure the html branch returns HTML rather than JSON.

components/services/discord/DiscordGuild.js (1)
98-98: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Inconsistent null-safety: `discord.guild.id` accessed without optional chaining.

Earlier code uses `discord.guild?.name` for null-safety, but lines 98 and 119 access `discord.guild.id` directly. If `discord.guild` is undefined during loading, this will throw.

Proposed fix

```diff
-<Card as={Link} to={`/services/discord/guilds/${discord.guild.id}/members`}>
+<Card as={Link} to={`/services/discord/guilds/${discord.guild?.id}/members`}>
```

Apply the same fix to line 119:

```diff
-<Card as={Link} to={`/services/discord/guilds/${discord.guild.id}/channels`}>
+<Card as={Link} to={`/services/discord/guilds/${discord.guild?.id}/channels`}>
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@components/services/discord/DiscordGuild.js` at line 98: the JSX accesses `discord.guild.id` without null-safety, which can throw if `discord.guild` is undefined; update all occurrences (the members link at line 98 and the channels link around line 119) to use optional chaining (`discord.guild?.id`) and ensure the resulting URL/Link handles an undefined id (e.g., guard rendering or produce a safe fallback) inside the DiscordGuild component so it matches the existing `discord.guild?.name` pattern.

components/Bridge.js (1)
331-355: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add checks for Buffer and typed array payloads in `handleWebRTCMessage()`.

When `sendMessage()` sends a raw Buffer over WebRTC, PeerJS 1.5.4 deserializes it on the receiving end as a Buffer or Uint8Array. The current code only recognizes `string` and `ArrayBuffer`, so both of these payloads fall through to `JSON.stringify(data)` and corrupt the binary data before `Message.fromBuffer()` processes it. Add `Buffer.isBuffer()` and `ArrayBuffer.isView()` checks to handle these cases.

Suggested fix

```diff
-      if (typeof data === 'string') {
-        messageData = Buffer.from(data, 'utf8');
-      } else if (data instanceof ArrayBuffer) {
-        messageData = Buffer.from(data);
-      } else {
-        messageData = Buffer.from(JSON.stringify(data));
-      }
+      if (typeof data === 'string') {
+        messageData = Buffer.from(data, 'utf8');
+      } else if (Buffer.isBuffer(data)) {
+        messageData = data;
+      } else if (data instanceof ArrayBuffer) {
+        messageData = Buffer.from(data);
+      } else if (ArrayBuffer.isView(data)) {
+        messageData = Buffer.from(data.buffer, data.byteOffset, data.byteLength);
+      } else {
+        messageData = Buffer.from(JSON.stringify(data));
+      }
```

Also applies to: 515-520
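The branching in the fix can be exercised standalone; `normalizePayload` below is an illustrative helper mirroring that logic, not the component's actual method:

```javascript
// Normalize the payload types PeerJS may deliver into a Node Buffer.
// `normalizePayload` is an illustrative name, not Bridge.js's actual API.
function normalizePayload (data) {
  if (typeof data === 'string') return Buffer.from(data, 'utf8');
  if (Buffer.isBuffer(data)) return data;
  if (data instanceof ArrayBuffer) return Buffer.from(data);
  if (ArrayBuffer.isView(data)) {
    // Typed arrays (e.g. Uint8Array) view a region of an underlying buffer.
    return Buffer.from(data.buffer, data.byteOffset, data.byteLength);
  }
  // Anything else falls back to JSON, as in the original code.
  return Buffer.from(JSON.stringify(data));
}
```

Note the ordering: `Buffer.isBuffer` must come before `ArrayBuffer.isView`, since Node Buffers are themselves Uint8Array views.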
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@components/Bridge.js` around lines 331-355: `handleWebRTCMessage` currently only handles string and ArrayBuffer payloads and falls back to `JSON.stringify` for other types, which corrupts raw binary; update `handleWebRTCMessage` to detect raw Buffers and typed arrays by using `Buffer.isBuffer(data)` and `ArrayBuffer.isView(data)` (covers Uint8Array etc.) and convert them to a Buffer directly (`Buffer.from(data)` or `Buffer.from(data.buffer, data.byteOffset, data.byteLength)` as appropriate) before calling `this.onSocketMessage({ data: messageData })`; also ensure the same checks are added in the duplicate handling block around lines 515-520 so `Message.fromBuffer()` receives the original binary unchanged; reference `handleWebRTCMessage` and the related `sendMessage`/`Message.fromBuffer` logic when making the change.

components/TopBar.js (1)
55-58: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Define `token` before passing it into `DonateToHostModal`.

`token` on line 112 is never declared in `render()`, so this component throws a `ReferenceError` when rendering the `DonateToHostModal` JSX. Extract `token` from the `auth` object at the start of `render()`.

Suggested fix

```diff
 render () {
   const { auth, bitcoinBalance } = this.props;
-  const { isPopupOpen, isProfileModalOpen } = this.state;
+  const { isPopupOpen, isProfileModalOpen, donateModalOpen } = this.state;
+  const token = auth && auth.token;
@@
   <DonateToHostModal
-    open={this.state.donateModalOpen}
+    open={donateModalOpen}
     onClose={() => this.setState({ donateModalOpen: false })}
     token={token}
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@components/TopBar.js` around lines 55-58: in `render()`, extract `token` from the `auth` prop (e.g., `const { auth, bitcoinBalance } = this.props;` then `const { token } = auth`) before using it in the `DonateToHostModal` JSX so that the `ReferenceError` is removed; locate the render method, add the token extraction near the existing prop destructuring, and pass that token into `DonateToHostModal`.

tests/sensemaker.unit.js (1)
50-82: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Make these learner tests awaitable instead of `done` + timers.

Both cases start async work and never return/catch it, so a rejection in the timeout path becomes a test timeout instead of a clean failure. The hard `> 200n` after 1 second check is also likely to flap under CI load. Prefer `async` tests with an awaited delay, and keep the assertion off a strict wall-clock threshold if possible.

Suggested shape

```diff
-  it('can execute more than 200 ticks per second', function (done) {
-    async function test () {
-      const learner = new Learner();
-      await learner.start();
-      assert.strictEqual(learner.status, 'STARTED');
-      setTimeout(async function () {
-        await learner.stop();
-        assert.strictEqual(learner.status, 'STOPPED');
-        assert.ok(learner);
-        assert.ok(learner.clock > 200n);
-        done();
-      }, 1000);
-    }
-
-    test();
-  });
+  it('can execute more than 200 ticks per second', async function () {
+    const learner = new Learner();
+    await learner.start();
+    assert.strictEqual(learner.status, 'STARTED');
+    await new Promise(resolve => setTimeout(resolve, 1000));
+    await learner.stop();
+    assert.strictEqual(learner.status, 'STOPPED');
+    assert.ok(learner.clock > 200n);
+  });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/sensemaker.unit.js` around lines 50-82: convert both tests to async functions that return a Promise instead of using done + setTimeout: call `await learner.start()`, then await a short promise-based delay (e.g., `await new Promise(r => setTimeout(r, 1000))`) and then `await learner.stop()`; perform assertions after awaiting stop so rejections propagate. For the tick-rate test (referencing `Learner`, `start`, `stop`, and `learner.clock`), avoid a hard `> 200n` wall-clock check: either assert the clock increased by an expected delta during the awaited period (e.g., `clock_after > clock_before + X`) or relax the threshold to be CI-friendly. For the chunk test, use direct calls to `learner._writeChunk` and `learner._readChunk`, then await the delay and `await learner.stop()` before final assertions so failures are reported correctly.

start.sh (2)
144-166: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use the recorded pull PID for retry cleanup.

`PULL_PID` is captured but never used, and `pkill -f "curl.*api/pull.*$model_name"` is broad enough to miss the current pull or kill unrelated `curl` processes. Retrying after that can leave overlapping pulls running in the background.

Suggested fix

```diff
 curl -X POST http://ollama:11434/api/pull -d "{\"name\": \"$model_name\"}" &
 PULL_PID=$!
@@
-pkill -f "curl.*api/pull.*$model_name" || true
+kill "$PULL_PID" 2>/dev/null || true
+wait "$PULL_PID" 2>/dev/null || true
 sleep 5
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@start.sh` around lines 144-166: the retry cleanup currently uses a broad pkill and never uses the recorded `PULL_PID`, which can leave overlapping pulls or kill unrelated curl processes; update the pull/retry logic so that after starting the background curl (`PULL_PID=$!`) you explicitly check and terminate that PID on retry/failure (e.g., if the process is running, `kill -TERM $PULL_PID` and wait for it, or `kill -9` if needed), fall back to the existing pkill only if `PULL_PID` is unavailable, and clear/unset `PULL_PID` after cleanup; apply this change around the existing `PULL_PID`, `check_model_exists`, and `retry_count` handling so retries reliably stop the exact background pull before starting another.
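A standalone sketch of PID-based cleanup for a background job; a generic `sleep` stands in for the background curl pull:

```shell
#!/bin/sh
# Start a background job and record its PID, as the pull loop does.
sleep 30 &             # stand-in for: curl -X POST .../api/pull ... &
PULL_PID=$!

# On retry, stop exactly that process instead of pkill-by-pattern,
# then wait so the PID is reaped before starting the next attempt.
kill "$PULL_PID" 2>/dev/null || true
wait "$PULL_PID" 2>/dev/null || true

# kill -0 probes whether the process still exists.
if kill -0 "$PULL_PID" 2>/dev/null; then
  RESULT="running"
else
  RESULT="stopped"
fi
echo "$RESULT"
```

The `wait` matters: without it the killed child lingers as a zombie until the shell exits.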
127-129: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Match model names exactly when checking Ollama tags.

`grep -q "$1"` treats the model name as a regex and only does substring matching. Tagged names like `qwen3:0.6b` can produce false positives, which means the script may skip pulling a required model that is actually missing.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@start.sh` around lines 127-129: the `check_model_exists` function currently uses `grep -q "$1"`, which treats the model name as a regex and allows substring matches; change it to perform an exact, literal match against the tag string returned by the Ollama API. Update the grep invocation in `check_model_exists` to use a literal, whole-line match (e.g., use grep with flags that enforce fixed-string and full-line matching such as `-F` and `-x`, and keep `-q` for quiet) and ensure the model argument is passed safely (quoted) so names with regex metacharacters like `:` or `.` are not interpreted as patterns.
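The difference between substring and exact tag matching can be shown with a fixed-string, whole-line grep; the tag values below are illustrative:

```shell
#!/bin/sh
# Substring matching would wrongly treat "qwen3:0.6" as present
# when only the tag "qwen3:0.6b" exists.
tags="llama3:8b
qwen3:0.6b"

# -F treats the pattern as a literal string; -x requires a full-line match.
if printf '%s\n' "$tags" | grep -Fxq "qwen3:0.6"; then
  EXACT="yes"
else
  EXACT="no"
fi

# Plain grep -q does regex substring matching and reports a false positive.
if printf '%s\n' "$tags" | grep -q "qwen3:0.6"; then
  SUBSTRING="yes"
else
  SUBSTRING="no"
fi
```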
🟡 Minor comments (14)
scripts/browser.js-93-95 (1)

93-95: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`NaN` balance passes the `!= null` guard and renders as the string `"NaN"` in the UI.

`NaN != null` is `true`, so `Number(NaN).toFixed(8)` → `"NaN"` would be exposed as `bitcoinBalance`. Add a `Number.isFinite` check:

🐛 Proposed fix

```diff
-      bitcoinBalance: state.bitcoin && state.bitcoin.balance != null
-        ? Number(state.bitcoin.balance).toFixed(8)
-        : undefined
+      bitcoinBalance: state.bitcoin && Number.isFinite(Number(state.bitcoin.balance))
+        ? Number(state.bitcoin.balance).toFixed(8)
+        : undefined
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@scripts/browser.js` around lines 93-95: the `bitcoinBalance` computation currently treats any non-null value (including NaN) as valid and calls `Number(...).toFixed(8)`, which renders "NaN" in the UI; update the logic where `bitcoinBalance` is derived to first coerce to a Number and verify `Number.isFinite(Number(state.bitcoin.balance))` before calling `toFixed(8)`, otherwise set `bitcoinBalance` to `undefined` so only finite numeric balances are rendered.

actions/documentActions.js-101-101 (1)
101-101: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Setting a header to `undefined` may cause unexpected behavior.

Some environments serialize `undefined` as the literal string `"undefined"`. Consider omitting the header entirely when there's no token.

🛡️ Proposed fix

```diff
-const response = await fetch(`/documents/${fabricID}`, {
-  method: 'GET',
-  headers: {
-    'Accept': 'application/json',
-    'Content-Type': 'application/json',
-    'Authorization': (token) ? `Bearer ${token}` : undefined
-  }
-});
+const headers = {
+  'Accept': 'application/json',
+  'Content-Type': 'application/json'
+};
+if (token) {
+  headers['Authorization'] = `Bearer ${token}`;
+}
+const response = await fetch(`/documents/${fabricID}`, {
+  method: 'GET',
+  headers
+});
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@actions/documentActions.js` at line 101: the Authorization header is being set to `undefined`, which can serialize to the string "undefined"; update the header construction so the `'Authorization'` key is omitted when `token` is falsy rather than set to `undefined` (e.g., build headers conditionally or remove the key when no token is present where the ternary Authorization expression is used).

tests/sensemaker.security.js-83-92 (1)
83-92: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Intentional bypass with warning in place: `verifyMessage` returns true when no signing key is configured.

The implementation (services/sensemaker.js:4881-4883) explicitly logs a warning (`console.warn`) when no signing key is configured, indicating this is deliberate behavior for optional message signing scenarios. However, verify that this design is appropriate for your production usage:

- The warning ensures operators/developers will be alerted when running without a configured signing key
- Consider whether this bypass behavior should exist at all, or if the code should require explicit opt-in for unsigned messages
- Ensure deployment documentation clarifies that `signingKey` must be set in production environments

The existing tests properly validate both the bypass behavior and HMAC verification when a key is present.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/sensemaker.security.js` around lines 83-92: the current `verifyMessage` behavior bypasses validation when `settings.signingKey` is missing; change this to require an explicit opt-in flag to avoid accidental production bypasses: update `Sensemaker.prototype.verifyMessage` to only return true without a signing key if a new `settings.allowUnsignedMessages` (or similar) flag is present and truthy, otherwise fail validation (and keep the `console.warn` or replace it with `processLogger.warn`); update the tests in `tests/sensemaker.security.js` to reflect the new opt-in behavior and update deployment docs to document the new `settings.allowUnsignedMessages` flag and that `signingKey` must be configured in production.

routes/sources/list_sources.js-67-68 (1)
67-68: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Potential null reference if `req.user` is undefined.

Same issue as in `view_source.js`: accessing `req.user.fabric_id` without a guard will throw if the request is unauthenticated.

Proposed fix

```diff
-      if (enhanced.owner == req.user.fabric_id) {
+      if (req.user && enhanced.owner == req.user.fabric_id) {
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/sources/list_sources.js` around lines 67-68: the code assigns `enhanced.can_edit` by comparing `enhanced.owner` to `req.user.fabric_id` without guarding against a missing `req.user`; update the route handler in `list_sources.js` so the comparison uses a safe check (e.g., ensure `req.user` exists or use optional chaining) before setting `enhanced.can_edit` on the `enhanced` object. Ensure the check short-circuits when `req.user` is undefined to prevent a null reference and only sets `enhanced.can_edit = true` when the authenticated user's `fabric_id` matches `enhanced.owner`.

routes/sources/view_source.js-15-15 (1)
15-15: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Potential null reference if `req.user` is undefined.

If an unauthenticated request reaches this handler, accessing `req.user.fabric_id` will throw. Consider adding a guard or ensuring authentication middleware is always applied.

Proposed fix

```diff
-  if (req.user.fabric_id == source.owner) source.can_edit = true;
+  if (req.user && req.user.fabric_id == source.owner) source.can_edit = true;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/sources/view_source.js` at line 15: the code accesses `req.user.fabric_id` directly, which can throw if `req.user` is undefined; guard the access before comparing (e.g., use an existence check or optional chaining) before setting `source.can_edit`, or ensure authentication middleware always populates `req.user`; update the expression that currently reads `req.user.fabric_id == source.owner` to a safe check (e.g., confirm `req.user && req.user.fabric_id` or use `req.user?.fabric_id`) when comparing to `source.owner` and then set `source.can_edit`.

routes/inquiries/create_inquiry.js-38-38 (1)
38-38: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Hardcoded sender address.

`from: 'noreply@localhost'` won't work in production. Consider making this configurable via `this.settings` or a similar mechanism.

Proposed fix

```diff
 await this.email.send({
-  from: 'noreply@localhost',
+  from: this.settings.email?.from || 'noreply@sensemaker.io',
   to: email,
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/inquiries/create_inquiry.js` at line 38: replace the hardcoded sender in the email options (currently `from: 'noreply@localhost'`) with a configurable value sourced from the instance settings (e.g., `this.settings.email.from` or `this.settings.smtp.from`) and provide a sensible fallback (like 'noreply@example.com') so production SMTP works; update where the mail payload is built in the function that sends the inquiry email to read from `this.settings` and ensure any tests or configuration docs reflect the new setting key.

routes/goals/list.js-10-10 (1)
10-10: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`req.query.global` is a string — `"false"` is truthy, so admins can never opt out of global goals via `?global=false`.

All `req.query` values are strings. The condition `global && isAdmin` evaluates `"false" && true` as `"false"` (truthy), so an admin passing `?global=false` still receives global goals. Only the absence of the parameter (`undefined`) correctly suppresses them.

🐛 Proposed fix
- const { global, status, priority, category } = req.query;
+ const { global: globalParam, status, priority, category } = req.query;
+ const global = globalParam === 'true' || globalParam === '1';

(Also avoids shadowing the built-in `global` identifier in Node.js.)

Also applies to: 20-23
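The string-truthiness trap can be reproduced in isolation. A minimal standalone sketch (illustrative names, not project code) comparing a naive check against an explicit parse:

```javascript
// Query-string values arrive as strings, so truthiness checks mislead.
const query = { global: 'false' };

// Naive check: 'false' is a non-empty string, hence truthy.
const naive = query.global ? 'included' : 'excluded';

// Explicit parse: only the literal strings 'true' or '1' opt in.
const parseFlag = (value) => value === 'true' || value === '1';
const parsed = parseFlag(query.global) ? 'included' : 'excluded';

console.log(naive);  // 'included' (the bug)
console.log(parsed); // 'excluded' (the intended behaviour)
```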
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/goals/list.js` at line 10, The destructured variable named global from req.query is a string (e.g. "false") and thus truthy, so change the code to explicitly parse the query value into a boolean (e.g. const includeGlobal = req.query.global === 'true') and then use includeGlobal && isAdmin where the original global && isAdmin was used; also rename the destructured identifier (stop using the name global) to avoid shadowing Node's global. Apply the same explicit parsing/renaming approach for the other req.query fields mentioned on lines 20-23 to ensure correct types are used.

routes/goals/list.js-55-61 (1)
55-61: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`details: error.message` leaks internal error information to callers in production.

Database errors, schema details, or stack fragments embedded in `error.message` can reach API consumers. This same pattern appears in `routes/goals/view.js` line 55.

🛡️ Proposed fix
  return res.status(500).json({
    error: 'Internal server error',
    message: 'Failed to retrieve goals',
-   details: error.message
  });

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/goals/list.js` around lines 55 - 61, The catch blocks in routes/goals/list.js (and similarly in routes/goals/view.js) currently return error.message to API clients; remove this leak by returning a generic error object (e.g., error: 'Internal server error', message: 'Failed to retrieve goals' / 'Failed to retrieve goal') and do not include error.message or stack in the response; instead log the full error server-side using console.error (or your logger) inside the catch (refer to the catch(error) block) so internal details are retained in logs but never sent to the client.

assets/cache.manifest-71-71 (1)
71-71: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`scripts/graphviz.umd.js.new` looks like a temporary rename artifact.

The `.new` extension suggests this was an intermediate file during an update and should not be permanently listed in the cache manifest.

🗑️ Proposed fix
-scripts/graphviz.umd.js.new

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@assets/cache.manifest` at line 71, Remove the temporary artifact entry "scripts/graphviz.umd.js.new" from the cache manifest and either replace it with the correct asset name "scripts/graphviz.umd.js" or remove the line entirely if that asset should not be cached; look for the literal string "scripts/graphviz.umd.js.new" in assets/cache.manifest and update the manifest to reference the proper filename or omit the entry.

routes/goals/create.js-48-55 (1)
48-55: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Same error leakage concern as `update.js`.

Remove `details: error.message` to avoid exposing internal error information to clients.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/goals/create.js` around lines 48 - 55, The catch block in the create-goal route handler is leaking internal error details to clients via the JSON response; remove the "details: error.message" property from the object passed to res.status(500).json and instead keep full error information only in server-side logs (e.g., console.error('[GOALS]', 'Error creating goal:', error)) so the response returns the generic error and message fields without exposing internal messages.

routes/goals/update.js-75-82 (1)
75-82: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Avoid leaking internal error details to clients.

Exposing `error.message` in the response could reveal sensitive implementation details (e.g., SQL errors, stack traces). Consider logging the full error server-side and returning a generic message to clients.

Suggested fix
  } catch (error) {
    console.error('[GOALS]', 'Error updating goal:', error);
    return res.status(500).json({
      error: 'Internal server error',
-     message: 'Failed to update goal',
-     details: error.message
+     message: 'Failed to update goal'
    });
  }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@routes/goals/update.js` around lines 75 - 82, The catch block in routes/goals/update.js currently logs the error but returns error.message to the client; change this to log the full error server-side (keep console.error('[GOALS]', 'Error updating goal:', error) or use the app logger) and remove details: error.message from the res.status(500).json response so the client receives a generic error object (e.g., { error: 'Internal server error', message: 'Failed to update goal' }) while the full error remains in server logs; update the catch around the update handler where the `error` variable and `res.status(500).json(...)` are used.

reducers/chatReducer.js-88-94 (1)
88-94: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Reset the buffer when streamed chunks switch to a new message id.

Right now every `CHAT_STREAM_CHUNK` appends to the existing buffer. If a new stream starts before a reset is dispatched, the next assistant reply gets merged into the previous preview.

Suggested fix
  case CHAT_STREAM_CHUNK: {
    const { content = '', id = null } = action.payload || {};
+   const isNewStream = id != null && id !== state.streamMessageId;
    return {
      ...state,
-     streamBuffer: state.streamBuffer + (typeof content === 'string' ? content : ''),
+     streamBuffer: (isNewStream ? '' : state.streamBuffer) + (typeof content === 'string' ? content : ''),
      streamMessageId: id != null ? id : state.streamMessageId
    };
  }

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@reducers/chatReducer.js` around lines 88 - 94, In the CHAT_STREAM_CHUNK branch of the chatReducer, the reducer currently always appends incoming chunk content to state.streamBuffer; change it so that when the incoming payload id differs from state.streamMessageId (i.e., a new streamed message began) you reset streamBuffer to the incoming content (not append), otherwise continue appending; also still update streamMessageId to the new id when provided. This logic should be implemented inside the case handling for CHAT_STREAM_CHUNK, referencing streamBuffer and streamMessageId in the reducer.

tests/trainer.integration.js-29-34 (1)
29-34: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Test assertion may fail for async methods.

The assertion `trainer.start.constructor.name === 'Function'` will fail because `start()` is an `async` method, which means its constructor name is `'AsyncFunction'`, not `'Function'`.

  it('exposes async lifecycle methods for integration startup', function () {
    const trainer = new Trainer(settings);
    assert.strictEqual(typeof trainer.start, 'function');
    assert.strictEqual(typeof trainer.stop, 'function');
-   assert.strictEqual(trainer.start.constructor.name, 'Function');
+   assert.strictEqual(trainer.start.constructor.name, 'AsyncFunction');
  });

Alternatively, if the intent is just to verify these are callable, remove the constructor check since `typeof ... === 'function'` already covers that.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/trainer.integration.js` around lines 29 - 34, The test incorrectly asserts the constructor name of async methods; change the assertion in the test around Trainer.start/Trainer.stop to either remove the constructor name check entirely (since typeof trainer.start === 'function' already verifies callability) or update it to expect 'AsyncFunction' for async methods; locate the assertions that reference trainer.start.constructor.name and replace them accordingly to reflect that start is async (or drop that check).

API.md-13-15 (1)
13-15: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Minor grammar issue in Pool class description.
The text "to reliably queries" is grammatically incorrect. It should be "to reliably handle queries" or "for reliable queries".
  <dt><a href="#Pool">Pool</a></dt>
- <dd><p>Manage a pool of member Agents to reliably queries.</p>
+ <dd><p>Manage a pool of member Agents to reliably handle queries.</p>
  </dd>

Note: The same fix should be applied at line 178.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@API.md` around lines 13 - 15, Update the Pool description text: replace the incorrect phrase "to reliably queries" with a correct phrasing such as "to reliably handle queries" (or "for reliable queries") in the API.md entry for the Pool class (the <dt><a href="#Pool">Pool</a></dt> / corresponding <dd> block) and apply the same correction to the duplicate occurrence later in the file (the second Pool description around line 178).
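Circling back to the trainer test finding above: the `Function` vs `AsyncFunction` distinction is easy to confirm in plain Node (a standalone sketch, not the project's test file):

```javascript
// Async functions are constructed by AsyncFunction, not Function,
// while typeof reports 'function' for both.
async function start () {}
function stop () {}

console.log(typeof start);           // 'function'
console.log(start.constructor.name); // 'AsyncFunction'
console.log(stop.constructor.name);  // 'Function'
```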
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a97334e3-d342-4751-8126-baf6e769b0e3
⛔ Files ignored due to path filters (13)
- `assets/bundles/browser.min.js` is excluded by `!**/*.min.js`
- `assets/semantic.min.js` is excluded by `!**/*.min.js`
- `assets/themes/basic/assets/fonts/icons.woff` is excluded by `!**/*.woff`
- `assets/themes/default/assets/fonts/brand-icons.woff` is excluded by `!**/*.woff`
- `assets/themes/default/assets/fonts/brand-icons.woff2` is excluded by `!**/*.woff2`
- `assets/themes/default/assets/fonts/icons.woff` is excluded by `!**/*.woff`
- `assets/themes/default/assets/fonts/icons.woff2` is excluded by `!**/*.woff2`
- `assets/themes/default/assets/fonts/outline-icons.woff` is excluded by `!**/*.woff`
- `assets/themes/default/assets/fonts/outline-icons.woff2` is excluded by `!**/*.woff2`
- `assets/themes/material/assets/fonts/icons.woff` is excluded by `!**/*.woff`
- `assets/themes/material/assets/fonts/icons.woff2` is excluded by `!**/*.woff2`
- `package-lock.json` is excluded by `!**/package-lock.json`
- `reports/install.log` is excluded by `!**/*.log`
📒 Files selected for processing (150)
.gitignore, .mocharc.cjs, API.md, DEVELOPERS.md, Dockerfile, INSTALL.md, Makefile, README.md, actions/apiActions.js, actions/authActions.js, actions/chatActions.js, actions/discordActions.js, actions/documentActions.js, actions/index.js, actions/peerActions.js, actions/taskActions.js, assets/cache.manifest, assets/index.html, assets/semantic.css, assets/semantic.js, assets/semantic.min.css, components/AdminSettings.js, components/AdminSettingsUsers.js, components/AlertsHome.js, components/BenchmarkManager.js, components/Bridge.js, components/ChatBox.js, components/Dashboard.js, components/DocumentView.js, components/FeaturesHome.js, components/HeaderBar.js, components/Home.js, components/InvitationView.js, components/LoginForm.js, components/NetworkHome.js, components/QueryForm.js, components/SensemakerUI.js, components/Settings.js, components/SignUpForm.js, components/SourceHome.js, components/SourceView.js, components/TopBar.js, components/UserProfileSection.js, components/services/FabricHome.js, components/services/discord/DiscordGuild.js, components/services/discord/DiscordGuildList.js, components/tabs/admin/memories.js, constants.js, contracts/beacon.js, docker-compose.yml, docker-rebuild.sh, functions/handleSensemakerLog.js, knexfile.js, migrations/20250721000000_add_source_blob_history.js, package.json, reducers/apiReducer.js, reducers/authReducer.js, reducers/chatReducer.js, reducers/discordReducer.js, reducers/peersReducer.js, reports/TODO.txt, routes/account/reset_password.js, routes/announcements/latest_announcement.js, routes/conversations/get_conversations_by_id.js, routes/conversations/list_conversations.js, routes/conversations/view_conversation.js, routes/documents/get_commit.js, routes/documents/get_document_by_id.js, routes/documents/search_documents.js, routes/documents/view_document.js, routes/goals/create.js, routes/goals/delete.js, routes/goals/index.js, routes/goals/list.js, routes/goals/update.js, routes/goals/view.js, routes/help/get_conversations.js, routes/help/get_conversations_adm.js, routes/help/get_messages.js, routes/help/send_message.js, routes/help/set_messages_read.js, routes/index.js, routes/inquiries/create_inquiry.js, routes/inquiries/list_inquiries.js, routes/invitations/create_invitation.js, routes/invitations/get_invitations.js, routes/invitations/resend_invitation.js, routes/memories/list_memories.js, routes/memories/view_memory.js, routes/messages/create_completion.js, routes/messages/create_message.js, routes/messages/list_messages.js, routes/messages/regenerate_message.js, routes/models/list_models.js, routes/peers/create_peer.js, routes/peers/list_peers.js, routes/redis/clear_queue.js, routes/services/discord/list_channels.js, routes/services/discord/list_guilds.js, routes/services/discord/list_users.js, routes/services/discord/view_channel.js, routes/services/discord/view_guild.js, routes/services/discord/view_user.js, routes/sessions/create_session.js, routes/sessions/current_session.js, routes/sources/list_sources.js, routes/sources/view_source.js, routes/users/create_full_user.js, routes/users/create_user.js, routes/users/edit_email.js, routes/users/edit_username.js, routes/users/list_users.js, scripts/bootstrap-dev-admin-password.js, scripts/browser.js, scripts/build.js, scripts/crypto-shim.js, scripts/node.js, services/activeWorkerQueue.js, services/chatStreamBridge.js, services/email.js, services/fabric.js, services/sensemaker.js, setup-first-time.sh, start.sh, tests/clock.unit.js, tests/functions/IngestFile.unit.js, tests/functions/createInvitationEmailContent.unit.js, tests/functions/createPasswordResetEmailContent.unit.js, tests/functions/createTimeoutPromise.unit.js, tests/functions/formatBitcoin.unit.js, tests/functions/formatDate.unit.js, tests/functions/toMySQLDatetime.unit.js, tests/functions/toRelativeTime.unit.js, tests/functions/tokenize.unit.js, tests/functions/truncateMiddle.unit.js, tests/getDocumentsById.test.js, tests/ollama.test.js, tests/pool.test.js, tests/sandbox.integration.js, tests/sensemaker.integration.js, tests/sensemaker.security.js, tests/sensemaker.unit.js, tests/trainer.integration.js, types/agent.js, types/bundler.js, types/learner.js, types/pool.js, types/queue.js, types/site.js, types/trainer.js
💤 Files with no reviewable changes (8)
- routes/help/get_messages.js
- routes/help/send_message.js
- routes/help/get_conversations.js
- routes/help/get_conversations_adm.js
- routes/documents/get_document_by_id.js
- tests/getDocumentsById.test.js
- scripts/crypto-shim.js
- routes/help/set_messages_read.js
@@ -93,7 +93,24 @@ const fetchDocument = (fabricID) => {
   dispatch(fetchDocumentRequest());
   const { token } = getState().auth.token;
Incorrect destructuring breaks token extraction.
getState().auth.token is the token string itself, not an object. Destructuring { token } from a string yields undefined, so the Authorization header will always be malformed (Bearer undefined).
🐛 Proposed fix
- const { token } = getState().auth.token;
+ const { token } = getState().auth;

This matches the pattern used in other thunks in this file (e.g., fetchDocuments at line 81, uploadDocument at line 125).
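The failure mode is reproducible in isolation. A standalone sketch (hypothetical names) showing why destructuring from the string breaks the header:

```javascript
// Destructuring a property out of a string yields undefined:
// strings only expose length and numeric index properties.
const auth = { token: 'abc123' };

const { token: wrong } = auth.token; // destructures the string itself
const { token: right } = auth;       // destructures the containing object

console.log(`Bearer ${wrong}`); // 'Bearer undefined' (the malformed header)
console.log(`Bearer ${right}`); // 'Bearer abc123'
```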
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- const { token } = getState().auth.token;
+ const { token } = getState().auth;
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@actions/documentActions.js` at line 94, The destructuring on
getState().auth.token is wrong — replace the incorrect "const { token } =
getState().auth.token" with a direct extraction "const token =
getState().auth.token" in the thunk so the token string is used when building
the Authorization header; locate the same pattern in this action (compare with
fetchDocuments and uploadDocument thunks) and update any Authorization usage to
reference the corrected token variable.
000047.ldb
000050.log
CURRENT
LOCK
LOG
LOG.old
MANIFEST-000049
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether these names match actual asset files or server-side DB files.
fd -e ldb -e log --full-path | head -20
fd -H "^(CURRENT|LOCK|LOG|LOG\.old|MANIFEST-.*)$" | head -20

Repository: FabricLabs/sensemaker
Length of output: 47
🏁 Script executed:
cat -n assets/cache.manifest

Repository: FabricLabs/sensemaker
Length of output: 9192
🏁 Script executed:
# Check if this file is version-controlled or generated
git log --oneline -n 5 -- assets/cache.manifest 2>/dev/null || echo "No git history"

Repository: FabricLabs/sensemaker
Length of output: 128
🏁 Script executed:
# Search for any build scripts that might generate or reference this manifest
rg "cache.manifest" --type js --type json --type py -B 2 -A 2

Repository: FabricLabs/sensemaker
Length of output: 1652
🏁 Script executed:
# Check if assets directory exists and list some files
ls -la assets/ 2>/dev/null | head -30

Repository: FabricLabs/sensemaker
Length of output: 1265
🏁 Script executed:
# Check .gitignore to see if assets or LevelDB files are ignored
cat .gitignore | grep -E "(assets|ldb|MANIFEST)"

Repository: FabricLabs/sensemaker
Length of output: 47
🏁 Script executed:
# Verify if server.js is the Node.js server entry point
head -20 server.js 2>/dev/null || echo "server.js not found in root"

Repository: FabricLabs/sensemaker
Length of output: 93
🏁 Script executed:
# Check if server.js exists in assets/
ls -la assets/server.js 2>/dev/null || echo "server.js not in assets"

Repository: FabricLabs/sensemaker
Length of output: 89
🏁 Script executed:
# Check git status of cache.manifest and when it was last modified
git log --oneline -n 10 -- assets/cache.manifest
git diff HEAD -- assets/cache.manifest | head -50

Repository: FabricLabs/sensemaker
Length of output: 128
🏁 Script executed:
# Check if these entries appear anywhere else in the codebase (maybe comments or scripts)
rg "000047\.ldb|000050\.log|MANIFEST-000049"

Repository: FabricLabs/sensemaker
Length of output: 169
🏁 Script executed:
# Verify actual contents of assets/ more thoroughly
find assets -type f -name "*" | sort

Repository: FabricLabs/sensemaker
Length of output: 6614
🏁 Script executed:
# Check the full git diff for this commit to see what changed
git show a77a844 --stat

Repository: FabricLabs/sensemaker
Length of output: 50377
🏁 Script executed:
# Get the detailed diff for cache.manifest specifically
git show a77a844 -- assets/cache.manifest | head -100

Repository: FabricLabs/sensemaker
Length of output: 3686
Remove auto-generated cache.manifest entries for server-side files and build artifacts.
The cache.manifest file includes entries that should not be served to browsers:
- `server.js` (line 86): Node.js server entry point — exposing server code and environment context to clients is a security risk, especially given the bundler fix in this PR already addresses `process.env` leakage.
- `scripts/graphviz.umd.js.new` (line 71): temporary artifact with a `.new` extension — rename artifacts should be cleaned up before manifest generation.
- `000047.ldb`, `000050.log`, `CURRENT`, `LOCK`, `LOG`, `LOG.old`, `MANIFEST-000049` (lines 5-11): LevelDB internal database files that should not exist in the assets directory or be cached by browsers. These are build artifacts or misconfigured paths.
Either exclude these files from the assets directory before generating the manifest, or add a filter in generateCacheManifest() to skip them.
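A minimal sketch of such a filter. The `generateCacheManifest` name comes from the finding above; the deny patterns here are assumptions drawn from the offending entries, not the project's actual build code:

```javascript
// Sketch: drop server-side files and build artifacts from a manifest list.
const DENYLIST = [
  /^server\.js$/,                  // server entry point, never ship to browsers
  /\.new$/,                        // rename artifacts like graphviz.umd.js.new
  /^\d+\.(ldb|log)$/,              // LevelDB data and log files
  /^(CURRENT|LOCK|LOG|LOG\.old)$/, // LevelDB bookkeeping files
  /^MANIFEST-\d+$/                 // LevelDB manifest files
];

function filterManifestEntries (entries) {
  return entries.filter((entry) => {
    const name = entry.split('/').pop(); // match on the basename only
    return !DENYLIST.some((pattern) => pattern.test(name));
  });
}

const input = [
  'index.html',
  'server.js',
  'scripts/graphviz.umd.js.new',
  '000047.ldb',
  'CURRENT',
  'MANIFEST-000049',
  'semantic.css'
];
console.log(filterManifestEntries(input)); // ['index.html', 'semantic.css']
```

Applying a filter like this inside the manifest generator means stray artifacts never reach the manifest, regardless of what lands in the assets directory.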
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@assets/cache.manifest` around lines 5 - 11, The cache.manifest contains
server-side and build-artifact entries that must be excluded; update the
manifest generation so generateCacheManifest() filters out server.js, any *.new
artifacts (e.g., scripts/graphviz.umd.js.new), and LevelDB/temp files
(000047.ldb, 000050.log, CURRENT, LOCK, LOG, LOG.old, MANIFEST-000049) before
writing. Implement the filter within generateCacheManifest() to ignore filenames
matching those exact names or patterns (.ldb, .log, CURRENT, LOCK, MANIFEST-*)
and *.new, or ensure the build step removes/relocates these artifacts from the
assets list prior to manifest creation.
const AdminServicesTab = require('./tabs/admin/services');
const AdminSettingsTab = require('./tabs/admin/settings');
const AdminAgentsTab = require('./tabs/admin/agents');
const AdminDiscordTab = require('./tabs/admin/AdminDiscordTab');
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the actual file name under tabs/admin/ to confirm correct casing
fd --type f --extension js . --search-path components/tabs/admin

Repository: FabricLabs/sensemaker
Length of output: 341
Critical: Import references non-existent file — will fail at runtime
The file components/tabs/admin/AdminDiscordTab.js does not exist. The directory components/tabs/admin/ contains only lowercase-named files: agents.js, announcements.js, conversations.js, design.js, growth.js, memories.js, services.js, settings.js. This require statement will throw a MODULE_NOT_FOUND error when the code runs.
Create the missing file as components/tabs/admin/discord.js (following the lowercase naming convention of existing tabs) or adjust the import path to match the actual filename.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@components/AdminSettings.js` at line 17, The require for AdminDiscordTab in
components/AdminSettings.js references a non-existent module; either create a
new tab module named discord.js under the existing tabs directory (matching
lowercase naming) implementing the AdminDiscordTab export, or change the import
in AdminSettings.js to require('./tabs/admin/discord') and update any references
to AdminDiscordTab accordingly so the identifier points to the correct existing
file/module.
"@fabric/core": "file:../fabric-clean",
"@fabric/discord": "file:../fabric-discord",
"@fabric/http": "file:../fabric-http",
"@fabric/hub": "file:../hub.fabric.pub",
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
node <<'NODE'
const fs = require('fs');
const path = require('path');
const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
const refs = [];
for (const [name, spec] of Object.entries(pkg.dependencies || {})) {
if (typeof spec === 'string' && spec.startsWith('file:')) {
refs.push({ section: 'dependencies', name, spec });
}
}
for (const [name, spec] of Object.entries(pkg.overrides || {})) {
if (typeof spec === 'string' && spec.startsWith('file:')) {
refs.push({ section: 'overrides', name, spec });
}
}
for (const ref of refs) {
const resolved = path.resolve(process.cwd(), ref.spec.slice(5));
const exists = fs.existsSync(resolved);
console.log(`${ref.section} ${ref.name} -> ${ref.spec} :: ${exists ? 'exists' : 'MISSING'} :: ${resolved}`);
}
NODE

Repository: FabricLabs/sensemaker
Length of output: 530
🏁 Script executed:
sed -n '231,232p' package.json

Repository: FabricLabs/sensemaker
Length of output: 125
These local file:../... dependencies make installs non-portable.
All four file: references are missing in a clean checkout—npm install will fail immediately before any build or test. Resolve by either committing the required sibling packages to the repository, publishing them to npm and using version constraints, or adopting a monorepo/workspace layout available in CI environments.
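If the workspace route is taken, the root manifest might look roughly like this (a sketch; the package directory names are assumptions, not the repository's actual layout):

```json
{
  "name": "sensemaker-workspace",
  "private": true,
  "workspaces": [
    "packages/fabric-core",
    "packages/fabric-discord",
    "packages/fabric-http",
    "packages/fabric-hub",
    "packages/sensemaker"
  ]
}
```

With npm workspaces, a single `npm install` at the root links the sibling packages locally, so the non-portable `file:../...` paths are no longer needed.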
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@package.json` around lines 71 - 74, The package.json currently lists local
file: dependencies for `@fabric/core`, `@fabric/discord`, `@fabric/http` and
`@fabric/hub` which break installs on clean checkouts; fix by replacing these
file:../... entries with one of the following: commit the sibling packages into
the repo so those paths exist, publish those packages and reference them by
released version (e.g. "@fabric/core": "1.x.y"), or convert the repo into a
workspace/monorepo (add a workspace config and reference the packages as
workspace packages) so CI and external installs can resolve `@fabric/core`,
`@fabric/discord`, `@fabric/http` and `@fabric/hub` reliably. Ensure package.json is
updated accordingly and update any CI scripts to support the chosen layout.
if (!isAdmin) {
  query = query.andWhere(function() {
    // Allow access if user is creator/owner (if authenticated), OR if document is source-created (creator and owner are null)
    if (req.user.id) {
      this.where('creator', '=', req.user.id)
        .orWhere('owner', '=', req.user.id);
    }
  });
}
Access control bypass when req.user.id is falsy.
If req.user.id is undefined, null, or 0, the andWhere callback adds no constraints, causing the query to return any non-deleted document matching the fabricID—effectively bypassing the creator/owner restriction for unauthenticated or malformed user states.
🐛 Proposed fix
// If not admin, restrict to documents where user is creator/owner or document is source-created
if (!isAdmin) {
+ if (!req.user || !req.user.id) {
+ return res.status(401).send({ status: 'error', message: 'Unauthorized.' });
+ }
query = query.andWhere(function() {
- // Allow access if user is creator/owner (if authenticated), OR if document is source-created (creator and owner are null)
- if (req.user.id) {
- this.where('creator', '=', req.user.id)
- .orWhere('owner', '=', req.user.id);
- }
+ this.where('creator', '=', req.user.id)
+ .orWhere('owner', '=', req.user.id);
});
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if (!isAdmin) {
-   query = query.andWhere(function() {
-     // Allow access if user is creator/owner (if authenticated), OR if document is source-created (creator and owner are null)
-     if (req.user.id) {
-       this.where('creator', '=', req.user.id)
-         .orWhere('owner', '=', req.user.id);
-     }
-   });
- }
+ if (!isAdmin) {
+   if (!req.user || !req.user.id) {
+     return res.status(401).send({ status: 'error', message: 'Unauthorized.' });
+   }
+   query = query.andWhere(function() {
+     this.where('creator', '=', req.user.id)
+       .orWhere('owner', '=', req.user.id);
+   });
+ }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@routes/documents/view_document.js` around lines 22 - 30, The andWhere
callback on query (inside the isAdmin check) currently skips adding any
constraints when req.user.id is falsy, allowing access bypass; update the
callback so it always adds restrictions: if req.user && req.user.id add the
existing this.where('creator', '=', req.user.id).orWhere('owner', '=',
req.user.id) but also include an OR branch to allow source-created docs (creator
IS NULL AND owner IS NULL); if req.user or req.user.id is missing, add a
constraint that only allows source-created docs (where creator IS NULL AND owner
IS NULL). Ensure you reference the same query.andWhere callback, req.user.id
check, and the this.where('creator'...)/orWhere('owner'...) clauses when making
the change.
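The shape of this bypass, stripped of Knex specifics, can be shown with an in-memory stand-in for the query builder (a standalone sketch with illustrative names):

```javascript
// Stand-in for the query builder: collect predicates, then filter rows.
const docs = [
  { id: 1, creator: 7, owner: 7 },
  { id: 2, creator: 9, owner: 9 },
  { id: 3, creator: null, owner: null }
];

function visibleDocs (userId) {
  const predicates = [];
  // Mirrors the andWhere callback: when userId is falsy, nothing is added,
  // so the filter below passes every row.
  if (userId) {
    predicates.push((doc) => doc.creator === userId || doc.owner === userId);
  }
  return docs.filter((doc) => predicates.every((p) => p(doc)));
}

console.log(visibleDocs(7).map((d) => d.id));         // [1]
console.log(visibleDocs(undefined).map((d) => d.id)); // [1, 2, 3] (the bypass)
```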
constructor (hub) {
  this.hub = hub;
  /** @type {object[]} */
  this._memory = [];
  this.busy = false;
  this.lastFinishedAt = Date.now();
  /** @type {ReturnType<typeof setInterval>|null} */
  this._timer = null;
}
Serialize _tick() before the first await.
setInterval() can start a second _tick() while the first one is still suspended in _dequeue() or _pickOldestBackgroundTask(). Because busy is only flipped after those awaits, two ticks can pop/process work concurrently on the same node. That breaks the queue's single-worker guarantee.
Suggested fix
constructor (hub) {
this.hub = hub;
/** `@type` {object[]} */
this._memory = [];
this.busy = false;
this.lastFinishedAt = Date.now();
+ this._ticking = false;
/** `@type` {ReturnType<typeof setInterval>|null} */
this._timer = null;
}
@@
async _tick () {
- if (this.busy) return;
+ if (this._ticking || this.busy) return;
+ this._ticking = true;
+
+ try {
+ if (this.busy) return;
- const job = await this._dequeue();
- if (!job) {
- const depth = await this.queueDepth();
- if (depth === 0 && Date.now() - this.lastFinishedAt >= IDLE_MS) {
- const bg = await this.hub._pickOldestBackgroundTask();
- if (bg) {
- this.busy = true;
- try {
- await this.hub._processBackgroundTaskJob(bg);
- } catch (e) {
- console.error('[SENSEMAKER:CORE]', '[WORKER]', 'Background task failed:', e);
- } finally {
- this.busy = false;
- this.lastFinishedAt = Date.now();
- }
- }
- }
- return;
- }
+ const job = await this._dequeue();
+ if (!job) {
+ const depth = await this.queueDepth();
+ if (depth === 0 && Date.now() - this.lastFinishedAt >= IDLE_MS) {
+ const bg = await this.hub._pickOldestBackgroundTask();
+ if (bg) {
+ this.busy = true;
+ try {
+ await this.hub._processBackgroundTaskJob(bg);
+ } catch (e) {
+ console.error('[SENSEMAKER:CORE]', '[WORKER]', 'Background task failed:', e);
+ } finally {
+ this.busy = false;
+ this.lastFinishedAt = Date.now();
+ }
+ }
+ }
+ return;
+ }
- this.busy = true;
- try {
- if (job.type === 'conversation_turn') {
- await this.hub._processConversationTurnJob(job);
- } else {
- console.warn('[SENSEMAKER:CORE]', '[WORKER]', 'Unknown job type:', job.type);
- }
- } catch (e) {
- console.error('[SENSEMAKER:CORE]', '[WORKER]', 'Job failed:', e);
- if (job.response_message_id != null) {
- await this.hub.db('messages').where({ id: job.response_message_id }).update({
- status: 'error',
- content: `Error: ${e.message || String(e)}`,
- updated_at: this.hub.db.fn.now()
- }).catch(() => {});
- }
- } finally {
- this.busy = false;
- this.lastFinishedAt = Date.now();
- }
+ this.busy = true;
+ try {
+ if (job.type === 'conversation_turn') {
+ await this.hub._processConversationTurnJob(job);
+ } else {
+ console.warn('[SENSEMAKER:CORE]', '[WORKER]', 'Unknown job type:', job.type);
+ }
+ } catch (e) {
+ console.error('[SENSEMAKER:CORE]', '[WORKER]', 'Job failed:', e);
+ if (job.response_message_id != null) {
+ await this.hub.db('messages').where({ id: job.response_message_id }).update({
+ status: 'error',
+ content: `Error: ${e.message || String(e)}`,
+ updated_at: this.hub.db.fn.now()
+ }).catch(() => {});
+ }
+ } finally {
+ this.busy = false;
+ this.lastFinishedAt = Date.now();
+ }
+ } finally {
+ this._ticking = false;
+ }
  }

Also applies to: 107-149
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@services/activeWorkerQueue.js` around lines 21 - 29: the _tick() handler can
be re-entered because setInterval may invoke it again before the first
invocation reaches its first await. Flip the "busy" guard synchronously at
the start of _tick() (before any awaits) to serialize execution: ensure _tick()
immediately sets this.busy = true, and clear this.busy = false only after work
completes (including all awaits inside _dequeue() and
_pickOldestBackgroundTask()). Update any early-return paths in _tick(),
_dequeue(), and _pickOldestBackgroundTask() to respect that synchronous guard so
only one tick runs at a time.
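The guard described above can be sketched in isolation. The `_ticking` flag and `_tick()` name follow the suggested diff; the simulated timer stands in for the real dequeue and job processing, so this is an illustrative model rather than the actual queue class:

```javascript
// Minimal sketch of the synchronous re-entrancy guard: the flag is flipped
// before the first await, so a second timer firing cannot overlap the first.
class TickGuard {
  constructor () {
    this._ticking = false;
    this.runs = 0;     // times the body actually executed
    this.overlaps = 0; // times a tick was skipped because one was in flight
  }

  async _tick () {
    // Check-and-set happens synchronously, before any await.
    if (this._ticking) {
      this.overlaps++;
      return;
    }
    this._ticking = true;
    try {
      this.runs++;
      // Simulated asynchronous job work.
      await new Promise((resolve) => setTimeout(resolve, 10));
    } finally {
      this._ticking = false;
    }
  }
}
```

Calling `_tick()` twice back-to-back runs the body once and counts one skipped overlap, which is exactly the serialization the review asks for.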
Resolved conflicts in favour of the feature/benchmark streaming/queue implementation for all business-logic files. Kept upstream additions:

- UPDATE_MESSAGE action / reducer case (complementary to CHAT_STREAM_*)
- knexfile.js pool config alongside existing seeds config
- docker-compose DEBUG env var alongside BITCOIN_REGTEST
- Bridge.js reconnect logic alongside heartbeat cleanup
- agent.js STOPPING/STOPPED status tracking
Security review found high-confidence issues introduced in the changed document read path. The main risk is an authorization bypass for unauthenticated requests, plus credential leakage through new debug logging. I did not find prior unresolved threads authored by this automation to revalidate; cleanup_previous is enabled so only this assessment remains active for this automation.
Sent by Cursor Automation: Find vulnerabilities
if (!isAdmin) {
  query = query.andWhere(function() {
    // Allow access if user is creator/owner (if authenticated), OR if document is source-created (creator and owner are null)
    if (req.user.id) {
High: _userMiddleware initializes unauthenticated callers as req.user.id = null, and this callback adds no predicate in that case. The query then only filters by fabric_id and status, so an attacker who knows or guesses a document ID can GET /documents/:fabricID and read another user's document content. Return 401 when there is no user ID, or add an explicit deny predicate before querying.
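One way to make the fix testable is to restate the intended access rule as a pure predicate, separate from the knex query. The field names (`creator`, `owner`) follow the quoted route code; the helper itself is hypothetical, and the key change is that a missing user ID now falls through to an explicit deny rather than skipping the check:

```javascript
// Pure-predicate restatement of the intended document access rule.
// Unauthenticated callers may only read source-created documents
// (creator and owner both null); they never get a no-op check.
function canReadDocument (doc, user, isAdmin) {
  if (isAdmin) return true;
  const sourceCreated = doc.creator == null && doc.owner == null;
  if (!user || user.id == null) return sourceCreated; // explicit deny path
  return doc.creator === user.id || doc.owner === user.id || sourceCreated;
}
```

With a predicate like this, the bypass in the finding becomes a one-line unit test: a document owned by another user must return `false` for an unauthenticated caller.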
if (!isAdmin) {
  query = query.andWhere(function () {
    // Allow access if user is creator/owner (if authenticated), OR if document is source-created (creator and owner are null)
    if (req.user.id) {
High: this has the same empty-authz-predicate bypass as view_document. For unauthenticated requests req.user.id is null, so the document access check is reduced to fabric_id + non-deleted status before returning commit content. Require authentication or make the non-admin branch deny when no user ID is present.
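The "require authentication" option from both findings can also be enforced once, ahead of the route, with an Express-style guard. The `requireUser` name is hypothetical, and the `req.user` shape follows the `_userMiddleware` behavior described above:

```javascript
// Express-style middleware: reject the request before any query runs
// when _userMiddleware left req.user.id as null.
function requireUser (req, res, next) {
  if (!req.user || req.user.id == null) {
    return res.status(401).json({
      status: 'error',
      message: 'Authentication required.'
    });
  }
  return next();
}
```

Mounted before the document and commit-content handlers, this removes the empty-predicate branch entirely: unauthenticated requests never reach the query builder.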
console.debug('[SENSEMAKER]', '[HTTP]', 'Authorization header:', req.headers.authorization);
console.debug('[SENSEMAKER]', '[HTTP]', 'Cookie:', req.headers.cookie);
Medium: these debug logs write the bearer token and full Cookie header for every document read. Anyone with access to application or centralized logs can recover live session credentials and impersonate users. Remove these logs or redact credential header values before logging.
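If the debug logs are worth keeping, the redaction option can be sketched as a small helper run over `req.headers` before logging. The header names are the ones flagged above; the helper and its placeholder string are assumptions, not existing project code:

```javascript
// Replace credential-bearing header values with a placeholder before logging.
const SENSITIVE_HEADERS = ['authorization', 'cookie', 'set-cookie'];

function redactHeaders (headers) {
  const out = {};
  for (const [name, value] of Object.entries(headers || {})) {
    out[name] = SENSITIVE_HEADERS.includes(name.toLowerCase())
      ? '[REDACTED]'
      : value;
  }
  return out;
}
```

The route would then log `redactHeaders(req.headers)` instead of the raw `authorization` and `cookie` values, so centralized logs no longer carry live session credentials.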
Import latest Fabric changes.
Summary by CodeRabbit

- New Features
- Bug Fixes
- Documentation
- Configuration