Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.

Can you complete the CLA?
The brush hover outline (circle where the mouse pointer is) seems to go away in some cases when changing the zoom level. |
Force-pushed from 2d5359d to 9d15526 (Compare)
- Introduced a new dummy `MultiscaleVolumeChunkSource`.
- Added `VoxelAnnotationRenderLayer` for voxel annotation rendering.
- Implemented `VoxUserLayer` with dummy data source and rendering.
- Added tools and logs for voxel layer interactions and debugging.
- Documented voxel annotation specification and implementation details.

- Added a backend `VoxDummyChunkSource` that generates a checkerboard pattern for voxel annotations.
- Implemented frontend `VoxDummyChunkSource` with RPC pairing to the backend.
- Updated documentation with details on chunk source architecture and implementation.
…t seems there is some fighting.
…s to corrupt the chunk after the usage of the tool. Added a front-end buffer which is the only drawing storage for now. Added user settings to set the voxel_annotation layer scale and bounds. Added a second empty source to DummyMultiscaleVolumeChunkSource to prevent crashes when zoomed out too much
…lobal one (there was a missing conversion); add a primitive brush tool
…umeChunkSource and update related imports
…panded brush settings
…ation, and improved backend edit handling
…map options and UI settings
…r remote workflows, label creation, and new drawing tools
…table source is present
…ization and add benchmarks to prove the performance gain
…is undefined, added a benchmark to see the performance gain
…allocation within the loops and breaking down the maths
…operly calculate the stamina
…t offset based on brush radius
use BackendVoxelAccessor for the downsampling, keep the created BackendVoxelAccessor in cache, and set a max cache size in BackendVoxelAccessor.
…E_DISPLYING_3_DISKS as the SPHERE), optimize the frontend brush by flattening vectors
…l when brush tool is active
…rently 50ms) and synced; this means that it will temporarily override pixels of chunks not loaded in memory. This feature may be discussed, as the artifacts caused may be judged too strong.
… call per stroke to reduce the amount of applyLocalEdits calls
… and fix the integration tests
Update on the latest work:

Using a dataset with a single resolution, we can use the voxel annotation without the downsampling, reducing the complexity. Here is an example.
I just wanted to say this feature is awesome and has been a dream feature for many of us! I think it would be very useful, even if no more changes were made. Perhaps if it is in flux, it could be added as an experimental feature, similar to how Volume Rendering was introduced? For example, we have made a tool that uses Python Neuroglancer to do real-time, on-the-fly inference of datasets as one browses around the dataset: https://github.com/janelia-cellmap/cellmap-flow. Using this voxel annotation branch, we have also added the ability to do human-in-the-loop finetuning of our models, akin to ilastik (see below). In the movie, we are doing live preprocessing, inference and postprocessing of one of our models that is doing poorly on nucleus segmentations. We then do finetuning using voxel annotations and get better segmentation results, all served up on the fly. If this was merged into main, it would make maintenance on our end much more straightforward.

CellmapFlowLowRes.mp4
Cool to see the feature already used! I completely agree with you. I've been a bit quiet over the last month since I think all the requested changes and feedback have been addressed, and I would avoid extending the scope of this PR further. Getting this merged as an experimental feature would be fantastic and would make it much easier for everyone to start building on it and giving feedback.
@briossant I've been meaning to spend some time to review the code, it's substantial! I'll try to get to this soon |
seankmartin left a comment:

Thanks for all the work on this @briossant! I'm sorry I'm late to the party in offering any feedback. This PR was (and still is) very extensive, and I was struggling to get around to looking at it in detail beyond trying the demo a few times to get a feel for the functionality (thank you for making the demo).
Generally it looks pretty good to me as an experimental feature. Very happy to chat more on any of the feedback here, and thanks again
| The first time you attempt a drawing operation (like a brush stroke) after enabling writing, a confirmation dialog will appear. Note that this initial operation will be canceled; you can resume drawing once you have confirmed.

| .. note::
|    For the segmentation layer, it is recommended to deactivate the Highlight on hover option under the Render tab.
Might be worth mentioning why it is recommended
| - Amazon S3 or any S3 compatible storage.

| **Data Format**:
Further restrictions would be nice to list here: 2D volumes not supported, Float32 data not supported, and anything else you are aware of.
| Seg Picker
| ~~~~~~~~~~

| The Seg Picker tool allows you to adopt the voxel value at the current mouse
From what I was trying, this tool also picks up image values; is that correct? If so, great! But it could be renamed to just "picker tool".
| painting performance when erasing.
| - **Undo / Redo**: Revert or re-apply recent changes.
| - **Paint Value**: Manually specify the segment ID or intensity value to paint.
| - **New Random Value**: Generates a new random segment ID.

| **Storage**:

| - Amazon S3 or any S3 compatible storage.
I think that S3 storage will need new CORS permissions for PUT/POST/DELETE methods. I think that's worth mentioning, along with its implications. There's also some mention of the CORS policy in src/kvstore/s3.
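For reference, a minimal sketch of the kind of bucket CORS configuration this would imply, in the JSON form accepted by `aws s3api put-bucket-cors`; the origin, header list, and exact rule set are placeholder assumptions that depend on the deployment:

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://your-neuroglancer-host.example"],
      "AllowedMethods": ["GET", "HEAD", "PUT", "POST", "DELETE"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```

Opening a bucket to PUT/POST/DELETE from a browser origin is a meaningful security change, which is part of why the implications seem worth documenting.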
| }

| _createVoxelRenderLayer(
| source: MultiscaleVolumeChunkSource,
| );
| this.registerDisposer(shaderControlState);

| return new ImageRenderLayer(source, {
From using this, it can feel odd sometimes to visually see everything flickering after an update, because it's fetching the newly updated data and showing it instead of the overlay chunk. I think I get why this is, especially in cases where you are zoomed far out, so you are seeing normally very downsampled data and then drawing at high res. We either rely on the overlay and see too-high-res data, or refetch to show the user the downsampling.

I do wonder if this could be configurable though. So we'd send the updates to the backend, but not refetch, and just rely on the overlay information. We trust that what was drawn was sent to the backend OK, and don't bother to refetch the new info.
Screencast.from.2026-04-01.18-04-48.mp4
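If it helps, the configurable behavior could look roughly like this; `VoxelEditOptions` and `onCommitAcknowledged` are hypothetical names for this sketch, not the PR's API:

```typescript
// Hypothetical option gating the refetch-after-commit behavior.
interface VoxelEditOptions {
  // When false, keep showing overlay chunks instead of refetching.
  refetchAfterCommit: boolean;
}

function onCommitAcknowledged(
  options: VoxelEditOptions,
  chunkKeys: string[],
  invalidateChunks: (keys: string[]) => void,
  clearOverlay: (keys: string[]) => void,
): void {
  if (options.refetchAfterCommit) {
    // Current behavior: refetch committed data, then drop the overlay copy.
    invalidateChunks(chunkKeys);
    clearOverlay(chunkKeys);
  }
  // Otherwise: trust that the commit succeeded and keep rendering the
  // overlay chunks, avoiding the flicker from swapping in refetched data.
}
```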
| tools: [
| { toolId: BRUSH_TOOL_ID, label: "Brush" },
| { toolId: FLOODFILL_TOOL_ID, label: "Flood Fill" },
| { toolId: SEG_PICKER_TOOL_ID, label: "Seg Picker" },
https://github.com/user-attachments/assets/259f328d-d8d6-4949-a677-e4ed39ffb192
I've had issues at times using the seg picker. Could it instead use the regular Neuroglancer selection on control + right click? For example, "set value to match pinned selection". Maybe I'm missing what the seg picker is intending to add that needs its own tool.
| await Promise.all(backendOps);
| }

| async floodFillPlane2D(
On large fills, the flood fill can sometimes just fail with no message. I'm aware this is really a lot of voxels, but maybe we should lower the max limit on the number of voxels the tool is allowed to fill? Not sure how the performance is for you at higher fill values.
Screencast.from.2026-04-01.18-09-35.mp4
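To make the failure mode explicit rather than silent, the cap could be enforced inside the fill loop and reported back. A self-contained sketch; `boundedFloodFill2D` is illustrative, not the PR's `floodFillPlane2D`:

```typescript
// 4-connected 2D flood fill over a flat grid with an explicit voxel cap.
// Returns how many voxels were filled and whether the cap was hit, so the
// UI can tell the user the fill was truncated instead of failing silently.
function boundedFloodFill2D(
  grid: Uint8Array,
  width: number,
  height: number,
  startX: number,
  startY: number,
  newValue: number,
  maxVoxels: number,
): { filled: number; truncated: boolean } {
  const target = grid[startY * width + startX];
  if (target === newValue) return { filled: 0, truncated: false };
  const stack: number[] = [startY * width + startX];
  let filled = 0;
  while (stack.length > 0) {
    const index = stack.pop()!;
    if (grid[index] !== target) continue; // already filled or different value
    if (filled >= maxVoxels) return { filled, truncated: true };
    grid[index] = newValue;
    ++filled;
    const x = index % width;
    if (x > 0) stack.push(index - 1); // left
    if (x + 1 < width) stack.push(index + 1); // right
    if (index >= width) stack.push(index - width); // up
    if (index + width < width * height) stack.push(index + width); // down
  }
  return { filled, truncated: false };
}
```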
| { key: otherKey, indices: [5], value: 99n },
| ]);

| await controller.commitVoxels([
I don't think commitVoxels is async / returns a promise to wait on
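To illustrate why this matters in a test: `await` applied to a non-promise resolves immediately, so the assertions run before any deferred work completes. All names below are illustrative:

```typescript
// Awaiting a synchronous commitVoxels-style call compiles fine but does
// not actually wait for background work scheduled by the call.
let backgroundWorkDone = false;

function commitVoxelsSync(_edits: number[]): void {
  // Pretend the edits are queued; completion happens on a later tick.
  setTimeout(() => {
    backgroundWorkDone = true;
  }, 0);
}

async function run(): Promise<boolean> {
  await commitVoxelsSync([99]); // no-op await: the value is not a promise
  return backgroundWorkDone; // still false at this point
}
```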
Thanks for taking the time to look at this! Glad the demo helped. I'll start working through your comments.
There was a discussion a few weeks ago where we thought it was worth trying to speed up the sphere painting so that the 3-disk approach was not necessary. I had AI take a stab at it; after not getting anywhere by feeding it Chrome performance traces, I had luck by directing it to optimize paintBrushWithShape, which I think is where the primary speedup is. Painting slowly with max brush size is now very smooth on my computer. Fast strokes are still slow but usable; the backend processing is an equal limitation on that end.

Here are the commits: https://github.com/seung-lab/neuroglancer/commits/cj-fast-sphere/ (41b2a5e and fe20e84 being the important ones). You can try this optimization here.

Note: I disabled the disk brush shape to make it easier to iterate on the sphere performance.
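For illustration, the allocation-hoisting idea behind this kind of optimization can be sketched as follows; `countSphereVoxels` and the scratch buffer are illustrative, not the actual `paintBrushWithShape` code:

```typescript
// Reuse one flat scratch buffer across the whole voxel loop instead of
// allocating a vector object per voxel, keeping the hot loop allocation-free.
const scratch = new Float32Array(3); // hoisted out of the hot loop

function countSphereVoxels(radius: number): number {
  const r2 = radius * radius;
  let count = 0;
  for (let z = -radius; z <= radius; ++z) {
    for (let y = -radius; y <= radius; ++y) {
      for (let x = -radius; x <= radius; ++x) {
        // Write into the reused buffer; no per-voxel allocation here.
        scratch[0] = x;
        scratch[1] = y;
        scratch[2] = z;
        const d2 =
          scratch[0] * scratch[0] +
          scratch[1] * scratch[1] +
          scratch[2] * scratch[2];
        if (d2 <= r2) ++count;
      }
    }
  }
  return count;
}
```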
| private cachedChunkTransform: ChunkTransformParameters | undefined;
| private cachedTransformGeneration: number = -1;
| private cachedVoxelPosition: Float32Array = new Float32Array(3);
| optimisticRenderLayer:
Is it accurate to rename this to previewRenderLayer? If so, I think it's easier to follow.
| // since we only allow drawing at max res, we can lock the optimistic render layer to it
| (
| this.optimisticRenderLayer as SliceViewRenderLayer
getForcedSourceIndexOverride can be defined on the SliceViewRenderLayer class to avoid needing to cast this. I think it is more accurate for it to be on the class rather than the interface, since it is only used on the frontend from what I can tell. If so, the logic inside filterVisibleSources can be moved into the frontend filterVisibleSources.

I am thinking it might be possible to have the preview render layer be a subclass of SegmentationRenderLayer/ImageRenderLayer to keep preview logic outside of the base classes. For SegmentationRenderLayer, we could have a generic hook so that you could embed the shader code at the start of the main function.
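A rough sketch of the first suggestion; the class names below are stand-ins for the real SliceViewRenderLayer and preview layer, and the bodies are assumptions, not actual Neuroglancer code:

```typescript
// Define the override hook on the base class so call sites need no cast.
class SliceViewRenderLayerSketch {
  // Base implementation: no override, normal source selection applies.
  getForcedSourceIndexOverride(): number | undefined {
    return undefined;
  }
}

class PreviewRenderLayerSketch extends SliceViewRenderLayerSketch {
  // Drawing is only allowed at max resolution, so the preview layer is
  // locked to the highest-resolution source (index 0 assumed here).
  getForcedSourceIndexOverride(): number | undefined {
    return 0;
  }
}
```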
| function dataSubsourceSpecificationToJson(spec: DataSubsourceSpecification) {
| return spec.enabled;
| const { enabled, writingEnabled } = spec;
| return { enabled, writingEnabled };
I'm not sure we want to serialize like this; it changes the representation even if nothing is different from the default. For example:
```json
{
  "type": "image",
  "source": {
    "url": "precomputed://gs://neuroglancer-fafb-data/fafb_v14/fafb_v14_clahe",
    "subsources": {
      "default": {},
      "bounds": {}
    }
  },
  "tab": "source",
  "name": "fafb_v14_clahe"
},
{
  "type": "segmentation",
  "source": {
    "url": "precomputed://gs://fafb-ffn1-20190805/segmentation",
    "subsources": {
      "default": {},
      "bounds": {},
      "mesh": {}
    }
  },
  "tab": "source",
  "segments": [
    "710435991"
  ],
  "name": "fafb-ffn1-20190805"
}
```

The old representation would be without the subsources keys, because they are all default. Also, since most sources only support being enabled, not writingEnabled, we could consider serializing as just spec.enabled unless writingEnabled is defined and there is information about writingEnabled to include.
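The suggested conditional serialization could be sketched like this; the types are assumed from the diff above, not the actual implementation:

```typescript
// Keep the old compact form unless writingEnabled carries information,
// so default states round-trip without extra keys.
interface DataSubsourceSpecification {
  enabled: boolean;
  writingEnabled?: boolean;
}

function dataSubsourceSpecificationToJson(
  spec: DataSubsourceSpecification,
): boolean | { enabled: boolean; writingEnabled: boolean } {
  if (spec.writingEnabled === undefined) {
    // Old representation: read-only sources serialize exactly as before.
    return spec.enabled;
  }
  return { enabled: spec.enabled, writingEnabled: spec.writingEnabled };
}
```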

This Draft Pull Request introduces an interactive voxel annotation feature, allowing users to perform manual segmentation by painting directly onto volumetric layers. This implementation is based on the proposal in Issue #851 and incorporates the feedback from @jbms.
Here is a live demo to try the feature. Watch out: there is persistent storage, so your annotations will be saved and will override the ones already present: OPEN DEMO VIEWER
Key Changes & Architectural Overview
Following the discussion, this implementation has been significantly revised from the initial prototype:
- No dedicated layer type: instead of a separate `vox` layer type, the voxel editing functionality is now integrated directly into `ImageUserLayer` and `SegmentationUserLayer` via a `UserLayerWithVoxelEditingMixin`. This mixin adds a new "Draw" tab in the UI.
- New Tool System: The Brush and Flood Fill tools are implemented as toggleable `LayerTool`s, while the Picker tool is a one-shot tool. All integrate with Neuroglancer's new tool system. The drawing action is bound to Ctrl + Left Click.
Optimistic Preview for Compressed Chunks: To provide immediate visual feedback and solve the performance problem with compressed chunks, edits are now rendered through an optimistic preview layer.
- Edits are held in an `InMemoryVolumeChunkSource` and drawn with the same kind of `RenderLayer` (e.g., `ImageRenderLayer` or `SegmentationRenderLayer`). This ensures the preview perfectly matches the user's existing shader and display settings.

Data-flow
```mermaid
sequenceDiagram
    participant User
    participant Tool as VoxelBrushTool
    participant ControllerFE as VoxelEditController (FE)
    participant EditSourceFE as OverlayChunkSource (FE)
    participant BaseSourceFE as VolumeChunkSource (FE)
    participant ControllerBE as VoxelEditController (BE)
    participant BaseSourceBE as VolumeChunkSource (BE)
    User->>Tool: Mouse Down/Drag
    Tool->>ControllerFE: paintBrushWithShape(mouse, ...)
    ControllerFE->>ControllerFE: Calculates affected voxels and chunks
    ControllerFE->>EditSourceFE: applyLocalEdits(chunkKeys, ...)
    activate EditSourceFE
    EditSourceFE->>EditSourceFE: Modifies its own in-memory chunk data
    note over EditSourceFE: This chunk's texture is re-uploaded to the GPU
    deactivate EditSourceFE
    ControllerFE->>ControllerBE: commitEdits(edits, ...) [RPC]
    activate ControllerBE
    ControllerBE->>ControllerBE: Debounces and batches edits
    ControllerBE->>BaseSourceBE: applyEdits(chunkKeys, ...)
    activate BaseSourceBE
    BaseSourceBE-->>ControllerBE: Returns VoxelChange (for undo stack)
    deactivate BaseSourceBE
    ControllerBE->>ControllerFE: callChunkReload(chunkKeys) [RPC]
    activate ControllerFE
    ControllerFE->>BaseSourceFE: invalidateChunks(chunkKeys)
    note over BaseSourceFE: BaseSourceFE re-fetches chunk with the now-permanent edit.
    ControllerFE->>EditSourceFE: clearOptimisticChunk(chunkKeys)
    deactivate ControllerFE
    ControllerBE->>ControllerBE: Pushes change to Undo Stack & enqueues for downsampling
    deactivate ControllerBE
    loop Downsampling & Reload Cascade
        ControllerBE->>ControllerBE: downsampleStep(chunkKeys)
        ControllerBE->>ControllerFE: callChunkReload(chunkKeys) [RPC]
        activate ControllerFE
        ControllerFE->>BaseSourceFE: invalidateChunks(chunkKeys)
        note over BaseSourceFE: BaseSourceFE re-fetches chunk with the now-permanent edit.
        ControllerFE->>EditSourceFE: clearOptimisticChunk(chunkKeys)
        deactivate ControllerFE
    end
```
Dataset creation

To complete Neuroglancer's writing capabilities, a dataset metadata creation/initialization feature was introduced. The workflow is triggered when a user provides a URL to a data source that does not resolve: Neuroglancer recognizes the potential intent to create a new dataset and prompts the user. Finally, the user is able to access the dataset creation form.

Data sources & Kvstores
Currently, there is a very limited set of supported data sources and kvstores, which are:
- `opfs`: in-browser storage, also used for local development at some point; the relevancy can be discussed.
- `ssa+https`: a kvstore linked to an in-development project, which is a stateless (thanks to OAuth 2.0) worker providing signed URLs to read/write in S3 stores.

Limitations
Open Questions & Future Work
This PR focuses on establishing the core architecture. Several larger topics from the original discussion are noted here as future work:
Checklist
- [ ] Added support to more (every?) datasources and kvstores