TileJSON 3.0 support with layer fields discovery and cached /capabilities responses #1071

Open

Rub21 wants to merge 12 commits into go-spatial:master from OpenHistoricalMap:update_tilejson_spec
Conversation

@Rub21 commented Jan 13, 2026

Updates the TileJSON version to 3.0.0 and adds the required fields property to VectorLayer.

  • Implements a LayerFielder interface and a PostGIS-backed LayerFields method to infer attribute types per layer.
  • Extends the /capabilities handler to populate fields from providers and to cache TileJSON responses in memory.
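The interface described above could look roughly like the following Go sketch. LayerFielder and LayerFields come from the PR description, but FieldType, fakePostGIS, and the exact method signature are illustrative assumptions, not the PR's actual code:

```go
package main

import "fmt"

// FieldType is a simplified stand-in for the attribute types a provider reports.
type FieldType string

// LayerFielder is a sketch of the interface this PR describes: providers
// that can introspect their layers' attribute names and types implement it.
type LayerFielder interface {
	// LayerFields returns a map of attribute name to inferred type
	// for the named layer.
	LayerFields(layerName string) (map[string]FieldType, error)
}

// fakePostGIS is a toy provider standing in for the PostGIS provider.
type fakePostGIS struct{}

func (fakePostGIS) LayerFields(layer string) (map[string]FieldType, error) {
	// A real implementation would query the database catalog;
	// here we return canned data for illustration.
	return map[string]FieldType{"name": "String", "population": "Number"}, nil
}

func main() {
	var p LayerFielder = fakePostGIS{}
	fields, _ := p.LayerFields("cities")
	fmt.Println(fields["name"], fields["population"]) // prints "String Number"
}
```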

cc. @1ec5

Member

@iwpnd left a comment


Hey. This is an update that is well overdue. The implementation is alright, and the LayerFielder makes sense. However, incrementing the version to 3.0.0 applies to all providers alike, even though they do not (yet) implement the LayerFielder. That in turn makes a capabilities.json from any provider other than the postgis provider an invalid TileJSON v3.
handle_map_capabilities.go should therefore determine whether the provider implements the LayerFielder and, if it doesn't yet, respond with a valid TileJSON v2.
This PR also lacks some tests. Take a look at postgis_test.go and snoop around in the testdata we set up locally. It should be straightforward to write a small test that covers the creation of TileJSON v2 and v3 in handle_map_capabilities_test.go.

All in all, good work. Some nits, some questions, some potential redundancy, and the question of how to handle providers that do not yet implement a LayerFielder.

@ARolek
Member

ARolek commented Jan 13, 2026

I agree with @iwpnd: this is long overdue, so thanks for tackling it! I left a few comments inline and generally agree with the first-pass review that @iwpnd gave.

Rub21 and others added 10 commits January 23, 2026 11:30
Co-authored-by: Ben <iwpnd@users.noreply.github.com>

* Add testing for TileJSON 3.0.0

* Updates for testing
@Rub21
Author

Rub21 commented Jan 28, 2026

@iwpnd @ARolek

Thanks for taking the time to review this PR and for the helpful suggestions.
I’ve updated the implementation according to your feedback, including adding tests in server/handle_map_capabilities_test.go.
I’ve also deployed this branch to my Tegola server and the /capabilities and TileJSON 3.0 responses are working as expected in my setup.

https://vtiles.staging.openhistoricalmap.org/capabilities
https://vtiles.staging.openhistoricalmap.org/capabilities/ohm.json

@iwpnd
Member

iwpnd commented Jan 29, 2026

@Rub21 thank you! I still think the RWMutex will become a bottleneck in high-concurrency environments, and I have an idea how to address this. If you're in a rush you can keep your fork deployed, as I will not touch functionality. I will, however, need some time to work on this and am unsure when I can get to it; this weekend, maybe.

@Rub21
Author

Rub21 commented Jan 29, 2026

> I have an idea how to address this.

That would be great, thank you!

@ARolek
Member

ARolek commented Jan 30, 2026

@Rub21 thanks for tackling the code review comments. I'm traveling and won't get a chance to look at this until next week. It does look like govulncheck is failing too; looks like we need to update Go to 1.25.6. I will get this done.

@iwpnd
Member

iwpnd commented Jan 31, 2026

@Rub21 can you please verify that you enabled us to commit to your branch? See here.

I simplified the caching from a dual-map + RWMutex setup to a single sync.Map, where each entry holds both the sync.Once and the TileJSON together. In this scenario sync.Map handles our read-heavy workload better under high concurrency, since RWMutex performance drops significantly with load. I also fixed a bug where cancelled requests could cache incomplete data, by switching to context.Background() for layer field fetching. That way, even when the originating request is cancelled, the build completes for the next requester.
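A minimal sketch of the single-map pattern described here, assuming illustrative names (capabilitiesFor, entry, and a string stand-in for the built TileJSON document are not the PR's actual identifiers):

```go
package main

import (
	"fmt"
	"sync"
)

// entry pairs the once-guard with the value it protects, so a single
// sync.Map lookup yields both.
type entry struct {
	once     sync.Once
	tileJSON string // stands in for the built TileJSON document
	err      error
}

var cache sync.Map // map[string]*entry

// capabilitiesFor returns the cached TileJSON for a map name, building
// it at most once even under concurrent access.
func capabilitiesFor(name string, build func() (string, error)) (string, error) {
	e, _ := cache.LoadOrStore(name, &entry{})
	ent := e.(*entry)
	ent.once.Do(func() {
		ent.tileJSON, ent.err = build()
	})
	return ent.tileJSON, ent.err
}

func main() {
	builds := 0
	build := func() (string, error) {
		builds++ // guarded by sync.Once, so no data race here
		return `{"tilejson":"3.0.0"}`, nil
	}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			capabilitiesFor("ohm", build)
		}()
	}
	wg.Wait()
	fmt.Println(builds) // prints 1: the build ran exactly once
}
```

sync.Once also gives every caller a happens-before guarantee on the fields written inside Do, so readers never observe a half-built entry.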

The main fix, however, was the versioning logic, and after looking into it in more detail I want to propose a different approach.

We were using TileJSON 3.0.0 if any provider supported field metadata. However, if you mix providers, this causes inconsistent field availability across layers.
Now we only claim 3.0.0 when all providers support it, giving clients predictable behavior. I also always initialize Fields as an empty map for spec compliance, and renamed tilejson.Version to tilejson.Version3 for clarity.
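The all-providers rule could be sketched as follows. LayerFielder matches the interface the PR describes, while the version constants and the toy providers are assumptions for illustration:

```go
package main

import "fmt"

// LayerFielder is the field-discovery interface described in this PR
// (signature assumed for illustration).
type LayerFielder interface {
	LayerFields(layer string) (map[string]string, error)
}

const (
	version2 = "2.2.0" // latest published 2.x TileJSON spec version
	version3 = "3.0.0"
)

// tileJSONVersion returns 3.0.0 only when every provider can report
// layer fields; otherwise the capabilities document claims 2.x, so no
// layer is missing the fields property a v3 client would expect.
func tileJSONVersion(providers []any) string {
	for _, p := range providers {
		if _, ok := p.(LayerFielder); !ok {
			return version2
		}
	}
	return version3
}

// postgis stands in for a provider with field discovery.
type postgis struct{}

func (postgis) LayerFields(string) (map[string]string, error) { return nil, nil }

// gpkg stands in for a provider without field discovery.
type gpkg struct{}

func main() {
	fmt.Println(tileJSONVersion([]any{postgis{}, postgis{}})) // prints "3.0.0"
	fmt.Println(tileJSONVersion([]any{postgis{}, gpkg{}}))    // prints "2.2.0"
}
```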

Then I added error handling so that failed builds don't get cached permanently: if buildTileJSON errors, we remove the cache entry so the next request can retry.
buildTileJSON is just the build logic pulled into its own method to declutter the ServeHTTP method a little.

When I was working through the TileJSON creation itself, I was struggling to read the logic, to be honest. I added a helper to find layers by ID and removed the nested loop with the skip flag, and I replaced the Atlas field with a GetMap function for cleaner dependency injection and easier testing: when GetMap is nil, we default to the defaultAtlas. Those test cases really made me think; the defaultAtlas is not a good idea in retrospect.

I have my changes tucked away in a separate commit that I could contribute here if you give me permission.

@iwpnd
Member

iwpnd commented Feb 8, 2026

@Rub21 bumping this once more to your attention.
