
renovate[bot] commented Mar 16, 2025

Coming soon: The Renovate bot (GitHub App) will be renamed to Mend. PRs from Renovate will soon appear from 'Mend'. Learn more here.

This PR contains the following updates:

Package: ghcr.io/apollographql/router
Update type: minor
Change: v1.32.0 -> v1.61.10

Release Notes

apollographql/router (ghcr.io/apollographql/router)

v1.61.10

Compare Source

🐛 Fixes
Fix deduplicated subscriptions hanging when one subscription closes (PR #​7879)

Fixes a regression introduced in v1.50.0. When multiple client subscriptions are deduped onto a single subgraph subscription in WebSocket passthrough mode, and the first client subscription closes, the Router would close the subgraph subscription. The other deduplicated subscriptions would then silently stop receiving events.

Now outgoing subscriptions to subgraphs are kept open as long as any client subscription uses them.
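The fix amounts to reference-counting clients on the shared subgraph subscription. A minimal sketch of that behavior (a hypothetical illustration, not the router's actual code):

```python
class SharedSubgraphSubscription:
    """Sketch of the dedup behavior described above: the upstream
    (subgraph) subscription stays open while any client still uses it."""

    def __init__(self):
        self.clients = 0
        self.open = True

    def add_client(self):
        self.clients += 1

    def remove_client(self):
        self.clients -= 1
        if self.clients == 0:
            # Only close the subgraph subscription when the *last*
            # client disconnects -- closing on the first disconnect
            # was the regression fixed here.
            self.open = False

sub = SharedSubgraphSubscription()
sub.add_client()
sub.add_client()
sub.remove_client()   # first client closes
print(sub.open)       # True: the other client still receives events
```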

By @​bnjjj in #​7879

v1.61.9

Compare Source

🐛 Fixes
Coprocessor: improve handling of invalid GraphQL responses with conditional validation (PR #​7731)

The router was creating invalid GraphQL responses internally, especially when subscriptions terminated. When a coprocessor is configured, it validates all responses for correctness, so errors were logged whenever the router generated an invalid internal response. This affected the reliability of subscription workflows with coprocessors.

This fixes handling of invalid GraphQL responses returned from coprocessors, particularly when used with subscriptions. Response validation is now conditional, with improved testing to ensure correctness. A new response_validation configuration option at the coprocessor level controls response validation (enabled by default).
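A configuration sketch (the response_validation key comes from this change; the url value and surrounding shape are illustrative assumptions):

```yaml
coprocessor:
  url: http://127.0.0.1:8081  # illustrative coprocessor address
  response_validation: true   # new option; enabled by default
```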

By @​BrynCooke in #​7731

Fix several hot reload issues with subscriptions (PR #​7746)

When a hot reload is triggered by a configuration change, the router attempted to apply updated configuration to open subscriptions. This could cause excessive logging.

When a hot reload was triggered by a schema change, the router closed subscriptions with a SUBSCRIPTION_SCHEMA_RELOAD error. This happened before the new schema was fully active and warmed up, so clients could reconnect to the old schema, which should not happen.

To fix these issues, a configuration and a schema change now have the same behavior. The router waits for the new configuration and schema to be active, and then closes all subscriptions with a SUBSCRIPTION_SCHEMA_RELOAD/SUBSCRIPTION_CONFIG_RELOAD error, so clients can reconnect.

By @​goto-bus-stop and @​bnjjj in #​7777

v1.61.8

Compare Source

🐛 Fixes
Set a valid GraphQL response for websocket handshake response (PR #​7680)

An earlier PR added more checks on GraphQL responses returned by coprocessors to ensure they comply with the GraphQL spec. When a subscription used WebSocket, the handshake response did not return any data and so was not a correct GraphQL response payload. This fix ensures a valid GraphQL response is always returned during the WebSocket handshake.

By @​bnjjj in #​7680

Spans should only include path in http.route (PR #​7405)

Per the OpenTelemetry spec, the http.route should only include "the matched route, that is, the path template used in the format used by the respective server framework."

The router currently sends the full URI in http.route, which can be high cardinality (e.g., /graphql?operation=one_of_many_values). After this change, the router will only include the path (/graphql).

By @​carodewig in #​7405

🔍 Debuggability
Add graphql.operation.name attribute to apollo.router.opened.subscriptions counter (PR #​7606)

The apollo.router.opened.subscriptions metric now has a graphql.operation.name attribute that identifies the named operation of subscriptions which are still open.

By @​bnjjj in #​7606

v1.61.7

Compare Source

🔍 Debuggability
Log whether safe-listing enforcement was skipped (Issue #​7509)

When logging unknown operations encountered during safe-listing, include information about whether enforcement was skipped. This will help distinguish between truly problematic external operations (where enforcement_skipped is false) and internal operations that are intentionally allowed to bypass safelisting (where enforcement_skipped is true).

By @​DaleSeo in #​7509

v1.61.6

Compare Source

🐛 Fixes
Fix JWT metrics discrepancy (PR #​7258)

This fixes the apollo.router.operations.authentication.jwt counter metric to behave as documented: emitted for every request that uses JWT, with the authentication.jwt.failed attribute set to true or false for failed or successful authentication.

Previously, it was only used for failed authentication.

The attribute-less (and accidentally differently named) apollo.router.operations.jwt counter was, and still is, emitted only for successful authentication; it is now deprecated.

By @​SimonSapin in #​7258

Fix Redis connection leak (PR #​7319)

The router performs a 'hot reload' whenever it detects a schema update. During this reload, it effectively instantiates a new internal router, warms it up (optional), redirects all traffic to this new router, and drops the old internal router.

This change fixes a bug in that drop process where the Redis connections are never told to terminate, even though the Redis client pool is dropped. This leads to an ever-increasing number of inactive Redis connections, which eats up memory.

It also adds a new up-down counter metric, apollo.router.cache.redis.connections, to track the number of open Redis connections. This metric includes a kind label to discriminate between different Redis connection pools, which mirrors the kind label on other cache metrics (e.g., apollo.router.cache.hit.time).

By @​carodewig in #​7319

Fix Parsing of Coprocessor GraphQL Responses (PR #​7141)

Previously, the router ignored the data: null property inside GraphQL responses returned by a coprocessor.
According to the GraphQL Specification:

If an error was raised during the execution that prevented a valid response, the "data" entry in the response should be null.

That means if a coprocessor returned a valid execution error, for example:

{
  "data": null,
  "errors": [{ "message": "Some execution error" }]
}

The router violated this requirement of the GraphQL specification by returning the following response to the client:

{
  "errors": [{ "message": "Some execution error" }]
}

This fix ensures full compliance with the GraphQL specification by preserving the complete structure of error responses from coprocessors.

Contributed by @​IvanGoncharov in #​7141

Avoid fractional decimals when generating apollo.router.operations.batching.size metrics for GraphQL request batch sizes (PR #​7306)

Correct the calculation of the apollo.router.operations.batching.size metric to reflect accurate batch sizes rather than occasionally returning fractional numbers.

By @​bnjjj in #​7306

📃 Configuration
Add configurable server header read timeout (PR #​7262)

This change exposes the server's header read timeout as the server.http.header_read_timeout configuration option.

By default, server.http.header_read_timeout is set to the previously hard-coded value of 10 seconds; a longer timeout can be configured:

server:
  http:
    header_read_timeout: 30s

By @​gwardwell in #​7262

🛠 Maintenance
Reject @skip/@include on subscription root fields in validation (PR #​7338)

This implements a GraphQL spec RFC, rejecting subscriptions in validation that can be invalid during execution.

By @​goto-bus-stop in #​7338

v1.61.5

Compare Source

🔍 Debuggability
Add compute job pool spans (PR #​7236)

The compute job pool in the router is used to execute CPU intensive work outside of the main I/O worker threads, including GraphQL parsing, query planning, and introspection.
This PR adds spans to jobs on this pool, allowing users to see when latency is introduced due to resource contention within the compute job pool.

  • compute_job:
    • job.type: (query_parsing|query_planning|introspection)
  • compute_job.execution
    • job.age: P1-P8
    • job.type: (query_parsing|query_planning|introspection)

Jobs are executed highest priority (P8) first. Low-priority jobs (P1) age over time, eventually executing at the highest priority. A job's age can be used to diagnose whether it was waiting in the queue behind other, higher-priority jobs.
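The aging behavior can be sketched as a priority that rises with time spent in the queue (a hypothetical aging rule for illustration; the router's actual scheduling policy is not specified here):

```python
def effective_priority(base_priority: int, ticks_waited: int,
                       max_priority: int = 8) -> int:
    """A job's effective priority climbs from its base (P1..P8) toward
    P8 as it waits, so low-priority jobs are never starved.
    Hypothetical aging rule, not the router's actual policy."""
    return min(max_priority, base_priority + ticks_waited)

# A P1 job that has waited long enough runs at the highest priority:
print(effective_priority(1, 0))   # 1
print(effective_priority(1, 10))  # 8
```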

By @​bryncooke in #​7236

Add compute job pool metrics (PR #​7184)

The compute job pool in the router is used to execute CPU intensive work outside of the main I/O worker threads, including GraphQL parsing, query planning, and introspection.
When this pool becomes saturated, it is difficult for users to see why, which makes it hard to take action.
This change adds new metrics to help users understand how long jobs are waiting to be processed.

New metrics:

  • apollo.router.compute_jobs.queue_is_full - A counter of requests rejected because the queue was full.
  • apollo.router.compute_jobs.duration - A histogram of time spent in the compute pipeline by the job, including the queue and query planning.
    • job.type: (query_planning, query_parsing, introspection)
    • job.outcome: (executed_ok, executed_error, channel_error, rejected_queue_full, abandoned)
  • apollo.router.compute_jobs.queue.wait.duration - A histogram of time spent in the compute queue by the job.
    • job.type: (query_planning, query_parsing, introspection)
  • apollo.router.compute_jobs.execution.duration - A histogram of time spent executing the job (excluding time spent in the queue).
    • job.type: (query_planning, query_parsing, introspection)
  • apollo.router.compute_jobs.active_jobs - A gauge of the number of compute jobs being processed in parallel.
    • job.type: (query_planning, query_parsing, introspection)

By @​carodewig in #​7184

🐛 Fixes
Fix hanging requests when compute job queue is full (PR #​7273)

The compute job pool in the router is used to execute CPU intensive work outside of the main I/O worker threads, including GraphQL parsing, query planning, and introspection. When the pool is busy, jobs enter a queue.

When the compute job queue was full, requests could hang until timeout. Now, the router immediately returns a SERVICE_UNAVAILABLE response to the user.
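The fail-fast behavior resembles a bounded queue that rejects rather than blocks when full. A sketch using Python's stdlib queue (illustrative only; the router itself is written in Rust):

```python
import queue

compute_queue = queue.Queue(maxsize=2)  # tiny queue for illustration
compute_queue.put("plan query A")
compute_queue.put("parse query B")

try:
    # put_nowait() raises immediately instead of blocking -- the
    # analogue of returning SERVICE_UNAVAILABLE instead of hanging.
    compute_queue.put_nowait("plan query C")
except queue.Full:
    print("SERVICE_UNAVAILABLE")
```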

By @​BrynCooke in #​7273

Increase compute job pool queue size (PR #​7205)

The compute job pool in the router is used to execute CPU intensive work outside of the main I/O worker threads, including GraphQL parsing, query planning, and introspection. When the pool is busy, jobs enter a queue.

We previously set this queue size to 20 (per thread). However, this may be too small on resource constrained environments.

This patch increases the queue size to 1,000 jobs per thread. For reference, in older router versions before the introduction of the compute job worker pool, the equivalent queue size was 1,000.

By @​goto-bus-stop in #​7205

v1.61.4

Compare Source

🐛 Fixes
Entity-cache: handle multiple key directives (PR #​7228)

This PR fixes a bug in entity caching, introduced by the fix in #6888, for cases where several @key directives with different fields were declared on a type, as documented here.

For example, if you have this kind of entity in your schema:

type Product @key(fields: "upc") @key(fields: "sku") {
  upc: ID!
  sku: ID!
  name: String
}
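With two @key directives, the same Product entity can be fetched and cached under either key. A hypothetical _entities query (names and values are illustrative) showing why the cache key must account for which key was actually used:

```graphql
query ($representations: [_Any!]!) {
  _entities(representations: $representations) {
    ... on Product {
      name
    }
  }
}
```

One request may pass {"__typename": "Product", "upc": "1"} as the representation while another passes {"__typename": "Product", "sku": "A-1"}; both resolve the same entity, so the cache must handle both key shapes consistently.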

By @​duckki & @​bnjjj in #​7228

v1.61.3

Compare Source

🐛 Fixes
Fix potential telemetry deadlock (PR #​7142)

The tracing_subscriber crate uses RwLocks to manage access to a Span's Extensions. Deadlocks are possible when
multiple threads access this lock, including with reentrant locks:

// Thread 1              |  // Thread 2
let _rg1 = lock.read();  |
                         |  // will block
                         |  let _wg = lock.write();
// may deadlock          |
let _rg2 = lock.read();  |

This fix removes an opportunity for reentrant locking while extracting a Datadog identifier.

There is also a potential for deadlocks when the root and active spans' Extensions are acquired at the same time, if
multiple threads are attempting to access those Extensions but in a different order. This fix removes a few cases
where multiple spans' Extensions are acquired at the same time.

By @​carodewig in #​7142

Connection shutdown timeout (PR #​7058)

When a connection is closed we call graceful_shutdown on hyper and then await for the connection to close.

Hyper 0.x has various issues around shutdown that may result in us waiting for extended periods for the connection to eventually be closed.

This PR introduces a configurable timeout from the termination signal to actual termination, defaulted to 60 seconds. The connection is forcibly terminated after the timeout is reached.

To configure, set the option in router yaml. It accepts human time durations:

supergraph:
  connection_shutdown_timeout: 60s

Note that even after connections have been terminated, the router will still hang onto pipelines if early_cancel has not been set to true, because the router tries to complete the request.

Users can set early_cancel to true:

supergraph:
  early_cancel: true

and/or use traffic shaping timeouts:

traffic_shaping:
  router:
    timeout: 60s

By @​BrynCooke in #​7058

Fix crash when an invalid query plan is generated (PR #​7214)

When an invalid query plan is generated, the router could panic and crash.
This could happen if there are gaps in the GraphQL validation implementation.
Now, even if there are unresolved gaps, the router handles them gracefully and rejects the request.

By @​goto-bus-stop in #​7214

Improve Error Message for Invalid JWT Header Values (PR #​7121)

Enhanced parsing error messages for JWT Authorization header values now provide developers with clear, actionable feedback while ensuring that no sensitive data is exposed.

Examples of the updated error messages:

-         Header Value: '<invalid value>' is not correctly formatted. prefix should be 'Bearer'
+         Value of 'authorization' JWT header should be prefixed with 'Bearer'
-         Header Value: 'Bearer' is not correctly formatted. Missing JWT
+         Value of 'authorization' JWT header has only 'Bearer' prefix but no JWT token

By @​IvanGoncharov in #​7121

v1.61.2

Compare Source

🔒 Security
Certain query patterns may cause resource exhaustion

Corrects a set of denial-of-service (DoS) vulnerabilities that made it possible for an attacker to render the router inoperable with certain simple query patterns due to uncontrolled resource consumption. All prior released versions and configurations are vulnerable except those where persisted_queries.enabled, persisted_queries.safelist.enabled, and persisted_queries.safelist.require_id are all true.
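The mitigating configuration named above, as YAML (all three options must be true; the nesting is an assumption based on the dotted option paths):

```yaml
persisted_queries:
  enabled: true
  safelist:
    enabled: true
    require_id: true
```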

See the associated GitHub Advisories GHSA-3j43-9v8v-cp3f, GHSA-84m6-5m72-45fp, GHSA-75m2-jhh5-j5g2, and GHSA-94hh-jmq8-2fgp, and the apollo-compiler GitHub Advisory GHSA-7mpv-9xg6-5r79 for more information.

By @​sachindshinde and @​goto-bus-stop.

v1.61.1

Compare Source

🐛 Fixes
Use correct default values on omitted OTLP endpoints (PR #​6931)

Previously, when the configuration didn't specify an OTLP endpoint, the Router would always default to http://localhost:4318. However, port 4318 is the correct default only for the HTTP protocol, while port 4317 should be used for gRPC.

Additionally, all other telemetry defaults in the Router configuration consistently use 127.0.0.1 as the hostname rather than localhost.

With this change, the Router now uses:

  • http://127.0.0.1:4317 as the default for gRPC protocol
  • http://127.0.0.1:4318 as the default for HTTP protocol

This ensures protocol-appropriate port defaults and consistent hostname usage across all telemetry configurations.
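To avoid relying on defaults entirely, the endpoint can be set explicitly. A sketch (the exact nesting under telemetry.exporters is an assumption):

```yaml
telemetry:
  exporters:
    tracing:
      otlp:
        enabled: true
        protocol: grpc
        endpoint: http://127.0.0.1:4317  # matches the new gRPC default
```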

By @​IvanGoncharov in #​6931

Separate entity keys and representation variables in entity cache key (Issue #​6673)

This fix separates the entity keys and representation variable values in the cache key, to avoid issues with @requires for example.

By @​bnjjj in #​6888

🔒 Security
Add batching.maximum_size configuration option to limit maximum client batch size (PR #​7005)

Add an optional maximum_size parameter to the batching configuration.

  • When specified, the router will reject requests which contain more than maximum_size queries in the client batch.
  • When unspecified, the router performs no size checking (the current behavior).
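A configuration sketch (maximum_size is the new option; the enabled and mode keys are assumed from the router's existing batching configuration):

```yaml
batching:
  enabled: true
  mode: batch_http_link
  maximum_size: 2  # reject client batches with more than 2 queries
```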

If the number of queries provided exceeds the maximum batch size, the entire batch fails with error code 422 (Unprocessable Content). For example:

{
  "errors": [
    {
      "message": "Invalid GraphQL request",
      "extensions": {
        "details": "Batch limits exceeded: you provided a batch with 3 entries, but the configured maximum router batch size is 2",
        "code": "BATCH_LIMIT_EXCEEDED"
      }
    }
  ]
}

By @​carodewig in #​7005

🔍 Debuggability
Add apollo.router.pipelines metrics (PR #​6967)

When the router reloads, either via schema change or config change, a new request pipeline is created.
Existing request pipelines are closed once their requests finish. However, this may not happen if there are ongoing long-running requests that never finish, such as subscriptions.

To enable debugging when request pipelines are being kept around, a new gauge metric has been added:

  • apollo.router.pipelines - The number of request pipelines active in the router
    • schema.id - The Apollo Studio schema hash associated with the pipeline.
    • launch.id - The Apollo Studio launch id associated with the pipeline (optional).
    • config.hash - The hash of the configuration

By @​BrynCooke in #​6967

Add apollo.router.open_connections metric (PR #​7023)

To help users to diagnose when connections are keeping pipelines hanging around, the following metric has been added:

  • apollo.router.open_connections - The number of open connections to the router
    • schema.id - The Apollo Studio schema hash associated with the pipeline.
    • launch.id - The Apollo Studio launch id associated with the pipeline (optional).
    • config.hash - The hash of the configuration.
    • server.address - The address that the router is listening on.
    • server.port - The port that the router is listening on if not a unix socket.
    • http.connection.state - Either active or terminating.

You can use this metric to monitor when connections are open via long running requests or keepalive messages.

By @​BrynCooke in #​7009

v1.61.0 - LTS

Compare Source


This is an LTS release of Apollo Router

To find out more about our maintenance and support policy, please refer to our docs


🚀 Features
Query planner dry-run option (PR #​6656)

This PR adds a new dry-run option to the Apollo-Expose-Query-Plan header value that emits the query plans back to Studio for visualizations. This new value will only emit the query plan, and abort execution. This can be helpful for tools like rover, where query plan generation is needed but not full runtime, or for potentially prewarming query plan caches out of band.

curl --request POST --include \
     --header 'Accept: application/json' \
     --header 'Apollo-Expose-Query-Plan: dry-run' \
     --url 'http://127.0.0.1:4000/' \
     --data '{"query": "{ topProducts { upc name } }"}'

By @​aaronArinder and @​lennyburdette in #​6656.

Enable Remote Proxy Downloads

This enables users without direct download access to specify a remote proxy mirror location for the github download of
the Apollo Router releases.

By @​LongLiveCHIEF in #​6667

🐛 Fixes
Header propagation rules passthrough (PR #​6690)

Header propagation contains logic to prevent headers from being propagated more than once. This was broken in #6281, which considered a header propagated regardless of whether a rule actually matched.

This PR alters the logic so that a header is marked as fixed only when it's populated.

The following will now work again:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
      - propagate:
          named: b

Note that defaulting a header WILL populate it, so make sure to include your defaults last in your propagation
rules.

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
          default: defaulted # This will prevent any further rule evaluation for header `b`
      - propagate:
          named: b

Instead, make sure that your headers are defaulted last:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
      - propagate:
          named: b
          default: defaulted # OK

By @​BrynCooke in #​6690

Entity cache: fix directive conflicts in cache-control header (Issue #​6441)

Unnecessary cache-control directives were being created in the cache-control header. The router now filters out unnecessary values from the cache-control header when the request resolves. So if the header contains max-age=10, no-cache, must-revalidate, no-store, the expected value for the cache-control header would simply be no-store. See the MDN docs for the justification of this reasoning: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#preventing_storing
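The described collapse can be sketched as follows (a hypothetical simplification covering only the no-store case given above, not the router's actual filtering code):

```python
def collapse_cache_control(header: str) -> str:
    """If no-store is present, the response must not be cached at all,
    so every other directive is redundant. Hypothetical sketch of the
    behavior described above."""
    directives = [d.strip() for d in header.split(",")]
    if "no-store" in directives:
        return "no-store"
    return ", ".join(directives)

print(collapse_cache_control("max-age=10, no-cache, must-revalidate, no-store"))
# -> no-store
```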

By @​bnjjj in #​6543

Query Planning: fix __typename selections in sibling typename optimization

The query planner uses an optimization technique called "sibling typename", which attaches __typename selections to their sibling selections so the planner won't need to plan them separately.

Previously, when there were multiple identical selections and one of them had a __typename attached, the query planner could pick the one without the attachment, effectively losing the __typename selection.

Now, the query planner favors the one with a __typename attached without losing the __typename selection.
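A hypothetical operation illustrating the failure mode (simplified; the attachment actually happens internally in the planner):

```graphql
{
  product { __typename name }  # selection with __typename attached
  product { name }             # identical sibling selection without it
}
```

When merging these identical selections, the planner previously could keep the variant without __typename; it now favors the one carrying it.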

By @​duckki in #​6824

📃 Configuration
Promote experimental_otlp_tracing_sampler config to stable (PR #​6070)

The router's otlp tracing sampler feature that was previously experimental is now generally available.

If you used its experimental configuration, you should migrate to the new configuration option:

  • telemetry.apollo.experimental_otlp_tracing_sampler is now telemetry.apollo.otlp_tracing_sampler

The experimental configuration option is now deprecated. It remains functional but will log warnings.

By @​garypen in #​6070

Promote experimental_local_manifests config for persisted queries to stable

The experimental_local_manifests PQ configuration option is being promoted to stable. This change updates the configuration option name and any references to it, as well as the related documentation. The experimental_ usage remains valid as an alias for existing usages.

By @​trevor-scheer in #​6564

🛠 Maintenance
Reduce demand control allocations on start/reload (PR #​6754)

When demand control is enabled, the router now preallocates capacity for demand control's processed schema and shrinks to fit after processing. When it's disabled, the router skips the type processing entirely to minimize startup impact.

By @​tninesling in #​6754

v1.60.1

Compare Source

🐛 Fixes
Header propagation rules passthrough (PR #​6690)

Header propagation contains logic to prevent headers from being propagated more than once. This was broken in #6281, which considered a header propagated regardless of whether a rule actually matched.

This PR alters the logic so that a header is marked as fixed only when it is populated.

The following will now work again:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
      - propagate:
          named: b

Note that defaulting a header WILL populate it, so make sure to include your defaults last in your propagation rules.

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
          default: defaulted # This will prevent any further rule evaluation for header `b`
      - propagate:
          named: b

Instead, make sure that your headers are defaulted last:

headers:
  all:
    request:
      - propagate:
          named: a
          rename: b
      - propagate:
          named: b
          default: defaulted # OK

By @​BrynCooke in #​6690

Entity cache: fix directive conflicts in cache-control header (Issue #​6441)

Unnecessary cache-control directives were being created in the cache-control header. The router now filters out unnecessary values from the cache-control header when the request resolves. So if the header contains max-age=10, no-cache, must-revalidate, no-store, the expected value for the cache-control header would simply be no-store. See the MDN docs for the justification of this reasoning: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#preventing_storing

By @​bnjjj in #​6543

Resolve regressions in fragment compression for certain operations (PR #​6651)

In v1.58.0 we introduced a new compression strategy for subgraph GraphQL operations to replace an older, more complicated algorithm.

While we were able to validate improvements for a majority of cases, some regressions still surfaced. To address this, we are extending it to compress more operations with the following outcomes:

  • The P99 overhead of running the new compression algorithm on the largest operations in our corpus is now just 10 ms
  • In cases of better compression, at P99 it shrinks operations by 50 KB compared to the old algorithm
  • In cases of worse compression, at P99 it adds only an additional 108 bytes compared to the old algorithm, an acceptable trade-off versus the added complexity

By @​dariuszkuc in #​6651

v1.60.0

Compare Source

🚀 Features
Improve BatchProcessor observability (Issue #​6558)

A new metric has been introduced to allow observation of how many spans are being dropped by a telemetry batch processor.

  • apollo.router.telemetry.batch_processor.errors - The number of errors encountered by exporter batch processors.
    • name: one of apollo-tracing, datadog-tracing, jaeger-collector, otlp-tracing, zipkin-tracing.
    • error: one of channel closed, channel full.

By observing the number of spans dropped, it is possible to estimate which batch processor settings will work for you.

In addition, the log message for dropped spans will now indicate which batch processor is affected.

By @​bryncooke in #​6558

🐛 Fixes
Improve performance of query hashing by using a precomputed schema hash (PR #​6622)

The router now uses a simpler and faster query hashing algorithm with more predictable CPU and memory usage. This improvement is enabled by using a precomputed hash of the entire schema, rather than computing and hashing the subset of types and fields used by each query.

For more details on why these design decisions were made, please see the PR description

By @​IvanGoncharov in #​6622

Truncate invalid error paths (PR #​6359)

This fix addresses an issue where the router was silently dropping subgraph errors that included invalid paths.

According to the GraphQL Specification an error path must point to a response field:

If an error can be associated to a particular field in the GraphQL result, it must contain an entry with the key path that details the path of the response field which experienced the error.

The router now truncates the path to the nearest valid field path if a subgraph error includes a path that can't be matched to a response field.

By @​IvanGoncharov in #​6359

Eagerly init subgraph operation for subscription primary nodes (PR #​6509)

When subgraph operations are deserialized, typically from a query plan cache, they are not automatically parsed into a full document. Instead, each node needs to initialize its operation(s) prior to execution. With this change, the primary node inside SubscriptionNode is initialized in the same way as other nodes in the plan.

By @​tninesling in #​6509

Fix increased memory usage in sysinfo since Router 1.59.0 (PR #​6634)

In version 1.59.0, Apollo Router started using the sysinfo crate to gather metrics about available CPUs and RAM. By default, that crate uses rayon internally to parallelize its handling of system processes. In turn, rayon creates a pool of long-lived threads.

In a particular benchmark on a 32-core Linux server, this caused resident memory use to increase by about 150 MB. This is likely a combination of stack space (which only gets freed when the thread terminates) and per-thread space reserved by the heap allocator to reduce cross-thread synchronization cost.

This regression is now fixed by:

  • Disabling sysinfo’s use of rayon, so the thread pool is not created and system processes information is gathered in a sequential loop.
  • Making sysinfo not gather that information in the first place since Router does not use it.

By @​SimonSapin in #​6634

Optimize demand control lookup (PR #​6450)

The performance of demand control in the router has been optimized.

Previously, demand control could reduce router throughput due to its extra processing required for scoring.

This fix improves performance by shifting more data to be computed at plugin initialization and consolidating lookup queries:

  • Cost directives for arguments are now stored in a map alongside those for field definitions
  • All precomputed directives are bundled into a struct for each field, along with that field's extended schema type. This reduces 5 individual lookups to a single lookup.
  • Response scoring was looking up each field's definition twice. This is now reduced to a single lookup.

By @​tninesling in #​6450

Fix missing Content-Length header in subgraph requests (Issue #​6503)

A change in 1.59.0 caused the Router to send requests to subgraphs without a Content-Length header, which would cause issues with some GraphQL servers that depend on that header.

This solves the underlying bug and reintroduces the Content-Length header.

By @​nmoutschen in #​6538

🛠 Maintenance
Remove the legacy query planner (PR #​6418)

The legacy query planner has been removed in this release. In the previous release, router v1.58, it was no longer used by default but was still available through the experimental_query_planner_mode configuration key. That key is now removed.

Also removed are configuration keys which were only relevant to the legacy planner:

  • supergraph.query_planning.experimental_parallelism: the new planner can always use available parallelism.
  • supergraph.experimental_reuse_query_fragments: this experimental algorithm that attempted to
    reuse fragments from the original operation while forming subgraph requests is no longer present. Instead, by default new fragment definitions are generated based on the shape of the subgraph operation.

By @​SimonSapin in #​6418

Migrate various metrics to OTel instruments (PR #​6476, PR #​6356, PR #​6539)

Various metrics using our legacy mechanism based on the tracing crate are migrated to OTel instruments.

By @​goto-bus-stop in #​6476, #​6356, #​6539

📚 Documentation
Add instrumentation configuration examples (PR #​6487)

The docs for router telemetry have new example configurations for common use cases for selectors and condition.

By @​shorgi in #​6487

🧪 Experimental
Remove experimental_retry option (PR #​6338)

The experimental_retry option has been removed due to its limited use and functionality during its experimental phase.

By @​bnjjj in #​6338

v1.59.2

Compare Source

[!IMPORTANT]

This release contains important fixes which address resource utilization regressions which impacted Router v1.59.0 and v1.59.1. These regressions were in the form of:

  1. A small baseline increase in memory usage; AND
  2. Additional per-request CPU and memory usage for queries which included references to abstract types with a large number of implementations

If you have enabled Distributed query plan caching, this release contains changes which necessarily alter the hashing algorithm used for the cache keys. On account of this, you should anticipate additional cache regeneration cost when updating between these versions while the new hashing algorithm comes into service.

🐛 Fixes
Improve performance of query hashing by using a precomputed schema hash (PR #​6622)

The router now uses a simpler and faster query hashing algorithm with more predictable CPU and memory usage. This improvement is enabled by using a precomputed hash of the entire schema, rather than computing and hashing the subset of types and fields used by each query.

For more details on why these design decisions were made, please see the PR description.

By @​IvanGoncharov in #​6622

Fix increased memory usage in sysinfo since Router 1.59.0 (PR #​6634)

In version 1.59.0, Apollo Router started using the sysinfo crate to gather metrics about available CPUs and RAM. By default, that crate uses rayon internally to parallelize its handling of system processes. In turn, rayon creates a pool of long-lived threads.

In a particular benchmark on a 32-core Linux server, this caused resident memory use to increase by about 150 MB. This is likely a combination of stack space (which only gets freed when the thread terminates) and per-thread space reserved by the heap allocator to reduce cross-thread synchronization cost.

This regression is now fixed by:

  • Disabling sysinfo’s use of rayon, so the thread pool is not created and system processes information is gathered in a sequential loop.
  • Making sysinfo not gather that information in the first place, since the Router does not use it.

By @​SimonSapin in #​6634

v1.59.1

Compare Source

[!IMPORTANT]

This release was impacted by a resource utilization regression which was fixed in v1.59.2. See the release notes for that release for more details. As a result, we recommend using v1.59.2 rather than v1.59.1 or v1.59.0.

🐛 Fixes
Fix transmitted header value for Datadog priority sampling resolution (PR #​6017)

The router now transmits correct values of x-datadog-sampling-priority to downstream services.

Previously, an x-datadog-sampling-priority of -1 was incorrectly converted to 0 for downstream requests, and 2 was incorrectly converted to 1. When propagating to downstream services, this resulted in values of USER_REJECT being incorrectly transmitted as AUTO_REJECT.

Enable accurate Datadog APM metrics (PR #​6017)

The router supports a new preview feature, the preview_datadog_agent_sampling option, to enable sending all spans to the Datadog Agent so APM metrics and views are accurate.

Previously, the sampler option in telemetry.exporters.tracing.common.sampler wasn't Datadog-aware. To get accurate Datadog APM metrics, all spans must be sent to the Datadog Agent with a psr or sampling.priority attribute set appropriately to record the sampling decision.

The preview_datadog_agent_sampling option enables accurate Datadog APM metrics. It should be used when exporting to the Datadog Agent, via OTLP or Datadog-native.

telemetry:
  exporters:
    tracing:
      common:
        # Only 10 percent of spans will be forwarded from the Datadog agent
        # to Datadog. Experiment to find a value that is good for you!
        sampler: 0.1
        # Send all spans to the Datadog agent.
        preview_datadog_agent_sampling: true

Using these options can decrease your Datadog bill, because you will be sending only a percentage of spans from the Datadog Agent to Datadog.

[!IMPORTANT]

  • Users must enable preview_datadog_agent_sampling to get accurate APM metrics. Users who have been using recent versions of the router will have to modify their configuration to retain full APM metrics.
  • The router doesn't support in-agent ingestion control.
  • Configuring traces_per_second in the Datadog Agent won't dynamically adjust the router's sampling rate to meet the target rate.
  • Sending all spans to the Datadog Agent may require that you tweak the batch_processor settings in your exporter config. This applies to both OTLP and Datadog native exporters.

Read the updated Datadog tracing documentation for more information on configuration options and their implications.
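As a sketch of the kind of tuning mentioned above, the batch processor can be adjusted on the exporter. The values below are illustrative assumptions, not recommendations; tune them for your own span volume:

```yaml
telemetry:
  exporters:
    tracing:
      datadog:
        enabled: true
        batch_processor:
          # Illustrative values only.
          max_queue_size: 4096
          max_export_batch_size: 512
          scheduled_delay: 5s
```

Larger queue and batch sizes reduce the chance of dropped spans when every span is forwarded to the agent, at the cost of additional memory.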

Fix non-parent sampling (PR #​6481)

When the user specifies a non-parent sampler, the router should ignore the information from upstream and use its own sampling rate.

The following configuration did not work correctly:

  telemetry:
    exporters:
      tracing:
        common:
          service_name: router
          sampler: 0.00001
          parent_based_sampler: false

All spans were being sampled regardless of the configured rate. This is now fixed: the router correctly ignores any upstream sampling decision when a non-parent-based sampler is configured.

By @​BrynCooke in #​6481

v1.59.0

Compare Source

[!IMPORTANT]
Router versions 1.53.0 through 1.59.0 have an issue where users of the Datadog exporter see all traces sampled at 100%. This is due to the Router incorrectly setting the priority sampled flag on spans 100% of the time, which causes all traces sent to the Datadog agent to be forwarded on to Datadog, potentially incurring costs.

Update to 1.59.1 to resolve this issue.
Datadog users may wish to enable preview_datadog_agent_sampling to enable accurate APM metrics.

[!IMPORTANT]

This release was impacted by a resource utilization regression which was fixed in v1.59.2. See the release notes for that release for more details. As a result, we recommend using v1.59.2 rather than v1.59.1 or v1.59.0.

[!IMPORTANT]
If you have enabled distributed query plan caching, updates to the query planner in this release will result in query plan caches being regenerated rather than reused. On account of this, you should anticipate additional cache regeneration cost when updating to this router version while the new query plans come into service.

🚀 Features
General availability of native query planner

The router's native, Rust-based, query planner is now generally available and enabled by default.

The native query planner achieves better performance for a variety of graphs. In our tests, we observe:

  • 10x median improvement in query planning time (observed via apollo.router.query_planning.plan.duration)
  • 2.9x improvement in router’s CPU utilization
  • 2.2x improvement in router’s memory usage

Note: you can expect generated plans and subgraph operations in the native query planner to differ slightly from those of the legacy, JavaScript-based query planner. We've ascertained these differences to be semantically insignificant, based on comparing ~2.5 million known unique user operations in GraphOS, as well as ~630 million operations across actual router deployments running in shadow mode over a four-month period.

The native query planner supports Federation v2 supergraphs. If you are using Federation v1 today, see our migration guide on how to update your composition build step. Subgraph changes are typically not needed.

The legacy, JavaScript query planner is deprecated in this release, but you can still switch back to it if you are still using a Federation v1 supergraph:

  experimental_query_planner_mode: legacy

Note: The subgraph operations generated by the query planner are not guaranteed to be consistent from release to release. We strongly recommend against relying on the shape of planned subgraph operations, as new router features and optimizations will continuously affect them.

By @​sachindshinde, @​goto-bus-stop, @​duckki, @​TylerBloom, @​SimonSapin, @​dariuszkuc, @​lrlna, @​clenfest, and @​o0Ignition0o.

Ability to skip persisted query list safelisting enforcement via plugin (PR #​6403)

If safelisting is enabled, a router_service plugin can skip enforcement of the safelist (including the require_id check) by adding the key apollo_persisted_queries::safelist::skip_enforcement with value true to the request context.

Note: this doesn't affect the logging of unknown operations by the persisted_queries.log_unknown option.

In cases where an operation would have been denied but is allowed due to the context key existing, the attribute persisted_queries.safelist.enforcement_skipped is set on the apollo.router.operations.persisted_queries metric with value true.

By @​glasser in #​6403
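As a hedged sketch of what such a customization could look like as a Rhai script (only the context key comes from the release note above; the rest, including the unconditional set, is illustrative):

```rhai
// Sketch: skip persisted-query safelist enforcement by setting the
// documented context key from a router_service Rhai customization.
fn router_service(service) {
    const request_callback = Fn("process_request");
    service.map_request(request_callback);
}

fn process_request(request) {
    // In a real deployment you would gate this on some trust signal
    // (e.g. a header or auth claim); here it is set unconditionally
    // for illustration.
    request.context["apollo_persisted_queries::safelist::skip_enforcement"] = true;
}
```

The same context key can be set from a native Rust plugin's router_service layer, per the release note.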


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.



This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot requested a review from a team as a code owner March 16, 2025 13:09
@renovate renovate bot changed the title chore(deps): update ghcr.io/apollographql/router docker tag to v1.61.0 chore(deps): update ghcr.io/apollographql/router docker tag to v1.61.0 - autoclosed Mar 19, 2025
@renovate renovate bot closed this Mar 19, 2025
@renovate renovate bot deleted the renovate/apollo-graphql-packages branch March 19, 2025 22:06
@renovate renovate bot changed the title chore(deps): update ghcr.io/apollographql/router docker tag to v1.61.0 - autoclosed chore(deps): update ghcr.io/apollographql/router docker tag to v1.61.0 Mar 20, 2025
@renovate renovate bot reopened this Mar 20, 2025
@renovate renovate bot force-pushed the renovate/apollo-graphql-packages branch from 654d818 to cb5dc37 Compare March 20, 2025 02:07