Fix incoming connections during app shutdown causing delays #64099
base: main
Conversation
Pull Request Overview
Addresses a race during application shutdown where new SignalR connections could still be accepted after `ApplicationStopping` fires, causing delayed teardown and lingering stateful reconnect sessions.
- Adds shutdown-aware logic to `HttpConnectionManager` (tracking `_closed`, locking `CloseConnections`, and closing new connections immediately when the application is stopping).
- Adjusts WebSocket connection handling to defer creating the `CancellationTokenSource` until needed, and adds a negotiate-time error when the server is stopping.
- Expands test coverage for the new shutdown scenarios (negotiate after stop, stateful reconnect close behavior, creating a connection after stop).
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/SignalR/common/Http.Connections/test/HttpConnectionManagerTests.cs | Adds test ensuring new connections are immediately disposed after application stopping. |
| src/SignalR/common/Http.Connections/test/HttpConnectionDispatcherTests.cs | Adds tests for negotiate failure post-stop and stateful reconnect shutdown behavior; overload for passing lifetime into manager. |
| src/SignalR/common/Http.Connections/src/Internal/HttpConnectionManager.cs | Introduces application lifetime awareness, lock, and closed state logic to prevent accepting new connections during shutdown. |
| src/SignalR/common/Http.Connections/src/Internal/HttpConnectionDispatcher.cs | Adjusts cancellation token timing and adds negotiate error when a disposed connection is created during shutdown. |
Comments suppressed due to low confidence (1)
src/SignalR/common/Http.Connections/src/Internal/HttpConnectionManager.cs:1
- `Task.WaitAll` executes a potentially long-running synchronous wait while holding `_closeLock`, increasing contention and risking blocked threads if disposal needs thread-pool scheduling. Collect the tasks inside the lock, then release the lock before awaiting (e.g., copy tasks to a local list, exit the lock, and use `Task.WhenAll` with a timeout via `Task.WhenAny`), or refactor `CloseConnections` to be async to avoid blocking while holding the lock.
```csharp
// Licensed to the .NET Foundation under one or more agreements.
```
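A minimal sketch of the shape that suggestion describes, with stand-in field and connection types rather than the real `HttpConnectionManager` internals: collect the disposal tasks while holding the lock, then wait outside it with a bounded timeout.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative only: _connections, _closeLock, and _closed are stand-ins
// for the manager's real state, not the actual aspnetcore implementation.
internal sealed class CloseConnectionsSketch
{
    private readonly ConcurrentDictionary<string, IAsyncDisposable> _connections = new();
    private readonly object _closeLock = new();
    private bool _closed;

    public void CloseConnections(TimeSpan timeout)
    {
        var disposeTasks = new List<Task>();

        // Snapshot the work under the lock, but do not block here.
        lock (_closeLock)
        {
            _closed = true;
            foreach (var (_, connection) in _connections)
            {
                disposeTasks.Add(connection.DisposeAsync().AsTask());
            }
            _connections.Clear();
        }

        // Wait outside the lock so a slow disposal cannot hold _closeLock
        // and block other callers (or thread-pool work the disposal needs).
        Task.WhenAny(Task.WhenAll(disposeTasks), Task.Delay(timeout))
            .GetAwaiter().GetResult();
    }
}
```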
```csharp
private readonly IHostApplicationLifetime _applicationLifetime;
private readonly Lock _closeLock = new();
private bool _closed;
```
Copilot AI commented on Oct 17, 2025:
The field `_closed` reads like a verb; renaming it to `_isClosed` (and updating references) improves clarity by conveying a boolean state.
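For instance, the suggested rename (illustrative only):

```csharp
private bool _closed;   // current name: can be read as an action
private bool _isClosed; // suggested name: unambiguously a boolean state
```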
Looks like this PR hasn't been active for some time and the codebase could have been changed in the meantime.
Fixes #58947
There is a race between when the `IHostApplicationLifetime.ApplicationStopping` token fires and when `IServer.StopAsync` (which unbinds connection listeners) runs (ref). SignalR was listening to `ApplicationStopping` and closing all currently active connections; however, new incoming connections arriving right after `ApplicationStopping` fires are still accepted by the `IServer` implementation, and SignalR was processing them and letting them stay around until the `IServer` implementation ungracefully closed them (30-second default).

The first part of the fix was to add a lock around `CloseConnections` and then call `CloseConnections` when creating a new connection if we know that the application is stopping.
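Roughly, that first part has the shape of the following self-contained sketch; the connection type and member names are stand-ins, not the real `HttpConnectionManager` API:

```csharp
using System;
using System.Collections.Concurrent;
using Microsoft.Extensions.Hosting;

// Illustrative sketch of the shutdown-aware creation path; the connection
// type and member names are stand-ins for the real HttpConnectionManager.
internal sealed class ShutdownAwareConnectionManager
{
    private readonly IHostApplicationLifetime _applicationLifetime;
    private readonly ConcurrentDictionary<string, IDisposable> _connections = new();
    private readonly object _closeLock = new();
    private bool _closed;

    public ShutdownAwareConnectionManager(IHostApplicationLifetime applicationLifetime)
    {
        _applicationLifetime = applicationLifetime;
        _applicationLifetime.ApplicationStopping.Register(CloseConnections);
    }

    public void AddConnection(string id, IDisposable connection)
    {
        _connections.TryAdd(id, connection);

        // The IServer can still hand us new connections just after
        // ApplicationStopping fires; close them immediately instead of
        // letting them linger until the ungraceful shutdown timeout.
        if (_applicationLifetime.ApplicationStopping.IsCancellationRequested)
        {
            CloseConnections();
        }
    }

    private void CloseConnections()
    {
        // The lock makes the stopping-time close and the create-time close
        // safe to run concurrently.
        lock (_closeLock)
        {
            _closed = true;
            foreach (var id in _connections.Keys)
            {
                if (_connections.TryRemove(id, out var connection))
                {
                    connection.Dispose();
                }
            }
        }
    }
}
```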
While testing to make sure this fix worked, I noticed that connections with stateful reconnect enabled weren't shutting down quickly. This was because when we first created the connection we immediately ran `CancelPreviousPoll`, which would clear the cancellation token being used to stop the websocket read loop. The fix was to not run `CancelPreviousPoll` for the first connection, and to move the creation of the `CancellationToken` to after the previous connection is canceled.
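The reordering looks roughly like this sketch; `BeginPoll`, `isReconnect`, and `_currentCts` are hypothetical names standing in for the dispatcher's websocket handling, not the actual `HttpConnectionDispatcher` code:

```csharp
using System.Threading;

// Illustrative only: models the "cancel previous, then create" ordering.
internal sealed class PollTokenSketch
{
    private CancellationTokenSource? _currentCts;

    public CancellationToken BeginPoll(bool isReconnect)
    {
        // Skip cancellation for the very first connection: cancelling there
        // used to clear the token that stops the websocket read loop, which
        // kept stateful-reconnect connections alive during shutdown.
        if (isReconnect)
        {
            _currentCts?.Cancel();
            _currentCts?.Dispose();
        }

        // Create the token source only after the previous poll has been
        // canceled, so the new read loop keeps a token shutdown can cancel.
        _currentCts = new CancellationTokenSource();
        return _currentCts.Token;
    }
}
```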