Async multi packet fixes for 6.1.0 #3534
base: main
- fix 0 length read at the start of a packet in plp stream returning 0 when continuing
- handle char array sizing better
- change existing test to use multiple packet sizes
@ErikEJ this might reduce memory usage for string reads. It might be worth benching the artifacts if the CI runs green.
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
I've added an additional fix, which is the same as the 0-length-left-in-terminator case and which occurs on the varchar (not nvarchar) read path.
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
force process sni compatibility mode by default
@dotnet/sqlclientdevteam can I get a CI run on this, please? I've added a new commit which forces process SNI mode to compatibility mode (and, by extension, disables async-continue mode) and adds a fix for the pending read counter imbalance that we discussed and that @rhuijben has been helping to track down today. This is a possible stable current codebase state to evaluate.
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
I've aligned the appcontext switch test with the new defaults. Can I get another run please, @dotnet/sqlclientdevteam?
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
Asking for some clarity on `>> 1` versus `/ 2`.
```diff
@@ -13206,7 +13206,7 @@ bool writeDataSizeToSnapshot
     if (stateObj._longlen == 0)
     {
         Debug.Assert(stateObj._longlenleft == 0);
-        totalCharsRead = 0;
+        totalCharsRead = startOffsetByteCount >> 1;
```
Is this a division by 2 in disguise? Are you using a special property of right-bit-shift that divide-by-2 doesn't have? Something else?
If the former, please use `startOffsetByteCount / 2` for clarity. If either of the latter, please document why.
No magic. Just using the same idiom as the containing methods. I've changed it to use division instead of shift.
I've also changed the multiplexer test detection of compatibility to match the library which should skip the multiplexer tests correctly now.
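On the `>> 1` versus `/ 2` point above, a quick standalone sketch (not from the PR) of why the two forms are interchangeable for the non-negative byte counts involved here, and where they would actually differ:

```csharp
using System;

// For non-negative values the two forms agree, which is why either works
// for halving a byte count (byte counts are never negative).
int byteCount = 511;
Console.WriteLine(byteCount >> 1); // 255
Console.WriteLine(byteCount / 2);  // 255

// They only differ for negative odd values: >> rounds toward negative
// infinity, while / rounds toward zero.
int negative = -3;
Console.WriteLine(negative >> 1); // -2
Console.WriteLine(negative / 2);  // -1
```

Since the operand here is a length, the choice is purely stylistic, which is why switching to `/ 2` for readability is harmless.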
@Wraith2 when I run the testcase from the other issue against this branch I get a failure in DEBUG mode. In release mode the test passes.
Multitasking here; can you link me to the exact repro you're talking about?
The testcase is from the other issue. I'm currently trying to get things reproduced against a docker instance of SQL Server 2019 so we can look at the same thing (and maybe even test this on GitHub Actions, like I do in the RepoDB project).
This case rhuijben@1964bc1 fails for me on this docker setup. Too bad it is not the error I'm seeing myself, but it is still a valid testcase. Trying to extend this to include my case. It fails on the first (smallest) packet size of 512.
`Debug.Assert(TdsEnums.HEADER_LEN + Packet.GetDataLengthFromHeader(buffer) == read, "partially read packets cannot be appended to the snapshot");` fires with read=512 while 503+8 = 511, so there is a mismatch. Looks like the first byte of the next packet is already in the buffer here.
That assert will fire periodically when packet multiplexing is disabled, so we should add the context switch to the assertion; the behaviour might be correct. I saw something similar while looking at the multipart XML reads with a weird packet size. If the packet status does not include the last-packet bit and the required length is less than the total packet, then as long as the transferred data amount is the same as the buffer size it's technically correct, I think. I'm referring to these as padded packets. I hadn't seen them before two weeks ago, but the spec doesn't preclude them. When I saw them, the remaining data space in the packet buffer was filled with FF. This is part of the reason that I added the DumpPackets and DumpInBuff functions to my debug branch.
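To illustrate the mismatch being discussed, a minimal standalone sketch of the length check, assuming the standard TDS header layout (status in byte 1, big-endian length in bytes 2-3 that includes the 8-byte header); `GetPayloadLengthFromHeader` here is a hypothetical stand-in for `Packet.GetDataLengthFromHeader`:

```csharp
using System;

const int HeaderLen = 8;

// Hypothetical stand-in: the TDS length field at offset 2-3 is big-endian
// and includes the header, so the payload length is that value minus 8.
static int GetPayloadLengthFromHeader(byte[] header) =>
    ((header[2] << 8) | header[3]) - HeaderLen;

// A 512-byte network read whose header claims a 511-byte TDS packet: the
// final byte in the buffer already belongs to the next packet, so a strict
// equality check between "claimed" and "read" fires.
byte[] buffer = new byte[512];
buffer[2] = (byte)(511 >> 8);   // 0x01
buffer[3] = (byte)(511 & 0xFF); // 0xFF
int read = 512;
int claimed = HeaderLen + GetPayloadLengthFromHeader(buffer);
Console.WriteLine($"claimed={claimed}, read={read}"); // claimed=511, read=512
```

This matches the 503+8 = 511 versus read=512 numbers reported above.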
With packet size configured as 512 I see 511-byte packets (which fail these tests), but also one really large packet (>= 60 KB), so I'm not sure the debug assert does the right thing. It looks like the demultiplexer handles these cases just fine.

With this packet code you also always have to handle short reads caused by network security and TCP packets. There are standard proxies for that last case, so you can always get small (or large) reads from the network layer. The .NET Core project uses fuzzing with that to catch HTTP errors, as do a lot of other libraries.

These asserts look like they are on the wrong layer: from the network you can receive much smaller or larger chunks than the TDS packets. Smaller when processing really fast, and larger when the network has already delivered more data than a single packet, which can also happen on slow networks when one packet is lost and re-delivered while others are already queued.
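A minimal standalone sketch of reassembly at the layer suggested above: whole TDS packets rebuilt from a byte stream that may deliver any number of bytes per read. The header layout is assumed as in standard TDS framing (big-endian length at offset 2 that includes the 8-byte header), and the helper name is hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

const int HeaderLen = 8;

// Accumulate arbitrary-sized reads and emit only complete TDS packets.
static IEnumerable<byte[]> ReadPackets(Stream stream)
{
    var pending = new List<byte>();
    var chunk = new byte[16]; // deliberately tiny to force short reads
    int n;
    while ((n = stream.Read(chunk, 0, chunk.Length)) > 0)
    {
        for (int i = 0; i < n; i++) pending.Add(chunk[i]);
        // Emit every complete packet currently buffered.
        while (pending.Count >= HeaderLen)
        {
            int total = (pending[2] << 8) | pending[3]; // includes header
            if (pending.Count < total) break;           // wait for more bytes
            byte[] packet = pending.GetRange(0, total).ToArray();
            pending.RemoveRange(0, total);
            yield return packet;
        }
    }
}

// Two 24-byte packets delivered through 16-byte reads still come out whole.
byte[] wire = new byte[48];
wire[2] = 0; wire[3] = 24;   // packet 1: total length 24
wire[26] = 0; wire[27] = 24; // packet 2: total length 24
int count = 0;
foreach (byte[] p in ReadPackets(new MemoryStream(wire)))
{
    count++;
    Console.WriteLine($"packet {count}: {p.Length} bytes");
}
```

With buffering done here, asserts about whole packets can live above this layer without caring how the bytes arrived.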
I've pushed a bunch of new fixes. Can I get a CI run @dotnet/sqlclientdevteam, and if that builds, some testing by brave people who have reproduced known issues please?
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
Codecov Report ❌ Patch coverage is

```
@@            Coverage Diff             @@
##             main    #3534      +/-  ##
==========================================
- Coverage   69.14%   63.55%    -5.59%
==========================================
  Files         276      268        -8
  Lines       62414    62154      -260
==========================================
- Hits        43154    39504     -3650
- Misses      19260    22650     +3390
==========================================
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
All green apart from a timeout. @rhuijben if you get the opportunity, could you try this branch or the PR artifacts and see if you can repro any problems?
I'm away from my PC for a few days. Will follow up when I get back.

I will test the artifact with my repro in the coming week.
@Wraith2 My repro code using the SQL 2019 VM that I gave you access to with build
Odd. I can't connect to your server at all; could you check and see if the IP address changed? To confirm, are these the settings you're seeing errors with?
@Wraith2 I have just turned it back on 😄
Ok. Well, I've run this branch against my server in Sweden, which is similar to yours, for a while with no replication. I'm now running directly against yours again and I'll let it go for a while, but I'm not seeing any problems occurring.
@Wraith2 This is my code, and the issue repros right away!

```csharp
using Microsoft.Data.SqlClient;

AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseManagedNetworkingOnWindows", false);
AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityAsyncBehaviour", false);
AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", false);

var connectionString = "Data Source=20.x.x.x,1433;Initial Catalog=TestDB;User Id=wraith2;Password=x;Encrypt=False;Trust Server Certificate=False;Command Timeout=30";
var dbTester = new DatabaseTester(connectionString);

var connectionsStringBuilder = new SqlConnectionStringBuilder(connectionString);
connectionsStringBuilder.Encrypt = true;
connectionsStringBuilder.TrustServerCertificate = true;
var dataSetTester = new DatabaseTester(connectionsStringBuilder.ConnectionString);
var originalRecords = await dataSetTester.GetAndCompareDataAsync(null);

while (true)
{
    Console.WriteLine("RUNNING");
    await dbTester.GetAndCompareDataAsync(originalRecords);
}
```
@Wraith2 If I change the above to this, the issue goes away
I still can't replicate. I'm a few commits ahead locally, though, so it's possible I've fixed it. I didn't push over the weekend so that the artifact would be available for people to test, but since we're seeing a problem I've pushed now. @dotnet/sqlclientdevteam can you run CI please?
/azp run

Azure Pipelines successfully started running 2 pipeline(s).
New artifacts are here: https://sqlclientdrivers.visualstudio.com/904996cc-6198-4d39-8540-eca72bdf0b7b/_apis/build/builds/123164/artifacts?artifactName=Artifacts&api-version=7.1&%24format=zip. If you could try them, please.
@Wraith2 When using
That's a relief, thanks. Edit: though reviewing the commits since that build, I'm suspicious, because none of them went near XML. I suspect the problem was around continue mode with XML, where I know there is a bug (I have a fix on my dev branch), so adding in the RequestContinue functionality is causing it to take the working vs broken path.
I've been running my own test cases against the branch prior to the most recent changes, the results are available here. These covered most of the combinations I can think of:
I wasn't able to reproduce any exceptions which were unique to this PR; I'll kick them off against the latest version later this evening.
I am a little confused about the current state of the switches - does "all at default" enable all the new async multi packet features?
```diff
 if (AppContext.TryGetSwitch(UseCompatibilityProcessSniString, out bool returnedValue) && !returnedValue)
 {
-    s_useCompatibilityProcessSni = Tristate.True;
+    s_useCompatibilityProcessSni = Tristate.False;
 }
 else
 {
-    s_useCompatibilityProcessSni = Tristate.False;
+    s_useCompatibilityProcessSni = Tristate.True;
 }
```
@ErikEJ in this PR the default is changed so that UseCompatibilityProcessSni is true by default. That means that the new async behaviour is off by default.
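A standalone sketch of that default (the switch name is the one used in this PR; the wrapper function is hypothetical): when the switch is unset the library stays in compatibility mode, and only an explicit `false` opts in to the new behaviour.

```csharp
using System;

// Hypothetical wrapper mirroring the PR's default logic: compatibility
// mode is on unless the app explicitly sets the switch to false.
static bool UseCompatibilityProcessSni()
{
    const string name = "Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni";
    if (AppContext.TryGetSwitch(name, out bool value) && !value)
    {
        return false; // explicitly opted in to the new behaviour
    }
    return true;      // unset, or explicitly true: compatibility mode
}

Console.WriteLine(UseCompatibilityProcessSni()); // True when unset

AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", false);
Console.WriteLine(UseCompatibilityProcessSni()); // False after opting in
```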
Thanks, so in a repro context, we should use:

```csharp
AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityAsyncBehaviour", false);
AppContext.SetSwitch("Switch.Microsoft.Data.SqlClient.UseCompatibilityProcessSni", false);
```
For public consumption the settings you give are the defaults.
If you can find an issue I want to know about it so I can fix it, regardless of the settings, but you will need to tell me what the settings are so I can try to repro.
Got it. No issues found with latest build.
Fixes #3519
Fix 1:
When reading multi-packet strings it is possible for multiple such strings to occur in a single row. When reading asynchronously a snapshot is used, which contains a linked list of packets. The current codebase keeps a cleared spare linked list node when the snapshot is cleared. The logic to clear the spare packet was faulty and did not clear all the fields, leaving the data length in the node. In specific circumstances the spare linked list node containing an old data value can be re-used as the first packet in a new linked list of packets. When this happens in a read which reaches the continue stage (3 or more packets) the size calculation is incorrect and various errors can occur.
The spare packet functionality is not very useful because it can store only a single node. It doesn't retain the byte[] buffer, so the memory saving is tiny. I have removed it and changed the linked list node fields to be readonly. This resolves the bug.
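A minimal standalone sketch of the Fix 1 failure mode, using hypothetical names (not the PR's actual types): a pooled node whose return path forgets to reset one field leaks stale state into the next snapshot. Making the fields set-once removes this whole class of bug.

```csharp
using System;

PacketNode spare = null;

PacketNode Rent(byte[] buffer)
{
    PacketNode node = spare ?? new PacketNode();
    spare = null;
    node.Buffer = buffer;
    // DataLength is assumed to be zero on a fresh node, but Return below
    // never resets it, so a recycled node carries the previous value.
    return node;
}

void Return(PacketNode node)
{
    node.Buffer = null;
    node.Next = null;
    // BUG (the Fix 1 shape): DataLength is not cleared here.
    spare = node;
}

PacketNode first = Rent(new byte[512]);
first.DataLength = 503;
Return(first);

PacketNode second = Rent(new byte[512]);
Console.WriteLine(second.DataLength); // 503: stale length leaked into reuse

sealed class PacketNode
{
    public byte[] Buffer;
    public int DataLength;
    public PacketNode Next;
}
```

With readonly (construction-only) fields, `Return` simply could not leave a stale `DataLength` behind, which is the direction the PR takes.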
Fix 2:
When reading a multi-packet string the plp chunks are read from each packet and the end is signalled by a terminator. It is possible for the data to align such that the contents of a string complete exactly at the end of a packet and the terminator is in the next packet. In this case some pre-existing logic checks for 0 chars remaining and exits early.
This logic needed to be updated so that when continuing it returns the entire existing length read and not a 0 value.
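A sketch of the Fix 2 shape with hypothetical names: when the terminator is consumed with nothing left to read, the early-exit path must report the chars already accumulated before the continue rather than 0.

```csharp
using System;

// Hypothetical simplification of the early-exit path: longLen == 0 means
// the PLP terminator has been consumed and nothing remains in the stream.
static int ReadPlpChars(ulong longLen, int startOffsetByteCount)
{
    if (longLen == 0)
    {
        // Old code returned 0 here, discarding everything read before the
        // continue. Fixed: report the chars already in the buffer
        // (byte count halved, since each UTF-16 char is 2 bytes).
        return startOffsetByteCount / 2;
    }
    // ... normal chunked read path elided ...
    return -1;
}

// 4096 bytes already read before hitting the terminator => 2048 chars.
Console.WriteLine(ReadPlpChars(0, 4096)); // 2048
```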
Fix 3:
While debugging the first two issues the buffer sizes and calculations were confusing me. I eventually realised that the code was directly using `_longlenleft`, which is measured in bytes, to size a char array, meaning that all char arrays were twice as long as needed. I have updated the code to handle that and use smaller, appropriately sized arrays.
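A sketch of the Fix 3 sizing, assuming UTF-16 PLP data: `_longlenleft` counts bytes, and each char is 2 bytes, so sizing a char array with the raw byte count doubles the allocation.

```csharp
using System;

// Remaining PLP data measured in BYTES (as _longlenleft is).
ulong longLenLeftBytes = 8192;

// Old sizing: byte count used directly as a char count => 16 KB of chars.
char[] oversized = new char[longLenLeftBytes];

// Fixed sizing: 2 bytes per UTF-16 char => 8 KB, exactly what is needed.
char[] rightSized = new char[longLenLeftBytes / 2];

Console.WriteLine(oversized.Length);  // 8192
Console.WriteLine(rightSized.Length); // 4096
```

This is also why the earlier `startOffsetByteCount >> 1` (now `/ 2`) conversion appears wherever a byte count crosses into char-count territory.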
I have updated the existing test to iterate packet sizes from 512 (the minimum) to 2048 bytes. This causes lots of interesting alignments in the data, testing the paths through the string-reading code more effectively. The range could be increased, but the runtime needs to be low enough not to time out CI runs, and most higher packet sizes will behave similarly to lower-sized runs due to factoring.
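A small sketch (hypothetical numbers, 8-byte header assumed, stepping by 512 for brevity where the test sweeps every size) of why varying the packet size shakes out alignment bugs: for a fixed payload, each packet size leaves a different tail in the final packet, so terminators and chunk boundaries land in many distinct positions.

```csharp
using System;

const int HeaderLen = 8;   // assumed TDS header size
int payloadBytes = 10_000; // hypothetical fixed payload

for (int packetSize = 512; packetSize <= 2048; packetSize += 512)
{
    int perPacket = packetSize - HeaderLen;       // payload bytes per packet
    int fullPackets = payloadBytes / perPacket;   // whole packets needed
    int tail = payloadBytes % perPacket;          // bytes left in the last one
    Console.WriteLine($"packetSize={packetSize}: {fullPackets} full packets, {tail} bytes in the last");
}
```

A tail of 0 is the interesting boundary case from Fix 2, where the terminator slides into a packet of its own.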
Thanks to @erenes and @Suchiman for their help finding the reproduction that worked on my machine; without that I would have been unable to fix anything.
@dotnet/sqlclientdevteam can I get a CI run please.
/cc @Jakimar