
Conversation

gperciva (Member)

No description provided.

Also, move "select which connection to use" earlier.

This does not change any program behaviour, but will be useful in the
following commit.
If --aggressive-network is used, this effectively multiplies
MAXPENDING_WRITEBYTES by almost 8.
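
A rough sketch of the reordering described above, assuming the per-connection bookkeeping this PR series is building towards: the connection is selected (by simply incrementing the connection number) before the pending-byte limit is checked, so with the roughly 8 connections implied for --aggressive-network the total cap approaches 8 * MAXPENDING_WRITEBYTES. Only MAXPENDING_WRITEBYTES and nbytespending are named in the PR; the struct layout, the 5M value, and everything else below are assumptions for illustration, not tarsnap's actual code.

#include <stddef.h>

#define MAXPENDING_WRITEBYTES	(5 * 1024 * 1024)	/* Assumed value (the "5M" cache mentioned below). */
#define NCONNS_AGGRESSIVE	8			/* Assumed connection count for --aggressive-network. */

struct upload_state {
	size_t nconns;		/* 1 normally, NCONNS_AGGRESSIVE with --aggressive-network. */
	size_t lastconn;	/* Connection used for the previous write. */
	size_t nbytespending[NCONNS_AGGRESSIVE];	/* Pending write bytes, per connection. */
};

/*
 * Select which connection to use *before* checking the pending-write
 * limit; since the limit is applied per connection, up to almost
 * 8 * MAXPENDING_WRITEBYTES can be queued in total.
 */
static int
can_queue_write(struct upload_state *S, size_t *conn_out)
{
	size_t conn;

	/* Select the next connection by simply incrementing the number. */
	conn = (S->lastconn + 1) % S->nconns;
	S->lastconn = conn;
	*conn_out = conn;

	/* Allow the write only if this connection's queue is not full. */
	return (S->nbytespending[conn] < MAXPENDING_WRITEBYTES);
}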

gperciva commented Mar 28, 2025

I tried a quick test with three random 50M files, and surprisingly this was about 1.5% faster. I was expecting no change, since the actual uploading takes much longer than chunkification, so it shouldn't matter whether we had a 5M or an almost-40M cache.

Upload times with master: 90.6, 91.05, 90.34 (mean 90.66).
Upload times with this branch: 89.40, 89.14, 89.08 (mean 89.21).

(If we were archiving directories that we'd previously archived -- say, making an hourly archive of a mail server -- then we should expect the larger cache to matter.)

The next PR will use the connection with the least nbytespending, rather than merely incrementing the connection number each time. (Update: nah, not worth it; I'll go straight to the thing we discussed.)
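
For reference, a minimal sketch of what that abandoned "least nbytespending" selection would have looked like; the function name and parameters are hypothetical, not tarsnap code:

#include <stddef.h>

/* Return the index of the connection with the fewest pending write bytes,
 * instead of merely incrementing the connection number each time. */
static size_t
pick_least_pending(const size_t *nbytespending, size_t nconns)
{
	size_t conn;
	size_t best = 0;

	for (conn = 1; conn < nconns; conn++)
		if (nbytespending[conn] < nbytespending[best])
			best = conn;

	return (best);
}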

case 0:
	/* This write operation is no longer pending. */
	if (C->S->nbytespending[C->conn] < C->flen) {
		/* ... error handling elided in this excerpt ... */
	}
	C->S->nbytespending[C->conn] -= C->flen;
gperciva (Member Author) commented on this diff:

This is probably a good idea?

I mean, other than random memory corruption, I can't think of why this might occur. Now, if we have random corruption, then this check could be useful... but likely a whole bunch of other things would break anyway. shrug
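
If the check is kept, one way it could fail loudly rather than silently underflowing the counter is sketched below; the function, its parameters, and the fprintf-based reporting are hypothetical, standing in for whatever error path the real code uses:

#include <stddef.h>
#include <stdio.h>

/* Hypothetical standalone version of the check above: refuse to let the
 * per-connection pending-byte counter underflow, and report the (in theory
 * impossible) inconsistency instead. */
static int
write_no_longer_pending(size_t *nbytespending, size_t flen)
{

	if (*nbytespending < flen) {
		fprintf(stderr, "pending byte count is inconsistent\n");
		return (-1);
	}
	*nbytespending -= flen;
	return (0);
}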

