@trnila commented Mar 14, 2017

When the system has a large limit on open file descriptors, the createChild function becomes slow because it tries to close every descriptor up to that limit. On my system this currently takes about 1-2 seconds.

If the system has /proc/self/fd or /dev/fd, we can iterate only over the currently open descriptors and close those. Otherwise we can fall back to the current solution, which calls close() on every possible descriptor.
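A minimal sketch of the idea (not the actual diff from this PR; keeping descriptors 0-2 open and using RLIMIT_NOFILE for the fallback are assumptions about what createChild currently does). The review comment below is attached to the first lines of the new helper in the diff.

import os
import resource

def _closeFds(ignore_fds):
    # Prefer enumerating only the descriptors that are actually open.
    for path in ('/proc/self/fd', '/dev/fd'):
        if os.path.isdir(path):
            for name in os.listdir(path):
                fd = int(name)
                if fd > 2 and fd not in ignore_fds:
                    try:
                        os.close(fd)
                    except OSError:
                        pass  # e.g. the fd used by listdir() itself
            return
    # Fallback: the current behaviour of closing every possible descriptor.
    max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
    for fd in range(3, max_fd):
        if fd not in ignore_fds:
            try:
                os.close(fd)
            except OSError:
                pass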



def _closeFds(ignore_fds):
    for path in ['/proc/self/fd', '/dev/fd']:

Owner commented:
/dev/fd is a symlink to /proc/self/fd. Is it really useful to test it?

Author replied:

/dev/fd is a symlink only on Linux; other systems such as BSD and macOS provide /dev/fd directly.
On FreeBSD, until fdescfs is mounted, it contains only the static descriptors 0, 1 and 2, so an additional check will be needed there, otherwise no descriptors would be closed.
I will look at Python's subprocess.
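One possible shape for such a FreeBSD check, purely as a sketch (not code from this PR): when fdescfs is mounted on /dev/fd it is its own filesystem, so its device number differs from that of /dev.

import os

def _dev_fd_is_usable():
    # FreeBSD: /dev/fd reflects all open fds only when fdescfs is mounted
    # there; in that case /dev/fd sits on a different filesystem than /dev.
    try:
        return os.stat('/dev/fd').st_dev != os.stat('/dev').st_dev
    except OSError:
        return False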


@vstinner (Owner) commented:

Ah yes, Python 3 has similar code in subprocess. My http://www.python.org/dev/peps/pep-0446/ might make it possible to avoid closing all FDs entirely, but I didn't try.
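For reference, PEP 446 (Python 3.4+) makes descriptors created by the interpreter non-inheritable by default, so they do not leak into an exec'ed child in the first place; a small illustration of that API (not something this PR changes):

import os

r, w = os.pipe()               # non-inheritable by default since Python 3.4
print(os.get_inheritable(r))   # False
os.set_inheritable(r, True)    # opt in explicitly if the child must keep it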

@vstinner (Owner) commented:

I don't maintain this project anymore; I'm looking for a new maintainer.

Base automatically changed from master to main March 17, 2021 20:14