Conversation

@kolyshkin (Contributor) commented on Jul 16, 2025:

Requires (and currently includes) PR #4822; draft until that one is merged.

It makes sense to make runc exec benefit from clone3(CLONE_INTO_CGROUP), when available. Since it requires a recent kernel and might not work, implement a fallback.

Based on:
  • https://go-review.googlesource.com/c/go/+/417695
  • coreos/go-systemd#458
  • opencontainers/cgroups#26
  • #4822

Regarding the E2BIG check in shouldRetryWithoutCgroupFD: the clone3 syscall
first appeared in kernel v5.3 via commit torvalds/linux@7f192e3, which added
a check that returns E2BIG if the clone_args structure passed from userspace
is larger than the kernel knows about and the "unknown" tail contains
non-zero values. A similar check was already used in other similar scenarios
at the time, and later, in kernel v5.4, this was generalized by
patch series https://lore.kernel.org/all/[email protected]/#r
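
For illustration, a minimal sketch (in Go) of the kind of fallback check described above; the package and function names and the exact errno set are illustrative, not necessarily the final code in this PR:

package execfallback

import (
	"errors"

	"golang.org/x/sys/unix"
)

// shouldRetryWithoutCgroupFD reports whether a failed start with
// clone3(CLONE_INTO_CGROUP) should be retried without a cgroup fd
// (i.e. fork first, then write the child's pid to cgroup.procs).
func shouldRetryWithoutCgroupFD(err error) bool {
	switch {
	// No clone3 syscall at all (kernels < v5.3).
	case errors.Is(err, unix.ENOSYS):
		return true
	// clone3 exists, but the cgroup-related tail of clone_args is unknown
	// to the kernel and non-zero, so it is rejected with E2BIG.
	case errors.Is(err, unix.E2BIG):
		return true
	}
	return false
}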

Closes: #4782.

@kolyshkin force-pushed the exec-clone-into-cgroup branch from aa873c8 to 115aa1f on July 16, 2025 02:27

@kolyshkin force-pushed the exec-clone-into-cgroup branch 4 times, most recently from 467d16c to dfcf22a on July 16, 2025 07:47
@lifubang (Member) commented:

> I need some time to digest this. Any feedback is welcome.

I notice that all the failures occurred in rootless container tests. This might be related to:

// On cgroup v2 + nesting + domain controllers, WriteCgroupProc may fail with EBUSY.

However, you mentioned we're seeing an ENOENT error here, so that may not be the cause.

@kolyshkin force-pushed the exec-clone-into-cgroup branch from dfcf22a to 6095b61 on July 16, 2025 22:46
@cyphar (Member) commented on Jul 17, 2025:

@kolyshkin Wait, I thought we always communicated with systemd when using cgroup2 -- systemd is very happy to mess with our cgroups (including clearing limits and various other quite dangerous behaviour) if we don't tell it that we are managing the cgroup with Delegate=yes. Maybe this has changed over the years, but I'm fairly certain the initial implementations of this stuff all communicated something with systemd regardless of the cgroup driver used.

Is this just for our testing, or are users actually using this? Because we will need to fix that if we have users on systemd-based systems using cgroups directly without transient units...

@kolyshkin (Contributor, author) commented:

> @kolyshkin Wait, I thought we always communicated with systemd when using cgroup2 -- systemd is very happy to mess with our cgroups (including clearing limits and various other quite dangerous behaviour) if we don't tell it that we are managing the cgroup with Delegate=yes. Maybe this has changed over the years, but I'm fairly certain the initial implementations of this stuff all communicated something with systemd regardless of the cgroup driver used.
>
> Is this just for our testing, or are users actually using this? Because we will need to fix that if we have users on systemd-based systems using cgroups directly without transient units...

When you use runc directly, unless --systemd-cgroup is explicitly specified, the fs/fs2 driver is used and runc does not communicate with systemd in any way. Which might be just fine, if systemd is configured not to touch a specific cgroup path (and everything under it), and runc creates cgroups under that path. Having said that, runc with the fs/fs2 driver neither configures such a thing, nor checks whether it is configured.

I'm pretty sure it has been that way from the very beginning.

One other thing: when using systemd, we configure everything via systemd and then use the fs/fs2 driver to write to the cgroup directly. This is also how things have always been. One reason for that is we did not care much about translating the OCI spec into systemd settings, which is now mostly fixed. Another reason is that systemd doesn't support all the per-cgroup settings the kernel has (so some of those can't be expressed as systemd unit properties).

@kolyshkin (Contributor, author) commented:

> > I need some time to digest this. Any feedback is welcome.
>
> I notice that all the failures occurred in rootless container tests. This might be related to:
>
> // On cgroup v2 + nesting + domain controllers, WriteCgroupProc may fail with EBUSY.
>
> However, you mentioned we're seeing an ENOENT error here, so that may not be the cause.

The thing is, while the comment says "EBUSY", the actual code doesn't check for a particular error; it falls back on any error (including ENOENT).

My guess is, with the systemd driver we actually need an AddPid cgroup driver method to add a pid (such as an exec pid) to a pre-created cgroup (as opposed to Apply, which creates the cgroup). I'm working on adding it.
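
Roughly, the distinction being described looks like this (a trimmed-down sketch; AddPid is hypothetical and not an existing method, and the real Manager interface has many more methods):

package cgroups // illustrative package name

// Manager is a trimmed-down sketch of the cgroup manager interface.
type Manager interface {
	// Apply creates the cgroup (if needed) and places pid into it.
	Apply(pid int) error
	// AddPid would place pid into an already-created cgroup
	// (the hypothetical method described above).
	AddPid(pid int) error
}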

@kolyshkin force-pushed the exec-clone-into-cgroup branch 6 times, most recently from 6e3bf36 to 9489925 on July 29, 2025 01:04
@kolyshkin (Contributor, author) commented:

Apparently, we are also not placing rootless container execs into the proper cgroup (which is still possible when using the cgroup v2 systemd driver, but we'd need to use AttachProcessesToUnit). As a result, container init and exec run in different cgroups. This could be a problem because rootless + cgroup v2 + systemd driver can still set resource limits, and exec is running without those.

Tackling it in #4822.

@cyphar (Member) commented on Aug 15, 2025:

I'm aware of the mixed fs/systemd setup, my confusion was more that systemd very strictly expects to be told about stuff in cgroupv2 because cgroupv2 was designed around a global management process -- it has been a long time since I reviewed the first cgroupv2 patches for runc, but I remember that being a thing I was worried about at the time.

The exec issue seems bad too...

@kolyshkin force-pushed the exec-clone-into-cgroup branch from 9489925 to f96e179 on September 8, 2025 19:45
@kolyshkin added this to the 1.4.0-rc.2 milestone on Sep 16, 2025
@kolyshkin force-pushed the exec-clone-into-cgroup branch 4 times, most recently from b56a52b to 1652210 on September 17, 2025 00:01
@kolyshkin (Contributor, author) commented:

OK I did some debugging and have very bad news to share.

Apparently GHA moves the process we create (container's init) to a different cgroup. Here's an excerpt from debug logs (using fs2 cgroup driver):

runc run -d --console-socket /tmp/bats-run-X8QSrN/runc.7IFV0b/tty/sock test_busybox (status=0)
time="2025-07-16T02:31:13Z" level=info msg="XXX container init cgroup /sys/fs/cgroup/system.slice/test_busybox"

Here ^^^ runc created a container and put its init into the /system.slice/test_busybox cgroup.

runc exec test_busybox stat /tmp/mount-1/foo.txt /tmp/mount-2/foo.txt (status=255)
XXX container test_busybox init cgroup: /system.slice/hosted-compute-agent.service (present)

Here ^^^ the same container init is unexpectedly in the /system.slice/hosted-compute-agent.service cgroup.

time="2025-07-16T02:31:13Z" level=error msg="exec failed: unable to start container process: can't open cgroup: open /sys/fs/cgroup/system.slice/test_busybox: no such file or directory"

And here ^^^ runc exec failed because container's cgroup no longer exists.

Maybe this is what systemd does? But it doesn't do that on my machine.

I need some time to digest this. Any feedback is welcome.

Guess what, this is no longer happening. Based on the cgroup name (hosted-compute-agent.service), I suspect it was caused by a bug in the Azure (or, more specifically, GHA CI) infrastructure software.


@kolyshkin force-pushed the exec-clone-into-cgroup branch from 58ed84c to 99977d2 on September 18, 2025 02:01
@kolyshkin requested a review from rata on September 18, 2025 02:02
@rata (Member) left a comment:

I'm unsure if retrying only when specific errors are returned is enough to not screw it up in older kernels.

Also, this will mean in old kernels we will always fail the first time calling clone. I guess it's fine?

@kolyshkin force-pushed the exec-clone-into-cgroup branch 2 times, most recently from 1b6d405 to d1d3712 on September 18, 2025 18:28
@kolyshkin requested a review from rata on September 18, 2025 18:29
@rata (Member) left a comment:

LGTM, thanks! One nit about a debug line, but worst case we can add it later.

// Rootless has no direct access to cgroup.
return true
}

A Member commented:

I fear we might not retry in a case where we should, and then the container will never start for the user, which would be a big regression. Can we log the error at debug level before returning false?

So, in case that happens, we can ask the user to run with debug, check the error they get, and just handle that case?

@kolyshkin (PR author) commented:

I just rewrote the code to always emit the warning, and realized that if we don't retry without the cgroup fd, the original error from syscall.StartProcess is returned to the caller (and then logged by runc).

I'm still keeping the new version; let me know if you like it better.

@kolyshkin (PR author) commented:

To reiterate, with the previous code, the following scenarios are possible:

  1. Start with CLONE_INTO_CGROUP succeeds. Everything's fine.
  2. Start with CLONE_INTO_CGROUP fails, and we retry without it. In this case we're not very interested in the particular error, but it is still printed (in debug logs);
  3. Start with CLONE_INTO_CGROUP fails, and we don't retry. In this case, the error from clone3 with CLONE_INTO_CGROUP (or any other error from os.StartProcess) is returned and runc prints it at the error log level:

if err := parent.start(); err != nil {
return fmt.Errorf("unable to start container process: %w", err)

In other words, we either retry without the cgroup fd, or print the cgroupfd error loud and clear, so the scenario you're concerned about is not possible.

With the new version (just pushed), runc produces the following output:

  1. When CLONE_INTO_CGROUP is used:
     logrus.Debugf("using CLONE_INTO_CGROUP %q", cgroup)
  2. When CLONE_INTO_CGROUP fails:
     logrus.Debugf("exec with CLONE_INTO_CGROUP failed: %v", err)
  3. When exec fails:
     return fmt.Errorf("unable to start container process: %w", err)

The difference from the old version is that (2) is now printed for any error, and the same error may also be printed by (3) if we don't retry without the cgroup fd.

@rata (Member) commented on Sep 24, 2025:

@kolyshkin thanks! Now with the debug log I feel more comfortable.

My concern is possible, though. Maybe I didn't express myself correctly.

We only retry without the cgroupfd if this function returns true. That is based on the manpage and the possible errors that are returned. It wouldn't be the first time a manpage is not up to date (e.g. CLONE_INTO_CGROUP can fail with some other error too), nor the first time more errors are returned than we think possible.

If this fails with some error other than the ones handled here, and it would have succeeded when retrying without the cgroup fd, then we have a bug in that case (because we won't return true, as that error isn't handled).

This is the case I worry about.

@rata (Member) commented on Sep 26, 2025:

@kolyshkin what if we just always retry on failure?

As others have pointed out, there were more errors we needed to handle. If we always retry on failure without the cgroupfd, then it will:

  • Either work on retry, which is great
  • Or not work, in which case we just did one more try in the failing case (I wouldn't care that much about the failure to execute the container)

Then for sure we cannot screw it up if the kernel in the future starts returning some other error, or some Red Hat backport changes the error returned, or whatever.

What do you think?
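
For concreteness, the unconditional-retry variant could look roughly like this (a sketch with hypothetical helpers, not code from this PR; newCmd builds a fresh *exec.Cmd for each attempt, since a failed Start leaves the previous one unusable):

package execfallback

import (
	"os/exec"
	"syscall"

	"github.com/sirupsen/logrus"
)

// startInCgroup tries clone3(CLONE_INTO_CGROUP) via the Go runtime first and,
// on any error, falls back to a plain start; the caller is then expected to
// write the child's pid to cgroup.procs.
func startInCgroup(newCmd func() *exec.Cmd, cgroupFD int) (*exec.Cmd, error) {
	cmd := newCmd()
	cmd.SysProcAttr = &syscall.SysProcAttr{UseCgroupFD: true, CgroupFD: cgroupFD}
	err := cmd.Start()
	if err == nil {
		return cmd, nil
	}
	logrus.Debugf("start with CLONE_INTO_CGROUP failed, retrying without it: %v", err)
	cmd = newCmd()
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}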

@kolyshkin force-pushed the exec-clone-into-cgroup branch from d1d3712 to 8175772 on September 24, 2025 00:14
@kolyshkin (Contributor, author) commented:

@cyphar @lifubang @AkihiroSuda PTAL

@kolyshkin requested a review from lifubang on September 24, 2025 00:34
logrus.Debugf("exec with CLONE_INTO_CGROUP failed: %v", err)

switch {
// Either clone3(CLONE_INTO_CGROUP) is not supported (ENOSYS),
A Member commented:

I haven't had a chance to test this myself, but could you confirm whether it returns ENOSYS or EINVAL?

A Member commented:

AFAIK, ENOSYS means the kernel doesn't implement the clone3 syscall.

A Member commented:

So, maybe we should fall back on EINVAL as well?

@cyphar (Member) commented on Sep 24, 2025:

You actually want to check for E2BIG, which is the error you'll get from an extensible struct syscall when you try to use an unsupported field. But including EINVAL wouldn't hurt either (you would get that in the very unlikely scenario that the cgroup fd is 0 on a pre-CLONE_INTO_CGROUP kernel).

(I also find the ErrUnsupported handling by syscall.Errno too magical and would prefer an explicit list of errnos, but I'm probably a minority opinion there).

@kolyshkin (PR author) commented:

> AFAIK, ENOSYS means the kernel doesn't implement the clone3 syscall.

You are right, my comment was not entirely correct; fixed.

> So, maybe we should fall back on EINVAL as well?

Yes, CLONE_INTO_CGROUP support was introduced in commit ef2c41cf38a7 (Feb 5, 2020), which is half a year later than clone3 itself (commit 7f192e3cd316, May 25, 2019). Since the Go stdlib always uses the latest version of the clone_args struct, the kernel will indeed return EINVAL.

Fixed, thanks!
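
For context, this is roughly how the cgroup fd reaches clone3 through the Go runtime (Go 1.20+ on Linux); a hedged sketch, not runc's actual code:

package execfallback

import (
	"os"
	"os/exec"
	"syscall"

	"golang.org/x/sys/unix"
)

// startViaCgroupFD opens the target cgroup directory and asks the Go runtime
// to start the child via clone3(CLONE_INTO_CGROUP), so the process is created
// directly inside that cgroup. The path handling is illustrative only.
func startViaCgroupFD(cmd *exec.Cmd, cgroupPath string) error {
	dir, err := os.OpenFile(cgroupPath, unix.O_PATH|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
	if err != nil {
		return err
	}
	// The fd only needs to stay open until Start returns.
	defer dir.Close()
	cmd.SysProcAttr = &syscall.SysProcAttr{
		UseCgroupFD: true,
		CgroupFD:    int(dir.Fd()),
	}
	return cmd.Start()
}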

A Member commented:

@kolyshkin You need to check for E2BIG -- the error path in copy_struct_from_user necessarily comes before the flag checking code. But having both is better.

@kolyshkin (PR author) commented:

@cyphar yep, if the "extra" fields are not zero, it returns E2BIG.

In kernel v5.3: https://github.com/torvalds/linux/blame/v5.3/kernel/fork.c#L2546-L2559

In kernels v5.4 to v5.7 (i.e. after commit torvalds/linux@f14c234), the same check is done in copy_struct_from_user.

So, do we need to check for EINVAL? Probably not.

@kolyshkin (PR author) commented:

Updated; PTAL @cyphar @lifubang

@kolyshkin force-pushed the exec-clone-into-cgroup branch 2 times, most recently from ffdcf6e to 9b80e9f on September 24, 2025 19:30
// No clone3 syscall (kernels < v5.3).
case errors.Is(err, unix.ENOSYS):
return true
// No CLONE_INTO_CGROUP flag support (kernels v5.3 to v5.7).
A Member commented:

To clarify why clone3 returns E2BIG rather than EINVAL on kernels that don't support CLONE_INTO_CGROUP (I know this is because the kernel checks the oversized structure first), it might be helpful to include a link to the relevant kernel source (kernel/fork.c#L2525-L2536). This provides direct context for readers who are puzzled by this specific error code choice.

@cyphar (Member) commented on Sep 25, 2025:

torvalds/linux@f5a1a53 or https://github.com/torvalds/linux/blob/v5.4/include/linux/uaccess.h#L237 are better links -- the error is coming from copy_struct_from_user. The bit you linked to is for something else.

(This is a generic pattern for all extensible-struct syscalls by the way. bpf(2), openat2(2), clone3(2), and a few other interfaces have the same behaviour. I gave a talk about this in 2020.)

@kolyshkin (PR author) commented:

Yes, I looked it all up when checking whether E2BIG is the correct error to check for, and found out this actually depends on the kernel version.

From my memory (might be wrong here):

  • in kernel v5.3, clone3 is already available but there is no copy_struct_from_user (yet clone3 returns E2BIG if the clone_args is larger than expected and the tail has non-zero values).
  • from v5.4 to v5.7, the above pattern is generalized in copy_struct_from_user by torvalds/linux@f5a1a53

I don't think a code reader needs all those details and references, but I've included a short explanation in the commit message so it can be found via git blame.

@kolyshkin (PR author) commented:

Added the same text to PR description.

@kolyshkin (PR author) commented:

(Not directly related to all this, but I remember, back when looking into the in-kernel checkpoint-restore code (which was part of the OpenVZ/Virtuozzo kernel, later reimplemented mostly in userspace, and thus CRIU was born), seeing that ANK (who wrote most of the in-kernel C/R single-handedly) was using structures with zero padding at the end. Not sure if he checked that the padding was actually zero. Also, it might not have been the C/R code, but some of the COW layered filesystem (or block device) stuff we had.)

It makes sense to make runc exec benefit from clone3(CLONE_INTO_CGROUP),
if it is available. Since it requires a recent kernel and might not work,
implement a fallback to the older way of joining the cgroup.

Based on work done in
 - https://go-review.googlesource.com/c/go/+/417695
 - coreos/go-systemd#458
 - opencontainers/cgroups#26
 - opencontainers#4822

Regarding the E2BIG check in shouldRetryWithoutCgroupFD. The clone3
syscall first appeared in kernel v5.3 via commit [1], which added a
check that returns E2BIG if the clone_args structure passed from
userspace is larger than the kernel knows about and the "unknown"
part contains non-zero values. A similar check was already used in
other similar scenarios at the time, and later, in kernel v5.4, this
was generalized by patch series [2].

[1]: torvalds/linux@7f192e3
[2]: https://lore.kernel.org/all/[email protected]/#r

Signed-off-by: Kir Kolyshkin <[email protected]>
@kolyshkin force-pushed the exec-clone-into-cgroup branch from 9b80e9f to 190a165 on September 26, 2025 16:35
return nil, fmt.Errorf("bad sub cgroup path: %s", sub)
}

fd, err := os.OpenFile(cgroup, unix.O_PATH|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
A Member commented:

I would prefer we use openat2 on the cached cgroupfs fd rather than having this double-check logic -- do we not have access to it anymore with the cgroup package split?
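
For reference, a hedged sketch of the suggested openat2-based approach (assuming a cached O_PATH fd for the cgroupfs root is available; names are illustrative):

package execfallback

import "golang.org/x/sys/unix"

// openSubCgroup resolves sub (a relative path) under an already-open cgroupfs
// directory fd using openat2(2) with RESOLVE_BENEATH, so the resulting path
// cannot escape the cgroup hierarchy via ".." or symlinks.
func openSubCgroup(cgroupRootFD int, sub string) (int, error) {
	how := &unix.OpenHow{
		Flags:   unix.O_PATH | unix.O_DIRECTORY | unix.O_CLOEXEC,
		Resolve: unix.RESOLVE_BENEATH | unix.RESOLVE_NO_SYMLINKS,
	}
	return unix.Openat2(cgroupRootFD, sub, how)
}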
