Add vTPM specification #1293
Conversation
Add the vTPM specification to the documentation, config.go, and schema description. The following is an example of a vTPM description that is found under the path /linux/resources/vtpms:

"vtpms": [
    {
        "statePath": "/var/lib/runc/myvtpm1",
        "vtpmVersion": "2",
        "createCerts": false,
        "runAs": "tss",
        "pcrBanks": "sha1,sha512"
    }
]

Signed-off-by: Stefan Berger <[email protected]>
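For orientation, a minimal sketch of how these fields could be represented as a Go type in config.go; the field names are assumed from the JSON keys above and may not match the actual diff in this PR:

package specs

// LinuxVTPM is a hypothetical sketch of a vTPM entry, with field names
// assumed from the JSON example above (the actual PR may differ).
type LinuxVTPM struct {
	// StatePath is the host directory holding the vTPM's state.
	StatePath string `json:"statePath"`
	// VTPMVersion selects the TPM specification version, e.g. "1.2" or "2".
	VTPMVersion string `json:"vtpmVersion"`
	// CreateCerts indicates whether certificates should be created when
	// the vTPM is manufactured.
	CreateCerts bool `json:"createCerts"`
	// RunAs is the user account the vTPM emulator runs as, e.g. "tss".
	RunAs string `json:"runAs"`
	// PCRBanks is a comma-separated list of PCR banks to activate.
	PCRBanks string `json:"pcrBanks"`
}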
… the container Signed-off-by: Efim Verzakov <[email protected]>
If I understand correctly, the idea is that a runtime is expected to start an instance of swtpm. This is perhaps mirroring some of the concerns expressed in #920, but what's the benefit of doing that over running swtpm separately?

To maybe help explain why this makes me nervous, what do we do if the container dies? The runtime is typically long gone at that point, so what makes sure swtpm gets cleaned up?

Another aspect is how non-container runtimes (VMs, etc) are expected to implement this. If they can't support this, they should probably simply error, right? The same if swtpm isn't installed?

So in short, why is the runtime layer the appropriate place for this and not, say, the orchestrators like containerd, Docker, kubernetes, etc?
This is a good question. If I understand correctly, we have several container extension points:
- We cannot use Runtime Hooks (e.g. createContainer) because the runtime/runc reads the container config only once, so we would not be able to extend the Linux devices.
- We cannot use Kubelet Device Manager plugins because there is a possible use case of sharing the same vTPM between several containers in a pod.
- We cannot use only Kubelet Dynamic Resource Allocation plugins because of NodePrepareResources (https://github.com/kubernetes/kubernetes/blob/v1.33.4/pkg/kubelet/cm/dra/manager.go#L179).
- As for the Node Resource Interface (NRI) plugins, they could be a good candidate to implement the vTPM feature, because they can apply container config adjustments to pass a device / device cgroup rule (a sketch of such an adjustment follows this list).
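For context, what such an adjustment amounts to in terms of the generated config maps onto the existing runtime-spec Go types. A minimal sketch using the real specs-go types; the device path and major/minor numbers are illustrative only, and discovering the actual swtpm-provided device is out of scope here:

package main

import (
	"fmt"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Illustrative numbers: a plugin would look up the real major/minor
	// of the swtpm-provided device node on the host.
	var (
		major int64 = 10
		minor int64 = 224
		mode        = os.FileMode(0600)
	)

	spec := &specs.Spec{Linux: &specs.Linux{Resources: &specs.LinuxResources{}}}

	// Expose the device node inside the container ...
	spec.Linux.Devices = append(spec.Linux.Devices, specs.LinuxDevice{
		Path:     "/dev/tpm0",
		Type:     "c",
		Major:    major,
		Minor:    minor,
		FileMode: &mode,
	})

	// ... and allow access to it in the devices cgroup.
	spec.Linux.Resources.Devices = append(spec.Linux.Resources.Devices, specs.LinuxDeviceCgroup{
		Allow:  true,
		Type:   "c",
		Major:  &major,
		Minor:  &minor,
		Access: "rwm",
	})

	fmt.Printf("devices: %+v\n", spec.Linux.Devices)
}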
If swtpm is run by the runtime, we can add its pid to the container state file, so that problem would not exist. However, the main weak point of using container extension points other than the runtime is how the runtime works with devices. As for the lack of monitoring tools in the runtime, containerd has a function to monitor task exit: https://github.com/containerd/containerd/blob/v2.1.4/internal/cri/server/events.go#L147
If the runtime does not have the vTPM feature or swtpm is not installed, then an error should be returned.
Sorry for the late reply, I was on PTO :(
This isn't quite true though, right? In QEMU at least (I'm not sure about other VM platforms), TPM support requires the operator to pre-launch an instance of swtpm. If we take a similar approach in runc, the swtpm instance would be launched outside of the runtime. My biggest concern is the lifecycle management of that swtpm process.
Yes. However, I have a concern: we want to create independent vTPM devices (created by several swtpm processes) and pass them to different containers under the same dev path inside the container (e.g. /dev/tpm0). To do this we need to be sure that their host dev paths are different, and pass their major and minor numbers along with the required container dev path (/dev/tpm0). So my concern is the following: we can only be sure at the runtime level which command will be used by runc, and this affects the value of the dev path in the container device config. Either way, we need to extend the current device config or add a new field to the runtime spec.
I understand your concern that runc is called only once when the container is up. As I know,
Hello, dear @tianon! We reconsidered our approach to passing the vTPM to the container:
Possible problems:
In runtime-spec, the changes will only include:

"vtpms": [
    {
        "containerPath": "/dev/tpm0",
        "hostPath": "/dev/tpm-generated-0",
        "vtpmMajor": 100,
        "vtpmMinor": 1
    }
]

Now, I'm working on a vTPM plugin PoC to check this approach.
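For comparison with the earlier field set, a hypothetical Go counterpart of that reduced entry; field names are assumed from the JSON keys above and are not part of the current spec:

package specs

// LinuxVTPM sketches the reduced vTPM entry proposed above, where the
// device is created outside the runtime and only mapped into the container
// (hypothetical; field names assumed from the JSON keys).
type LinuxVTPM struct {
	// ContainerPath is the device path presented inside the container.
	ContainerPath string `json:"containerPath"`
	// HostPath is the path of the pre-created vTPM device on the host.
	HostPath string `json:"hostPath"`
	// VTPMMajor and VTPMMinor are the device numbers of the host device.
	VTPMMajor int64 `json:"vtpmMajor"`
	VTPMMinor int64 `json:"vtpmMinor"`
}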
@everzakov Handling the swtpm process creation/lifecycle outside of the runtime, as @tianon was saying, is great; I think it would be a blocker otherwise. Some questions:
I think if there isn't any host device we need to "consume" for swtpm, then I'm not sure why we are using DRA. Or can DRA model things with "infinite" capacity too?
Add the vTPM (virtual Trusted Platform Module) specification to the documentation, config.go, and schema description. The runtime uses this specification to create vTPMs and pass them to the container. This virtual module can be used to create quotes and signatures and to perform Direct Anonymous Attestation.
Also, users can specify that the vTPM should be manufactured with precreated certs, activated PCR banks, and a populated Endorsement Key pair.
The following is an example of a vTPM description that is found under the path /linux/resources/vtpms:
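"vtpms": [
    {
        "statePath": "/var/lib/runc/myvtpm1",
        "vtpmVersion": "2",
        "createCerts": false,
        "runAs": "tss",
        "pcrBanks": "sha1,sha512"
    }
]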
This PR is based on #920