Context
At Microcks (https://github.com/microcks/microcks), we are currently discussing the introduction of AI-related checks and policies to help:
- avoid problematic or low-quality AI-generated contributions,
- reduce the review burden on maintainers and code owners,
- and provide clearer expectations to contributors using AI tools.
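To make the first point concrete, here is a minimal sketch of what such an automated check could look like: a CI helper that fails when a pull request description lacks a ticked AI-usage disclosure checkbox. The checkbox wording and the helper name are purely illustrative assumptions, not part of any existing Microcks or CNCF workflow:

```python
import re

# Hypothetical CI helper: verify that a pull request description contains
# a ticked AI-usage disclosure checkbox coming from the PR template.
# The exact checkbox wording below is illustrative, not an agreed standard.
DISCLOSURE = re.compile(r"- \[x\] I have disclosed any AI assistance", re.IGNORECASE)

def pr_disclosure_ok(pr_body: str) -> bool:
    """Return True if the PR body ticks the AI-disclosure checkbox."""
    return bool(DISCLOSURE.search(pr_body))
```

A CI job could fetch the PR body via the GitHub API and fail the check when `pr_disclosure_ok` returns False; a shared CNCF-level policy would mainly standardize the wording and expectations such a check enforces.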
We’ve observed that many open source projects and organizations are moving in this direction.
Existing references
Some examples that illustrate different approaches:
- Ghostty project – a clear and fairly strict, but well-explained policy embedded directly in contribution workflows: https://github.com/ghostty-org/ghostty/pull/10412/files
- Linux Foundation (LFX) AI policy – currently quite light and high-level: https://www.linuxfoundation.org/legal/generative-ai
- OpenInfra Foundation AI policy – a more detailed and actively maintained policy that projects can directly rely on: https://openinfra.org/legal/ai-policy
We also see early signals of potential issues and confusion emerging across CNCF projects, for example:
#936
Question to the CNCF / TOC
Is the CNCF planning to:
- Define and publish a CNCF-wide AI policy (similar in spirit to OpenInfra’s) that projects could directly reference, or
- Recommend that each CNCF project define its own AI policy, tailored to its community, governance, and contribution model?
Microcks perspective
From the Microcks side, we would strongly welcome a CNCF-level AI policy that projects could directly reference, in the same way our repository already references shared community documents such as the Code of Conduct (https://github.com/microcks/microcks/blob/master/CODE_OF_CONDUCT.md).
This would help ensure consistency across the ecosystem while still allowing flexibility for individual projects.
Goal of this issue
The goal of this issue is to:
- clarify the CNCF’s current thinking and direction on AI governance for projects,
- understand whether a CNCF-level initiative is planned or recommended,
- and, if relevant, discuss how projects can collaborate on a shared, well-defined AI policy rather than reinventing it independently.
Thanks for the guidance and discussion.