@@ -174,10 +174,11 @@ Pre-commit CI
Introduction
------------

- Unlike most parts of the LLVM project, libc++ uses a pre-commit CI [#]_. This
- CI is hosted on `Buildkite <https://buildkite.com/llvm-project/libcxx-ci>`__ and
- the build results are visible in the review on GitHub. Please make sure
- the CI is green before committing a patch.
+ Unlike most parts of the LLVM project, libc++ uses a pre-commit CI [#]_. Some of
+ this CI is hosted on `Buildkite <https://buildkite.com/llvm-project/libcxx-ci>`__,
+ but some has migrated to the LLVM CI infrastructure. The build results are
+ visible in the review on GitHub. Please make sure the CI is green before
+ committing a patch.

The CI tests libc++ for all :ref:`supported platforms <SupportedPlatforms>`.
The build is started for every commit added to a Pull Request. A complete CI
@@ -246,21 +247,89 @@ Below is a short description of the most interesting CI builds [#]_:
Infrastructure
--------------

- All files of the CI infrastructure are in the directory ``libcxx/utils/ci``.
- Note that quite a bit of this infrastructure is heavily Linux focused. This is
- the platform used by most of libc++'s Buildkite runners and developers.
+ The files for the CI infrastructure are split between the llvm-project
+ and the llvm-zorg repositories. All files of the CI infrastructure in
+ the llvm-project are in the directory ``libcxx/utils/ci``. Note that
+ quite a bit of this infrastructure is heavily Linux focused. This is
+ the platform used by most of libc++'s Buildkite runners and
+ developers.

- Dockerfile
- ~~~~~~~~~~
+ Dockerfile/Container Images
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~

Contains the Docker image for the Ubuntu CI. Because the same Docker image is
used for the ``main`` and ``release`` branch, it should contain no hard-coded
- versions.  It contains the used versions of Clang, various clang-tools,
+ versions. It contains the used versions of Clang, various clang-tools,
GCC, and CMake.

.. note:: This image is pulled from Docker Hub and not rebuilt when changing
   the Dockerfile.

+ Updating the CI testing container images
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ The libc++ Linux premerge testing can run on one of three runner sets:
+ "llvm-premerge-libcxx-runners", "llvm-premerge-libcxx-release-runners", and
+ "llvm-premerge-libcxx-next-runners". Which runner set to use is controlled by
+ the contents of
+ https://github.com/llvm/llvm-project/blob/main/.github/workflows/libcxx-build-and-test.yaml.
+ By default, it uses "llvm-premerge-libcxx-runners". To switch to one of the
+ other runner sets, replace all uses of "llvm-premerge-libcxx-runners" in
+ that YAML file with the desired runner set, as sketched below.
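+
+ A minimal illustration of the kind of entry being replaced (the job name
+ below is made up; the real workflow defines many such jobs):
+
+ .. code-block:: yaml
+
+    jobs:
+      stage1:    # hypothetical job name
+        # Replace this value with, e.g., llvm-premerge-libcxx-release-runners
+        # to switch the job to a different runner set.
+        runs-on: llvm-premerge-libcxx-runners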
+
+ Which container image is used by these three runner sets is controlled by the
+ variable values in
+ https://github.com/llvm/llvm-zorg/blob/main/premerge/premerge_resources/variables.tf.
+ The table below shows the variable names and the runner sets to which they
+ correspond. To see their values, follow the link above (to ``variables.tf`` in
+ llvm-zorg).
+
+ +---------------------------------------+-----------------------------+
+ | Runner Set                            | Variable                    |
+ +=======================================+=============================+
+ | llvm-premerge-libcxx-runners          | libcxx_runner_image         |
+ +---------------------------------------+-----------------------------+
+ | llvm-premerge-libcxx-release-runners  | libcxx_release_runner_image |
+ +---------------------------------------+-----------------------------+
+ | llvm-premerge-libcxx-next-runners     | libcxx_next_runner_image    |
+ +---------------------------------------+-----------------------------+
+
+
+ When updating the container image, you can either update just the
+ runner binary (the part that connects to GitHub), or you can update
+ everything (tools, etc.). Whether to update just the runner or to update
+ everything is controlled by the value of ``ACTIONS_BASE_IMAGE``, under
+ ``actions-builder`` in ``libcxx/utils/ci/docker-compose.yml``.
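+
+ A rough sketch of that part of ``docker-compose.yml`` (the surrounding keys
+ shown here are illustrative, not an exact copy of the file):
+
+ .. code-block:: yaml
+
+    services:
+      actions-builder:
+        build:
+          args:
+            # Either "builder-base" (rebuild everything) or a
+            # ghcr.io/llvm/libcxx-linux-builder-base image (update only the runner).
+            ACTIONS_BASE_IMAGE: builder-base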
+
+ To update just the runner binary, change the value of ``ACTIONS_BASE_IMAGE``
+ to a modified version of one of the images named by the libcxx runner
+ variables in
+ https://github.com/llvm/llvm-zorg/blob/main/premerge/premerge_resources/variables.tf,
+ as follows: find the libcxx runner image name you want to use in the
+ ``variables.tf`` file. The name will be something like
+ ``ghcr.io/llvm/libcxx-linux-builder:<some-commit-SHA>``. Replace
+ ``libcxx-linux-builder`` with ``libcxx-linux-builder-base``. Use this new image
+ name as the value you assign to ``ACTIONS_BASE_IMAGE``.
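+
+ For example, a runner-only update would end up with something along these
+ lines (the SHA is a placeholder, not a value to copy):
+
+ .. code-block:: yaml
+
+    # Current libcxx-linux-builder image from variables.tf, with "-base"
+    # inserted into its name.
+    ACTIONS_BASE_IMAGE: ghcr.io/llvm/libcxx-linux-builder-base:<some-commit-SHA>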
+
+ To update the entire container image, set the value of ``ACTIONS_BASE_IMAGE``
+ to ``builder-base``. If the value is already ``builder-base`` (because there
+ have been no runner-only updates since the last complete update), then you
+ need to find the line containing ``RUN echo "Last forced update executed on``
+ in ``libcxx/utils/ci/Dockerfile`` and update the date to the current date.
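+
+ Conversely, a full update points ``ACTIONS_BASE_IMAGE`` back at
+ ``builder-base``:
+
+ .. code-block:: yaml
+
+    # Rebuild the whole image (tools and all) from the Dockerfile.
+    ACTIONS_BASE_IMAGE: builder-base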
+
+ Once you have created and merged a PR with those changes, a new image
+ will be created, and a link to it can be found at
+ https://github.com/llvm/llvm-project/pkgs/container/libcxx-linux-builder,
+ where the actual image name should be
+ ``ghcr.io/llvm/libcxx-linux-builder:<SHA-of-committed-change-from-PR>``.
+
+ Lastly, you need to create a PR in the llvm-zorg repository, updating the
+ value of the appropriate libcxx runner variable in the ``variables.tf`` file
+ mentioned above to the name of your newly created image (see the paragraph
+ above about finding the image name). Once that change has been merged, an
+ LLVM premerge maintainer (a Google employee) must use Terraform to apply the
+ change to the running GKE cluster.
+
+
run-buildbot-container
~~~~~~~~~~~~~~~~~~~~~~