
Conversation


@cockpituous cockpituous commented Oct 12, 2025

Image refresh for fedora-rawhide-boot

  • image-refresh fedora-rawhide-boot

@github-actions github-actions bot added the bot label Oct 12, 2025
@cockpituous cockpituous changed the title Image refresh for fedora-rawhide-boot WIP: 3dca1485955a: [no-test] Image refresh for fedora-rawhide-boot Oct 12, 2025
cockpituous pushed a commit that referenced this pull request Oct 12, 2025
@cockpituous cockpituous force-pushed the image-refresh-fedora-rawhide-boot-20251012-224209 branch from 2288688 to af03530 Compare October 12, 2025 22:42
@cockpituous cockpituous changed the title WIP: 3dca1485955a: [no-test] Image refresh for fedora-rawhide-boot Image refresh for fedora-rawhide-boot Oct 12, 2025
@tomasmatus
Member

@KKoukiou sorry to ping you twice, but here TestStorageReclaimSpace.testReclaimSpaceOptional failed 3 times on the same error. Did something change here too, so that the tests need to be updated?

I see that the tests are trying to work with vda2, but only vda1 is present in the screenshot:

Traceback (most recent call last):
  File "/work/make-checkout-workdir/test/check-storage-reclaim", line 88, in testReclaimSpaceOptional
    s.reclaim_remove_device("vda2")
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
  File "/work/make-checkout-workdir/test/helpers/storage.py", line 553, in reclaim_remove_device
    self.browser.click(f"#reclaim-space-modal-table tr:contains('{device}') button[aria-label='delete']")
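The failure mode can be illustrated with a minimal sketch (hypothetical stand-ins, not the actual cockpit test API): the helper interpolates the device name into a row selector, so when the reclaim dialog only lists vda1, the vda2 selector matches nothing and the click fails.

```python
# Hypothetical sketch of why the click fails; delete_button_selector
# mirrors the f-string used in test/helpers/storage.py above.

def delete_button_selector(device: str) -> str:
    return (f"#reclaim-space-modal-table tr:contains('{device}') "
            "button[aria-label='delete']")

rows_in_dialog = ["vda1"]   # what the screenshot shows
requested = "vda2"          # what the test asks for

selector = delete_button_selector(requested)
matches = [row for row in rows_in_dialog if requested in row]
print(selector)
print(matches)  # [] -> no matching row, so there is no button to click
```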

KKoukiou pushed a commit to KKoukiou/bots that referenced this pull request Oct 14, 2025
@KKoukiou KKoukiou force-pushed the image-refresh-fedora-rawhide-boot-20251012-224209 branch from af03530 to 13d438d Compare October 14, 2025 13:20
@KKoukiou
Contributor

Rebased to make sure I look into the new test failures.

@KKoukiou
Contributor

This is not a regression of this PR; it looks like newly uncovered incomplete cleanup:

When test/check-storage-reclaim TestStorageReclaimSpace.testReclaimExt4onLUKS runs before TestStorageReclaimSpace.testReclaimSpaceOptional the latter fails.

Please don't merge. I don't know why this does not happen on master; we need to debug.

@jelly jelly added the blocked label Oct 21, 2025
@jelly
Member

jelly commented Oct 23, 2025

When test/check-storage-reclaim TestStorageReclaimSpace.testReclaimExt4onLUKS runs before TestStorageReclaimSpace.testReclaimSpaceOptional the latter fails.

Can reproduce this with:

TEST_OS=fedora-rawhide-boot TEST_SHOW_BROWSER=1 test/check-storage-reclaim -vst TestStorageReclaimSpace.testReclaimExt4onLUKS TestStorageReclaimSpace.testReclaimSpaceOptional

So it is something new in the image, as the old one succeeded. The partitioning view in the terminal looks correct, but the UI is out of date.

Disk /dev/vda: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 521DEE16-44B9-4420-94A6-B51DC4414F85

Device       Start      End  Sectors Size Type
/dev/vda1     2048     4095     2048   1M BIOS boot
/dev/vda2     4096  4198399  4194304   2G Linux filesystem
/dev/vda3  4198400 14684159 10485760   5G Linux filesystem
[anaconda root@localhost ~]# blkid
/run/install/repo/images/install.img: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/sr0: BLOCK_SIZE="2048" UUID="2025-10-22-06-19-28-00" LABEL="Fedora-S-dvd-x86_64-rawh" TYPE="iso9660" PTTYPE="PMBR"
/dev/loop0: BLOCK_SIZE="131072" TYPE="squashfs"
/dev/zram0: LABEL="zram0" UUID="77d23f5f-d61b-4d93-bb3d-d15667cf5109" TYPE="swap"
/dev/vda2: UUID="08de7f55-5371-4640-b27e-753e5a0270a8" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="cad3563a-f7e1-42cf-9ec0-62720df6c972"
/dev/vda3: LABEL="btrfstestA" UUID="ad0ccde3-e53d-4095-99d4-3ef68499beae" UUID_SUB="402b41b8-7fee-4e08-8053-1dc978c4ee6f" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="9260b8ee-50eb-4f90-9acb-776fdb3ef7ec"
/dev/vda1: PARTUUID="2115ca3d-3d9f-4e9d-85ad-059dfd0b63a1"

@rvykydal

rvykydal commented Oct 24, 2025

I am looking into it. It really seems that our cleanup of the former /dev/vda1 LUKS partition no longer works in the new image.

Maybe this is caused by the blivet update:

< python3-blivet-1:3.13.0-1.fc44.noarch
---
> python3-blivet-1:3.12.1-8.fc44.noarch

packages diff:
packages.diff.txt

The traceback from the log seems to be caused by unexpected detection of a LUKS partition:

Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:anaconda.storage:Creating a copy of the storage model.
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:blivet:starting Blivet copy
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:blivet:                  PartitionDevice._set_parted_partition: vda1 ;
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:blivet:device vda1 new parted_partition parted.Partition instance --
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   disk: <parted.disk.Disk object at 0x7f1b10596850>  fileSystem: None
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   number: 1  path: /dev/vda1  type: 0
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   name:   active: True  busy: False
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   geometry: <parted.geometry.Geometry object at 0x7f1b10479ef0>  PedPartition: <_ped.Partition object at 0x7f1b104449f0>
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:blivet:finished Blivet copy
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:anaconda.storage:Finished a copy of the storage model.
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:dasbus.connection:Publishing an object at /org/fedoraproject/Anaconda/Modules/Storage/Task/3.
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: INFO:anaconda.core.threads:Running Thread: AnaTaskThread-ScanDevicesTask-3 (139754125383360)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: INFO:anaconda.modules.common.task.task:Scan all devices
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: INFO:program:Running... systemctl start iscsi-init.service
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com systemd[1]: iscsi-init.service - One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: DEBUG:program:Return code: 0
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: INFO:blivet:no initiator set
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: INFO:anaconda.core.threads:Thread Failed: AnaTaskThread-ScanDevicesTask-3 (139754125383360)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: ERROR:anaconda.modules.common.task.task:Thread AnaTaskThread-ScanDevicesTask-3 has failed: Traceback (most recent call last):
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/site-packages/pyanaconda/core/threads.py", line 281, in run
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     threading.Thread.run(self)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~~~~~~~~~^^^^^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/threading.py", line 1023, in run
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     self._target(*self._args, **self._kwargs)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/site-packages/pyanaconda/modules/common/task/task.py", line 97, in _thread_run_callback
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     self._task_run_callback()
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~~~~~~~~~~~~^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/site-packages/pyanaconda/modules/common/task/task.py", line 110, in _task_run_callback
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     self._set_result(self.run())
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:                      ~~~~~~~~^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/site-packages/pyanaconda/modules/storage/reset.py", line 64, in run
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     self._reset_storage(self._storage)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/site-packages/pyanaconda/modules/storage/reset.py", line 84, in _reset_storage
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     storage.reset()
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~~^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib/python3.14/site-packages/blivet/threads.py", line 48, in run_with_lock
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     return m(*args, **kwargs)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib64/python3.14/site-packages/pyanaconda/modules/storage/devicetree/model.py", line 294, in reset
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     self.save_passphrase(device)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~~~~~~~~~^^^^^^^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib/python3.14/site-packages/blivet/threads.py", line 48, in run_with_lock
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     return m(*args, **kwargs)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib/python3.14/site-packages/blivet/blivet.py", line 367, in save_passphrase
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     luks_data.save_passphrase(device)
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:   File "/usr/lib/python3.14/site-packages/blivet/static_data/luks_data.py", line 89, in save_passphrase
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:     passphrase = pctx._passphrase
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]:                  ^^^^^^^^^^^^^^^^
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: AttributeError: 'NoneType' object has no attribute '_passphrase'
Oct 24 11:00:14 ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com org.fedoraproject.Anaconda.Modules.Storage[6376]: INFO:anaconda.core.threads:Thread Done: AnaTaskThread-ScanDevicesTask-3 (139754125383360)

In the previous test, testReclaimExt4onLUKS, we set up a LUKS partition on /dev/vda1.

WDYT, @vojtechtrefny?
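The crash at the bottom of the traceback can be reproduced with a minimal sketch (hypothetical classes, not the real blivet code): save_passphrase() dereferences the stored passphrase context unconditionally, so a device whose context was never registered, or was cleared by an earlier test, hits the AttributeError above. A None guard is one plausible shape of a defensive fix.

```python
# Hypothetical sketch of the failure mode from the traceback above.

class PassphraseContext:
    def __init__(self, passphrase):
        self._passphrase = passphrase


class LuksData:
    def __init__(self):
        # device path -> PassphraseContext (missing if never registered)
        self._contexts = {}

    def save_passphrase(self, device):
        pctx = self._contexts.get(device)
        # Unguarded access: crashes when no context exists for the device,
        # reproducing "AttributeError: 'NoneType' object has no attribute
        # '_passphrase'".
        return pctx._passphrase

    def save_passphrase_guarded(self, device):
        pctx = self._contexts.get(device)
        if pctx is None:
            # Defensive guard: skip devices without a registered context.
            return None
        return pctx._passphrase


luks = LuksData()
try:
    luks.save_passphrase("/dev/vda1")
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute '_passphrase'

print(luks.save_passphrase_guarded("/dev/vda1"))  # None
```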

@rvykydal

rvykydal commented Oct 24, 2025

Anaconda does a storage rescan in the new test before wiping the old disks. With the new blivet this seems to fail because of this change:
storaged-project/blivet@4aa6a18.
PR with a possible fix: storaged-project/blivet#1428
Applying it locally fixes the reproducer from #8341 (comment) for me.

@martinpitt martinpitt moved this to detriment in Pilot tasks Oct 26, 2025
@rvykydal

Blivet with the fix has been built for rawhide: https://koji.fedoraproject.org/koji/buildinfo?buildID=2851500

@jelly
Member

jelly commented Oct 31, 2025

Spotted the package in the compose yesterday, so let's refresh.

@cockpituous cockpituous changed the title Image refresh for fedora-rawhide-boot WIP: f4bc1fc7f361: [no-test] Image refresh for fedora-rawhide-boot Oct 31, 2025
cockpituous pushed a commit that referenced this pull request Oct 31, 2025
@cockpituous cockpituous changed the title WIP: f4bc1fc7f361: [no-test] Image refresh for fedora-rawhide-boot Image refresh for fedora-rawhide-boot Oct 31, 2025
@jelly jelly force-pushed the image-refresh-fedora-rawhide-boot-20251012-224209 branch from 3dcd862 to 33e58ac Compare October 31, 2025 08:28
@jelly jelly removed the blocked label Oct 31, 2025

@martinpitt martinpitt left a comment

Nice!

@martinpitt martinpitt merged commit 13583cf into main Oct 31, 2025
10 checks passed
martinpitt pushed a commit that referenced this pull request Oct 31, 2025
@martinpitt martinpitt deleted the image-refresh-fedora-rawhide-boot-20251012-224209 branch October 31, 2025 16:16
@github-project-automation github-project-automation bot moved this from detriment to improvement in Pilot tasks Oct 31, 2025