Please confirm
Distribution
Ubuntu
Distribution version
24.04
Output of "snap list --all lxd core20 core22 core24 snapd"
Name    Version       Rev    Tracking       Publisher   Notes
core20  20260105      2717   latest/stable  canonical✓  base,disabled
core20  20260211      2769   latest/stable  canonical✓  base
core22  20260128      2339   latest/stable  canonical✓  base,disabled
core22  20260225      2411   latest/stable  canonical✓  base
core24  20260107      1349   latest/stable  canonical✓  base,disabled
core24  20260211      1499   latest/stable  canonical✓  base
lxd     6.7-5c24579   38413  latest/stable  canonical✓  disabled
lxd     6.7-12e2019   38450  latest/stable  canonical✓  -
snapd   2.73          25935  latest/stable  canonical✓  snapd,disabled
snapd   2.74.1        26382  latest/stable  canonical✓  snapd
Output of "lxc info" or system info if it fails
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- init_preseed_storage_volumes
- metrics_instances_count
- server_instance_type_info
- resources_disk_mounted
- server_version_lts
- oidc_groups_claim
- loki_config_instance
- storage_volatile_uuid
- import_instance_devices
- instances_uefi_vars
- instances_migration_stateful
- container_syscall_filtering_allow_deny_syntax
- access_management
- vm_disk_io_limits
- storage_volumes_all
- instances_files_modify_permissions
- image_restriction_nesting
- container_syscall_intercept_finit_module
- device_usb_serial
- network_allocate_external_ips
- explicit_trust_token
- shared_custom_block_volumes
- instance_import_conversion
- instance_create_start
- instance_protection_start
- devlxd_images_vm
- disk_io_bus_virtio_blk
- metrics_api_requests
- projects_limits_disk_pool
- ubuntu_pro_guest_attach
- metadata_configuration_entity_types
- access_management_tls
- network_allocations_ovn_uplink
- network_ovn_uplink_vlan
- state_logical_cpus
- vm_limits_cpu_pin_strategy
- gpu_cdi
- images_all_projects
- metadata_configuration_scope
- unix_device_hotplug_ownership_inherit
- unix_device_hotplug_subsystem_device_option
- storage_ceph_osd_pool_size
- network_get_target
- network_zones_all_projects
- vm_root_volume_attachment
- projects_limits_uplink_ips
- entities_with_entitlements
- profiles_all_projects
- storage_driver_powerflex
- storage_driver_pure
- cloud_init_ssh_keys
- oidc_scopes
- project_default_network_and_storage
- client_cert_presence
- clustering_groups_used_by
- container_bpf_delegation
- override_snapshot_profiles_on_copy
- resources_device_fs_uuid
- backup_metadata_version
- storage_buckets_all_projects
- network_acls_all_projects
- networks_all_projects
- clustering_restore_skip_mode
- disk_io_threads_virtiofsd
- oidc_client_secret
- pci_hotplug
- device_patch_removal
- daemon_storage_per_project
- ovn_internal_load_balancer
- auth_bearer_devlxd
- devlxd_volume_management
- storage_driver_alletra
- resources_disk_used_by
- ovn_dhcp_ranges
- operation_requestor
- import_custom_volume_tar
- projects_force_delete
- auth_oidc_sessions
- instance_snapshots_multi_volume
- vm_persistent_bus
- instance_placement_groups
- ovn_nic_acceleration_parent
- storage_and_profile_operations
- storage_source_recover
- instance_force_delete
- operation_metadata_entity_url
- instance_boot_mode
- auth_bearer
- vm_limits_max_bus_ports
- instances_state_selective_recursion
- project_delete_operation
- gpu_cdi_amd
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
- bearer
client_certificate: false
config:
volatile.uuid: 019aeb2e-0c3b-702f-9472-57e3e2d167cc
auth_user_name: marco
auth_user_method: unix
environment:
addresses: []
architectures:
- x86_64
- i686
backup_metadata_version_range:
- 1
- 2
certificate: |
-----BEGIN CERTIFICATE-----
MIICBTCCAYqgAwIBAgIRANLImwhUQvgWefkKO5ikdAcwCgYIKoZIzj0EAwMwNDEc
MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEUMBIGA1UEAwwLcm9vdEB0cmlj
a3kwHhcNMjIxMjE2MTUxNjI2WhcNMzIxMjEzMTUxNjI2WjA0MRwwGgYDVQQKExNs
aW51eGNvbnRhaW5lcnMub3JnMRQwEgYDVQQDDAtyb290QHRyaWNreTB2MBAGByqG
SM49AgEGBSuBBAAiA2IABJVoFY+zYI9yrGZ7bfjiT0LNhYqLJB+TIVqJXlzia6lC
6UhNstkf1Hy9D4G2sTywOx6BPubhQftZ6uR/VZ+EtrE4bOXNQcLhuxgOthIkNFKh
xVbSCN2TlW0u6tlHL7yxMaNgMF4wDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoG
CCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwKQYDVR0RBCIwIIIGdHJpY2t5hwR/AAAB
hxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2kAMGYCMQDL4Fhfj61UEZgV
yaYptAKH2UQ+XFdNiimOKNeiHoizk9EafUicNYA6qPVCZ4U2kCACMQCbLVhOqPIW
hR6SyCl8mnoyZ+dpUtdG5aNykgEDUDyhbty7tGyXDSdsgsFHYeZixvo=
-----END CERTIFICATE-----
certificate_fingerprint: 51ccfd7eb413252e14974350a2f0bc95999c160cce7ac2534c5b88d99671abb1
driver: lxc | qemu
driver_version: 6.0.6 | 10.2.1
instance_types:
- container
- virtual-machine
firewall: nftables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
bpf_token: "true"
idmapped_mounts: "true"
netnsid_getifaddrs: "true"
seccomp_listener: "true"
seccomp_listener_continue: "true"
uevent_injection: "true"
unpriv_binfmt: "true"
unpriv_fscaps: "true"
kernel_version: 6.17.0-19-generic
lxc_features:
cgroup2: "true"
core_scheduling: "true"
devpts_fd: "true"
idmapped_mounts_v2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
pidfd: "true"
seccomp_allow_deny_syntax: "true"
seccomp_notify: "true"
seccomp_proxy_send_notify_fd: "true"
os_name: Ubuntu
os_version: "24.04"
project: default
server: lxd
server_clustered: false
server_event_mode: full-mesh
server_name: tricky
server_pid: 7287
server_version: "6.7"
server_lts: false
storage: dir
storage_version: "1"
storage_supported_drivers:
- name: pure
version: 2.8 (nvme-cli)
remote: true
- name: alletra
version: 2.8 (nvme-cli)
remote: true
- name: zfs
version: 2.3.4-1ubuntu2
remote: false
- name: ceph
version: 19.2.3
remote: true
- name: dir
version: "1"
remote: false
- name: lvm
version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.50.0
remote: false
- name: powerflex
version: 2.8 (nvme-cli)
remote: true
- name: btrfs
version: 6.6.3
remote: false
- name: cephfs
version: 19.2.3
remote: true
- name: cephobject
version: 19.2.3
remote: true
Issue description
For quite some time I have had a couple of LXC instances running Ubuntu 24.04 and 25.10 on my host, and so far they have mostly stayed in the background.
A few days ago I noticed that suspend had stopped working on my host, but since I had days and days of uptime and some system configuration changes, I assumed something in the kernel had gotten into trouble...
Then I rebooted and... the issue was still there.
[ 881.562061] Lockdown: systemd-logind: hibernation is restricted; see man kernel_lockdown.7
[ 881.565598] Lockdown: systemd-logind: hibernation is restricted; see man kernel_lockdown.7
[ 881.604904] wlp2s0: deauthenticating from 02:eb:d8:7f:14:db by local choice (Reason: 3=DEAUTH_LEAVING)
[ 882.754656] PM: suspend entry (s2idle)
[ 882.768188] Filesystems sync: 0.013 seconds
[ 882.954190] Freezing user space processes
[ 902.963180] Freezing user space processes failed after 20.009 seconds (2 tasks refusing to freeze, wq_busy=0):
[ 902.963541] task:udisksd state:D stack:0 pid:33315 tgid:33315 ppid:9245 task_flags:0x400100 flags:0x00004006
[ 902.963548] Call Trace:
[ 902.963553] <TASK>
[ 902.963560] __schedule+0x30d/0x7a0
[ 902.963572] schedule+0x27/0x90
[ 902.963575] request_wait_answer+0xd8/0x260
[ 902.963583] ? __pfx_autoremove_wake_function+0x10/0x10
[ 902.963591] __fuse_simple_request+0xd9/0x2d0
[ 902.963594] fuse_file_poll+0x1b5/0x230
[ 902.963600] do_poll.constprop.0+0x118/0x360
[ 902.963607] do_sys_poll+0x19f/0x280
[ 902.963613] ? __pfx_pollwake+0x10/0x10
[ 902.963618] ? __pfx_pollwake+0x10/0x10
[ 902.963620] ? __pfx_pollwake+0x10/0x10
[ 902.963623] ? __pfx_pollwake+0x10/0x10
[ 902.963626] ? __pfx_pollwake+0x10/0x10
[ 902.963628] ? __pfx_pollwake+0x10/0x10
[ 902.963634] __x64_sys_poll+0xc5/0x150
[ 902.963637] x64_sys_call+0x14a8/0x2680
[ 902.963643] do_syscall_64+0x80/0xa40
[ 902.963647] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963651] ? common_file_perm+0x6c/0x1a0
[ 902.963658] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963659] ? eventfd_read+0xe1/0x210
[ 902.963665] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963667] ? rw_verify_area+0x57/0x190
[ 902.963674] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963675] ? vfs_read+0x25e/0x390
[ 902.963678] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963680] ? ksys_read+0xdb/0xf0
[ 902.963682] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963684] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963686] ? arch_exit_to_user_mode_prepare.isra.0+0xd/0xe0
[ 902.963688] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963689] ? do_syscall_64+0xb6/0xa40
[ 902.963691] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963693] ? switch_fpu_return+0x5c/0x100
[ 902.963698] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963700] ? arch_exit_to_user_mode_prepare.isra.0+0xc2/0xe0
[ 902.963702] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963704] ? do_syscall_64+0xb6/0xa40
[ 902.963705] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963707] ? do_syscall_64+0xb6/0xa40
[ 902.963709] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963710] ? do_syscall_64+0xb6/0xa40
[ 902.963713] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963714] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963716] ? arch_exit_to_user_mode_prepare.isra.0+0xd/0xe0
[ 902.963718] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963719] ? do_syscall_64+0xb6/0xa40
[ 902.963721] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963723] ? arch_exit_to_user_mode_prepare.isra.0+0xc2/0xe0
[ 902.963725] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.963727] ? do_syscall_64+0xb6/0xa40
[ 902.963728] ? irqentry_exit+0x43/0x50
[ 902.963732] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 902.963734] RIP: 0033:0x7eca4231b4fd
[ 902.963738] RSP: 002b:00007ffe9ac26380 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
[ 902.963741] RAX: ffffffffffffffda RBX: 00005a086f1fa210 RCX: 00007eca4231b4fd
[ 902.963743] RDX: 00000000000927ad RSI: 0000000000000006 RDI: 00007eca28001d70
[ 902.963744] RBP: 00007ffe9ac263a0 R08: 0000000000000000 R09: 000000007fffffff
[ 902.963745] R10: 00005a086f1fa210 R11: 0000000000000293 R12: 000000007fffffff
[ 902.963746] R13: 00007eca4251fbd0 R14: 0000000000000006 R15: 00007eca28001d70
[ 902.963751] </TASK>
[ 902.963999] task:udisksd state:D stack:0 pid:41797 tgid:41797 ppid:41406 task_flags:0x400100 flags:0x00004006
[ 902.964003] Call Trace:
[ 902.964005] <TASK>
[ 902.964007] __schedule+0x30d/0x7a0
[ 902.964011] schedule+0x27/0x90
[ 902.964013] request_wait_answer+0xd8/0x260
[ 902.964015] ? __pfx_autoremove_wake_function+0x10/0x10
[ 902.964018] __fuse_simple_request+0xd9/0x2d0
[ 902.964020] fuse_file_poll+0x1b5/0x230
[ 902.964025] do_poll.constprop.0+0x118/0x360
[ 902.964028] do_sys_poll+0x19f/0x280
[ 902.964034] ? __pfx_pollwake+0x10/0x10
[ 902.964037] ? __pfx_pollwake+0x10/0x10
[ 902.964039] ? __pfx_pollwake+0x10/0x10
[ 902.964042] ? __pfx_pollwake+0x10/0x10
[ 902.964045] ? __pfx_pollwake+0x10/0x10
[ 902.964047] ? __pfx_pollwake+0x10/0x10
[ 902.964052] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.964054] __x64_sys_ppoll+0xdd/0x170
[ 902.964057] x64_sys_call+0x1a0c/0x2680
[ 902.964059] do_syscall_64+0x80/0xa40
[ 902.964062] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.964063] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.964065] ? arch_exit_to_user_mode_prepare.isra.0+0xd/0xe0
[ 902.964067] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.964069] ? do_syscall_64+0xb6/0xa40
[ 902.964070] ? arch_exit_to_user_mode_prepare.isra.0+0xd/0xe0
[ 902.964072] ? srso_alias_return_thunk+0x5/0xfbef5
[ 902.964074] ? do_syscall_64+0xb6/0xa40
[ 902.964076] ? exc_page_fault+0x90/0x1b0
[ 902.964079] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 902.964080] RIP: 0033:0x73d2bc0ac772
[ 902.964082] RSP: 002b:00007ffe5ae3b1a8 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
[ 902.964083] RAX: ffffffffffffffda RBX: 00005bc0398be670 RCX: 000073d2bc0ac772
[ 902.964085] RDX: 00007ffe5ae3b1f0 RSI: 0000000000000006 RDI: 000073d2a4001ef0
[ 902.964086] RBP: 00007ffe5ae3b1d0 R08: 0000000000000008 R09: 0000000000000000
[ 902.964087] R10: 0000000000000000 R11: 0000000000000246 R12: 000000007fffffff
[ 902.964088] R13: 00007ffe5ae3b260 R14: 0000000000000006 R15: 000073d2a4001ef0
[ 902.964091] </TASK>
[ 902.964361] OOM killer enabled.
[ 902.964363] Restarting tasks: Starting
[ 902.970593] Restarting tasks: Done
[ 902.970684] random: crng reseeded on system resumption
[ 903.179239] PM: suspend exit
[ 903.179340] PM: suspend entry (s2idle)
Thus I checked the processes:
❯ ps aux |grep udisksd
root 3570 0.1 0.0 479208 25272 ? Ssl 01:54 0:01 /usr/libexec/udisks2/udisksd
1000000 33315 0.0 0.0 469256 13640 ? Ssl 01:57 0:00 /usr/libexec/udisks2/udisksd
1000000 41797 0.0 0.0 404512 14392 ? Ssl 02:04 0:00 /usr/libexec/udisks2/udisksd
And indeed the ones causing trouble were the two udisksd processes running inside the containers (UID 1000000 is the containers' shifted root). Once I stopped the containers (or ran systemctl stop udisks2.service inside the containers), I was able to suspend again 🥳
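A minimal sketch of that workaround, assuming two hypothetical container names (c1, c2); masking instead of stopping keeps udisks2 from coming back on the next container boot:

❯ lxc exec c1 -- systemctl mask --now udisks2.service
❯ lxc exec c2 -- systemctl mask --now udisks2.service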
Now... this is kind of weird and apparently started only recently, with some LXD snap auto-refresh, as these machines have been here for months/years without ever causing any trouble to the host.
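If the regression really came with the snap auto-refresh, reverting to the previous (now disabled) revision shown in the snap list above should make suspend work again; a sketch for testing that theory:

❯ sudo snap revert lxd          # falls back to the previous revision (38413 here)
❯ sudo snap refresh --hold lxd  # optionally hold auto-refreshes while bisecting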
Steps to reproduce
Nothing apart from having LXD containers with udisks2 installed and running.
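A minimal reproducer sketch, assuming a hypothetical container name c1 (udisks2 may not be present in the default image, so install it explicitly):

❯ lxc launch ubuntu:24.04 c1
❯ lxc exec c1 -- apt-get install -y udisks2   # if not already present
❯ systemctl suspend                           # on the host: freezing user space should now time out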
Information to attach
- Any relevant kernel output (dmesg)
- Instance log (lxc info NAME --show-log)
- Instance configuration (lxc config show NAME --expanded)
- Main daemon log (at /var/log/lxd/lxd.log or /var/snap/lxd/common/lxd/logs/lxd.log)
- Output of the client with --debug
- Output of the daemon with --debug (or use lxc monitor while reproducing the issue)