Bug 2046170 - Possible hang or crash of libvirtd/virtqemud when starting a VM and device mapper is not available
Summary: Possible hang or crash of libvirtd/virtqemud when starting a VM and device mapper is not available
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Meina Li
URL:
Whiteboard:
Depends On:
Blocks: 2046172
 
Reported: 2022-01-26 10:38 UTC by Peter Krempa
Modified: 2022-05-17 13:08 UTC
CC List: 7 users

Fixed In Version: libvirt-8.0.0-3.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2046172 (view as bug list)
Environment:
Last Closed: 2022-05-17 12:46:17 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
Github autotest tp-libvirt pull 4071 (open): Add case to cover starting guest with multiple disks when device mapp… (last updated 2022-05-10 03:37:29 UTC)
Red Hat Issue Tracker RHELPLAN-109786 (last updated 2022-01-26 10:44:03 UTC)
Red Hat Product Errata RHBA-2022:2390 (last updated 2022-05-17 12:46:49 UTC)

Description Peter Krempa 2022-01-26 10:38:36 UTC
Description of problem:
When starting a VM while device mapper is not available (e.g. the module is removed, or libvirt is used in a container that doesn't grant access to the device mapper control socket), libvirtd/virtqemud enters a code path where an uninitialized variable is dereferenced, causing undefined behaviour. So far I have observed one case where nothing happens and two cases of a hung/looping process, but a crash is theoretically possible too.
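
For illustration, here is a minimal sketch of the hazardous pattern described above; the helper name, signature and logic are simplified stand-ins, not the actual libvirt code:

#include <glib.h>

/* Hypothetical helper modelled on virDevMapperGetTargets(): when device
 * mapper is unavailable it returns early without ever touching *targets,
 * so the caller's variable keeps whatever happened to be on the stack. */
static int
get_dm_targets(const char *disk_path, GSList **targets)
{
    if (!g_file_test("/dev/mapper/control", G_FILE_TEST_EXISTS))
        return 0;                         /* early return, *targets untouched */

    *targets = g_slist_append(NULL, g_strdup(disk_path));
    return 0;
}

static void
setup_disk(const char *disk_path, GSList **all_paths)
{
    GSList *targets;                      /* BUG: never initialized */

    get_dm_targets(disk_path, &targets);  /* may leave 'targets' as stack garbage */

    /* g_slist_concat() walks its first argument to find the tail; if
     * 'targets' holds a stale non-NULL pointer this chases indeterminate
     * memory and may do nothing, loop forever (the observed hang), or
     * crash. */
    *all_paths = g_slist_concat(targets, *all_paths);
}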

Version-Release number of selected component (if applicable):
>=libvirt-7.8

How reproducible:
Unknown, depends on stack layout. 

Steps to Reproduce:
1. Make device mapper inaccessible (remove kernel module, or remove /dev/mapper/control device node)
2. Try to start a VM with 3+ disks using local file-based storage.
3. Look for the startup process getting stuck.

Actual results:
libvirtd/virtqemud gets stuck or crashes

Expected results:
The VM starts normally (or fails to start with a clear error); libvirtd/virtqemud does not hang or crash.

Additional info:
https://gitlab.com/libvirt/libvirt/-/issues/268

Fixed upstream by:

commit ddb2384f0c78a91c40d95afdbc7fe325e95ef2bc 
Author: Peter Krempa <pkrempa>
Date:   Tue Jan 25 17:49:00 2022 +0100

    qemuDomainSetupDisk: Initialize 'targetPaths'
    
    The compiler isn't able to see that 'virDevMapperGetTargets', in cases
    when e.g. the devmapper isn't available, may not initialize the value in
    the pointer passed as the second argument.

    The usage in 'qemuDomainSetupDisk' led to an accidental infinite loop, as
    previous calls apparently doctored the stack to a point where
    'g_slist_concat' would end up in an infinite loop trying to find the end
    of the list.
    
    Fixes: 6c49c2ee9fcb88de02cdc333f666a8e95d60a3b0
    Closes: https://gitlab.com/libvirt/libvirt/-/issues/268
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Andrea Bolognani <abologna>

v8.0.0-180-gddb2384f0c

Note that the patch adds a trivial NULL-initialization of a pointer, so even if it's not possible to reproduce the issue, the fix is trivial and safe.
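
The shape of that fix, shown on the same simplified stand-in code sketched in the description (the real change is the one-line initializer added by commit ddb2384f0c):

static void
setup_disk(const char *disk_path, GSList **all_paths)
{
    GSList *targets = NULL;               /* the fix: start from a valid empty list */

    get_dm_targets(disk_path, &targets);  /* the early-return path now leaves NULL */

    /* g_slist_concat() treats NULL as an empty list, so this is safe even
     * when device mapper is unavailable. */
    *all_paths = g_slist_concat(targets, *all_paths);
}

In glib-style code the general idiom is the same: initialize every out-parameter list pointer to NULL at its declaration, so that "the callee didn't fill it in" degenerates to an empty list rather than undefined behaviour.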

Comment 2 Meina Li 2022-01-30 03:21:19 UTC
Can reproduce in libvirt-8.0.0-2.el9.x86_64 and qemu-kvm-6.2.0-5.el9.x86_64:
1. Prepare a guest with three disks.
# virsh domblklist lmn
 Target   Source
-----------------------------------------------
 vda      /var/lib/libvirt/images/lmn.qcow2
 sda      /var/lib/libvirt/images/test.qcow2
 sdb      /var/lib/libvirt/images/test1.qcow2
2. Start the guest.
# virsh start lmn
----hang
3. Check the libvirtd log.
2022-01-30 02:41:06.402+0000: 428107: debug : qemuProcessLaunch:7495 : Writing early domain status to disk
2022-01-30 02:41:06.403+0000: 428107: debug : qemuProcessLaunch:7499 : Waiting for handshake from child
2022-01-30 02:41:06.403+0000: 428107: debug : virCommandHandshakeWait:2852 : Wait for handshake on 27
2022-01-30 02:41:06.403+0000: 428107: debug : qemuProcessLaunch:7507 : Building domain mount namespace (if required)
2022-01-30 02:41:06.403+0000: 428107: debug : qemuDomainSetupAllDisks:296 : Setting up disks
2022-01-30 02:41:06.403+0000: 428107: debug : virDMOpen:141 : device mapper not available
2022-01-30 02:41:06.403+0000: 428107: debug : virDMOpen:141 : device mapper not available
2022-01-30 02:41:06.403+0000: 428107: debug : virDMOpen:141 : device mapper not available

The same scenario passes with libvirt-8.1.0-1.fc35.x86_64 and qemu-kvm-6.1.0-13.fc35.x86_64 in split daemon mode:
1. Prepare a guest with three disks.
# virsh domblklist lmn
 Target   Source
-----------------------------------------------
 vda      /var/lib/libvirt/images/lmn.qcow2
 sda      /var/lib/libvirt/images/test.qcow2
 sdb      /var/lib/libvirt/images/test1.qcow2
2. Start the guest.
# virsh start lmn
Domain 'lmn' started
3. Check the libvirtd log.
2021-12-15 02:15:22.803+0000: 100807: debug : qemuProcessLaunch:7558 : Writing early domain status to disk
2021-12-15 02:15:22.803+0000: 100807: debug : qemuProcessLaunch:7562 : Waiting for handshake from child
2021-12-15 02:15:22.803+0000: 100807: debug : virCommandHandshakeWait:2851 : Wait for handshake on 38
2021-12-15 02:15:22.803+0000: 100807: debug : qemuProcessLaunch:7570 : Building domain mount namespace (if required)
2021-12-15 02:15:22.803+0000: 100807: debug : qemuDomainSetupAllDisks:296 : Setting up disks
2021-12-15 02:15:22.803+0000: 100807: debug : qemuDomainSetupAllDisks:304 : Setup all disks
2021-12-15 02:15:22.803+0000: 100807: debug : qemuDomainSetupAllHostdevs:337 : Setting up hostdevs
2021-12-15 02:15:22.803+0000: 100807: debug : qemuDomainSetupAllHostdevs:345 : Setup all hostdevs

Comment 3 Meina Li 2022-01-30 03:39:45 UTC
(In reply to Meina Li from comment #2)

> Can be passed in libvirt-8.1.0-1.fc35.x86_64 and
> qemu-kvm-6.1.0-13.fc35.x86_64 with split daemon mode.

I forgot to remove the /dev/mapper/control device node above; after removing it:
# rm -rf /dev/mapper/control

> 1. Prepare a guest with three disks.
> # virsh domblklist lmn
>  Target   Source
> -----------------------------------------------
>  vda      /var/lib/libvirt/images/lmn.qcow2
>  sda      /var/lib/libvirt/images/test.qcow2
>  sdb      /var/lib/libvirt/images/test1.qcow2
> 2. Start the guest.
> # virsh start lmn
> Domain 'lmn' started
> 3. Check the libvirtd log.
2022-01-30 03:29:19.094+0000: 109432: debug : virCommandHandshakeWait:2852 : Wait for handshake on 28
2022-01-30 03:29:19.095+0000: 109432: debug : qemuProcessLaunch:7495 : Building domain mount namespace (if required)
2022-01-30 03:29:19.095+0000: 109432: debug : qemuDomainSetupAllDisks:296 : Setting up disks
2022-01-30 03:29:19.095+0000: 109432: debug : virDMOpen:141 : device mapper not available
2022-01-30 03:29:19.095+0000: 109432: debug : virDMOpen:141 : device mapper not available
2022-01-30 03:29:19.095+0000: 109432: debug : virDMOpen:141 : device mapper not available
2022-01-30 03:29:19.095+0000: 109432: debug : qemuDomainSetupAllDisks:304 : Setup all disks

Comment 7 Meina Li 2022-02-09 10:02:44 UTC
Verified Version:
libvirt-8.0.0-3.el9.x86_64
qemu-kvm-6.2.0-7.el9.x86_64

Verified Steps:
1. Remove /dev/mapper/control.
# rm -rf /dev/mapper/control 
2. Start a guest with three disks.
# virsh domblklist lmn
 Target   Source
-----------------------------------------------
 vda      /var/lib/libvirt/images/lmn.qcow2
 vdb      /var/lib/libvirt/images/test.qcow2
 vdc      /var/lib/libvirt/images/test1.qcow2
# virsh start lmn
Domain 'lmn' started
------no hang
3. Check the libvirtd log.
2022-02-09 10:01:53.747+0000: 159442: debug : qemuProcessLaunch:7499 : Waiting for handshake from child 
2022-02-09 10:01:53.747+0000: 159442: debug : virCommandHandshakeWait:2852 : Wait for handshake on 27
2022-02-09 10:01:53.747+0000: 159442: debug : qemuProcessLaunch:7507 : Building domain mount namespace (if required)
2022-02-09 10:01:53.747+0000: 159442: debug : qemuDomainSetupAllDisks:296 : Setting up disks
2022-02-09 10:01:53.747+0000: 159442: debug : virDMOpen:141 : device mapper not available
2022-02-09 10:01:53.747+0000: 159442: debug : virDMOpen:141 : device mapper not available 
2022-02-09 10:01:53.747+0000: 159442: debug : virDMOpen:141 : device mapper not available
2022-02-09 10:01:53.747+0000: 159442: debug : qemuDomainSetupAllDisks:304 : Setup all disks

Comment 9 errata-xmlrpc 2022-05-17 12:46:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390

