Bug 2168762

Summary: virtqemud coredumped when migrating a transient VM with an external vTPM
Product: Red Hat Enterprise Linux 9
Version: 9.2
Component: libvirt
Sub component: Live Migration
Assignee: Michal Privoznik <mprivozn>
Reporter: Yanqiu Zhang <yanqzhan>
QA Contact: Yanqiu Zhang <yanqzhan>
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
CC: dzheng, fjin, jdenemar, jtomko, lcheng, lmen, mprivozn, pkrempa, virt-maint, xuzhang, yanqzhan
Keywords: Triaged, Upstream
Target Milestone: rc
Hardware: x86_64
OS: Linux
Fixed In Version: libvirt-9.0.0-5.el9
Last Closed: 2023-05-09 07:27:45 UTC
Type: Bug

Attachments:
coredumpinfo

Description Yanqiu Zhang 2023-02-10 02:48:57 UTC
Created attachment 1943203 [details]
coredumpinfo

Description of problem:
virtqemud coredumps when a VM with an external vTPM is migrated back to the source host.
The crash also reproduces when migrating a transient VM out with an external vTPM.

Version-Release number of selected component (if applicable):
libvirt-9.0.0-4.el9.x86_64
qemu-kvm-7.2.0-7.el9.x86_64
swtpm-0.8.0-1.el9.x86_64
libtpms-0.9.1-2.20211126git1ff6fe1f43.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare the external swtpm process:
# mkdir /tmp/mytpm
#  chcon -t virtd_exec_t /usr/bin/swtpm_setup
#  chcon -t virtd_exec_t /usr/bin/swtpm
# systemd-run  swtpm_setup --tpm2 --tpmstate /tmp/mytpm --create-ek-cert --create-platform-cert --overwrite
Running as unit: run-r995379940785468196f426010edf0da2.service
# systemd-run /usr/bin/swtpm socket  --ctrl type=unixio,path=/tmp/guest-swtpm.sock,mode=0600 --tpmstate dir=/tmp/mytpm,mode=0600 --tpm2
Running as unit: run-r0a37494893cc4c6e87e80d3b53a6bcac.service
# chcon -t svirt_image_t /tmp/guest-swtpm.sock; chown qemu:qemu   /tmp/guest-swtpm.sock
#  ll -hZ /tmp/guest-swtpm.sock
srw-------. 1 qemu qemu system_u:object_r:svirt_image_t:s0 0 Jan 17 22:51 /tmp/guest-swtpm.sock

2. Start the VM with an external vTPM:
    <tpm model='tpm-crb'>
      <backend type='external'>
        <source type='unix' mode='connect' path='/tmp/guest-swtpm.sock'/>
      </backend>
      <alias name='tpm0'/>
    </tpm>

3. Migrate to the target (prepare an external swtpm process on the target too):
#  virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://hostB/system
Migration: [100 %]

4. Migrate back (run the external 'swtpm socket' process on the source again):
[hostB]# virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://hostA/system
Migration: [ 99 %]2023-02-09 13:46:50.093+0000: 111849: info : libvirt version: 9.0.0, package: 4.el9 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2023-02-09-05:54:47, )
2023-02-09 13:46:50.093+0000: 111849: info : hostname: hostB
2023-02-09 13:46:50.093+0000: 111849: warning : virDomainMigrateVersion3Full:3493 : Guest avocado-vt-vm1 probably left in 'paused' state on source
error: Disconnected from qemu:///system due to end of file
Migration: [100 %]

[hostB]# coredumpctl list
TIME                           PID UID GID SIG     COREFILE EXE                 SIZE
Thu 2023-02-09 08:20:20 EST 109275   0   0 SIGSEGV present  /usr/sbin/virtqemud 1.1M
Thu 2023-02-09 08:22:22 EST 109757   0   0 SIGSEGV present  /usr/sbin/virtqemud 1.0M
Thu 2023-02-09 08:46:50 EST 111593   0   0 SIGSEGV present  /usr/sbin/virtqemud 1.0M

On hostA, however, the VM keeps running and the vTPM works fine inside the guest OS.


Actual results:
virtqemud on the source host crashes with SIGSEGV when the migration completes, and the client connection is dropped with "end of file".

Expected results:
The migration completes and virtqemud keeps running.


Additional info:
1. Does not reproduce with a normal (emulator-backed) vTPM such as:
    <tpm model='tpm-crb'>
      <backend type='emulator' version='2.0'/>
    </tpm>
2. For the coredump info, see the attachment.

Comment 1 Peter Krempa 2023-02-10 08:08:25 UTC
Crashes in:

                Stack trace of thread 111598:
                #0  0x00007f95eccd0e9c __strrchr_evex (libc.so.6 + 0xd0e9c)
                #1  0x00007f95ed2e24e5 virFileIsSharedFSType (libvirt.so.0 + 0xe24e5)
                #2  0x00007f95e85091a4 qemuExtTPMCleanupHost (libvirt_driver_qemu.so + 0x1581a4)
                #3  0x00007f95e84a101a qemuExtDevicesCleanupHost (libvirt_driver_qemu.so + 0xf001a)
                #4  0x00007f95e84677cb qemuDomainRemoveInactiveCommon (libvirt_driver_qemu.so + 0xb67cb)
                #5  0x00007f95e84679c4 qemuDomainRemoveInactive (libvirt_driver_qemu.so + 0xb69c4)
                #6  0x00007f95e84bac2f qemuMigrationSrcConfirm (libvirt_driver_qemu.so + 0x109c2f)
                #7  0x00007f95e848df03 qemuDomainMigrateConfirm3Params.lto_priv.0 (libvirt_driver_qemu.so + 0xdcf03)
                #8  0x00007f95ed4f0755 virDomainMigrateConfirm3Params (libvirt.so.0 + 0x2f0755)
                #9  0x000055c8f86ddb42 remoteDispatchDomainMigrateConfirm3ParamsHelper.lto_priv.0 (virtqemud + 0x40b42)
                #10 0x00007f95ed3f54cc virNetServerProgramDispatch (libvirt.so.0 + 0x1f54cc)
                #11 0x00007f95ed3fb2b8 virNetServerHandleJob (libvirt.so.0 + 0x1fb2b8)
                #12 0x00007f95ed3345e3 virThreadPoolWorker (libvirt.so.0 + 0x1345e3)
                #13 0x00007f95ed333b99 virThreadHelper (libvirt.so.0 + 0x133b99)
                #14 0x00007f95ecc9f802 start_thread (libc.so.6 + 0x9f802)
                #15 0x00007f95ecc3f450 __clone3 (libc.so.6 + 0x3f450)

Apparently 'tpm->data.emulator.storagepath' is NULL.
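
For context on the crash mechanism: for the external backend, the tpm->data union member holding emulator state is never populated, so the cleanup path hands a NULL storage path to virFileIsSharedFSType(), which walks the path with strrchr() and faults in libc (the __strrchr_evex frame above). A minimal stand-alone illustration with simplified stand-in types (hypothetical names, not the actual libvirt source):

    #include <string.h>

    /* Simplified stand-in for virDomainTPMDef: per-backend data lives
     * in a union-like member, so emulator.storagepath is only ever
     * filled in for the emulator backend. */
    struct tpm_def {
        int type;                  /* 0 = emulator, 1 = external */
        struct {
            char *storagepath;     /* stays NULL for external */
        } emulator;
    };

    /* Stand-in for the virFileIsSharedFSType() call chain, which per
     * the trace above ends up running strrchr() on the given path. */
    static int is_shared_fs(const char *path)
    {
        return strrchr(path, '/') != NULL;  /* SIGSEGV if path is NULL */
    }

    int main(void)
    {
        struct tpm_def tpm = { .type = 1 /* external */ };
        return is_shared_fs(tpm.emulator.storagepath);  /* crashes */
    }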

Comment 3 Michal Privoznik 2023-02-10 08:55:10 UTC
Patch posted on the list:

https://listman.redhat.com/archives/libvir-list/2023-February/237695.html

Comment 4 Michal Privoznik 2023-02-10 10:49:06 UTC
Merged upstream as:

commit 03f76e577d66f8eea6aa7cc513e75026527b4cda
Author:     Michal Prívozník <mprivozn>
AuthorDate: Fri Feb 10 09:47:05 2023 +0100
Commit:     Michal Prívozník <mprivozn>
CommitDate: Fri Feb 10 10:49:13 2023 +0100

    qemu_extdevice: Do cleanup host only for VIR_DOMAIN_TPM_TYPE_EMULATOR
    
    We only set up host for VIR_DOMAIN_TPM_TYPE_EMULATOR and thus
    similarly, we should do cleanup for the same type. This also
    fixes a crasher, in which qemuTPMEmulatorCleanupHost() accesses
    tpm->data.emulator.storagepath which is NULL for
    VIR_DOMAIN_TPM_TYPE_EXTERNAL.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2168762
    Signed-off-by: Michal Privoznik <mprivozn>
    Reviewed-by: Ján Tomko <jtomko>

v9.0.0-199-g03f76e577d
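
The shape of the fix, per the commit message: make host cleanup symmetric with host setup, so any backend other than VIR_DOMAIN_TPM_TYPE_EMULATOR returns before emulator-only state is touched. A sketch of the resulting control flow, reusing the simplified stand-in types from the illustration under comment 1 (hypothetical names, not the verbatim upstream diff):

    enum tpm_backend { TPM_EMULATOR, TPM_PASSTHROUGH, TPM_EXTERNAL };

    struct tpm_def {
        enum tpm_backend type;
        struct { char *storagepath; } emulator;
    };

    static void emulator_cleanup_host(struct tpm_def *tpm)
    {
        /* Only reached for the emulator backend, where storagepath
         * was set during host setup, so it is safe to use here. */
        (void)tpm->emulator.storagepath;
    }

    static void ext_tpm_cleanup_host(struct tpm_def *tpm)
    {
        switch (tpm->type) {
        case TPM_EMULATOR:
            emulator_cleanup_host(tpm);
            break;
        case TPM_PASSTHROUGH:
        case TPM_EXTERNAL:
            /* Nothing was set up on the host for these backends,
             * so there is nothing to clean up. */
            break;
        }
    }

    int main(void)
    {
        struct tpm_def external = { .type = TPM_EXTERNAL };
        ext_tpm_cleanup_host(&external);  /* no crash: guarded */
        return 0;
    }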

Comment 6 Yanqiu Zhang 2023-02-15 03:00:45 UTC
Tested with:
libvirt-9.0.0-5.el9
qemu-kvm-7.2.0-8.el9.x86_64
swtpm-0.8.0-1.el9.x86_64
libtpms-0.9.1-2.20211126git1ff6fe1f43.el9.x86_64

    <tpm model='tpm-crb'>
      <backend type='external'>
        <source type='unix' mode='connect' path='/tmp/guest-swtpm.sock'/>
      </backend>
      <alias name='tpm0'/>
    </tpm>

1. Define and start the VM, then migrate to the target and back: passes.

[root@hostB ~]# pidof virtqemud
67168
[root@hostB ~]#  virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://hostA/system
Migration: [100 %]
[root@hostB ~]# pidof virtqemud
67168

2. Migrate a persistent VM out with --undefinesource: passes.
[root@hostA ~]# pidof virtqemud
54951
[root@hostA ~]# virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://hostB/system --undefinesource
Migration: [100 %]
[root@hostA ~]# pidof virtqemud
54951
[root@hostA ~]# virsh list --all
 Id   Name   State
--------------------

3. Migrate a transient VM out: passes.
[root@hostA ~]# pidof virtqemud
54951
[root@hostA ~]# virsh migrate avocado-vt-vm1 --live --verbose qemu+ssh://hostB/system
Migration: [100 %]
[root@hostA ~]# pidof virtqemud
54951
[root@hostA ~]# virsh list --all
 Id   Name   State
--------------------

4. Check coredumpctl on both hosts:
[root@hostA ~]# coredumpctl list
No coredumps found.

[root@hostB ~]# coredumpctl list
No coredumps found.

Comment 9 Yanqiu Zhang 2023-02-15 07:53:18 UTC
Marking as verified per the comment 6 tests with the downstream package libvirt-9.0.0-5.el9.

Comment 11 errata-xmlrpc 2023-05-09 07:27:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (libvirt bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2171