Bug 2061275

Summary: Error message changed to an unhelpful one when QEMU fails during vtpm migration
Product: Red Hat Enterprise Linux 9
Reporter: lcheng
Component: libvirt
Assignee: Jiri Denemark <jdenemar>
libvirt sub component: Live Migration
QA Contact: Yanqiu Zhang <yanqzhan>
Status: CLOSED MIGRATED
Severity: low
Priority: low
CC: fjin, jdenemar, jsuchane, lmen, mprivozn, nilal, virt-maint, xuzhang, yanqzhan
Version: 9.0
Keywords: MigratedToJIRA, Triaged
Target Milestone: rc
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2023-09-22 12:20:20 UTC

Description lcheng 2022-03-07 08:47:28 UTC
Description of problem:
The error message changed to an unhelpful one when QEMU fails during vtpm migration.


Version-Release number of selected component (if applicable):
libvirt-8.0.0-6.el9.x86_64
qemu-kvm-6.2.0-10.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Install swtpm and swtpm-tools on both hosts.
# yum install swtpm swtpm-tools

2. Create a vtpm secret with the same UUID but different values on both hosts (a sketch of the full secret setup follows after these steps).
# virsh secret-list
 UUID                                   Usage
-----------------------------------------------------------
 1498c26a-fead-4b2f-8071-94f2aa6ca0c0   vtpm VTPM_example

3. Start guest with encrypted vtpm on source host.
    <tpm model='tpm-crb'>
      <backend type='emulator' version='2.0'>
        <encryption secret='1498c26a-fead-4b2f-8071-94f2aa6ca0c0'/>
      </backend>
      <alias name='tpm0'/>
    </tpm>

4. Migrate guest to target host.
# virsh -c 'qemu:///system' migrate --live --verbose --domain avocado-vt-vm1 --desturi qemu+ssh://X.X.X.X/system
Migration: [ 98 %]error: operation failed: domain is no longer running

5. Check guest log on target host.
# cat /var/log/libvirt/qemu/vm1.log
...
2022-03-07T02:46:53.281294Z qemu-kvm: tpm-emulator: Setting the stateblob (type 1) failed with a TPM error 0x21 decryption error
2022-03-07T02:46:53.281312Z qemu-kvm: error while loading state for instance 0x0 of device 'tpm-emulator'
2022-03-07T02:46:53.281713Z qemu-kvm: load of migration failed: Input/output error
2022-03-07 02:46:53.499+0000: shutting down, reason=crashed
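
A minimal sketch of the secret setup referenced in step 2. Only the UUID and usage name come from the secret-list output above; the XML attributes and the two example values (borrowed from comment 4 below) are illustrative, not taken verbatim from the original setup.

# cat vtpm-secret.xml
<secret ephemeral='no' private='no'>
  <uuid>1498c26a-fead-4b2f-8071-94f2aa6ca0c0</uuid>
  <usage type='vtpm'>
    <name>VTPM_example</name>
  </usage>
</secret>

# Define the same secret (same UUID) on both hosts:
# virsh secret-define vtpm-secret.xml

# Then set a different value on each host:
# virsh secret-set-value 1498c26a-fead-4b2f-8071-94f2aa6ca0c0 $(printf %s "open sesame" | base64)  # host A
# virsh secret-set-value 1498c26a-fead-4b2f-8071-94f2aa6ca0c0 $(printf %s "my abc" | base64)       # host B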



Actual results:
libvirt reports the following error message:
  error: operation failed: domain is no longer running


Expected results:
libvirt should report the following error messages, which are already present in the QEMU log on the destination:
qemu-kvm: tpm-emulator: Setting the stateblob (type 1) failed with a TPM error 0x21 decryption error
qemu-kvm: error while loading state for instance 0x0 of device 'tpm-emulator'
qemu-kvm: load of migration failed: Input/output error


Additional info:
This issue cannot be reproduced with libvirt-7.10.0-1, which passes the detailed QEMU error through:
# virsh -c 'qemu:///system' migrate --live --verbose --domain avocado-vt-vm1 --desturi qemu+ssh://X.X.X.X/system
Migration: [ 89 %]error: internal error: qemu unexpectedly closed the monitor: 2022-01-06T18:18:00.877274Z qemu-kvm: tpm-emulator: Setting the stateblob (type 1) failed with a TPM error 0x21 decryption error
2022-01-06T18:18:00.877312Z qemu-kvm: error while loading state for instance 0x0 of device 'tpm-emulator'
2022-01-06T18:18:00.877780Z qemu-kvm: load of migration failed: Input/output error

Comment 1 Jaroslav Suchanek 2023-03-29 15:42:46 UTC
Can you please retest this with a current libvirt build? Thanks.

Comment 2 Yanqiu Zhang 2023-03-30 03:54:29 UTC
Hi Jaroslav,

Liping(lcheng) gave me the auto case url: https://libvirt-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/libvirt/view/RHEL-9.2%20x86_64/job/libvirt-RHEL-9.2-runtest-x86_64-function-migration_priority_3/32/testReport/rhel.virsh/migrate_options_shared/negative_test_vtpm_emulator_without_postcopy/

The latest error message changed to:
"error: operation failed: job 'migration out' unexpectedly failed"

And per my manual test, besides this error output, the target QEMU log contains the same detailed messages as before.
#  virsh migrate avocado-vt-vm1 qemu+ssh://hostB/system --live  --verbose
Migration: [ 97 %]error: operation failed: job 'migration out' unexpectedly failed

[hostB]# tail -4 /var/log/libvirt/qemu/avocado-vt-vm1.log 
2023-03-30T03:28:22.430806Z qemu-kvm: tpm-emulator: Setting the stateblob (type 1) failed with a TPM error 0x1f 
2023-03-30T03:28:22.430841Z qemu-kvm: error while loading state for instance 0x0 of device 'tpm-emulator'
2023-03-30T03:28:22.433638Z qemu-kvm: load of migration failed: Input/output error
2023-03-30 03:28:22.637+0000: shutting down, reason=crashed

So the current error output is still somewhat vague. Is it possible to print the specific messages, since they are already there in the QEMU log?

Thanks!

Comment 3 Michal Privoznik 2023-04-11 13:57:25 UTC
This feels like we're overwriting the error message somewhere in the cleanup path after a failed migration. But unfortunately, I'm unable to reproduce the error during migration. Is there anything special I need to do?

Comment 4 Yanqiu Zhang 2023-04-12 03:46:50 UTC
(In reply to Michal Privoznik from comment #3)
> But unfortunately, I'm unable to reproduce the error
> during migration. Is there anything special I need to do?

Hi Michal,
Just try vtpm secrets with different values on the two hosts, please:
host A:
# MYSECRET=`printf %s "open sesame" | base64`
# virsh secret-set-value 6dd3e4a5-1d76-44ce-961f-f119f5aad935 $MYSECRET
Secret value set

hostB:
# MYSECRET=`printf %s "my abc" | base64`
# virsh secret-set-value 6dd3e4a5-1d76-44ce-961f-f119f5aad935 $MYSECRET
Secret value set
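
For completeness (an assumption, since the report does not show it): the secret has to be defined on both hosts, e.g. with virsh secret-define and a <secret> XML carrying this UUID and a <usage type='vtpm'> element, before secret-set-value will accept a value. Once set, one quick way to confirm the two hosts really hold different values is to compare the stored secrets (this sketch only works if the secret is not marked private; the output is the base64 encoding of the example values above):

[hostA]# virsh secret-get-value 6dd3e4a5-1d76-44ce-961f-f119f5aad935
b3BlbiBzZXNhbWU=
[hostB]# virsh secret-get-value 6dd3e4a5-1d76-44ce-961f-f119f5aad935
bXkgYWJj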

Comment 8 Jiri Denemark 2023-08-29 09:08:06 UTC
This is a general migration issue, not connected to vtpm at all. It's a race
between the Finish migration phase and the cleanup code that triggers when the
QEMU process dies. If libvirt on the destination host cleans everything up
before Finish is called by the source (for p2p migration) or by the client
(for non-p2p migration), the destination just reports that the domain does not
exist anymore.

Fixing it would be nice, although we'd need to come up with a clever solution
to avoid keeping a dead domain around forever in case Finish is never called.
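
To see this from the outside with the reproducer above (a sketch; exact output omitted since it depends on timing): by the time the client prints the generic error, the transient domain has already been cleaned up on the destination, and the only place the original QEMU error survives is the per-domain log:

[hostB]# virsh list --all | grep avocado-vt-vm1               # no entry left, already cleaned up
[hostB]# tail -4 /var/log/libvirt/qemu/avocado-vt-vm1.log     # the tpm-emulator error is still here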

Comment 9 RHEL Program Management 2023-09-22 12:19:05 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 10 RHEL Program Management 2023-09-22 12:20:20 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.