+++ This bug was initially created as a clone of Bug #2130192 +++

For successful migration with TPM state on a shared filesystem (e.g. NFS, CEPH) we need to backport some patches that landed upstream after v7.1.0:

commit a0bcec03761477371ff7c2e80dc07fff14222d92
Author:     Ross Lagerwall <ross.lagerwall>
AuthorDate: Mon Aug 1 15:25:25 2022 +0100
Commit:     Stefan Berger <stefanb.ibm.com>
CommitDate: Fri Sep 9 17:55:45 2022 -0400

    tpm_emulator: Avoid double initialization during migration

    When resuming after a migration, the backend sends CMD_INIT to the
    emulator from the startup callback, then it sends the migration state
    from the vmstate to the emulator, then it sends CMD_INIT again. Skip
    the first CMD_INIT during a migration to avoid initializing the TPM
    twice.

    Signed-off-by: Ross Lagerwall <ross.lagerwall>
    Reviewed-by: Marc-André Lureau <marcandre.lureau>
    Tested-by: Stefan Berger <stefanb.com>
    Signed-off-by: Stefan Berger <stefanb.com>

and

commit 99bdcd2cc2d05833f5c11caca22193f8dd878ae9
Author:     Stefan Berger <stefanb.ibm.com>
AuthorDate: Mon Sep 12 13:47:41 2022 -0400
Commit:     Stefan Berger <stefanb.ibm.com>
CommitDate: Tue Sep 13 10:27:17 2022 -0400

    tpm_emulator: Have swtpm relock storage upon migration fall-back

    Swtpm may release the lock once the last one of its state blobs has
    been migrated out. In case of VM migration failure QEMU now needs to
    notify swtpm that it should again take the lock, which it can
    otherwise only do once it has received the first TPM command from the
    VM.

    Only try to send the lock command if swtpm supports it. It will not
    have released the lock (and support shared storage setups) if it
    doesn't support the locking command, since the functionality of
    releasing the lock upon state blob reception and the lock command
    were added to swtpm 'together'. If QEMU sends the lock command and
    the storage has already been locked, no error is reported.

    If swtpm does not receive the lock command (from an older version of
    QEMU), it will lock the storage once the first TPM command has been
    received. So sending the lock command is an optimization.

    Signed-off-by: Stefan Berger <stefanb.com>
    Reviewed-by: Marc-André Lureau <marcandre.lureau>
    Message-id: 20220912174741.1542330-3-stefanb.com

For a clean backport, I found that the following order applies cleanly:

d1c637ecff6f8c13cc9983b96a7aad2922d283f9
a0bcec03761477371ff7c2e80dc07fff14222d92
f0ccce6a95f6ff947040692ef941230918181562
efef4756c7f66e51fd5bfa132680ee0fb585f7a5
99bdcd2cc2d05833f5c11caca22193f8dd878ae9
These should all be part of qemu-7.2, which will be rebased as part of bug 2135806 for RHEL 9.2; however, is this bug being added because you need to backport some fixes into qemu-6.2, which was used for RHEL 9.0? If so, then you should set the ZTR and add the zstream? flag as well.

I'm also updating the bug to assign to Marc-Andre since he reviewed for qemu-kvm, moving to POST, setting DTM, adding the rebase dependency, and updating the devel whiteboard.

We'll need a qa_ack+ and ITM in order to get release+.
(In reply to John Ferlan from comment #2)
> These should all be part of qemu-7.2 which will be rebased as part of bug
> 2135806 for RHEL 9.2; however, is this bug being added because you need to
> backport some fixes into qemu-6.2 which was used for RHEL 9.0? If so, then
> you should set the ZTR and add the zstream? flag as well.
>
> I'm also updating the bug to assign to Marc-Andre since he reviewed for
> qemu-kvm, moving to POST, setting DTM, adding the rebase dependency, and
> updating the devel whiteboard.
>
> We'll need a qa_ack+ and ITM in order to get release+

Ah, sorry. I did not realize that QEMU is going to be rebased to 7.2.0. So far, the feature is targeted for RHEL 9.2, so no z-stream is needed. Thanks for your help!
QE bot (pre-verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 tests pass.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:2162