Bug 1689165
Summary: | Cannot get the active block job when doing managedsave after restarting libvirtd | |
---|---|---|---|
Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | yafu <yafu> |
Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
Status: | CLOSED ERRATA | QA Contact: | yisun |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | 8.1 | CC: | dyuan, jdenemar, lmen, pkrempa, xuzhang, yalzhang, yisun |
Target Milestone: | rc | ||
Target Release: | 8.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | libvirt-5.3.0-1.el8 | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2019-11-06 07:13:50 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
yafu
2019-03-15 10:34:38 UTC
This might have been broken by one of my refactors. This was already fixed recently:

commit 9ed9124d0d72fbc1dbaa4859fcfdc998ce060488
Author:     Peter Krempa <pkrempa>
AuthorDate: Thu Oct 18 12:34:49 2018 +0200
Commit:     Peter Krempa <pkrempa>
CommitDate: Thu Jan 17 17:12:50 2019 +0100

    qemu: process: refresh block jobs on reconnect

    Block job state was widely untracked by libvirt across restarts which
    was allowed by a stateless block job finishing handler which discarded
    disk state and redetected it.

    This is undesirable since we'll need to track more information for
    individual blockjobs due to -blockdev integration requirements.

    In case of legacy blockjobs we can recover whether the job is present
    at reconnect time by querying qemu. Adding tracking whether a job is
    present will allow simplification of the non-shared-storage
    cancellation code.

git desc 9ed9124d0d7 --contains
v5.1.0-rc1~462

verified with:
libvirt-5.5.0-1.module+el8.1.0+3580+d7f6488d.x86_64

[root@dell-per730-67 ~]# virsh domblklist avocado-vt-vm1
 Target   Source
------------------------------------------------------------------------
 vda      /var/lib/avocado/data/avocado-vt/images/jeos-27-x86_64.qcow2

[root@dell-per730-67 ~]# for i in {s1,s2,s3,s4}; do virsh snapshot-create-as avocado-vt-vm1 $i --disk-only; done
Domain snapshot s1 created
Domain snapshot s2 created
Domain snapshot s3 created
Domain snapshot s4 created

[root@dell-per730-67 ~]# virsh blockcommit avocado-vt-vm1 vda --base vda[2] --active --wait --verbose
Block commit: [100 %]
Now in synchronized phase

[root@dell-per730-67 ~]# virsh blockjob avocado-vt-vm1 vda --info
Active Block Commit: [100 %]

[root@dell-per730-67 ~]# virsh managedsave avocado-vt-vm1
error: Failed to save domain avocado-vt-vm1 state
error: Requested operation is not valid: domain has active block job

[root@dell-per730-67 ~]# systemctl restart libvirtd

[root@dell-per730-67 ~]# virsh managedsave avocado-vt-vm1
error: Failed to save domain avocado-vt-vm1 state
error: Requested operation is not valid: domain has active block job
<====== same expected error message here. issue fixed.

[root@dell-per730-67 ~]# virsh blockjob avocado-vt-vm1 vda --info
Active Block Commit: [100 %]

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723
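For reference, the recovery described in the commit message works by querying qemu for its block jobs when libvirtd reconnects to the running domain. Roughly the same information can be pulled by hand through the QMP passthrough; a minimal sketch against the domain from the verification above (qemu-monitor-command is a debugging aid rather than a supported management path, and the output is omitted here):

    # Ask qemu which block jobs it still knows about; this is the state the
    # reconnect refresh recovers after a libvirtd restart.
    virsh qemu-monitor-command avocado-vt-vm1 --pretty '{"execute": "query-block-jobs"}'

    # libvirt's own view of the job on the same disk, for comparison.
    virsh blockjob avocado-vt-vm1 vda --info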
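A note on the blockcommit invocation in the verification: the vda[2] argument names a backing-chain image by index rather than by path. The indexes come from the backingStore elements of the live domain XML; a quick way to see them (the grep pattern is only illustrative):

    # Show disk sources together with their backing-chain indexes, which is
    # what the vda[2] notation in blockcommit refers to.
    virsh dumpxml avocado-vt-vm1 | grep -E "index=|source file="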
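Also worth noting: the "domain has active block job" refusal is the expected outcome while the active commit sits in the synchronized phase, so to actually take the managed save the job has to be finished (or aborted) first. A minimal sketch, reusing the same domain:

    # Pivot the active commit so vda switches over to the base image; this
    # ends the block job, after which managedsave is no longer blocked.
    virsh blockjob avocado-vt-vm1 vda --pivot
    virsh managedsave avocado-vt-vm1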