Bug 1258349

Summary: [libvirt] incorrect XML restore on dehibernation path
Product: [oVirt] vdsm
Component: General
Reporter: Francesco Romani <fromani>
Assignee: Francesco Romani <fromani>
Status: CLOSED CURRENTRELEASE
QA Contact: sefi litmanovich <slitmano>
Severity: urgent
Priority: urgent
CC: bazulay, bugs, ecohen, fromani, gklein, lsurette, mgoldboi, ofrenkel, rbalakri, sbonazzo, slitmano, ycui, yeylon, ylavi
Target Milestone: ovirt-3.5.5
Target Release: 4.17.0
Flags: ylavi: ovirt-3.5.z?
       ylavi: ovirt-3.6.0?
       ylavi: planning_ack+
       rule-engine: devel_ack+
       rule-engine: testing_ack?
Hardware: Unspecified
OS: Unspecified
Whiteboard: virt
Fixed In Version: 4.16.27
Doc Type: Bug Fix
Type: Bug
oVirt Team: Virt
Last Closed: 2015-10-26 13:43:58 UTC

Description Francesco Romani 2015-08-31 07:13:20 UTC
Description of problem:
libvirt 1.2.8 builds prior to 1.2.8-16.el7_1.4 (as found in CentOS 7.1) have a bug which causes incorrect XML to be reloaded when virDomainRestore is called. We use this API in the restore (dehibernation) flow, so we must make sure we depend on a fixed libvirt.

Please note the bug is already fixed upstream; libvirt 1.2.13 (in CentOS 7.2) works fine.
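
For illustration only, a minimal sketch (using the libvirt-python bindings, with an assumed qemu:///system URI) of what a runtime version guard could look like. This is not the actual fix: the dependency is enforced at the packaging level, and getLibVersion() cannot see the downstream -16.el7_1.4 build number anyway, only the upstream version.

import libvirt

# Sketch only: VDSM enforces the dependency via packaging, not at runtime.
conn = libvirt.open('qemu:///system')

# getLibVersion() encodes the upstream version as
# major * 1,000,000 + minor * 1,000 + micro, e.g. 1002013 for 1.2.13.
version = conn.getLibVersion()
if version < 1002013:
    # Upstream versions below 1.2.13 may still carry the XML-restore bug
    # unless the downstream build (e.g. -16.el7_1.4) backported the fix.
    print('libvirt %d: virDomainRestore may reload incorrect XML' % version)

conn.close()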

How reproducible:
100% on CentOS 7.1 with libvirt < 1.2.8-16.el7_1.4


Steps to Reproduce:
1. suspend VM
2. resume VM
3. check the XML of the resumed VM (see the scripted sketch below)
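
The steps can be scripted as follows (a minimal sketch using the libvirt-python bindings; the VM name 'testvm' and the state-file path are assumptions):

import difflib
import libvirt

STATE_FILE = '/var/tmp/vm.save'  # hypothetical path

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('testvm')  # hypothetical VM name

xml_before = dom.XMLDesc(0)

# Step 1: suspend (save) the VM to disk -- this is virDomainSave.
dom.save(STATE_FILE)

# Step 2: resume (restore) the VM -- this is virDomainRestore,
# the API affected by the libvirt bug.
conn.restore(STATE_FILE)

# Step 3: check the XML of the resumed VM.
dom = conn.lookupByName('testvm')
xml_after = dom.XMLDesc(0)

diff = list(difflib.unified_diff(xml_before.splitlines(),
                                 xml_after.splitlines(),
                                 lineterm=''))
# On affected builds (libvirt < 1.2.8-16.el7_1.4) the diff shows missing
# or changed fields; on fixed builds it should be empty.
print('\n'.join(diff) if diff else 'XML unchanged')

conn.close()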

Actual results:
Some fields in the domain XML are missing or unexpectedly changed.

Expected results:
No unwanted changes: the domain XML is identical before and after the suspend/resume cycle.

Additional info:
No code change is needed in VDSM; we just need to depend on the fixed libvirt package.

Comment 1 Francesco Romani 2015-09-08 06:22:25 UTC
waiting for package availability.

Comment 2 Francesco Romani 2015-09-16 11:03:19 UTC
(In reply to Francesco Romani from comment #1)
> waiting for package availability.

Package available on CentOS 7.1 since this morning: 

[root@goji ~]# rpm -qa | grep libvirt
libvirt-debuginfo-1.2.17-5.el7.centos.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-config-network-1.2.8-16.el7_1.4.x86_64
libvirt-docs-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.4.x86_64
libvirt-devel-1.2.8-16.el7_1.4.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-lxc-1.2.8-16.el7_1.4.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-lxc-1.2.8-16.el7_1.4.x86_64
libvirt-python-debuginfo-1.2.13-1.el7.centos.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.4.x86_64
libvirt-login-shell-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.4.x86_64
libvirt-client-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.4.x86_64
libvirt-1.2.8-16.el7_1.4.x86_64

Comment 3 Yaniv Lavi 2015-10-14 14:50:47 UTC
What is the status of this bug?
Can you please set TM/target if this needs to be ON_QA?
Are the flags correct?

Comment 4 Francesco Romani 2015-10-15 14:47:41 UTC
(In reply to Yaniv Dary from comment #3)
> What is the status of this bug?
> Can you please set TM/target if this needs to be ON_QA?
> Are the flags correct?

It needs to be in 3.5.5 (fixed in version 4.16.27) and in 3.6.0 (fixed in version 4.17.7). Targeting 3.5.5.

Comment 5 Francesco Romani 2015-10-15 14:53:30 UTC
Please note that we are just consuming a fix from libvirt, hence the amount of testing needed is really minimal.

Comment 6 Red Hat Bugzilla Rules Engine 2015-10-18 08:34:13 UTC
Bug tickets that are moved to testing must have target release set to make sure tester knows what to test. Please set the correct target release before moving to ON_QA.

Comment 7 sefi litmanovich 2015-10-19 09:40:38 UTC
Verified on both rhevm 3.5.5 and 3.6:

1. engine: rhevm-3.5.5-0.1.el6ev.noarch
   host: vdsm-4.16.27-1.el7ev.x86_64

[root@host~]# rpm -qa | grep libvirt
libvirt-daemon-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.4.x86_64
libvirt-client-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.4.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.4.x86_64


2. engine: rhevm-3.6.0.1-0.1.el6.noarch
   host: vdsm-4.17.9-1.el7ev.noarch

[root@localhost ~]# rpm -qa | grep libvirt
libvirt-daemon-driver-nwfilter-1.2.17-12.el7.x86_64
libvirt-daemon-driver-storage-1.2.17-12.el7.x86_64
libvirt-daemon-config-nwfilter-1.2.17-12.el7.x86_64
libvirt-client-1.2.17-12.el7.x86_64
libvirt-daemon-driver-secret-1.2.17-12.el7.x86_64
libvirt-lock-sanlock-1.2.17-12.el7.x86_64
libvirt-daemon-1.2.17-12.el7.x86_64
libvirt-daemon-driver-nodedev-1.2.17-12.el7.x86_64
libvirt-daemon-kvm-1.2.17-12.el7.x86_64
libvirt-daemon-driver-interface-1.2.17-12.el7.x86_64
libvirt-daemon-driver-network-1.2.17-12.el7.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-qemu-1.2.17-12.el7.x86_64

In both cases the XML dump of the VM is the same before and after suspend and resume, and there is no problem opening a console to the VM via webadmin or ticketing the VM.

Comment 8 Francesco Romani 2015-10-19 21:39:58 UTC
(In reply to Red Hat Bugzilla Rules Engine from comment #6)
> Bug tickets that are moved to testing must have target release set to make
> sure tester knows what to test. Please set the correct target release before
> moving to ON_QA.

This bug was already verified, as per comment 7. Furthermore, I can't see any 4.16.x release in the "target release" dropdown menu.

Comment 9 Sandro Bonazzola 2015-10-26 13:43:58 UTC
oVirt 3.5.5 has been released, including fixes for this issue.