Bug 603696

Summary: virsh does not provide the ability to determine pool or domain persistence
Product: Red Hat Enterprise Linux 6
Reporter: Justin Clift <justin>
Component: libvirt
Assignee: Eric Blake <eblake>
Status: CLOSED CURRENTRELEASE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Priority: low
Version: 6.0
CC: dallan, eblake, jclift, mjenner, nzhang, xen-maint, xhu
Target Milestone: rc
Target Release: 6.0
Hardware: All
OS: Linux
Fixed In Version: libvirt-0_8_1-10_el6
Doc Type: Bug Fix
Last Closed: 2010-11-11 14:50:02 UTC

Description Justin Clift 2010-06-14 11:34:52 UTC
Description of problem:

In RHEL 6 beta 1, virsh doesn't provide a way to determine whether an existing domain or pool is persistent.

This can cause unexpected data loss when a domain or pool that was assumed to be persistent actually wasn't: when the libvirtd daemon is restarted, the transient domains/pools are gone.
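
For background, whether an object is persistent depends on how it was created: a domain started with "virsh create" from an XML file is transient (it exists only while running and leaves no configuration behind), while one added with "virsh define" is persistent. A minimal sketch, assuming a hypothetical domain XML at /tmp/foo.xml:

# virsh create /tmp/foo.xml     (transient: exists only while running)
# virsh define /tmp/foo.xml     (persistent: config stored on disk, e.g. under /etc/libvirt/qemu/)

Once running, both kinds look identical in "virsh list", which is exactly the gap reported here.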


Version-Release number of selected component (if applicable):
libvirt-0.7.6-2.el6.x86_64.rpm


How reproducible:

Every time.


Steps to Reproduce:
1. Migrate a virtualised guest to a host server using a client that doesn't automatically define the guest on the new host (see the workaround sketch after these steps).
2. Using virsh, try to determine whether the guest will survive a restart of libvirtd.  No luck, it can't be done.
3. Restart libvirtd on the host.
4. The guest is now gone, no longer existing on either the source or the destination host.  Data loss here. :/
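
Until virsh can report persistence, the loss in step 4 could have been avoided at migration time. A hedged sketch (the guest name "foo" and the destination URI are hypothetical):

# virsh migrate --live --persistent foo qemu+ssh://destination/system

or, on the destination host after a plain migration:

# virsh dumpxml foo > /tmp/foo.xml
# virsh define /tmp/foo.xml

Both the --persistent migration flag and "virsh define" predate this fix; what was missing was any way to check whether they had actually been used.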


Actual results:

Restarting libvirtd can result in data loss, as it's not possible to tell beforehand which objects will persist.


Expected results:

Some way to determine which objects will persist, so that appropriate action can be taken before restarting libvirtd.
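
For example, a "Persistent:" line in the "virsh dominfo" and "virsh pool-info" output would be enough, along the lines of (a sketch of the desired output; comment 11 below shows the form the fix actually took):

# virsh dominfo foo | grep Persistent
Persistent:     no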


Additional info:

Comment 2 RHEL Program Management 2010-06-14 12:03:26 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 3 Justin Clift 2010-06-14 12:15:06 UTC
Initial patch submission to fix this upstream:

  http://www.redhat.com/archives/libvir-list/2010-June/msg00342.html

Comment 5 Dave Allan 2010-06-24 01:27:19 UTC
libvirt-0_8_1-10_el6 has been built in RHEL-6-candidate with the fix.

Dave

Comment 7 Nan Zhang 2010-07-09 03:41:37 UTC
Verified with libvirt-0.8.1-13.el6.x86_64. Moving to VERIFIED.


On client:
# virsh migrate --live foo qemu+ssh://10.66.70.152/system
root@10.66.70.152's password:

#

On host:
# virsh list --all
 Id Name                 State
----------------------------------
  2 foo                  running

# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]
# virsh list --all
 Id Name                 State
----------------------------------
  2 foo                  running

Comment 10 xhu 2010-09-08 05:30:02 UTC
Verified this bug with RHEL6 RC build and it passed:
libvirt-0.8.1-27.el6.x86_64
qemu-kvm-0.12.1.2-2.113.el6.x86_64
kernel-2.6.32-71.el6.x86_64

Comment 11 Nan Zhang 2010-09-09 10:01:13 UTC
# virsh dominfo foo
Id:             2
Name:           foo
UUID:           92abbc4c-23fc-49e4-fb04-f4f195324d67
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       826.1s
Max memory:     524288 kB
Used memory:    524288 kB
Persistent:     no
Autostart:      disable
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c462,c699 (permissive)
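
The fix reports persistence for storage pools as well. A sketch, assuming a hypothetical persistent pool named "default" (UUID and size figures elided):

# virsh pool-info default
Name:           default
UUID:           ...
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       ...
Allocation:     ...
Available:      ...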

Comment 12 releng-rhel@redhat.com 2010-11-11 14:50:02 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.