Bug 1011572

Summary: KVM VG storage pool remains inactive if the VG contains an LV thin pool
Product: Fedora
Reporter: Andrea Perotti <aperotti>
Component: libvirt
Assignee: Libvirt Maintainers <libvirt-maint>
Status: CLOSED DUPLICATE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: medium
Priority: unspecified
Version: 19
CC: aperotti, berrange, clalancette, crobinso, itamar, jforbes, jyang, laine, libvirt-maint, veillard, virt-maint
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Last Closed: 2013-09-24 17:47:35 UTC
Type: Bug

Description Andrea Perotti 2013-09-24 14:49:39 UTC
Description of problem:

If a VG used as a KVM storage pool contains an LV thin pool, that storage pool remains inactive, and trying to activate it produces this error:

Error refreshing pool 'kvm': internal error
Child process (/usr/sbin/vgchange -aln vg_int) unexpected exit status 5:   Can't deactivate volume group "vg_int" with 6 open logical volume(s)

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/host.py", line 732, in cb
    pool.refresh()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 129, in refresh
    self.pool.refresh(0)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2485, in refresh
    if ret == -1: raise libvirtError ('virStoragePoolRefresh() failed', pool=self)
libvirtError: errore interno Child process (/usr/sbin/vgchange -aln vg_int) unexpected exit status 5:   Can't deactivate volume group "vg_int" with 6 open logical volume(s)


If I remove that LV, the pool activates and everything works as expected.
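For context, exit status 5 from the LVM tools indicates a failed command; here vgchange refuses to deactivate a VG that still has open LVs (an active thin pool counts as open). A sketch of how one might confirm this by hand, assuming the vg_int name from the error above:

```shell
# List the LVs in vg_int with their attribute string; an 'o' in the
# sixth position of lv_attr means the device is open.
lvs -o lv_name,lv_attr vg_int

# This is the command libvirt runs when refreshing the pool; with
# open LVs present it exits with status 5, as in the error above.
vgchange -aln vg_int
```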


Version-Release number of selected component (if applicable):


rpm -qa | fgrep virt
[...]
libvirt-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-storage-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-nodedev-1.0.5.5-1.fc19.x86_64
libvirt-daemon-kvm-1.0.5.5-1.fc19.x86_64
libvirt-1.0.5.5-1.fc19.x86_64
libvirt-daemon-1.0.5.5-1.fc19.x86_64
libvirt-client-1.0.5.5-1.fc19.x86_64
libvirt-python-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-interface-1.0.5.5-1.fc19.x86_64


Steps to Reproduce:
1. lvcreate --size 500M  --type thin-pool --thinpool lv_thinpool vg_int
2. virt-manager
3. Edit -> Connection Details -> Storage -> KVM
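The same failure can presumably be reproduced from the command line, without virt-manager, by refreshing or starting the logical (VG-backed) pool with virsh; the pool name 'kvm' is taken from the error message above:

```shell
# Refresh the VG-backed pool; with the thin pool present in vg_int,
# this is expected to fail with the same vgchange exit-status-5 error.
virsh pool-refresh kvm

# Starting the inactive pool should hit the same failure.
virsh pool-start kvm
```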

Actual results:
Storage Pool Status: inactive

Expected results:
Storage Pool Status: active

Additional info:

Comment 1 Cole Robinson 2013-09-24 17:47:35 UTC

*** This bug has been marked as a duplicate of bug 924672 ***