Description of problem:
If a VG used as a KVM storage pool contains a thin pool LV, that storage pool remains inactive, and when I try to activate it I get this error:
Error refreshing pool 'kvm': internal error
Child process (/usr/sbin/vgchange -aln vg_int) unexpected exit status 5: Can't deactivate volume group "vg_int" with 6 open logical volume(s)
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/host.py", line 732, in cb
pool.refresh()
File "/usr/share/virt-manager/virtManager/storagepool.py", line 129, in refresh
self.pool.refresh(0)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2485, in refresh
if ret == -1: raise libvirtError ('virStoragePoolRefresh() failed', pool=self)
libvirtError: internal error Child process (/usr/sbin/vgchange -aln vg_int) unexpected exit status 5: Can't deactivate volume group "vg_int" with 6 open logical volume(s)
If I remove that thin pool LV, the pool activates and everything works as expected.
Version-Release number of selected component (if applicable):
rpm -qa | fgrep virt
[...]
libvirt-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-storage-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-nodedev-1.0.5.5-1.fc19.x86_64
libvirt-daemon-kvm-1.0.5.5-1.fc19.x86_64
libvirt-1.0.5.5-1.fc19.x86_64
libvirt-daemon-1.0.5.5-1.fc19.x86_64
libvirt-client-1.0.5.5-1.fc19.x86_64
libvirt-python-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-interface-1.0.5.5-1.fc19.x86_64
Steps to Reproduce:
1. lvcreate --size 500M --type thin-pool --thinpool lv_thinpool vg_int
2. virt-manager
3. Edit -> Connection Details -> Storage -> KVM
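The GUI steps above can also be reproduced from the command line with virsh; this is a sketch only (it assumes a logical storage pool named 'kvm' already defined on top of vg_int, and must be run as root on a host with that VG):

```shell
# Create a thin pool LV inside the VG that backs the libvirt logical pool
lvcreate --size 500M --type thin-pool --thinpool lv_thinpool vg_int

# Refreshing or starting the pool now fails: libvirt's logical backend
# runs "vgchange -aln vg_int", which cannot deactivate a VG whose thin
# pool (and its internal _tdata/_tmeta LVs) is still open
virsh pool-refresh kvm   # fails with the "exit status 5" error above
virsh pool-start kvm     # pool remains inactive

# Removing the thin pool LV restores normal behaviour
lvremove -f vg_int/lv_thinpool
virsh pool-refresh kvm   # succeeds again
```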
Actual results:
Storage Pool Status: inactive
Expected results:
Storage Pool Status: active
Additional info: