Bug 1011572 - KVM VG pool remains inactive if the VG contains an LV thin pool
Status: CLOSED DUPLICATE of bug 924672
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 19
Hardware: x86_64 Linux
Priority: unspecified
Severity: medium
Assigned To: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-09-24 10:49 EDT by Andrea Perotti
Modified: 2013-09-24 13:47 EDT (History)
CC List: 11 users

Doc Type: Bug Fix
Last Closed: 2013-09-24 13:47:35 EDT
Type: Bug


Description Andrea Perotti 2013-09-24 10:49:39 EDT
Description of problem:

If a VG that is used as a KVM storage pool contains an LV thin pool, that storage pool remains inactive, and when I try to activate it I get this error:

Error refreshing pool 'kvm': internal error
Child process (/usr/sbin/vgchange -aln vg_int) unexpected exit status 5:   Can't deactivate volume group "vg_int" with 6 open logical volume(s)

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/host.py", line 732, in cb
    pool.refresh()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 129, in refresh
    self.pool.refresh(0)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2485, in refresh
    if ret == -1: raise libvirtError ('virStoragePoolRefresh() failed', pool=self)
libvirtError: internal error Child process (/usr/sbin/vgchange -aln vg_int) unexpected exit status 5:   Can't deactivate volume group "vg_int" with 6 open logical volume(s)
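For reference, the command libvirt spawns can be run by hand to confirm that the failure comes from LVM itself rather than from libvirt (a minimal sketch; vg_int is the VG name taken from the log above):

  # Same deactivation libvirt attempts during the pool refresh; while the
  # thin pool keeps logical volumes open it exits with status 5, as above.
  /usr/sbin/vgchange -aln vg_int
  echo $?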


If I remove that LV, the pool can be activated and everything works as expected.
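A sketch of that cleanup, using the names from the reproduction steps below (the lvchange variant is only an assumption and untested here):

  # Remove the thin pool so the VG can be deactivated/refreshed again
  lvremove vg_int/lv_thinpool
  # ...or, presumably, just deactivate it instead of removing it:
  lvchange -an vg_int/lv_thinpool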


Version-Release number of selected component (if applicable):


rpm -qa | fgrep virt
[...]
libvirt-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-storage-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-nodedev-1.0.5.5-1.fc19.x86_64
libvirt-daemon-kvm-1.0.5.5-1.fc19.x86_64
libvirt-1.0.5.5-1.fc19.x86_64
libvirt-daemon-1.0.5.5-1.fc19.x86_64
libvirt-client-1.0.5.5-1.fc19.x86_64
libvirt-python-1.0.5.5-1.fc19.x86_64
libvirt-daemon-driver-interface-1.0.5.5-1.fc19.x86_64


Steps to Reproduce:
1. lvcreate --size 500M  --type thin-pool --thinpool lv_thinpool vg_int
2. virt-manager
3. Edit -> Connection Details -> Storage -> select the 'kvm' pool
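The same failure can presumably be triggered without the GUI, assuming the storage pool is named 'kvm' as in the error message above:

  # CLI equivalent of step 3: refreshing/starting the VG-backed pool
  # should fail with the same vgchange exit status 5
  virsh pool-refresh kvm
  virsh pool-start kvm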

Actual results:
Storage Pool Status: inactive

Expected results:
Storage Pool Status: active

Additional info:
Comment 1 Cole Robinson 2013-09-24 13:47:35 EDT

*** This bug has been marked as a duplicate of bug 924672 ***
