Bug 924672 - Adding a thinpool logical volume prevents using the volume group as a storage pool in virt-manager
Status: CLOSED UPSTREAM
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: x86_64 Linux
Severity: unspecified
Assigned To: Dave Allan
Duplicates: 1011572
Reported: 2013-03-22 06:20 EDT by L.L.Robinson
Modified: 2016-04-26 16:04 EDT (History)
CC: 18 users

Doc Type: Bug Fix
Last Closed: 2014-03-27 02:48:39 EDT
Type: Bug


Attachments: None
Description L.L.Robinson 2013-03-22 06:20:46 EDT
Description of problem:
If you create an LVM thin pool with lvcreate, you can no longer add or use the volume group that contains it as a storage pool in virt-manager.

Version-Release number of selected component (if applicable):

Version     : 0.9.4
Release     : 4.fc18

How reproducible: Always


Steps to Reproduce:
1. Create a volume group with free space
2. lvcreate -L 20G --thinpool thinvolname volgroupname
3. In virt-manager, right-click the connection name and select "Details"
4. Go to the Storage tab
5. Click the plus sign to add a storage pool
6. Choose a name, select type "logical", and click Forward
7. Choose /dev/volgroupname as the target path and click Finish
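The same failure can be reproduced without virt-manager by defining the pool directly with virsh. The XML below is a standard libvirt logical-pool definition using the placeholder names from the steps above; it is a sketch of the repro, not taken from the reporter's system.

```xml
<!-- Save as pool.xml, then run:
       virsh pool-define pool.xml
       virsh pool-start volgroupname
     Starting the pool triggers the same vgchange failure. -->
<pool type='logical'>
  <name>volgroupname</name>
  <source>
    <name>volgroupname</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/volgroupname</path>
  </target>
</pool>
```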

Actual results:
Error creating pool: Could not start storage pool: internal error Child process (/usr/sbin/vgchange -aln vg_latidude) unexpected exit status 5:   Can't deactivate volume group "vg_latidude" with 4 open logical volume(s)


Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 500, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 744, in install
    build=build, autostart=False)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 489, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error Child process (/usr/sbin/vgchange -aln vg_latidude) unexpected exit status 5:   Can't deactivate volume group "vg_latidude" with 4 open logical volume(s)




Expected results:

The volume group is added, with the thin pool either included or ignored;
ideally, the thin pool itself could be offered as a libvirt storage pool.

Additional info:
Comment 1 Cole Robinson 2013-08-31 20:40:23 EDT
This still seems relevant upstream. If I create a thin volume as described above on an existing volume group that libvirt knows about, and then try to refresh the pool, I get:

sudo virsh pool-refresh vgvirt
error: Failed to refresh pool vgvirt
error: cannot stat file '/dev/vgvirt/thinvol': No such file or directory

Thin volumes don't seem to show up in /dev right away, so libvirt's assumptions probably need tweaking. Moving to the upstream tracker.
Comment 2 Cole Robinson 2013-09-24 13:47:35 EDT
*** Bug 1011572 has been marked as a duplicate of this bug. ***
Comment 3 Dusty Mabe 2013-09-24 18:20:09 EDT
Potential patch submitted upstream:
https://www.redhat.com/archives/libvir-list/2013-September/msg01215.html
Comment 5 Ján Tomko 2014-03-27 02:48:39 EDT
Fixed by commit 4132dede0652b7f0cc83868fd454423310bc1a9c
Author:     Dusty Mabe <dustymabe@gmail.com>
CommitDate: 2013-10-15 16:52:57 -0400

    Ignore thin pool LVM devices.

git describe: v1.1.3-159-g4132ded contains: v1.1.4-rc1~74
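The fix makes libvirt skip thin-pool LVs when it enumerates a volume group's logical volumes. Roughly the same effect can be sketched from the command line by filtering on the lv_attr flags that lvs reports (the first character is 't' for a thin pool, 'V' for a thin volume, '-' for a plain LV). The sample lvs output below is illustrative, echoing the names used in this report, not captured from a real system.

```shell
# Canned sample of `lvs --noheadings -o lv_name,lv_attr` output
# (on a real system you would pipe lvs itself into the filter).
lvs_output='  root  -wi-ao----
  swap  -wi-ao----
  thinvolname  twi-a-tz--
  thinvol  Vwi-a-tz--'

# Keep only LVs whose attribute string does not start with "t",
# i.e. drop thin pools -- roughly what the libvirt fix does when
# building the storage pool's volume list.
printf '%s\n' "$lvs_output" | awk '$2 !~ /^t/ { print $1 }'
```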
