Bug 924672 - Adding a thinpool logical volume prevents using the volume group as a storage pool in virt-manager
Summary: Adding a thinpool logical volume prevents using the volume group as a storage...
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Dave Allan
QA Contact:
URL:
Whiteboard:
Duplicates: 1011572
Depends On:
Blocks:
 
Reported: 2013-03-22 10:20 UTC by L.L.Robinson
Modified: 2016-04-26 20:04 UTC
CC List: 18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-03-27 06:48:39 UTC
Embargoed:


Attachments: none

Description L.L.Robinson 2013-03-22 10:20:46 UTC
Description of problem:
If you create an LVM thin pool with lvcreate, the volume group you created it in can no longer be added or used as a storage pool in virt-manager.

Version-Release number of selected component (if applicable):

Version     : 0.9.4
Release     : 4.fc18

How reproducible: Always


Steps to Reproduce:
1. Create a volume group with free space
2. lvcreate  -L 20G --thinpool thinvolname volgroupname
3. In virt-manager, right-click on the connection name and select "Details"
4. Go to the Storage tab
5. Click the plus sign to add a storage pool
6. Choose a name, select type "logical", and click Forward
7. Choose /dev/volgroupname as the target path and click Finish (a rough virsh equivalent is sketched below)
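
For reference, roughly the same pool creation can be attempted from the shell with virsh. This is only a sketch of the equivalent steps, not a command from the original report, and "vgpool" is a hypothetical pool name:

# Define a logical (LVM) pool backed by the existing volume group, then start it;
# pool-start corresponds to the "start storage pool" step that fails below.
virsh pool-define-as vgpool logical --source-name volgroupname --target /dev/volgroupname
virsh pool-start vgpool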

Actual results:
Error creating pool: Could not start storage pool: internal error Child process (/usr/sbin/vgchange -aln vg_latidude) unexpected exit status 5:   Can't deactivate volume group "vg_latidude" with 4 open logical volume(s)


Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 500, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 744, in install
    build=build, autostart=False)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 489, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error Child process (/usr/sbin/vgchange -aln vg_latidude) unexpected exit status 5:   Can't deactivate volume group "vg_latidude" with 4 open logical volume(s)




Expected results:

The volume group should be added as a storage pool, either including or ignoring the thin pool.
Ideally, libvirt would also offer the option of using the thin pool itself as a libvirt storage pool.

Additional info:

Comment 1 Cole Robinson 2013-09-01 00:40:23 UTC
Still seems relevant with upstream. If I create a thin volume as specified above, on an existing volume group that libvirt knows about, then try to refresh the pool, I get:

sudo virsh pool-refresh vgvirt
error: Failed to refresh pool vgvirt
error: cannot stat file '/dev/vgvirt/thinvol': No such file or directory

Thin volumes don't seem to show up in /dev right away, so libvirt's assumptions probably need tweaking. Moving to the upstream tracker.
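
A quick way to see why the stat fails (a sketch using the vgvirt/thinvol names from this comment; the lv_attr values are representative, not output copied from the reporter's system):

# The thin pool LV shows an attribute string starting with 't' (e.g. twi-a-tz--)
# and gets no usable node under /dev/vgvirt; the thin volume ('V...') may not
# show up there right away either, which is what the stat error above hits.
lvs -o lv_name,lv_attr vgvirt
ls -l /dev/vgvirt/thinvol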

Comment 2 Cole Robinson 2013-09-24 17:47:35 UTC
*** Bug 1011572 has been marked as a duplicate of this bug. ***

Comment 3 Dusty Mabe 2013-09-24 22:20:09 UTC
Potential patch submitted upstream:
https://www.redhat.com/archives/libvir-list/2013-September/msg01215.html

Comment 5 Ján Tomko 2014-03-27 06:48:39 UTC
Fixed by commit 4132dede0652b7f0cc83868fd454423310bc1a9c
Author:     Dusty Mabe <dustymabe>
CommitDate: 2013-10-15 16:52:57 -0400

    Ignore thin pool LVM devices.

git describe: v1.1.3-159-g4132ded contains: v1.1.4-rc1~74
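
In shell terms the effect of the fix is roughly the following (a sketch illustrating the idea only; the actual change lives in the libvirt commit referenced above): when enumerating a logical pool's volumes, skip any LV whose attribute string marks it as a thin pool.

# Print LV names but skip thin pools (lv_attr beginning with 't'),
# since a thin pool has no device node libvirt could expose as a volume.
lvs --noheadings --separator , -o lv_name,lv_attr vgvirt |
  awk -F, '$2 !~ /^t/ { gsub(/ /, "", $1); print $1 }'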

