Bug 713688 - Failed to activate or create LVM storage pool out of the VG with existing mirror volumes
Summary: Failed to activate or create LVM storage pool out of the VG with existing mirror volumes
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: 786831
 
Reported: 2011-06-16 09:10 UTC by Peter Rajnoha
Modified: 2012-02-02 14:17 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 786831
Environment:
Last Closed: 2012-01-24 22:41:38 UTC
Type: ---
Embargoed:



Description Peter Rajnoha 2011-06-16 09:10:31 UTC
Description of problem:
An LVM storage pool cannot be activated or created if the underlying VG contains mirror volumes.

Version-Release number of selected component (if applicable):
virt-manager-0.8.7-2.fc14.noarch
python-virtinst-0.500.6-1.fc14.noarch

How reproducible:
Always, by creating a mirror volume in a VG that is used as a virt storage pool.

Steps to Reproduce:
1. vgcreate vg /dev/sda /dev/sdb /dev/... ...
2. lvcreate -l1 -m1 --alloc anywhere vg
3. Open virt-manager's storage management tab and choose Add Storage Pool - LVM Volume Group, with the target path set to /dev/vg
4. Click "Finish" (a virsh-based sketch of the same operation follows below)
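
For what it's worth, the same operation can presumably also be attempted without the GUI by defining and starting a logical pool through virsh. The pool name "vgpool" below is arbitrary, and that this path fails with the same "lvs command failed" error is an assumption, not something verified here:

  virsh pool-define-as vgpool logical --source-name vg --target /dev/vg
  virsh pool-start vgpool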
  
Actual results:
Creating the VG storage pool fails with the following error log:

Error creating pool: Could not start storage pool: internal error lvs command failed
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 421, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 733, in install
    build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 478, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error lvs command failed


Expected results:
VG storage pool created.

Additional info:
- The lvs command works fine when entered directly on the command line (see the sketch below).
- The same problem occurs when activating an already existing storage pool to which a mirror volume was added manually beforehand.
- Reproducible on F15 as well.
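
A quick way to confirm this from the shell is to look at what lvs itself reports for the VG and the mirror LV; it completes without error here. The exact fields and flags that libvirt passes to lvs internally are not shown in the error above, so this is only the basic output, not a reproduction of libvirt's call:

  lvs -o lv_name,segtype,devices vg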

Comment 1 Doug Magee 2011-08-02 19:57:10 UTC
I've discovered the same problem on F15, except my message indicates virt-manager is trying to DEactivate the VG (so it's obvious why it fails). Any updates?

pkg versions:
libvirt-0.8.8-7.fc15.x86_64
lvm2-2.02.84-3.fc15.x86_64
virt-manager-0.8.7-4.fc15.noarch

Error:
Error creating pool: Could not start storage pool: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "vg" with 3 open logical volume(s)

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 421, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 733, in install
    build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 478, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "vg" with 3 open logical volume(s)
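
For reference, the LVs keeping the VG busy can be listed directly; according to lvs(8), an 'o' in the sixth position of the Attr column marks an LV that is currently open (in use):

  lvs -o lv_name,lv_attr vg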

Comment 2 Peter Rajnoha 2011-08-03 07:07:39 UTC
(In reply to comment #1)
> I've discovered the same problem on F15, except my message indicates
> virt-manager is trying to DEactivate the VG (so it's obvious why it fails). 

This might be related to bug #570359 and bug #702260.

Comment 3 Doug Magee 2011-08-03 13:27:45 UTC
(In reply to comment #2)
> This might be related to bug #570359 and bug #702260.

Both of those bugs relate to the behavior of the lvremove command, but I'm not attempting to remove an LV; I want to add a volume group as a storage pool in libvirt. I don't see why that should depend on a 'vgchange -an', as there could be active LVs; in my case in particular, the host filesystem lives in the same volume group. Furthermore, why didn't it fail the same way when there were no mirrors in the VG? It still would have failed to deactivate the VG...

Comment 4 Fedora Admin XMLRPC Client 2011-09-22 17:56:12 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 5 Fedora Admin XMLRPC Client 2011-09-22 17:59:32 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 6 Fedora Admin XMLRPC Client 2011-11-30 19:55:12 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 7 Fedora Admin XMLRPC Client 2011-11-30 19:57:16 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 8 Fedora Admin XMLRPC Client 2011-11-30 20:01:23 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 9 Fedora Admin XMLRPC Client 2011-11-30 20:02:52 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 10 Cole Robinson 2012-01-24 22:41:38 UTC
Sorry for not addressing this bug, but F14 is EOL now, so I'm closing this
report. Please reopen if this is still relevant in a more recent Fedora.

