Bug 713688 - Failed to activate or create LVM storage pool out of the VG with existing mirror volumes
Status: CLOSED WONTFIX
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
Depends On:
Blocks: 786831
Reported: 2011-06-16 05:10 EDT by Peter Rajnoha
Modified: 2012-02-02 09:17 EST

Doc Type: Bug Fix
Cloned As: 786831
Last Closed: 2012-01-24 17:41:38 EST


Description Peter Rajnoha 2011-06-16 05:10:31 EDT
Description of problem:
An LVM storage pool cannot be activated or created when the underlying volume group contains mirror volumes.

Version-Release number of selected component (if applicable):
virt-manager-0.8.7-2.fc14.noarch
python-virtinst-0.500.6-1.fc14.noarch

How reproducible:
Create a mirror volume in a VG that is used as a virt storage pool.

Steps to Reproduce:
1. vgcreate vg /dev/sda /dev/sdb /dev/... ...
2. lvcreate -l1 -m1 --alloc anywhere vg
3. In virt-manager, open the storage management tab, choose Add Storage Pool with type "LVM Volume Group", and set the target path to /dev/vg
4. Click "Finish"
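
The GUI steps above can also be driven from the command line with virsh, which makes the failure easier to reproduce in scripts. A minimal sketch, assuming a pool named `vg` over the VG of the same name (device paths and names are illustrative, not from the original report):

```shell
# Create the backing VG and a small mirrored LV (devices are examples)
vgcreate vg /dev/sda /dev/sdb
lvcreate -l1 -m1 --alloc anywhere vg

# Define and start a "logical" (LVM) pool over the VG; starting the
# pool runs the same lvs-based volume discovery that virt-manager uses
virsh pool-define-as vg logical --source-name vg --target /dev/vg
virsh pool-start vg
```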
  
Actual results:
Creating the VG storage pool fails with the following error:

Error creating pool: Could not start storage pool: internal error lvs command failed
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 421, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 733, in install
    build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 478, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error lvs command failed


Expected results:
VG storage pool created.

Additional info:
- The lvs command works fine when run directly from the command line.
- The same problem occurs when activating an already existing storage pool to which a mirror volume was added manually.
- Reproducible on F15 as well.
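
Since a plain `lvs` succeeds, the failure presumably lies in how libvirt parses the output of its machine-readable lvs invocation rather than in lvs itself. A hedged approximation of that invocation for inspecting what libvirt would see (the exact field list varies by libvirt version and is an assumption here):

```shell
# Approximation of libvirt's internal lvs call (field list assumed).
# For a mirrored LV the "devices" field reports several PV tuples and
# sub-LV names, which a parser expecting a single "device(offset)"
# entry per volume can trip over.
lvs --noheadings --units b --unbuffered --nosuffix --separator '#' \
    -o lv_name,uuid,devices,seg_size,vg_extent_size vg
```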
Comment 1 Doug Magee 2011-08-02 15:57:10 EDT
I've discovered the same problem on F15, except my message indicates virt-manager is trying to DEactivate the VG (so it's obvious why it fails). Any updates?

pkg versions:
libvirt-0.8.8-7.fc15.x86_64
lvm2-2.02.84-3.fc15.x86_64
virt-manager-0.8.7-4.fc15.noarch

Error:
Error creating pool: Could not start storage pool: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "vg" with 3 open logical volume(s)

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 421, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 733, in install
    build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 478, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "vg" with 3 open logical volume(s)
Comment 2 Peter Rajnoha 2011-08-03 03:07:39 EDT
(In reply to comment #1)
> I've discovered the same problem on F15, except my message indicates
> virt-manager is trying to DEactivate the VG (so it's obvious why it fails). 

This might be related to bug #570359 and bug #702260.
Comment 3 Doug Magee 2011-08-03 09:27:45 EDT
(In reply to comment #2)
> This might be related to bug #570359 and bug #702260.

Both those bugs are related to the behavior of the lvremove command, but I'm not attempting to remove an LV; I want to add a volume group as a storage pool in libvirt. I don't see why it would depend on a 'vgchange -an', since there could be active LVs, especially in this case, where my host filesystem is in the same volume group. Furthermore, why didn't it fail the same way when there were no mirrors in the VG? It still would have failed to deactivate the VG...
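
The open-LV situation described above can be confirmed before attempting a pool start. In the `lv_attr` string reported by lvs, the sixth character is `o` when the device is open (mounted or otherwise in use), which is exactly what makes the deactivation fail. A sketch (requires root; `vg` is the VG name from the report):

```shell
# List LVs with their attribute string; an 'o' in position 6 of
# lv_attr means the LV is open, so deactivating the VG will fail
lvs -o lv_name,lv_attr vg

# The deactivation libvirt attempts, shown for reference; with open
# LVs it exits non-zero with "Can't deactivate volume group"
vgchange -an vg
```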
Comment 4 Fedora Admin XMLRPC Client 2011-09-22 13:56:12 EDT
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 5 Fedora Admin XMLRPC Client 2011-09-22 13:59:32 EDT
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 6 Fedora Admin XMLRPC Client 2011-11-30 14:55:12 EST
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 7 Fedora Admin XMLRPC Client 2011-11-30 14:57:16 EST
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 8 Fedora Admin XMLRPC Client 2011-11-30 15:01:23 EST
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 9 Fedora Admin XMLRPC Client 2011-11-30 15:02:52 EST
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.
Comment 10 Cole Robinson 2012-01-24 17:41:38 EST
Sorry for not addressing this bug, but F14 is EOL now, so I'm closing this
report. Please reopen if this is still relevant in a more recent Fedora.
