Bug 786831 - Failed to activate or create LVM storage pool out of the VG with existing mirror volumes
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 16
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On: 713688
Blocks:
 
Reported: 2012-02-02 14:17 UTC by Peter Rajnoha
Modified: 2012-10-21 01:14 UTC (History)
CC: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 713688
Environment:
Last Closed: 2012-10-21 01:14:44 UTC
Type: ---
Embargoed:



Description Peter Rajnoha 2012-02-02 14:17:55 UTC
This bug still exists in recent Fedora.

+++ This bug was initially created as a clone of Bug #713688 +++

Description of problem:
An LVM storage pool cannot be activated or created if the underlying VG contains mirror volumes.

Version-Release number of selected component (if applicable):
virt-manager-0.8.7-2.fc14.noarch
python-virtinst-0.500.6-1.fc14.noarch

How reproducible:
Always, once a mirror volume exists in the VG that is used as a virt storage pool.

Steps to Reproduce:
1. vgcreate vg /dev/sda /dev/sdb /dev/... ...
2. lvcreate -l1 -m1 --alloc anywhere vg
3. open virt-manager's storage management tab, choose Add Storage Pool with type "LVM Volume Group", and set the target path to /dev/vg
4. click "Finish" (a command-line sketch of these two steps follows below)
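
For reference, a rough command-line equivalent of steps 3 and 4 (a sketch only; the pool name "vg" mirrors the VG created in step 1):

  virsh pool-define-as vg logical --source-name vg --target /dev/vg
  virsh pool-start vg    # fails the same way once the VG contains a mirror LV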
  
Actual results:
Creating the VG storage pool fails with the following error:

Error creating pool: Could not start storage pool: internal error lvs command failed
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 421, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 733, in install
    build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 478, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error lvs command failed


Expected results:
VG storage pool created.

Additional info:
- the lvs command works fine when run directly from the command line
- the same problem happens when activating an already existing storage pool to which the mirror volume was added manually
- reproducible on F15 as well

--- Additional comment from djmagee on 2011-08-02 21:57:10 CEST ---

I've discovered the same problem on F15, except my message indicates virt-manager is trying to DEactivate the VG (so it's obvious why it fails). Any updates?

pkg versions:
libvirt-0.8.8-7.fc15.x86_64
lvm2-2.02.84-3.fc15.x86_64
virt-manager-0.8.7-4.fc15.noarch

Error:
Error creating pool: Could not start storage pool: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "vg" with 3 open logical volume(s)

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 421, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 733, in install
    build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 478, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: internal error '/sbin/vgchange -an vg' exited with non-zero status 5 and signal 0:   Can't deactivate volume group "vg" with 3 open logical volume(s)

--- Additional comment from prajnoha on 2011-08-03 09:07:39 CEST ---

(In reply to comment #1)
> I've discovered the same problem on F15, except my message indicates
> virt-manager is trying to DEactivate the VG (so it's obvious why it fails). 

This might be related to bug #570359 and bug #702260.

--- Additional comment from djmagee on 2011-08-03 15:27:45 CEST ---

(In reply to comment #2)
> This might be related to bug #570359 and bug #702260.

Both of those bugs relate to the behavior of the lvremove command, but I'm not attempting to remove an LV; I want to add a volume group as a storage pool in libvirt. I don't see why it would depend on a 'vgchange -an', as there could be active LVs; in this case in particular, my host filesystem is in the same volume group. Furthermore, why didn't it fail the same way when there were no mirrors in the VG? It still would have failed to deactivate the VG...

--- Additional comment from fedora-admin-xmlrpc on 2011-09-22 19:56:12 CEST ---

This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

--- Additional comment from fedora-admin-xmlrpc on 2011-09-22 19:59:32 CEST ---

This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

--- Additional comment from fedora-admin-xmlrpc on 2011-11-30 20:55:12 CET ---

This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

--- Additional comment from fedora-admin-xmlrpc on 2011-11-30 20:57:16 CET ---

This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

--- Additional comment from fedora-admin-xmlrpc on 2011-11-30 21:01:23 CET ---

This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

--- Additional comment from fedora-admin-xmlrpc on 2011-11-30 21:02:52 CET ---

This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

--- Additional comment from crobinso on 2012-01-24 23:41:38 CET ---

Sorry for not addressing this bug, but F14 is EOL now, so I'm closing this
report. Please reopen if it is still relevant in a more recent Fedora.

Comment 1 Cole Robinson 2012-06-07 20:05:05 UTC
Hi Peter, thanks for reopening, and sorry about the lack of response on the original report. Can you provide:

1) sudo virsh pool-dumpxml $poolname   (for every logical pool)
2) sudo vgs
3) sudo lvs

libvirt is actually running the following command:

lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size $SOURCENAME

where $SOURCENAME is <source name="$SOURCENAME"/> from the dumpxml output. Could you try that as well?
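
For reference, a sketch of running that command against the VG from the reproduction steps (the VG name "vg" is an assumption):

  sudo lvs --separator , --noheadings --units b --unbuffered --nosuffix \
    --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg

For a mirror LV the "devices" column itself typically contains commas (something like lvol0_mimage_0(0),lvol0_mimage_1(0)), so splitting the output on "," yields extra fields, which is presumably what the parser chokes on.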

Comment 2 Peter Rajnoha 2012-06-11 11:35:15 UTC
Well, it's a bit different now. I'm using F17 these days and it seems to work there...

However, I still have F16 set up in a virtual machine, so I installed virt-manager and friends there (just the storage part, to test for the error; as that is virt under virt, I expect it has no influence here). Now I end up with a *different* error message on the current, fully updated F16:

When trying to create a new pool out of existing VG with a mirror volume:
=========================================================================

Error creating pool: Could not start storage pool: cannot open volume '/dev/vg/lvol0,': No such file or directory

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 482, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 744, in install
    build=build, autostart=False)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 489, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: cannot open volume '/dev/vg/lvol0,': No such file or directory


Also when trying to refresh an existing pool where the mirror volume has been added:
===============================================================================

Error refreshing pool 'vg': cannot open volume '/dev/vg/lvol1,': No such file or directory

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 66, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/host.py", line 647, in cb
    pool.refresh()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 115, in refresh
    self.pool.refresh(0)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1832, in refresh
    if ret == -1: raise libvirtError ('virStoragePoolRefresh() failed', pool=self)
libvirtError: cannot open volume '/dev/vg/lvol1,': No such file or directory


All the LVs in that VG end up deactivated afterwards. The problem appears only in F16; F17 is already fine, so it must have been resolved fairly recently. Has anything changed with respect to LVM handling between these releases?

Comment 3 Peter Rajnoha 2012-06-11 11:38:26 UTC
(In reply to comment #2)
> Error creating pool: Could not start storage pool: cannot open volume
> '/dev/vg/lvol0,': No such file or directory

...the LVs were activated before I tried to manage them under virt-manager. It seems that libvirt (or whatever else comes into play here) deactivates the whole VG if it can't handle one of its volumes (like the mirror volume in this case).
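
If the VG does get deactivated like that, it can be brought back by hand; a minimal sketch, assuming the VG is named "vg" as in the reproduction steps:

  vgchange -ay vg              # reactivate all LVs in the VG
  lvs -o lv_name,lv_attr vg    # the fifth lv_attr character reads 'a' for active LVs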

Comment 4 Daniel Berrangé 2012-06-11 11:39:30 UTC
Yes, there is a significant fix in F17 related to LVM handling which will affect you:

commit 82c1740ab92682d69ec8f02adb36b13e1902acd1
Author: Osier Yang <jyang>
Date:   Mon Oct 10 20:34:59 2011 +0800

    storage: Do not use comma as seperator for lvs output
    
    * src/storage/storage_backend_logical.c:
    
    If a logical vol is created as striped. (e.g. --stripes 3),
    the "device" field of lvs output will have multiple fileds which are
    seperated by comma. Thus the RE we write in the codes will not
    work well anymore. E.g. (lvs output for a stripped vol, uses "#" as
    seperator here):
    
    test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#\
    /dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304
    
    The RE we use:
    
        const char *regexes[] = {
            "^\\s*(\\S+),(\\S*),(\\S+),(\\S+)\\((\\S+)\\),(\\S+),([0-9]+),?\\s*$"
        };
    
    Also the RE doesn't match the "devices" field of striped vol properly,
    it contains multiple "device path" and "offset".
    
    This patch mainly does:
        1) Change the seperator into "#"
        2) Change the RE for "devices" field from "(\\S+)\\((\\S+)\\)"
           into "(\\S+)".
        3) Add two new options for lvs command, (segtype, stripes)
        4) Extend the RE to match the value for the two new fields.
        5) Parse the "devices" field seperately in virStorageBackendLogicalMakeVol,
           multiple "extents" info are generated if the vol is striped. The
           number of "extents" is equal to the stripes number of the striped vol.
    
    A incidental fix: (virStorageBackendLogicalMakeVol)
        Free "vol" if it's new created and there is error.
    
    Demo on striped vol with the patch applied:
    
    
    % virsh vol-dumpxml /dev/test_vg/vol_striped2
    <volume>
      <name>vol_striped2</name>
      <key>QuWqmn-kIkZ-IATt-67rc-OWEP-1PHX-Cl2ICs</key>
      <source>
        <device path='/dev/sda5'>
          <extent start='79691776' end='88080384'/>
        </device>
        <device path='/dev/sda6'>
          <extent start='62914560' end='71303168'/>
        </device>
      </source>
      <capacity>8388608</capacity>
      <allocation>8388608</allocation>
      <target>
        <path>/dev/test_vg/vol_striped2</path>
        <permissions>
          <mode>0660</mode>
          <owner>0</owner>
          <group>6</group>
          <label>system_u:object_r:fixed_disk_device_t:s0</label>
        </permissions>
      </target>
    </volume>
    
    RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=727474
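
For readers stuck on F16, a sketch of roughly what the post-fix lvs invocation looks like, inferred from the commit description above (the exact option order in the released code may differ):

  lvs --separator '#' --noheadings --units b --unbuffered --nosuffix \
    --options lv_name,origin,uuid,devices,seg_size,vg_extent_size,segtype,stripes vg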

Comment 5 Peter Rajnoha 2012-06-11 11:57:01 UTC
Aha, OK, so it seems that fix has solved the problem with mirror volumes as well. I think we can close this bug as NEXTRELEASE then, if you don't intend to backport the patch to F16... Thanks.

Comment 6 Alasdair Kergon 2012-06-11 12:22:42 UTC
(In reply to comment #4)
> uses "#" as
>     seperator here):
>     
>     test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#\
>     /dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

Note that udev permits # in device names:

* Character whitelist: 0-9, A-Z, a-z, #+-.:=@_

Whilst it's unlikely you'll end up with a # within the 'devices' field, it's not impossible (unless you have full control of the system's udev rules and lvm configuration files and avoid it) so it's probably better to pick a character that udev does not permit - an old-fashioned <TAB> is perhaps the obvious choice.
(Comma does work if you put the 'devices' field last and match the correct number of fields.)
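
If the separator does get changed, lvs accepts an arbitrary string there, so a tab can be passed directly; a sketch, assuming a VG named "vg":

  lvs --separator "$(printf '\t')" --noheadings --nosuffix \
    --options lv_name,uuid,devices vg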

Comment 7 Cole Robinson 2012-10-21 01:14:44 UTC
Closing NEXTRELEASE as suggested in Comment #5

