Bug 727474 - creating libvirt storage pools fails for lvm volume groups with striped volumes
creating libvirt storage pools fails for lvm volume groups with striped volumes
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Osier Yang
Virtualization Bugs
Depends On:
Blocks: 747120 896052
Reported: 2011-08-02 04:53 EDT by Gerd v. Egidy
Modified: 2015-07-28 11:05 EDT (History)
12 users

See Also:
Fixed In Version: libvirt-0.9.4-17.el6
Doc Type: Bug Fix
Doc Text:
Cause: libvirt used a comma as the field separator for "lvs" output. If the volume group contains a striped logical volume, the "devices" field itself uses commas to separate the multiple device paths, so the regular expression used to parse the "lvs" output no longer matches. Consequence: Creating any logical pool fails if the volume group contains a striped logical volume. libvirt also lacked a mechanism to format multiple <device> XML elements (with <extents>) for the device paths of a striped volume. Fix: Use a different separator ("#") and add new code to parse the multiple device paths of a striped volume and format multiple <device> XML elements. Result: Users are able to create a logical pool that contains striped volumes and get proper XML for those volumes.
Story Points: ---
Clone Of:
: 896052 (view as bug list)
Environment:
Last Closed: 2011-12-06 06:20:35 EST




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Legacy) 65146 None None None Never

Description Gerd v. Egidy 2011-08-02 04:53:51 EDT
Description of problem:

When I try to define an LVM volume group as a storage pool in libvirt, it fails as soon as the volume group contains at least one volume that uses striping. Defining storage pools that do not use striping works as expected.

We want to use striped LVs to maximize the I/O performance of our VMs.

Version-Release number of selected component (if applicable):

> cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.1 (Santiago)
> rpm -q libvirt
libvirt-0.8.7-18.el6.x86_64
> rpm -q lvm2
lvm2-2.02.83-3.el6.x86_64
> rpm -q udev
udev-147-2.35.el6.x86_64

How reproducible:
always

Steps to Reproduce:

> pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
> pvcreate /dev/sdd1
  Physical volume "/dev/sdd1" successfully created
> vgcreate vg_ssd /dev/sdc1 /dev/sdd1
  Volume group "vg_ssd" successfully created
> lvcreate --size 40GB --name test_nostripes vg_ssd
  Logical volume "test_nostripes" created
> vgchange -a y vg_ssd
  1 logical volume(s) in volume group "vg_ssd" now active
> virsh pool-create-as vg_ssd logical --target /dev/vg_ssd
Pool vg_ssd created
> virsh pool-info vg_ssd
Name:           vg_ssd
UUID:           ea3f0222-9b31-054e-c411-fa57f0d7e4b4
State:          running
Persistent:     no
Autostart:      no
Capacity:       558.91 GB
Allocation:     0.00 
Available:      558.91 GB

-> Now we have created a working storage pool from a lvm vg that does not use any striping.

> virsh pool-destroy vg_ssd
Pool vg_ssd destroyed
> lvcreate --size 40GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd
  Logical volume "test_stripes" created
> vgchange -a y vg_ssd
  2 logical volume(s) in volume group "vg_ssd" now active
> stat /dev/vg_ssd/test_stripes 
  File: `/dev/vg_ssd/test_stripes' -> `../dm-3'
  Size: 7               Blocks: 0          IO Block: 4096   symbolic link
Device: 5h/5d   Inode: 25160       Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-08-02 10:30:58.616055653 +0200
Modify: 2011-08-02 10:30:58.609053092 +0200
Change: 2011-08-02 10:30:58.609053092 +0200


> virsh -d 5 pool-create-as vg_ssd logical --target /dev/vg_ssd
pool-create-as: name(optdata): vg_ssd
pool-create-as: type(optdata): logical
pool-create-as: target(optdata): /dev/vg_ssd
error: Failed to create pool vg_ssd
error: internal error lvs command failed

-> As soon as we have a lv with striping within the vg, defining the storage pool fails.

If I try to first define the storage pool and add the striped lv later on, the storage pool can't be activated.

Output into messages log (with full libvirt debugging enabled):

[...]
libvirtd: 10:41:20.917: 4657: debug : remoteDispatchClientRequest:373 : prog=536903814 ver=1 type=0 status=0 serial=3 proc=76
libvirtd: 10:41:20.917: 4657: debug : virStoragePoolCreateXML:7986 : conn=0x7f3778000a00, xmlDesc=<pool type='logical'>#012  <name>vg_ssd</name>#012  <target>#012    <path>/dev/vg_ssd</path>#012  </target>#012</pool>#012
libvirtd: 10:41:20.917: 4656: debug : virEventCleanupTimeouts:484 : Cleanup 3
libvirtd: 10:41:20.917: 4656: debug : virEventCleanupHandles:532 : Cleanup 7
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=0 w=1, f=5 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=1 w=2, f=7 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=2 w=3, f=13 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=3 w=4, f=16 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=4 w=5, f=12 e=25 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=5 w=6, f=11 e=25 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=6 w=7, f=17 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventCalculateTimeout:302 : Calculate expiry of 3 timers
libvirtd: 10:41:20.917: 4656: debug : virEventCalculateTimeout:332 : Timeout at 0 due in -1 ms
libvirtd: 10:41:20.917: 4656: debug : virEventRunOnce:594 : Poll on 7 handles 0x7f37880011e0 timeout -1
libvirtd: 10:41:20.918: 4657: debug : virRunWithHook:833 : /sbin/vgchange -ay vg_ssd
libvirtd: 10:41:21.244: 4657: debug : virRunWithHook:849 : Command stdout:   2 logical volume(s) in volume group "vg_ssd" now active#012
libvirtd: 10:41:21.246: 4657: debug : virRunWithHook:833 : /sbin/udevadm settle
libvirtd: 10:41:21.291: 4657: debug : virExecWithHook:725 : /sbin/lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
libvirtd: 10:41:21.421: 4657: error : virStorageBackendVolOpenCheckMode:1025 : cannot open volume '/dev/vg_ssd/test_stripes,': No such file or directory
libvirtd: 10:41:21.421: 4657: error : virStorageBackendLogicalFindLVs:219 : internal error lvs command failed
libvirtd: 10:41:21.421: 4657: debug : virRunWithHook:833 : /sbin/vgchange -an vg_ssd
libvirtd: 10:41:21.536: 4656: debug : virEventRunOnce:603 : Poll got 1 event(s)
libvirtd: 10:41:21.536: 4656: debug : virEventDispatchTimeouts:394 : Dispatch 3
libvirtd: 10:41:21.536: 4656: debug : virEventDispatchHandles:439 : Dispatch 7
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=0 w=1
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=1 w=2
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=2 w=3
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=3 w=4
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:466 : Dispatch n=3 f=16 w=4 e=1 (nil)
libvirtd: 10:41:21.541: 4656: debug : udevEventHandleCallback:1458 : udev action: 'remove'
libvirtd: 10:41:21.541: 4656: debug : udevRemoveOneDevice:1213 : Failed to find device to remove that has udev name '/sys/devices/virtual/block/dm-2'
[...]

As you can see from the stat call above, /dev/vg_ssd/test_stripes did exist before the call to pool-create-as. So the entry in the log seems to be just a side effect: the /dev/vg_ssd directory is gone after the failed call because the VG has become inactive.

Running the lvs call that libvirt does yields no unexpected results:
> /sbin/lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd; echo $?
  test_nostripes,,N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi,/dev/sdc1(0),42949672960,4194304
  test_stripes,,fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B,/dev/sdc1(10240),/dev/sdd1(0),42949672960,4194304
0

So maybe the dir is removed by udev and udevadm settle does not wait for it to be recreated?
Comment 1 Gerd v. Egidy 2011-08-02 05:04:11 EDT
I just took a deep look at the output of the lvs command. Counting commas...

When you change it into this it becomes more obvious:

> /sbin/lvs --separator "#" --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
  test_nostripes##N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi#/dev/sdc1(0)#42949672960#4194304
  test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

It seems lvs always writes the devices as a comma-separated list. Most probably libvirt does not correctly handle the extra comma introduced by the extra device.
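The field-count mismatch described above can be sketched as follows (a minimal Python illustration, not libvirt's actual C parser; the field order is taken from the lvs invocation in the log):

```python
# Sketch of why a comma field separator is ambiguous for "lvs" output
# once a striped volume is present. Field order matches the lvs call:
#   lv_name,origin,uuid,devices,seg_size,vg_extent_size  (6 fields)

EXPECTED_FIELDS = 6

def parse_comma(line: str):
    """Split a comma-separated lvs line into its fields; this breaks for
    striped volumes because the 'devices' field itself contains commas."""
    fields = line.split(",")
    if len(fields) != EXPECTED_FIELDS:
        raise ValueError(f"expected {EXPECTED_FIELDS} fields, got {len(fields)}")
    return fields

linear = "test_nostripes,,N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi,/dev/sdc1(0),42949672960,4194304"
striped = "test_stripes,,fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B,/dev/sdc1(10240),/dev/sdd1(0),42949672960,4194304"

parse_comma(linear)          # 6 fields: parses fine
try:
    parse_comma(striped)     # 7 fields: the extra comma breaks parsing
except ValueError as e:
    print(e)
```

With two stripes the "devices" field contributes one extra comma, which is exactly the stray "," that ends up appended to the volume path in the error message.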
Comment 5 Osier Yang 2011-09-21 00:32:43 EDT
> 
> Seems like lvs always writes the devices as comma separated list. Most probably
> libvirt does not correctly handle the extra comma introduced by the extra
> device.

Hi, Gerd, 

It looks to me like you are right: there is an additional "," in the volume path. It is definitely a wrong file path, which is why it says:

<snip>
libvirtd: 10:41:21.421: 4657: error : virStorageBackendVolOpenCheckMode:1025 :
cannot open volume '/dev/vg_ssd/test_stripes,': No such file or directory
</snip>

Note the "," at the end of the volume path. It is caused by libvirt using "," as the separator while the lvs output for "test_stripes" has one extra field, so the regular expression libvirt uses can no longer match it.
 
test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

As for the directory "/dev/vg_ssd" being removed, you may get some clues from udevadm monitor.
Comment 6 Osier Yang 2011-09-21 06:16:11 EDT
Patch posted to upstream.

http://www.redhat.com/archives/libvir-list/2011-September/msg00809.html
Comment 7 Gerd v. Egidy 2011-09-21 16:47:01 EDT
Hi Osier,

thanks a lot for looking into this. If I understood Daniel's comment on the mailing list correctly, your patch is not complete yet and some other part needs to be rewritten to cope with the multiple device entries.

I will test your patch as soon as it is deemed complete and ready to go upstream. Just notify me when you are ready.
Comment 10 Osier Yang 2011-10-10 00:07:46 EDT
(In reply to comment #7)
> Hi Osier,
> 
> thanks a lot for looking into this. If I understood Daniels comment on the
> mailinglist correctly, your patch is not complete yet and some other part needs
> to be rewritten to cope with the multiple device entries.
> 
> I will test your patch as soon as it is deemed complete and ready to go
> upstream. Just notify me when you are ready.

Hi, Gerd, 

I posted a v2 upstream. I did some testing on my own box, but more testing is
always welcome. :-)

http://www.redhat.com/archives/libvir-list/2011-October/msg00296.html
Comment 11 Osier Yang 2011-10-10 09:04:05 EDT
Patch posted internally, moving to POST.

http://post-office.corp.redhat.com/archives/rhvirt-patches/2011-October/msg00300.html
Comment 12 Gerd v. Egidy 2011-10-11 10:24:41 EDT
Hi Osier,

I just tested the version of your patch from libvirt git, applied to my older version of libvirt.

It works like a charm.

Thanks for fixing this.
Comment 14 Huang Wenlong 2011-10-12 02:02:17 EDT
Verified this bug with libvirt-0.9.4-17.el6.x86_64



> pvcreate /dev/sda6
  Physical volume "/dev/sda6" successfully created
> pvcreate /dev/sda7
  Physical volume "/dev/sda7" successfully created
> vgcreate vg_ssd /dev/sda6 /dev/sda7
  Volume group "vg_ssd" successfully created
> lvcreate --size 2GB --name test_nostripes vg_ssd
  Logical volume "test_nostripes" created
> vgchange -a y vg_ssd
  1 logical volume(s) in volume group "vg_ssd" now active

> lvcreate --size 2GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd
  Logical volume "test_stripes" created
> vgchange -a y vg_ssd
  2 logical volume(s) in volume group "vg_ssd" now active

> stat /dev/vg_ssd/test_stripes 
  File: `/dev/vg_ssd/test_stripes' -> `../dm-1'
  Size: 7         	Blocks: 0          IO Block: 4096   symbolic link
Device: 5h/5d	Inode: 2166057     Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-10-12 13:51:55.633639378 +0800
Modify: 2011-10-12 13:51:55.633639378 +0800
Change: 2011-10-12 13:51:55.633639378 +0800


> virsh -d 5 pool-create-as vg_ssd logical --target /dev/vg_ssd
Pool vg_ssd created


> virsh pool-list 
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
r6                   active     yes       
sda5                 active     no        
vg_ssd               active     no        

> virsh vol-list vg_ssd
Name                 Path                                    
-----------------------------------------
test_nostripes       /dev/vg_ssd/test_nostripes              
test_stripes         /dev/vg_ssd/test_stripes
Comment 16 Osier Yang 2011-11-14 03:02:18 EST
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause
   libvirt used a comma as the field separator for "lvs" output. If the volume group contains a striped logical volume, the "devices" field itself uses commas to separate the multiple device paths, so the regular expression used to parse the "lvs" output no longer matches.

Consequence
   Creating any logical pool fails if the volume group contains a striped logical volume. libvirt also lacked a mechanism to format multiple <device> XML elements (with <extents>) for the device paths of a striped volume.

Fix
   Use a different separator ("#") and add new code to parse the multiple device paths of a striped volume and format multiple <device> XML elements.

Result
    Users are able to create a logical pool that contains striped volumes and get proper XML for those volumes.
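The fixed approach described in the note can be sketched as follows (a Python illustration only; libvirt's actual implementation is in C, and the dictionary layout here is an assumption for readability):

```python
# Sketch of the fixed approach: run lvs with "#" as the field separator,
# then split the comma-separated "devices" field to recover each stripe's
# device path and starting extent (one entry per underlying device).
import re

def parse_lvs_line(line: str):
    """Parse one '#'-separated lvs line into a volume description with
    one device entry per stripe."""
    name, origin, uuid, devices, seg_size, extent_size = line.strip().split("#")
    # Each device looks like "/dev/sdc1(10240)": a path plus a starting extent.
    devs = [re.match(r"(.+)\((\d+)\)", d).groups() for d in devices.split(",")]
    return {
        "name": name,
        "uuid": uuid,
        "devices": [{"path": p, "extent": int(e)} for p, e in devs],
    }

line = "test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304"
vol = parse_lvs_line(line)
# A striped LV yields one <device> element per stripe in the volume XML:
for d in vol["devices"]:
    print(f"<device path='{d['path']}'/>  <!-- starting extent {d['extent']} -->")
```

Because "#" cannot appear inside a device path, the outer split is unambiguous, and the commas are left to do their real job of separating the stripe devices.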
Comment 19 errata-xmlrpc 2011-12-06 06:20:35 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html
