+++ This bug was initially created as a clone of Bug #727474 +++

Description of problem:

When I try to define an LVM volume group as a storage pool in libvirt, this fails as soon as the volume group contains at least one logical volume that uses striping. Defining storage pools from volume groups that do not use striping works as expected. We want to use striped LVs to maximize the I/O performance of our VMs.

Version-Release number of selected component (if applicable):

> cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.1 (Santiago)
> rpm -q libvirt
libvirt-0.8.7-18.el6.x86_64
> rpm -q lvm2
lvm2-2.02.83-3.el6.x86_64
> rpm -q udev
udev-147-2.35.el6.x86_64

How reproducible:

always

Steps to Reproduce:

> pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
> pvcreate /dev/sdd1
  Physical volume "/dev/sdd1" successfully created
> vgcreate vg_ssd /dev/sdc1 /dev/sdd1
  Volume group "vg_ssd" successfully created
> lvcreate --size 40GB --name test_nostripes vg_ssd
  Logical volume "vmtest" created
> vgchange -a y vg_ssd
  1 logical volume(s) in volume group "vg_ssd" now active
> virsh pool-create-as vg_ssd logical --target /dev/vg_ssd
Pool vg_ssd created
> virsh pool-info vg_ssd
Name:           vg_ssd
UUID:           ea3f0222-9b31-054e-c411-fa57f0d7e4b4
State:          running
Persistent:     no
Autostart:      no
Capacity:       558.91 GB
Allocation:     0.00
Available:      558.91 GB

-> Now we have created a working storage pool from an LVM VG that does not use any striping.

> virsh pool-destroy vg_ssd
Pool vg_ssd destroyed
> lvcreate --size 40GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd
  Logical volume "vmtest" created
> vgchange -a y vg_ssd
  2 logical volume(s) in volume group "vg_ssd" now active
> stat /dev/vg_ssd/test_stripes
  File: `/dev/vg_ssd/test_stripes' -> `../dm-3'
  Size: 7          Blocks: 0          IO Block: 4096   symbolic link
Device: 5h/5d      Inode: 25160      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-08-02 10:30:58.616055653 +0200
Modify: 2011-08-02 10:30:58.609053092 +0200
Change: 2011-08-02 10:30:58.609053092 +0200
> virsh -d 5 pool-create-as vg_ssd logical --target /dev/vg_ssd
pool-create-as: name(optdata): vg_ssd
pool-create-as: type(optdata): logical
pool-create-as: target(optdata): /dev/vg_ssd
error: Failed to create pool vg_ssd
error: internal error lvs command failed

-> As soon as the VG contains an LV that uses striping, defining the storage pool fails. If I first define the storage pool and add the striped LV later, the storage pool can't be activated.

Output in the messages log (with full libvirt debugging enabled):

[...]
libvirtd: 10:41:20.917: 4657: debug : remoteDispatchClientRequest:373 : prog=536903814 ver=1 type=0 status=0 serial=3 proc=76
libvirtd: 10:41:20.917: 4657: debug : virStoragePoolCreateXML:7986 : conn=0x7f3778000a00, xmlDesc=<pool type='logical'>#012 <name>vg_ssd</name>#012 <target>#012 <path>/dev/vg_ssd</path>#012 </target>#012</pool>#012
libvirtd: 10:41:20.917: 4656: debug : virEventCleanupTimeouts:484 : Cleanup 3
libvirtd: 10:41:20.917: 4656: debug : virEventCleanupHandles:532 : Cleanup 7
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=0 w=1, f=5 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=1 w=2, f=7 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=2 w=3, f=13 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=3 w=4, f=16 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=4 w=5, f=12 e=25 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=5 w=6, f=11 e=25 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=6 w=7, f=17 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventCalculateTimeout:302 : Calculate expiry of 3 timers
libvirtd: 10:41:20.917: 4656: debug : virEventCalculateTimeout:332 : Timeout at 0 due in -1 ms
libvirtd: 10:41:20.917: 4656: debug : virEventRunOnce:594 : Poll on 7 handles 0x7f37880011e0 timeout -1
libvirtd: 10:41:20.918: 4657: debug : virRunWithHook:833 : /sbin/vgchange -ay vg_ssd
libvirtd: 10:41:21.244: 4657: debug : virRunWithHook:849 : Command stdout: 2 logical volume(s) in volume group "vg_ssd" now active#012
libvirtd: 10:41:21.246: 4657: debug : virRunWithHook:833 : /sbin/udevadm settle
libvirtd: 10:41:21.291: 4657: debug : virExecWithHook:725 : /sbin/lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
libvirtd: 10:41:21.421: 4657: error : virStorageBackendVolOpenCheckMode:1025 : cannot open volume '/dev/vg_ssd/test_stripes,': No such file or directory
libvirtd: 10:41:21.421: 4657: error : virStorageBackendLogicalFindLVs:219 : internal error lvs command failed
libvirtd: 10:41:21.421: 4657: debug : virRunWithHook:833 : /sbin/vgchange -an vg_ssd
libvirtd: 10:41:21.536: 4656: debug : virEventRunOnce:603 : Poll got 1 event(s)
libvirtd: 10:41:21.536: 4656: debug : virEventDispatchTimeouts:394 : Dispatch 3
libvirtd: 10:41:21.536: 4656: debug : virEventDispatchHandles:439 : Dispatch 7
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=0 w=1
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=1 w=2
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=2 w=3
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=3 w=4
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:466 : Dispatch n=3 f=16 w=4 e=1 (nil)
libvirtd: 10:41:21.541: 4656: debug : udevEventHandleCallback:1458 : udev action: 'remove'
libvirtd: 10:41:21.541: 4656: debug : udevRemoveOneDevice:1213 : Failed to find device to remove that has udev name '/sys/devices/virtual/block/dm-2'
[...]

As you can see from the stat call above, /dev/vg_ssd/test_stripes did exist before the call to pool-create-as. So the "cannot open volume" entry in the log seems to be just a side effect: the /dev/vg_ssd directory is gone after the failed call because the VG has been deactivated again.
Running the lvs call that libvirt does yields no unexpected results:

> /sbin/lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd; echo $?
  test_nostripes,,N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi,/dev/sdc1(0),42949672960,4194304
  test_stripes,,fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B,/dev/sdc1(10240),/dev/sdd1(0),42949672960,4194304
0

So maybe the directory is removed by udev and udevadm settle does not wait for it to be recreated?

--- Additional comment from Gerd v. Egidy on 2011-08-02 05:04:11 EDT ---

I just took a closer look at the output of the lvs command, counting commas. When you change the separator, the problem becomes more obvious:

> /sbin/lvs --separator "#" --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
  test_nostripes##N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi#/dev/sdc1(0)#42949672960#4194304
  test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

It seems lvs always writes the devices field as a comma-separated list. Most probably libvirt does not correctly handle the extra comma introduced by the additional stripe device.
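To make the field mismatch concrete, here is a minimal Python sketch (illustration only, not libvirt's actual parser) showing that splitting the comma-separated lvs output yields one field too many for the striped LV, so any parser expecting exactly six fields misreads the row; this is consistent with the stray trailing comma in the volume path from the log above ('/dev/vg_ssd/test_stripes,').

#!/usr/bin/env python
# Illustration only: how comma-splitting breaks on the striped LV's row.
# The input lines are copied from the lvs output shown above.

EXPECTED_FIELDS = 6  # lv_name, origin, uuid, devices, seg_size, vg_extent_size

rows = [
    "test_nostripes,,N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi,"
    "/dev/sdc1(0),42949672960,4194304",
    "test_stripes,,fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B,"
    "/dev/sdc1(10240),/dev/sdd1(0),42949672960,4194304",
]

for row in rows:
    fields = row.split(",")
    print(len(fields), fields[0])

# Output:
#   6 test_nostripes   <- matches the expected column count
#   7 test_stripes     <- one extra field per additional stripe device,
#                         so fixed-count parsing misassigns the columns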
Created attachment 679629 [details]
Proposed patch

libvirt-storage-Do-not-use-comma-as-seperator-for-lvs-output.patch from RHEL6's libvirt-0_9_4-17_el6 applies cleanly (with some offsets) to RHEL5's current libvirt-0.8.2-29.el5.
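For reference, here is a rough Python sketch of the approach the patch name suggests (the real patch is C code inside libvirt and may differ in detail): run lvs with a separator such as '#' that cannot collide with the comma-separated devices list, split each row into exactly six columns, and only then split the devices column on its internal commas.

#!/usr/bin/env python
# Rough sketch of the separator-based approach implied by the patch name;
# not the actual libvirt change. Assumes '#' never appears in the values.
import subprocess

LVS_CMD = ["/sbin/lvs", "--separator", "#", "--noheadings", "--units", "b",
           "--unbuffered", "--nosuffix", "--options",
           "lv_name,origin,uuid,devices,seg_size,vg_extent_size"]

def list_volumes(vg):
    out = subprocess.check_output(LVS_CMD + [vg], universal_newlines=True)
    volumes = []
    for line in out.splitlines():
        line = line.strip()
        if not line:
            continue
        # Exactly six columns now, regardless of the number of stripes.
        name, origin, uuid, devices, seg_size, extent_size = line.split("#")
        volumes.append({
            "name": name,
            "origin": origin,
            "uuid": uuid,
            # The devices column is itself comma-separated, e.g.
            # "/dev/sdc1(10240),/dev/sdd1(0)" for a two-way striped LV.
            "devices": devices.split(","),
            "seg_size": int(seg_size),
            "vg_extent_size": int(extent_size),
        })
    return volumes

if __name__ == "__main__":
    for vol in list_volumes("vg_ssd"):
        print(vol["name"], vol["devices"])

The manual lvs run with --separator "#" above shows that the devices column is the only one containing commas, which is what makes this kind of split unambiguous.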
Can reproduce on libvirt-0.8.2-29.el5.