Bug 727474 - creating libvirt storage pools fails for lvm volume groups with striped volumes
Summary: creating libvirt storage pools fails for lvm volume groups with striped volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Osier Yang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 747120 896052
 
Reported: 2011-08-02 08:53 UTC by Gerd v. Egidy
Modified: 2018-11-26 18:42 UTC
CC List: 12 users

Fixed In Version: libvirt-0.9.4-17.el6
Doc Type: Bug Fix
Doc Text:
Cause: libvirt used an improper separator (a comma) when invoking "lvs"; if a striped LVM volume exists, the "devices" field of the output uses commas to separate the multiple device paths, so the regular expression used in the code to parse the "lvs" output no longer works. Consequence: Creating any logical pool fails if the pool contains a striped logical volume. libvirt also had no mechanism to format multiple <device> XML elements (with <extents>) for the multiple device paths of a striped volume. Fix: Use a different separator (#), and add new code to parse the multiple device paths of a striped volume and format the multiple <device> XML elements. Result: Users are able to create a logical pool containing a striped volume and get proper XML for the striped volume.
Clone Of:
: 896052 (view as bug list)
Environment:
Last Closed: 2011-12-06 11:20:35 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 640807 0 low CLOSED libvirt fails on LVM volume groups with mirrored volumes 2021-02-22 00:41:40 UTC
Red Hat Knowledge Base (Legacy) 65146 0 None None None Never
Red Hat Product Errata RHBA-2011:1513 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2011-12-06 01:23:30 UTC

Internal Links: 640807

Description Gerd v. Egidy 2011-08-02 08:53:51 UTC
Description of problem:

When I try to define an LVM volume group as a storage pool in libvirt, this fails as soon as the volume group contains at least one volume that uses striping. Defining storage pools that do not use striping works as expected.

We want to use striped LVs to maximize the I/O performance of our VMs.

Version-Release number of selected component (if applicable):

> cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.1 (Santiago)
> rpm -q libvirt
libvirt-0.8.7-18.el6.x86_64
> rpm -q lvm2
lvm2-2.02.83-3.el6.x86_64
> rpm -q udev
udev-147-2.35.el6.x86_64

How reproducible:
always

Steps to Reproduce:

> pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
> pvcreate /dev/sdd1
  Physical volume "/dev/sdd1" successfully created
> vgcreate vg_ssd /dev/sdc1 /dev/sdd1
  Volume group "vg_ssd" successfully created
> lvcreate --size 40GB --name test_nostripes vg_ssd
  Logical volume "test_nostripes" created
> vgchange -a y vg_ssd
  1 logical volume(s) in volume group "vg_ssd" now active
> virsh pool-create-as vg_ssd logical --target /dev/vg_ssd
Pool vg_ssd created
> virsh pool-info vg_ssd
Name:           vg_ssd
UUID:           ea3f0222-9b31-054e-c411-fa57f0d7e4b4
State:          running
Persistent:     no
Autostart:      no
Capacity:       558.91 GB
Allocation:     0.00 
Available:      558.91 GB

-> Now we have created a working storage pool from a lvm vg that does not use any striping.

> virsh pool-destroy vg_ssd
Pool vg_ssd destroyed
> lvcreate --size 40GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd
  Logical volume "test_stripes" created
> vgchange -a y vg_ssd
  2 logical volume(s) in volume group "vg_ssd" now active
> stat /dev/vg_ssd/test_stripes 
  File: `/dev/vg_ssd/test_stripes' -> `../dm-3'
  Size: 7               Blocks: 0          IO Block: 4096   symbolic link
Device: 5h/5d   Inode: 25160       Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-08-02 10:30:58.616055653 +0200
Modify: 2011-08-02 10:30:58.609053092 +0200
Change: 2011-08-02 10:30:58.609053092 +0200


> virsh -d 5 pool-create-as vg_ssd logical --target /dev/vg_ssd
pool-create-as: name(optdata): vg_ssd
pool-create-as: type(optdata): logical
pool-create-as: target(optdata): /dev/vg_ssd
error: Failed to create pool vg_ssd
error: internal error lvs command failed

-> As soon as we have a lv with striping within the vg, defining the storage pool fails.

If I first define the storage pool and add the striped LV later on, the storage pool can no longer be activated.

Output into messages log (with full libvirt debugging enabled):

[...]
libvirtd: 10:41:20.917: 4657: debug : remoteDispatchClientRequest:373 : prog=536903814 ver=1 type=0 status=0 serial=3 proc=76
libvirtd: 10:41:20.917: 4657: debug : virStoragePoolCreateXML:7986 : conn=0x7f3778000a00, xmlDesc=<pool type='logical'>#012  <name>vg_ssd</name>#012  <target>#012    <path>/dev/vg_ssd</path>#012  </target>#012</pool>#012
libvirtd: 10:41:20.917: 4656: debug : virEventCleanupTimeouts:484 : Cleanup 3
libvirtd: 10:41:20.917: 4656: debug : virEventCleanupHandles:532 : Cleanup 7
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=0 w=1, f=5 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=1 w=2, f=7 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=2 w=3, f=13 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=3 w=4, f=16 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=4 w=5, f=12 e=25 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=5 w=6, f=11 e=25 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventMakePollFDs:362 : Prepare n=6 w=7, f=17 e=1 d=0
libvirtd: 10:41:20.917: 4656: debug : virEventCalculateTimeout:302 : Calculate expiry of 3 timers
libvirtd: 10:41:20.917: 4656: debug : virEventCalculateTimeout:332 : Timeout at 0 due in -1 ms
libvirtd: 10:41:20.917: 4656: debug : virEventRunOnce:594 : Poll on 7 handles 0x7f37880011e0 timeout -1
libvirtd: 10:41:20.918: 4657: debug : virRunWithHook:833 : /sbin/vgchange -ay vg_ssd
libvirtd: 10:41:21.244: 4657: debug : virRunWithHook:849 : Command stdout:   2 logical volume(s) in volume group "vg_ssd" now active#012
libvirtd: 10:41:21.246: 4657: debug : virRunWithHook:833 : /sbin/udevadm settle
libvirtd: 10:41:21.291: 4657: debug : virExecWithHook:725 : /sbin/lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
libvirtd: 10:41:21.421: 4657: error : virStorageBackendVolOpenCheckMode:1025 : cannot open volume '/dev/vg_ssd/test_stripes,': No such file or directory
libvirtd: 10:41:21.421: 4657: error : virStorageBackendLogicalFindLVs:219 : internal error lvs command failed
libvirtd: 10:41:21.421: 4657: debug : virRunWithHook:833 : /sbin/vgchange -an vg_ssd
libvirtd: 10:41:21.536: 4656: debug : virEventRunOnce:603 : Poll got 1 event(s)
libvirtd: 10:41:21.536: 4656: debug : virEventDispatchTimeouts:394 : Dispatch 3
libvirtd: 10:41:21.536: 4656: debug : virEventDispatchHandles:439 : Dispatch 7
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=0 w=1
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=1 w=2
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=2 w=3
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:453 : i=3 w=4
libvirtd: 10:41:21.537: 4656: debug : virEventDispatchHandles:466 : Dispatch n=3 f=16 w=4 e=1 (nil)
libvirtd: 10:41:21.541: 4656: debug : udevEventHandleCallback:1458 : udev action: 'remove'
libvirtd: 10:41:21.541: 4656: debug : udevRemoveOneDevice:1213 : Failed to find device to remove that has udev name '/sys/devices/virtual/block/dm-2'
[...]

As you can see from the stat call above, the symlink /dev/vg_ssd/test_stripes did exist before the call to pool-create-as. So the entry in the log seems to be just a side effect: the /dev/vg_ssd directory is gone after the failed call because the VG has become inactive.

Running the lvs call that libvirt does yields no unexpected results:
> /sbin/lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd; echo $?
  test_nostripes,,N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi,/dev/sdc1(0),42949672960,4194304
  test_stripes,,fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B,/dev/sdc1(10240),/dev/sdd1(0),42949672960,4194304
0

So maybe the dir is removed by udev and udevadm settle does not wait for it to be recreated?

Comment 1 Gerd v. Egidy 2011-08-02 09:04:11 UTC
I just took a deep look at the output of the lvs command. Counting commas...

When you change the separator like this, it becomes more obvious:

> /sbin/lvs --separator "#" --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
  test_nostripes##N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi#/dev/sdc1(0)#42949672960#4194304
  test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

It seems lvs always writes the devices as a comma-separated list. Most probably libvirt does not correctly handle the extra comma introduced by the additional device.
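For illustration, here is a minimal Python sketch of the column-count problem (libvirt's real parser is C; the field list simply mirrors the --options argument and the lvs lines shown above):

# Minimal sketch, not libvirt's code: count the columns of the lvs output
# when "," is used both as the field separator and inside the "devices" field.
FIELDS = ["lv_name", "origin", "uuid", "devices", "seg_size", "vg_extent_size"]

plain = "test_nostripes,,N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi,/dev/sdc1(0),42949672960,4194304"
striped = ("test_stripes,,fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B,"
           "/dev/sdc1(10240),/dev/sdd1(0),42949672960,4194304")

for line in (plain, striped):
    cols = line.split(",")
    print(f"{cols[0]}: expected {len(FIELDS)} fields, got {len(cols)}")

# Output:
#   test_nostripes: expected 6 fields, got 6
#   test_stripes: expected 6 fields, got 7   <- the second stripe device adds a comma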

Comment 5 Osier Yang 2011-09-21 04:32:43 UTC
> 
> Seems like lvs always writes the devices as comma separated list. Most probably
> libvirt does not correctly handle the extra comma introduced by the extra
> device.

Hi, Gerd, 

It looks to me like you are right: there is an additional "," in the volume path. It is definitely a wrong file path, which is why it says:

<snip>
libvirtd: 10:41:21.421: 4657: error : virStorageBackendVolOpenCheckMode:1025 :
cannot open volume '/dev/vg_ssd/test_stripes,': No such file or directory
</snip>

Note the "," at the end of the volume path: it is caused by libvirt using "," as the separator while the lvs output for "test_stripes" has one more field, so the regular expression libvirt uses can no longer match the line.
 
test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

As for the directory "/dev/vg_ssd" being removed, you may get some clues from udevadm monitor.
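A rough Python sketch of the parsing approach the fix takes, as described later in this bug (the real implementation is C; the variable names and regular expression here are only illustrative): split on the new "#" field separator first, then split the still comma-separated "devices" field.

import re

# Sketch only: parse one "#"-separated lvs line and break the "devices" field
# into (device path, starting extent) pairs, one per stripe.
line = ("test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B"
        "#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304")

lv_name, origin, uuid, devices, seg_size, extent_size = line.split("#")

device_offsets = [re.match(r"(.+)\((\d+)\)", dev).groups()
                  for dev in devices.split(",")]
print(lv_name, device_offsets)
# test_stripes [('/dev/sdc1', '10240'), ('/dev/sdd1', '0')]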

Comment 6 Osier Yang 2011-09-21 10:16:11 UTC
Patch posted to upstream.

http://www.redhat.com/archives/libvir-list/2011-September/msg00809.html

Comment 7 Gerd v. Egidy 2011-09-21 20:47:01 UTC
Hi Osier,

thanks a lot for looking into this. If I understood Daniel's comment on the mailing list correctly, your patch is not complete yet and another part needs to be rewritten to cope with the multiple device entries.

I will test your patch as soon as it is deemed complete and ready to go upstream. Just notify me when you are ready.

Comment 10 Osier Yang 2011-10-10 04:07:46 UTC
(In reply to comment #7)
> Hi Osier,
> 
> thanks a lot for looking into this. If I understood Daniel's comment on the
> mailing list correctly, your patch is not complete yet and another part needs
> to be rewritten to cope with the multiple device entries.
> 
> I will test your patch as soon as it is deemed complete and ready to go
> upstream. Just notify me when you are ready.

Hi, Gerd, 

I posted a v2 upstream. I did some testing on my own box, but more testing is
always welcome. :-)

http://www.redhat.com/archives/libvir-list/2011-October/msg00296.html

Comment 11 Osier Yang 2011-10-10 13:04:05 UTC
Patch posted internally, moving to POST.

http://post-office.corp.redhat.com/archives/rhvirt-patches/2011-October/msg00300.html

Comment 12 Gerd v. Egidy 2011-10-11 14:24:41 UTC
Hi Osier,

I just tested the version of your patch from libvirt git, applied on top of my older version of libvirt.

It works like a charm.

Thanks for fixing this.

Comment 14 Huang Wenlong 2011-10-12 06:02:17 UTC
Verified this bug with libvirt-0.9.4-17.el6.x86_64


[root@rhel62-wh ~]# lvcreate --size 1GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd

> pvcreate /dev/sda6
  Physical volume "/dev/sda6" successfully created
> pvcreate /dev/sda7
  Physical volume "/dev/sda7" successfully created
> vgcreate vg_ssd /dev/sda6 /dev/sda7
  Volume group "vg_ssd" successfully created
> lvcreate --size 2GB --name test_nostripes vg_ssd
  Logical volume "test_nostripes" created
> vgchange -a y vg_ssd
  1 logical volume(s) in volume group "vg_ssd" now active

> lvcreate --size 2GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd
  Logical volume "test_stripes" created
> vgchange -a y vg_ssd
  2 logical volume(s) in volume group "vg_ssd" now active

> stat /dev/vg_ssd/test_stripes 
  File: `/dev/vg_ssd/test_stripes' -> `../dm-1'
  Size: 7         	Blocks: 0          IO Block: 4096   symbolic link
Device: 5h/5d	Inode: 2166057     Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2011-10-12 13:51:55.633639378 +0800
Modify: 2011-10-12 13:51:55.633639378 +0800
Change: 2011-10-12 13:51:55.633639378 +0800


> virsh -d 5 pool-create-as vg_ssd logical --target /dev/vg_ssd
Pool vg_ssd created


> virsh pool-list 
Name                 State      Autostart 
-----------------------------------------
default              active     yes       
r6                   active     yes       
sda5                 active     no        
vg_ssd               active     no        

> virsh vol-list vg_ssd
Name                 Path                                    
-----------------------------------------
test_nostripes       /dev/vg_ssd/test_nostripes              
test_stripes         /dev/vg_ssd/test_stripes

Comment 16 Osier Yang 2011-11-14 08:02:18 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause
   libvirt used an improper separator (a comma) when invoking "lvs"; if a striped LVM volume exists, the "devices" field of the output uses commas to separate the multiple device paths, so the regular expression used in the code to parse the "lvs" output no longer works.

Consequence
   Creating any logical pool fails if the pool contains a striped logical volume. libvirt also had no mechanism to format
multiple <device> XML (with <extents>) for the multiple device paths of a striped volume.

Fix
   Use a different separator (#), and add new code to parse the multiple device paths of a striped volume and format the multiple <device> XML elements.

Result
    Users are able to create a logical pool containing a striped volume and get proper XML for the striped volume.
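For illustration only, a Python sketch of the kind of multi-device XML the note above describes. The element names, attribute names, and the byte-offset conversion are assumptions made for this sketch, not taken from libvirt's implementation:

import xml.etree.ElementTree as ET

# Sketch under assumptions: emit one <device> element per stripe device, with
# its extent as a child element. lvs reports the start as a physical-extent
# number; multiplying by the VG extent size is one plausible byte-offset form.
devices = [("/dev/sdc1", 10240), ("/dev/sdd1", 0)]
extent_size = 4194304  # vg_extent_size field from the lvs output, in bytes

source = ET.Element("source")
for path, start_extent in devices:
    dev = ET.SubElement(source, "device", path=path)
    ET.SubElement(dev, "extent", start=str(start_extent * extent_size))

print(ET.tostring(source, encoding="unicode"))
# <source><device path="/dev/sdc1"><extent start="42949672960" /></device>
#         <device path="/dev/sdd1"><extent start="0" /></device></source>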

Comment 19 errata-xmlrpc 2011-12-06 11:20:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html

