Bug 727474
Summary: | creating libvirt storage pools fails for lvm volume groups with striped volumes | | | |
---|---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | Gerd v. Egidy <gerd> | |
Component: | libvirt | Assignee: | Osier Yang <jyang> | |
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> | |
Severity: | high | Docs Contact: | ||
Priority: | high | |||
Version: | 6.1 | CC: | dallan, dyuan, gsun, jwest, jyang, mzhan, nzhang, rdassen, rsuresh, rwu, tvvcox, whuang | |
Target Milestone: | rc | |||
Target Release: | --- | |||
Hardware: | x86_64 | |||
OS: | Linux | |||
Whiteboard: | ||||
Fixed In Version: | libvirt-0.9.4-17.el6 | Doc Type: | Bug Fix | |
Doc Text: |
Cause
libvirt used a comma as the field separator for "lvs" output, but when a striped LVM volume exists, the "devices" field itself uses commas to separate the multiple device paths. This breaks the regular expression used in the code to parse the "lvs" output.
Consequence
Creating any logical pool fails if it contains a striped logical volume. libvirt also lacks a mechanism to format multiple <device> XML elements (with <extents>) for the multiple device paths of a striped volume.
Fix
Use a different separator (#), and add new code to parse the multiple device paths of a striped volume and format the multiple <devices> XML.
Result
Users can create a logical pool containing striped volumes and get proper XML for the striped volumes.
|
Story Points: | --- | |
Clone Of: | ||||
: | 896052 (view as bug list) | Environment: | ||
Last Closed: | 2011-12-06 11:20:35 UTC | Type: | --- | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | ||||
Bug Blocks: | 747120, 896052 |
Description
Gerd v. Egidy
2011-08-02 08:53:51 UTC
I just took a deep look at the output of the lvs command. Counting commas...
When you change the separator like this, it becomes more obvious:
> /sbin/lvs --separator "#" --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size vg_ssd
test_nostripes##N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi#/dev/sdc1(0)#42949672960#4194304
test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304
Seems like lvs always writes the devices as comma separated list. Most probably libvirt does not correctly handle the extra comma introduced by the extra device.
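The ambiguity Gerd describes can be reproduced outside libvirt. A minimal sketch using the two sample lines from the lvs output above (this is an illustration of the field-counting problem, not libvirt's actual parsing code):

```python
# Sample "lvs" output lines from the report: one linear and one striped LV,
# with fields lv_name, origin, uuid, devices, seg_size, vg_extent_size.
line_nostripes = "test_nostripes##N8cT8q-H5oH-hYcw-P85y-HcwH-Ms1r-Z830xi#/dev/sdc1(0)#42949672960#4194304"
line_stripes = "test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304"

FIELDS = 6

def split_fields(line, sep):
    return line.split(sep)

# With "#" as the separator, both lines yield exactly 6 fields...
assert len(split_fields(line_nostripes, "#")) == FIELDS
assert len(split_fields(line_stripes, "#")) == FIELDS

# ...but with "," as the separator (the old behavior), the striped line
# gains an extra field, because its devices column itself contains a comma.
comma_line = line_stripes.replace("#", ",")
print(len(split_fields(comma_line, ",")))  # 7, not 6
```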
>
> Seems like lvs always writes the devices as comma separated list. Most probably
> libvirt does not correctly handle the extra comma introduced by the extra
> device.
Hi, Gerd,
It looks to me like you are right: there is an additional "," in the volume path. It is definitely a wrong file path, which is why it says:
<snip>
libvirtd: 10:41:21.421: 4657: error : virStorageBackendVolOpenCheckMode:1025 :
cannot open volume '/dev/vg_ssd/test_stripes,': No such file or directory
</snip>
NB, the "," at the end of the volume path, it should be caused by libvirt uses "," as the seperator, and the lvs output for "test_stripes" has one more field. And the RE libvirt uses can't matches it out.
test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304
As for the directory "/dev/vg_ssd" being removed, you may get some clues from "udevadm monitor".
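The failure mode Osier describes can be sketched with a regex that assumes exactly one "path(offset)" entry in the devices field. The pattern below is a hand-written illustration of that assumption, not the actual regular expression in libvirt's source:

```python
import re

# The striped line from the lvs output above, "#"-separated.
line = ("test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B"
        "#/dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304")

# A pattern expecting a single "path(offset)" in the devices field,
# in the spirit of the old single-device assumption.
single_dev = re.compile(
    r"^([^#]*)#([^#]*)#([^#]*)#([^(]+)\((\d+)\)#(\d+)#(\d+)$")
assert single_dev.match(line) is None  # the second device breaks the match

# The robust approach: split fields on "#" first, then split the devices
# field on "," and parse each "path(offset)" entry separately.
fields = line.split("#")
devices = [re.match(r"(.+)\((\d+)\)", dev).groups()
           for dev in fields[3].split(",")]
print(devices)  # [('/dev/sdc1', '10240'), ('/dev/sdd1', '0')]
```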
Patch posted to upstream: http://www.redhat.com/archives/libvir-list/2011-September/msg00809.html

Hi Osier,

thanks a lot for looking into this. If I understood Daniel's comment on the mailing list correctly, your patch is not complete yet and some other part needs to be rewritten to cope with the multiple device entries.

I will test your patch as soon as it is deemed complete and ready to go upstream. Just notify me when you are ready.

(In reply to comment #7)
> Hi Osier,
> thanks a lot for looking into this. If I understood Daniel's comment on the
> mailing list correctly, your patch is not complete yet and some other part needs
> to be rewritten to cope with the multiple device entries.
> I will test your patch as soon as it is deemed complete and ready to go
> upstream. Just notify me when you are ready.

Hi, Gerd,

I posted a v2 upstream. I did some testing on my own box, but more testing is always welcome. :-)

http://www.redhat.com/archives/libvir-list/2011-October/msg00296.html

Patch posted internally, moving to POST: http://post-office.corp.redhat.com/archives/rhvirt-patches/2011-October/msg00300.html

Hi Osier,

I just tested the version of your patch from libvirt git, applied on top of my older version of libvirt. It works like a charm. Thanks for fixing this.
Verified this bug with libvirt-0.9.4-17.el6.x86_64.

[root@rhel62-wh ~]# lvcreate --size 1GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd

> pvcreate /dev/sda6
Physical volume "/dev/sda6" successfully created
> pvcreate /dev/sda7
Physical volume "/dev/sda7" successfully created
> vgcreate vg_ssd /dev/sda6 /dev/sda7
Volume group "vg_ssd" successfully created
> lvcreate --size 2GB --name test_nostripes vg_ssd
Logical volume "vmtest" created
> vgchange -a y vg_ssd
1 logical volume(s) in volume group "vg_ssd" now active
> lvcreate --size 2GB --stripes 2 --stripesize 8kb --name test_stripes vg_ssd
Logical volume "test_stripes" created
> vgchange -a y vg_ssd
2 logical volume(s) in volume group "vg_ssd" now active
> stat /dev/vg_ssd/test_stripes
File: `/dev/vg_ssd/test_stripes' -> `../dm-1'
Size: 7 Blocks: 0 IO Block: 4096 symbolic link
Device: 5h/5d Inode: 2166057 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2011-10-12 13:51:55.633639378 +0800
Modify: 2011-10-12 13:51:55.633639378 +0800
Change: 2011-10-12 13:51:55.633639378 +0800
> virsh -d 5 pool-create-as vg_ssd logical --target /dev/vg_ssd
Pool vg_ssd created
> virsh pool-list
Name             State      Autostart
-----------------------------------------
default          active     yes
r6               active     yes
sda5             active     no
vg_ssd           active     no
> virsh vol-list vg_ssd
Name             Path
-----------------------------------------
test_nostripes   /dev/vg_ssd/test_nostripes
test_stripes     /dev/vg_ssd/test_stripes

Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Cause
libvirt used a comma as the field separator for "lvs" output, but when a striped LVM volume exists, the "devices" field itself uses commas to separate the multiple device paths. This breaks the regular expression used in the code to parse the "lvs" output.
Consequence
Creating any logical pool fails if it contains a striped logical volume. libvirt also lacks a mechanism to format multiple <device> XML elements (with <extents>) for the multiple device paths of a striped volume.
Fix
Use a different separator (#), and add new code to parse the multiple device paths of a striped volume and format the multiple <devices> XML.
Result
Users can create a logical pool containing striped volumes and get proper XML for the striped volumes.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html |
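The XML-formatting side of the fix can be sketched as follows. This builds per-device elements with byte-range extents from the parsed lvs data, assuming the device offset is counted in volume-group extents; the element and attribute names here are illustrative and are not guaranteed to match libvirt's volume XML schema exactly:

```python
import xml.etree.ElementTree as ET

# Parsed from the "#"-separated lvs line for test_stripes:
# (path, start offset in VG extents) pairs, plus the per-segment
# size and the VG extent size, both in bytes.
devices = [("/dev/sdc1", 10240), ("/dev/sdd1", 0)]
seg_size = 42949672960
extent_size = 4194304

# One <device> per stripe member, each with its own byte-range <extent>.
source = ET.Element("source")
for path, start_extent in devices:
    dev = ET.SubElement(source, "device", path=path)
    start = start_extent * extent_size  # byte offset of this segment
    ET.SubElement(dev, "extent", start=str(start), end=str(start + seg_size))

print(ET.tostring(source, encoding="unicode"))
```

The earlier single-device code could emit one fixed `<device>` element; once the devices field is split on ",", formatting becomes a loop over however many stripe members the volume has.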