Bug 786831
Summary: | Failed to activate or create LVM storage pool out of the VG with existing mirror volumes | |
---|---|---|---
Product: | [Fedora] Fedora | Reporter: | Peter Rajnoha <prajnoha>
Component: | libvirt | Assignee: | Libvirt Maintainers <libvirt-maint>
Status: | CLOSED NEXTRELEASE | QA Contact: | Fedora Extras Quality Assurance <extras-qa>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 16 | CC: | agk, aquini, berrange, clalance, clalancette, crobinso, djmagee, dougsland, hbrock, itamar, jforbes, jlayton, laine, libvirt-maint, veillard, virt-maint
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | 713688 | Environment: |
Last Closed: | 2012-10-21 01:14:44 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 713688 | |
Bug Blocks: | | |
Description
Peter Rajnoha
2012-02-02 14:17:55 UTC
Hi Peter, thanks for reopening; sorry about the lack of response on the original report. Can you provide:

1) sudo virsh pool-dumpxml $poolname (for every logical pool)
2) sudo vgs
3) sudo lvs

libvirt is actually running the following command:

```
lvs --separator , --noheadings --units b --unbuffered --nosuffix --options lv_name,origin,uuid,devices,seg_size,vg_extent_size $SOURCENAME
```

where $SOURCENAME is `<source name="$SOURCENAME"/>` from the dumpxml output. Could you try that as well?

Well, it's a bit different now. I'm using F17 now and it seems to be working there... However, I still have F16 set up in a virtual machine, so I tried to install virt-manager and related packages there (just the storage part, to test for the error; as that would be virt under virt, I expect it has no influence here). Now I end up with a *different* error message on the current updated F16 version.

When trying to create a new pool out of an existing VG with a mirror volume:

```
Error creating pool: Could not start storage pool: cannot open volume '/dev/vg/lvol0,': No such file or directory
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 482, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 744, in install
    build=build, autostart=False)
  File "/usr/lib/python2.7/site-packages/virtinst/Storage.py", line 489, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not start storage pool: cannot open volume '/dev/vg/lvol0,': No such file or directory
```

Also when trying to refresh an existing pool to which the mirror volume has been added:

Error refreshing pool 'vg': cannot open volume '/dev/vg/lvol1,': No such file or
directory

```
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 45, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 66, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/host.py", line 647, in cb
    pool.refresh()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 115, in refresh
    self.pool.refresh(0)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1832, in refresh
    if ret == -1: raise libvirtError ('virStoragePoolRefresh() failed', pool=self)
libvirtError: cannot open volume '/dev/vg/lvol1,': No such file or directory
```

All the LVs in that VG are deactivated afterwards. The problem only appears on F16; F17 is already fine, so it must have been resolved quite recently. Has anything changed with respect to LVM handling between these releases?

(In reply to comment #2)
> Error creating pool: Could not start storage pool: cannot open volume
> '/dev/vg/lvol0,': No such file or directory

...the LVs were activated before I tried to manage them under virt-manager. It seems libvirt (or whatever comes into play here) deactivates the whole VG if it can't handle a volume (like the mirror one in this case).

Yes, there is a significant fix in F17 related to LVM handling which will affect you:

commit 82c1740ab92682d69ec8f02adb36b13e1902acd1
Author: Osier Yang <jyang>
Date: Mon Oct 10 20:34:59 2011 +0800

    storage: Do not use comma as separator for lvs output

    * src/storage/storage_backend_logical.c: If a logical vol is created
      as striped (e.g. --stripes 3), the "devices" field of the lvs
      output will have multiple fields separated by commas, so the RE we
      use in the code will not work well anymore. E.g.
    (lvs output for a striped vol, using "#" as the separator here):

        test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#\
        /dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

    The RE we use:

        const char *regexes[] = {
            "^\\s*(\\S+),(\\S*),(\\S+),(\\S+)\\((\\S+)\\),(\\S+),([0-9]+),?\\s*$"
        };

    The RE also doesn't match the "devices" field of a striped vol
    properly, as it contains multiple "device path" and "offset" pairs.

    This patch mainly does:
    1) Change the separator into "#".
    2) Change the RE for the "devices" field from "(\\S+)\\((\\S+)\\)"
       into "(\\S+)".
    3) Add two new options (segtype, stripes) to the lvs command.
    4) Extend the RE to match the values of the two new fields.
    5) Parse the "devices" field separately in
       virStorageBackendLogicalMakeVol; multiple "extents" entries are
       generated if the vol is striped. The number of "extents" equals
       the stripes count of the striped vol.

    An incidental fix (virStorageBackendLogicalMakeVol): free "vol" if it
    was newly created and an error occurs.

    Demo on a striped vol with the patch applied:

        % virsh vol-dumpxml /dev/test_vg/vol_striped2
        <volume>
          <name>vol_striped2</name>
          <key>QuWqmn-kIkZ-IATt-67rc-OWEP-1PHX-Cl2ICs</key>
          <source>
            <device path='/dev/sda5'>
              <extent start='79691776' end='88080384'/>
            </device>
            <device path='/dev/sda6'>
              <extent start='62914560' end='71303168'/>
            </device>
          </source>
          <capacity>8388608</capacity>
          <allocation>8388608</allocation>
          <target>
            <path>/dev/test_vg/vol_striped2</path>
            <permissions>
              <mode>0660</mode>
              <owner>0</owner>
              <group>6</group>
              <label>system_u:object_r:fixed_disk_device_t:s0</label>
            </permissions>
          </target>
        </volume>

    RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=727474

Aha, ok, so it seems this has solved the problem with mirror volumes as well. I think we can close this bug as NEXTRELEASE then, if you don't intend to backport the patch to F16... Thanks.
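The misparse that produces paths like '/dev/vg/lvol0,' can be reproduced outside libvirt. A minimal sketch, using a mock lvs line for a mirrored LV (the LV name, "UUID" placeholder, and sizes are made up; the regex is the pre-fix pattern quoted in the commit message above, translated verbatim into a Python pattern):

```python
import re

# Mock comma-separated lvs line with the fields
# lv_name,origin,uuid,devices,seg_size,vg_extent_size. For a mirrored LV
# the "devices" field itself contains a comma between the mimage sub-LVs.
line = "lvol0,,UUID,lvol0_mimage_0(0),lvol0_mimage_1(0),42949672960,4194304"

# The pre-fix libvirt pattern. \S matches commas too, so the groups can
# swallow field separators and the match lands in the wrong places.
old_re = r"^\s*(\S+),(\S*),(\S+),(\S+)\((\S+)\),(\S+),([0-9]+),?\s*$"

m = re.match(old_re, line)
assert m is not None
# The regex matches, but misaligned: the lv_name group captures a
# trailing comma (hence '/dev/vg/lvol0,'), and the uuid slot gets the
# first device entry instead of the UUID.
assert m.group(1) == "lvol0,"
assert m.group(3) == "lvol0_mimage_0(0)"

# With "#" as the separator, a plain split keeps the devices field
# (internal commas and all) in one piece, as the patch intends.
hash_line = "lvol0##UUID#lvol0_mimage_0(0),lvol0_mimage_1(0)#42949672960#4194304"
assert hash_line.split("#")[3] == "lvol0_mimage_0(0),lvol0_mimage_1(0)"
```

The trailing comma in group 1 arises purely from backtracking: the engine keeps shifting the greedy groups until some alignment satisfies the literal commas and parentheses, and the first alignment that succeeds is the wrong one.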
(In reply to comment #4)
> uses "#" as
> separator here):
>
> test_stripes##fSLSZH-zAS2-yAIb-n4mV-Al9u-HA3V-oo9K1B#\
> /dev/sdc1(10240),/dev/sdd1(0)#42949672960#4194304

Note that udev permits # in device names:

* Character whitelist: 0-9, A-Z, a-z, #+-.:=@_

Whilst it's unlikely you'll end up with a # within the 'devices' field, it's not impossible (unless you have full control of the system's udev rules and LVM configuration files and avoid it), so it's probably better to pick a character that udev does not permit; an old-fashioned `<TAB>` is perhaps the obvious choice. (Comma does work if you put the 'devices' field last and match the correct number of fields.)

Closing NEXTRELEASE as suggested in Comment #5.
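As an aside, the whitelist argument above can be checked mechanically. A minimal sketch (the character set is transcribed from the comment, not from udev's source, and `udev_permits` is a hypothetical helper for illustration):

```python
import string

# Character whitelist as quoted in the comment: 0-9, A-Z, a-z, #+-.:=@_
UDEV_WHITELIST = set(string.ascii_letters + string.digits + "#+-.:=@_")

def udev_permits(ch):
    """True if the quoted udev whitelist allows this character in a name."""
    return ch in UDEV_WHITELIST

# '#' can legally appear in a device name, so it can still collide with
# the new field separator; a TAB cannot, which is the argument for TAB.
assert udev_permits("#")
assert not udev_permits("\t")
# Comma is also outside the whitelist, but lvs itself already uses commas
# inside the "devices" field to join the path(offset) entries, which is
# why comma only works if that field is placed last.
assert not udev_permits(",")
```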