Bug 1075299
Summary: | Failed to get the vol-name by giving volume path in gluster pool. | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | chhu
Component: | libvirt | Assignee: | Peter Krempa <pkrempa>
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 7.0 | CC: | ajia, dyuan, mzhan, pkrempa, pzhang, rbalakri, shyu, xuzhang, yanyang
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | libvirt-1.2.7-1.el7 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2015-03-05 07:31:20 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
chhu
2014-03-12 03:29:03 UTC
Already fixed upstream:

commit 7fb3902b0f6649c6d919566bff2c8ec0dd83d2d6
Author: Peter Krempa <pkrempa>
Date:   Tue Feb 25 15:51:15 2014 +0100

    storage: Avoid mangling paths of non-local filesystems when looking up

    When looking up a volume by path on a non-local filesystem don't use
    the "cleaned" path that might be mangled in such a way that it will
    differ from a path provided by a storage backend. Skip the cleanup
    step for gluster, sheepdog and RBD.

The same issue was hit by the virsh vol-pool command: getting the pool name by passing a volume path from a glusterfs pool failed as well. (A minimal API-level sketch of this path lookup follows step 2 below.)

# virsh vol-name gluster://10.66.84.12/gluster-vol1/r7g-qcow2.img
error: failed to get vol 'luster://10.66.84.12/gluster-vol1/r7g-qcow2.img'
error: Storage volume not found: no storage vol with matching path luster://10.66.84.12/gluster-vol1/r7g-qcow2.img

Verified with versions:
libvirt-1.2.8-9.el7.x86_64
qemu-kvm-rhev-2.1.2-13.el7.x86_64
kernel-3.10.0-211.el7.x86_64

Steps: there are several patches attached to this bug; each patch is verified in turn below.

1> Verify patch "virsh: volume: Fix lookup of volumes to provide better error messages"

1.1 test with an existing but inactive pool
# virsh vol-upload --pool disk-pool --vol somevol local-file
error: pool 'disk-pool' is not active

1.2 test with a non-existent pool
# virsh vol-upload --pool upload-pool --vol somevol local-file
error: failed to get pool 'upload-pool'
error: Storage pool not found: no storage pool with matching name 'upload-pool'

2> Verify patch "Error out when attempting to vol-upload into a remote pool"

2.1 prepare a gluster pool and create a volume in the gluster type pool
# virsh pool-list gluster
Name                 State     Autostart
-------------------------------------------
gluster-pool         active    no

# virsh vol-list gluster-pool
Name                 Path
------------------------------------------------------------------------------
gluster.img          gluster://server-ip/gluster-vol1/gluster.img

2.2 vol-upload to the gluster type pool
# virsh vol-upload --pool gluster-pool --vol gluster.img volume-as-disk.xml
error: cannot upload to volume gluster.img
error: this function is not supported by the connection driver: storage pool doesn't support volume upload

2.3 vol-upload to a netfs type pool; the upload succeeds.
# virsh pool-list netfs
Name                 State     Autostart
-------------------------------------------
netfs-nfs-pool       active    no

Upload the contents of the local file disk-pool.xml to the volume disk.xml in the netfs pool:
# virsh vol-upload --pool netfs-nfs-pool --vol disk.xml disk-pool.xml
# virsh vol-list netfs-nfs-pool
Name                 Path
------------------------------------------------------------------------------
disk.xml             /var/lib/libvirt/images/netfs-nfs/disk.xml

The upload succeeded; check the uploaded file in the netfs-nfs-pool:
# cat disk.xml
<pool type='disk'>
  <name>disk-pool</name>
  <capacity unit='bytes'>211244736512</capacity>
  <allocation unit='bytes'>35093630976</allocation>
  <available unit='bytes'>176151105536</available>
  <source>
    <device path='/dev/sdb'/>
    <format type='12345'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/disk-pool</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>
...........
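Before moving on to the remaining patches, here is a minimal C sketch of the API-level lookup that the "Avoid mangling paths" patch fixes, using the public libvirt C API. The connection URI, the gluster:// volume path and the error handling are illustrative assumptions, not part of the original report; the functions themselves (virStorageVolLookupByPath and friends) are the documented libvirt API behind virsh vol-name and vol-pool.

```c
/* Hypothetical illustration: look up a gluster-backed volume by the path
 * reported by its storage backend and print its name and owning pool.
 * Before the fix, libvirt "sanitized" the path first, so the mangled string
 * no longer matched the backend-provided path and the lookup failed.
 *
 * Build (assumed): gcc lookup.c -o lookup $(pkg-config --cflags --libs libvirt)
 */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Assumed URI and volume path; substitute your own. */
    const char *path = "gluster://server-ip/gluster-vol1/gluster.img";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* Equivalent of "virsh vol-name <path>": look up by path, then get the name. */
    virStorageVolPtr vol = virStorageVolLookupByPath(conn, path);
    if (vol) {
        printf("volume name: %s\n", virStorageVolGetName(vol));

        /* Equivalent of "virsh vol-pool <path>": resolve the owning pool. */
        virStoragePoolPtr pool = virStoragePoolLookupByVolume(vol);
        if (pool) {
            printf("pool name: %s\n", virStoragePoolGetName(pool));
            virStoragePoolFree(pool);
        }
        virStorageVolFree(vol);
    } else {
        fprintf(stderr, "no storage vol with matching path %s\n", path);
    }

    virConnectClose(conn);
    return 0;
}
```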
3> Verify patch "storage: Avoid mangling paths of non-local filesystems when looking up": get the vol-name from the vol-path in a gluster pool

3.1 get the vol-path / vol-key from a gluster pool
# virsh vol-path gluster.img --pool gluster-pool
gluster://server-ip/gluster-vol1/gluster.img

# virsh vol-key gluster.img --pool gluster-pool
gluster://server-ip/gluster-vol1/gluster.img

3.2 get the vol-name from the vol-path; this now succeeds
# virsh vol-name gluster://server-ip/gluster-vol1/gluster.img
gluster.img

3.3 using an invalid volume path
# virsh vol-name gluster://server-ip/gluster-vol1/gluster2.img
error: failed to get vol 'gluster://server-ip/gluster-vol1/gluster2.img'
error: Storage volume not found: no storage vol with matching path 'gluster://server-ip/gluster-vol1/gluster2.img' (gluster:/server-ip/gluster-vol1/gluster2.img)

3.4 for comment 2, test vol-pool; the pool name is successfully resolved from the volume path
# virsh vol-pool gluster://server-ip/gluster-vol1/gluster.img
gluster-pool

4> Verify patch "storage: Don't lie about path used to lookup in error message": the volume path is no longer sanitized; it is shown in the error message exactly as it was given.
# virsh vol-name ////dev/disk/by-path/ip-3ffe::104:3260-iscsi-iqn.2008-09.5.165:server.target1-lun-2
error: failed to get vol '////dev/disk/by-path/ip-3ffe::104:3260-iscsi-iqn.2008-09.5.165:server.target1-lun-2'
error: Storage volume not found: no storage vol with matching path '////dev/disk/by-path/ip-3ffe::104:3260-iscsi-iqn.2008-09.5.165:server.target1-lun-2' (/dev/disk/by-path/ip-3ffe::104:3260-iscsi-iqn.2008-09.5.165:server.target1-lun-2)

5> Verify patch "doc: storage: Explicitly state that it's possible to have non-unique key": check http://libvirt.org/formatstorage.html
......
key
    Providing an identifier for the volume which identifies a single volume. In some cases it's possible to have two distinct keys identifying a single volume. This field cannot be set when creating a volume: it is always generated.
......

6> Verify patch "gluster: Fix "key" attribute for gluster volumes": the key is the same as the path for gluster volumes (see the API-level sketch at the end of this comment).
# virsh vol-path gluster.img --pool gluster-pool
gluster://server-ip/gluster-vol1/gluster.img

# virsh vol-key gluster.img --pool gluster-pool
gluster://server-ip/gluster-vol1/gluster.img

# virsh vol-dumpxml gluster.img --pool gluster-pool
<volume type='network'>
  <name>gluster.img</name>
  <key>gluster://server-ip/gluster-vol1/gluster.img</key>
  ........
  <target>
    <path>gluster://server-ip/gluster-vol1/gluster.img</path>
    <format type='qcow2'/>
    ........
  </target>
</volume>

All the patches are verified; moving this bug to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html
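As a footnote to step 6, a minimal C sketch of the key/path equality check for a gluster volume. The connection URI is an assumption, and the pool name "gluster-pool" and volume name "gluster.img" are taken from the transcript above; this is not part of the original verification, just an illustration of the same check through the libvirt C API.

```c
/* Hypothetical illustration for step 6: after the fix, the key libvirt
 * reports for a gluster volume should equal its backend path. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* Pool and volume names are assumptions matching the transcript above. */
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "gluster-pool");
    virStorageVolPtr vol = pool ? virStorageVolLookupByName(pool, "gluster.img") : NULL;

    if (vol) {
        const char *key = virStorageVolGetKey(vol);  /* e.g. gluster://server-ip/gluster-vol1/gluster.img */
        char *path = virStorageVolGetPath(vol);      /* caller must free() this string */

        printf("key  = %s\npath = %s\n", key, path);
        printf("key %s path\n",
               (key && path && strcmp(key, path) == 0) ? "matches" : "differs from");

        free(path);
        virStorageVolFree(vol);
    }

    if (pool)
        virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}
```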