Bug 1436574

Summary: the gluster pool source list is incorrect when the glusterfs server has more than one volume
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.4
Reporter: lijuan men <lmen>
Assignee: Peter Krempa <pkrempa>
QA Contact: yisun
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
CC: dyuan, pkrempa, rbalakri, xuzhang, yisun
Target Milestone: rc
Hardware: x86_64
OS: Linux
Fixed In Version: libvirt-3.7.0-1.el7
Last Closed: 2018-04-10 10:42:33 UTC
Type: Bug

Description lijuan men 2017-03-28 08:37:12 UTC
Description of problem:
The gluster pool source list is incorrect when the glusterfs server has more than one volume.


Version-Release number of selected component (if applicable):

the glusterfs server:
rhel7.3 release os + glusterfs-server-3.8.4-18.el7rhgs.x86_64

test host:
libvirt-3.1.0-2.el7.x86_64
qemu-kvm-rhev-2.8.0-6.el7.x86_64
glusterfs-cli-3.8.4-18.el7rhgs.x86_64


How reproducible:
100%

Steps to Reproduce:

1. Create the first volume on the glusterfs server:
# gluster volume create test 10.66.70.107:/opt/br1 force

2. Run the find-storage-pool-sources-as command on the test host:
[root@localhost ~]# virsh find-storage-pool-sources-as --type gluster 10.66.70.107
<sources>
  <source>
    <host name='10.66.70.107'/>
    <dir path='test'/>
  </source>
</sources>

3. Create the second volume on the glusterfs server:
# gluster volume create test1 10.66.70.107:/opt/br2 force

4. Run the find-storage-pool-sources-as command on the test host:
[root@localhost ~]# virsh find-storage-pool-sources-as --type gluster 10.66.70.107
<sources>
  <source>
    <host name='10.66.70.107'/>
  ***  <dir path='test'/> ***
  </source>
  <source>
    <host name='10.66.70.107'/>
   *** <dir path='test'/> ***
  </source>
</sources>

5. Create the third volume on the glusterfs server:
# gluster volume create aaa 10.66.70.107:/opt/br3 force

6. Run the find-storage-pool-sources-as command on the test host:
[root@localhost ~]# virsh find-storage-pool-sources-as --type gluster 10.66.70.107
<sources>
  <source>
    <host name='10.66.70.107'/>
 ***   <dir path='aaa'/>***
  </source>
  <source>
    <host name='10.66.70.107'/>
 ***   <dir path='aaa'/>***
  </source>
  <source>
    <host name='10.66.70.107'/>
  ***  <dir path='aaa'/>***
  </source>
</sources>

NOTE:
the output of the following command is correct (all three volume names are listed):
[root@localhost ~]# gluster --xml --log-file=/dev/null volume info all --remote-host=10.66.70.107
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
   ***  <name>aaa</name>  ***
        <id>d0b219d4-4169-4907-8994-d2e2434854ed</id>
        <status>0</status>
        <statusStr>Created</statusStr>
        <snapshotCount>0</snapshotCount>
        ....
      </volume>
      <volume>
   **** <name>test</name> ****
        <id>32826068-2320-4b62-a825-2554edb7f020</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <snapshotCount>0</snapshotCount>
        ....
      </volume>
      <volume>
   **** <name>test1</name>  ****
        <id>dfa070f4-b12f-4166-8d68-041b73127abc</id>
        <status>0</status>
        <statusStr>Created</statusStr>
        ....
      </volume>
      <count>3</count>
    </volumes>
  </volInfo>
</cliOutput>
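The duplicated names come from how the volume name was extracted from this CLI output: the XPath lookup was evaluated from the document root instead of relative to each <volume> node, so the first volume's name matched every time (the upstream fix is commit 5df6992e below). libvirt does this in C with libxml2; the following is only a minimal Python sketch of the same two lookup patterns, using a trimmed copy of the XML above:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the `gluster ... volume info all` output shown above.
CLI_XML = """
<cliOutput>
  <volInfo>
    <volumes>
      <volume><name>aaa</name></volume>
      <volume><name>test</name></volume>
      <volume><name>test1</name></volume>
    </volumes>
  </volInfo>
</cliOutput>
"""

doc = ET.fromstring(CLI_XML)
volumes = doc.findall('.//volume')

# Buggy pattern: resolve the name from the document root for every volume,
# so the first <name> in the document wins each time.
buggy = [doc.find('.//volume/name').text for _ in volumes]

# Fixed pattern: resolve the name relative to each <volume> element.
fixed = [v.find('name').text for v in volumes]

print(buggy)  # ['aaa', 'aaa', 'aaa'] -- every source gets the same name
print(fixed)  # ['aaa', 'test', 'test1']
```

This reproduces exactly the symptom above: with the root-relative lookup, each discovered source is reported with the first volume's name.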


Actual results:
all the discovered pool sources report the same volume name

Expected results:
each source lists its own volume name

Additional info:

Comment 2 Peter Krempa 2017-04-04 14:40:37 UTC
In addition to the problem in this bug report, the output for the native gluster pool was wrong: it did not contain the <name> element carrying the volume name; instead the volume name was placed in <dir>. Such XML would not work for a native gluster pool.

The following upstream commits fix both issues:

commit dff04e0af045f73ea6a2c89ae50239acfefdcb5d
Author: Peter Krempa <pkrempa>
Date:   Tue Apr 4 14:04:39 2017 +0200

    storage: gluster: Use volume name as "<name>" field in the XML
    
    For native gluster pools the <dir> field denotes a directory inside the
    pool. For the actual pool name the <name> field has to be used.

commit 5df6992e1c27119c9acfbe4fc5154193e4de7093
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 30 16:14:13 2017 +0200

    storage: Fix XPath for looking up gluster volume name
    
    Use the relative lookup specifier rather than the global one. Otherwise
    only the first name would be looked up. Add a test case to cover the
    scenario.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436574

commit 69cc49867665c48cb22093e1be4fd23c946e1d3d
Author: Peter Krempa <pkrempa>
Date:   Tue Mar 28 17:23:37 2017 +0200

    test: Introduce testing of virStorageUtilGlusterExtractPoolSources
    
    Add a test program called virstorageutiltest and test the gluster pool
    detection code.

commit e238bfa6d409c0c5ee0a5eb3e886ca7420da3786
Author: Peter Krempa <pkrempa>
Date:   Tue Apr 4 13:39:37 2017 +0200

    storage: util: Split out the gluster volume extraction code into new function
    
    To allow testing of the algorithm, split out the extractor into a
    separate helper.

commit a92160dbd5416b093c0d99991afe300b9b8572c4
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 30 15:08:06 2017 +0200

    storage: util: Pass pool type to virStorageBackendFindGlusterPoolSources
    
    The native gluster pool source list data differs from the data used for
    attaching gluster volumes as netfs pools. Currently the only difference
    was the format. Since native pools don't use it and later there will be
    more differences add a more deterministic way to switch between the
    types instead.

Comment 4 yisun 2017-10-31 07:40:01 UTC


GLUSTER HOST:
There are two volumes on the gluster server, as follows:
# gluster volume list
gluster-vol1
test

TEST HOST:
1. Reproduce the issue:
## rpm -qa | grep libvirt-3
libvirt-3.2.0-14.el7_4.3.x86_64

## virsh find-storage-pool-sources-as  --type netfs 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
    <format type='glusterfs'/>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
    <format type='glusterfs'/>
  </source>
</sources>
<=== problem reproduced: both volumes return the same XML, with every path pointing to "gluster-vol1"

## virsh find-storage-pool-sources-as  --type gluster 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
  </source>
</sources>
<=== problem reproduced: both volumes return the same XML, with every path pointing to "gluster-vol1"

2. Test with the latest libvirt:
## rpm -qa | grep libvirt-3
libvirt-3.8.0-1.el7.x86_64

## virsh find-storage-pool-sources-as  --type netfs 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
    <format type='glusterfs'/>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='test'/>
    <format type='glusterfs'/>
  </source>
</sources>
<==== for the netfs pool, both gluster volumes are listed, and dir = volume name

## virsh find-storage-pool-sources-as  --type gluster 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='/'/>
    <name>test</name>
  </source>
</sources>
<==== for the native gluster pool, both volumes are listed, and name = volume name
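With the fixed output, a discovered source maps directly onto a native gluster pool definition: <name> carries the volume and <dir> the path inside it. As an illustration only (the pool name "gluster-pool" is arbitrary, not from this report), a pool XML built from the first source above would look roughly like:

```xml
<pool type='gluster'>
  <name>gluster-pool</name>
  <source>
    <host name='10.66.5.64'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
</pool>
```

Before the fix, the volume name ended up in <dir> and <name> was missing, so XML assembled this way could not define a working native gluster pool.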

Comment 8 errata-xmlrpc 2018-04-10 10:42:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704