Bug 1436574 - the gluster pool source list is not right when the glusterfs server has more than one volume
Summary: the gluster pool source list is not right when the glusterfs server has more than one volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-28 08:37 UTC by lijuan men
Modified: 2018-04-10 10:43 UTC (History)
5 users

Fixed In Version: libvirt-3.7.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 10:42:33 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0704 0 None None None 2018-04-10 10:43:05 UTC

Description lijuan men 2017-03-28 08:37:12 UTC
Description of problem:
the gluster pool source list is not right when the glusterfs server has more than one volume


Version-Release number of selected component (if applicable):

the glusterfs server:
rhel7.3 release os + glusterfs-server-3.8.4-18.el7rhgs.x86_64

test host:
libvirt-3.1.0-2.el7.x86_64
qemu-kvm-rhev-2.8.0-6.el7.x86_64
glusterfs-cli-3.8.4-18.el7rhgs.x86_64


How reproducible:
100%

Steps to Reproduce:

1.create the first volume in the glusterfs server
# gluster volume create test 10.66.70.107:/opt/br1 force

2.run find-storage-pool-sources-as command in the test host
[root@localhost ~]# virsh find-storage-pool-sources-as --type gluster 10.66.70.107
<sources>
  <source>
    <host name='10.66.70.107'/>
    <dir path='test'/>
  </source>
</sources>

3.create the second volume in the glusterfs server
# gluster volume create test1 10.66.70.107:/opt/br2 force

4.run find-storage-pool-sources-as command in the test host
[root@localhost ~]# virsh find-storage-pool-sources-as --type gluster 10.66.70.107
<sources>
  <source>
    <host name='10.66.70.107'/>
  ***  <dir path='test'/> ***
  </source>
  <source>
    <host name='10.66.70.107'/>
   *** <dir path='test'/> ***
  </source>
</sources>

5.create the third volume in the glusterfs server
# gluster volume create aaa 10.66.70.107:/opt/br3 force

6.run find-storage-pool-sources-as command in the test host
[root@localhost ~]# virsh find-storage-pool-sources-as --type gluster 10.66.70.107
<sources>
  <source>
    <host name='10.66.70.107'/>
 ***   <dir path='aaa'/>***
  </source>
  <source>
    <host name='10.66.70.107'/>
 ***   <dir path='aaa'/>***
  </source>
  <source>
    <host name='10.66.70.107'/>
  ***  <dir path='aaa'/>***
  </source>
</sources>

NOTE:
the following command is normal:
[root@localhost ~]# gluster --xml --log-file=/dev/null volume info all --remote-host=10.66.70.107
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
   ***  <name>aaa</name>  ***
        <id>d0b219d4-4169-4907-8994-d2e2434854ed</id>
        <status>0</status>
        <statusStr>Created</statusStr>
        <snapshotCount>0</snapshotCount>
        ....
      </volume>
      <volume>
   **** <name>test</name> ****
        <id>32826068-2320-4b62-a825-2554edb7f020</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <snapshotCount>0</snapshotCount>
        ....
      </volume>
      <volume>
   **** <name>test1</name>  ****
        <id>dfa070f4-b12f-4166-8d68-041b73127abc</id>
        <status>0</status>
        <statusStr>Created</statusStr>
        ....
      </volume>
      <count>3</count>
    </volumes>
  </volInfo>
</cliOutput>


Actual results:
all the listed pool sources report the same volume name

Expected results:
each pool source should list its own volume name

Additional info:

Comment 2 Peter Krempa 2017-04-04 14:40:37 UTC
In addition to the problem in this bug report, the output was wrong for the native gluster pool: it did not contain the <name> element with the volume name; instead the volume name was placed in <dir>. Such XML would not work for the native gluster pool.

The following upstream commits fix both issues:

commit dff04e0af045f73ea6a2c89ae50239acfefdcb5d
Author: Peter Krempa <pkrempa>
Date:   Tue Apr 4 14:04:39 2017 +0200

    storage: gluster: Use volume name as "<name>" field in the XML
    
    For native gluster pools the <dir> field denotes a directory inside the
    pool. For the actual pool name the <name> field has to be used.

commit 5df6992e1c27119c9acfbe4fc5154193e4de7093
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 30 16:14:13 2017 +0200

    storage: Fix XPath for looking up gluster volume name
    
    Use the relative lookup specifier rather than the global one. Otherwise
    only the first name would be looked up. Add a test case to cover the
    scenario.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436574

commit 69cc49867665c48cb22093e1be4fd23c946e1d3d
Author: Peter Krempa <pkrempa>
Date:   Tue Mar 28 17:23:37 2017 +0200

    test: Introduce testing of virStorageUtilGlusterExtractPoolSources
    
    Add a test program called virstorageutiltest and test the gluster pool
    detection code.

commit e238bfa6d409c0c5ee0a5eb3e886ca7420da3786
Author: Peter Krempa <pkrempa>
Date:   Tue Apr 4 13:39:37 2017 +0200

    storage: util: Split out the gluster volume extraction code into new function
    
    To allow testing of the algorithm, split out the extractor into a
    separate helper.

commit a92160dbd5416b093c0d99991afe300b9b8572c4
Author: Peter Krempa <pkrempa>
Date:   Thu Mar 30 15:08:06 2017 +0200

    storage: util: Pass pool type to virStorageBackendFindGlusterPoolSources
    
    The native gluster pool source list data differs from the data used for
    attaching gluster volumes as netfs pools. Currently the only difference
    was the format. Since native pools don't use it and later there will be
    more differences add a more deterministic way to switch between the
    types instead.
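The relative-vs-global XPath bug fixed by the second commit above can be illustrated with a small sketch. Python's standard-library ElementTree is used here purely for illustration (libvirt itself does the lookup in C via libxml2 XPath); the XML is a trimmed-down version of the `gluster volume info all` output shown in the report:

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the gluster CLI <cliOutput> XML from the report
CLI_XML = """<cliOutput><volInfo><volumes>
  <volume><name>aaa</name></volume>
  <volume><name>test</name></volume>
  <volume><name>test1</name></volume>
</volumes></volInfo></cliOutput>"""

root = ET.fromstring(CLI_XML)
volumes = root.findall('.//volume')

# Buggy pattern: a document-global lookup always matches the FIRST <name>,
# so every source ends up carrying the same volume name.
buggy = [root.find('.//volume/name').text for _ in volumes]

# Fixed pattern: resolve <name> relative to the current <volume> node.
fixed = [vol.find('name').text for vol in volumes]

print(buggy)  # ['aaa', 'aaa', 'aaa']
print(fixed)  # ['aaa', 'test', 'test1']
```

This reproduces the symptom exactly: with the global lookup, three distinct volumes all report "aaa", matching the output in step 6 of the reproducer.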

Comment 4 yisun 2017-10-31 07:40:01 UTC


GLUSTER HOST:
There are two volumes on the gluster server, as follows:
# gluster volume list
gluster-vol1
test

TEST HOST:
1. Reproduce the issue:
## rpm -qa | grep libvirt-3
libvirt-3.2.0-14.el7_4.3.x86_64

## virsh find-storage-pool-sources-as  --type netfs 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
    <format type='glusterfs'/>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
    <format type='glusterfs'/>
  </source>
</sources>
<=== problem reproduced: both volumes return the same XML, with every path pointing to "gluster-vol1"

root@localhost /etc/yum.repos.d  ## virsh find-storage-pool-sources-as  --type gluster 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
  </source>
</sources>
<=== problem reproduced: both volumes return the same XML, with every path pointing to "gluster-vol1"

2. test with latest libvirt
## rpm -qa | grep libvirt-3
libvirt-3.8.0-1.el7.x86_64

## virsh find-storage-pool-sources-as  --type netfs 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='gluster-vol1'/>
    <format type='glusterfs'/>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='test'/>
    <format type='glusterfs'/>
  </source>
</sources>
<==== for the netfs pool, both gluster volumes are listed, and dir = volume name

root@localhost /etc/yum.repos.d  ## virsh find-storage-pool-sources-as  --type gluster 10.66.5.64
<sources>
  <source>
    <host name='10.66.5.64'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
  <source>
    <host name='10.66.5.64'/>
    <dir path='/'/>
    <name>test</name>
  </source>
</sources>
<==== for the native gluster pool, both volumes are listed, and name = volume name
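The two fixed output shapes verified above (netfs vs. native gluster) can be summarized in a small sketch. `extract_pool_sources` is a hypothetical helper, not libvirt's actual C function (`virStorageUtilGlusterExtractPoolSources`); the field layout is taken directly from the verified output in this comment:

```python
import xml.etree.ElementTree as ET

def extract_pool_sources(cli_xml, host, native=False):
    """Hypothetical sketch of per-pool-type source extraction.

    netfs sources:         <dir path='VOLNAME'/> + <format type='glusterfs'/>
    native gluster sources: <dir path='/'/> + <name>VOLNAME</name>
    """
    sources = []
    for vol in ET.fromstring(cli_xml).findall('.//volume'):
        name = vol.find('name').text  # relative lookup: one name per volume
        if native:
            sources.append({'host': host, 'dir': '/', 'name': name})
        else:
            sources.append({'host': host, 'dir': name, 'format': 'glusterfs'})
    return sources

# Volumes from the verification scenario above
cli_xml = """<cliOutput><volInfo><volumes>
  <volume><name>gluster-vol1</name></volume>
  <volume><name>test</name></volume>
</volumes></volInfo></cliOutput>"""

netfs = extract_pool_sources(cli_xml, '10.66.5.64')
native = extract_pool_sources(cli_xml, '10.66.5.64', native=True)
```

With these inputs, `netfs` yields one source per volume with `dir` set to the volume name, while `native` yields `dir='/'` plus a separate `name` entry, mirroring the two virsh outputs above.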

Comment 8 errata-xmlrpc 2018-04-10 10:42:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704

