Bug 1236438 - Some source conflict issues for gluster pool
Summary: Some source conflict issues for gluster pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-06-29 02:31 UTC by Yang Yang
Modified: 2015-11-19 06:41 UTC
CC List: 7 users

Fixed In Version: libvirt-1.2.17-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 06:41:56 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID: Red Hat Product Errata RHBA-2015:2202
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2015-11-19 08:17:58 UTC

Description Yang Yang 2015-06-29 02:31:23 UTC
Description of problem:
In the case of gluster pools, the following function reports a source conflict whenever the source dir and source host of any 2 pools are the same. It does not take the source name into account, so even when the source names of the 2 pools are different they are treated as conflicting. In fact, they should not conflict.

virStoragePoolSourceFindDuplicate(virConnectPtr conn,
                                  virStoragePoolObjListPtr pools,
                                  virStoragePoolDefPtr def)
......
case VIR_STORAGE_POOL_GLUSTER:
            /* only source.dir and the source host are compared here;
             * source.name (the gluster volume name) is never checked */
            if (STREQ(pool->def->source.dir, def->source.dir) &&
                virStoragePoolSourceMatchSingleHost(&pool->def->source,
                                                    &def->source))
                matchpool = pool;
            break;


Version-Release number of selected component (if applicable):
libvirt-1.2.16-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. prepare 2 gluster volumes on a host
e.g. I have 2 volumes, gluster-vol1 and gluster-vol2 on my gluster server

# gluster volume info
 
Volume Name: gluster-vol1
Type: Distribute
Volume ID: 28c86ac7-eab3-436b-930f-8bfaa8d6559f
Status: Started
Snap Volume: no
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.66.4.164:/br1
Options Reconfigured:
performance.readdir-ahead: on
server.allow-insecure: on
nfs.disable: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: gluster-vol2
Type: Distribute
Volume ID: 7b96a8b7-56d4-4e94-bf4e-4fab7db56988
Status: Started
Snap Volume: no
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.66.4.164:/br2
Options Reconfigured:
server.allow-insecure: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

2. define/start a gluster pool using gluster-vol1 as source
# virsh pool-define gluster.xml
# virsh pool-start gluster
# virsh pool-dumpxml gluster
<pool type='gluster'>
  <name>gluster</name>
  <uuid>6a65fea0-6546-41be-98f0-1c4180eadca9</uuid>
  <capacity unit='bytes'>75125227520</capacity>
  <allocation unit='bytes'>70293786624</allocation>
  <available unit='bytes'>4831440896</available>
  <source>
    <host name='10.66.4.164'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
</pool>

3. define one more gluster pool using gluster-vol2 as source
# cat gluster-pool.xml 
<pool type="gluster">
        <name>gluster1</name>
        <source>
          <name>gluster-vol2</name>  ----> source name is different from the 1st gluster pool, but host and dir are the same as the 1st gluster pool
          <host name='10.66.4.164'/>
          <dir path='/'/>
        </source>
      </pool>
# virsh pool-define gluster-pool.xml 
error: Failed to define pool from gluster-pool.xml
error: operation failed: Storage source conflict with pool: 'gluster'

Actual results:
In step 3, defining the pool fails with a source conflict error

Expected results:
In step 3, the sources should not conflict and the pool should be defined successfully

Additional info:
Another concern about gluster pools: given a gluster volume consisting of 2 or more bricks, define the 1st gluster pool with host-0 and then define the 2nd gluster pool with host-1, so that both pools have the same source name and dir path (IOW, both pools use the same gluster volume as source). The function cannot detect this conflict. Maybe it's difficult to check.

Repro steps
1. prepare a gluster volume consisting of 2 bricks

# gluster volume info gluster-vol2
 
Volume Name: gluster-vol2
Type: Distribute
Volume ID: 7b96a8b7-56d4-4e94-bf4e-4fab7db56988
Status: Started
Snap Volume: no
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.66.4.164:/br2
Brick2: 10.66.5.63:/br2
Options Reconfigured:
server.allow-insecure: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

2. define 1st gluster pool using Brick1 as source
# virsh pool-define gluster.xml
Pool gluster defined from gluster.xml
# virsh pool-dumpxml gluster
<pool type='gluster'>
  <name>gluster</name>
  <uuid>6a65fea0-6546-41be-98f0-1c4180eadca9</uuid>
  <capacity unit='bytes'>158970347520</capacity>
  <allocation unit='bytes'>107715432448</allocation>
  <available unit='bytes'>51254915072</available>
  <source>
    <host name='10.66.4.164'/>
    <dir path='/'/>
    <name>gluster-vol2</name>
  </source>
</pool>

3. define 2nd gluster pool using Brick2 as source
# virsh pool-define gluster-pool.xml 
Pool gluster1 defined from gluster-pool.xml
# virsh pool-dumpxml gluster1
<pool type='gluster'>
  <name>gluster1</name>
  <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
  <capacity unit='bytes'>158970347520</capacity>
  <allocation unit='bytes'>107715416064</allocation>
  <available unit='bytes'>51254931456</available>
  <source>
    <host name='10.66.5.63'/>
    <dir path='/'/>
    <name>gluster-vol2</name>
  </source>
</pool>

Comment 2 Peter Krempa 2015-06-30 12:13:57 UTC
Fixed upstream:

commit ea1c7b652b2b0c6248d03d4b2ed5b3e8afbd8c1b
Author: Peter Krempa <pkrempa>
Date:   Tue Jun 30 10:14:17 2015 +0200

    conf: storage: Fix duplicate check for gluster pools
    
    The pool name has to be the same too to warrant rejecting a pool
    definition as duplicate. This regression was introduced in commit
    2184ade3a0546b915252cb3b6a5dc88e9a8d2ccf.

v1.2.17-rc1-9-gea1c7b6
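
For reference, the fix described above amounts to also comparing the gluster volume name (source.name) in the duplicate check. A minimal sketch of the corrected gluster case, modeled on the function quoted in comment #0 (an illustration of the commit message, not the verbatim upstream diff):

        case VIR_STORAGE_POOL_GLUSTER:
            /* reject as a duplicate only when the volume name matches too */
            if (STREQ(pool->def->source.name, def->source.name) &&
                STREQ(pool->def->source.dir, def->source.dir) &&
                virStoragePoolSourceMatchSingleHost(&pool->def->source,
                                                    &def->source))
                matchpool = pool;
            break;

With a name comparison in place, the two pools from the reproducer (same host, same dir path '/', different volume names) are no longer reported as conflicting.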

Comment 4 Yang Yang 2015-07-07 07:40:27 UTC
Peter,
As for the issue mentioned in the additional info of comment #0, does it deserve a fix?

Quoting comment #0:
Another concern about gluster pools: given a gluster volume consisting of 2 or more bricks, define the 1st gluster pool with host-0 and then define the 2nd gluster pool with host-1, so that both pools have the same source name and dir path (IOW, both pools use the same gluster volume as source). The function cannot detect this conflict.

Repro steps
1. prepare a gluster volume consisting of 2 bricks

# gluster volume info gluster-vol2
 
Volume Name: gluster-vol2
Type: Distribute
Volume ID: 7b96a8b7-56d4-4e94-bf4e-4fab7db56988
Status: Started
Snap Volume: no
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.66.4.164:/br2
Brick2: 10.66.5.63:/br2
Options Reconfigured:
server.allow-insecure: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

2. define 1st gluster pool using Brick1 as source
# virsh pool-define gluster.xml
Pool gluster defined from gluster.xml
# virsh pool-dumpxml gluster
<pool type='gluster'>
  <name>gluster</name>
  <uuid>6a65fea0-6546-41be-98f0-1c4180eadca9</uuid>
  <capacity unit='bytes'>158970347520</capacity>
  <allocation unit='bytes'>107715432448</allocation>
  <available unit='bytes'>51254915072</available>
  <source>
    <host name='10.66.4.164'/>
    <dir path='/'/>
    <name>gluster-vol2</name>
  </source>
</pool>

3. define 2nd gluster pool using Brick2 as source
# virsh pool-define gluster-pool.xml 
Pool gluster1 defined from gluster-pool.xml
# virsh pool-dumpxml gluster1
<pool type='gluster'>
  <name>gluster1</name>
  <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
  <capacity unit='bytes'>158970347520</capacity>
  <allocation unit='bytes'>107715416064</allocation>
  <available unit='bytes'>51254931456</available>
  <source>
    <host name='10.66.5.63'/>
    <dir path='/'/>
    <name>gluster-vol2</name>
  </source>
</pool>

Regards
Yang

Comment 5 Peter Krempa 2015-07-07 11:52:27 UTC
Well,
the problem of two hosts serving the same volume is known (but I don't think there's a bugzilla for that). Basically, all the checks that were added to fix the duplicate-source problem are mostly cosmetic. The proper way to do the check is to compare the volume ID field.

This BZ tracks a regression in the previous behavior, where it would reject distinct pools as identical, so if you want to track the described problem, please file a new bz.
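
As an illustration of the volume-ID approach (purely a sketch: virStoragePoolSource carries no volume-ID field today, so the UUID would have to be obtained separately, e.g. from the "Volume ID" line printed by 'gluster volume info', and the helper below is hypothetical):

#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: two gluster pool definitions back the same gluster
 * volume (and therefore conflict) exactly when the volume UUIDs reported by
 * the server match (e.g. "7b96a8b7-56d4-4e94-bf4e-4fab7db56988"), regardless
 * of which brick host each definition points at. */
static bool
glusterPoolsBackSameVolume(const char *volume_id_a,
                           const char *volume_id_b)
{
    return volume_id_a && volume_id_b &&
           strcmp(volume_id_a, volume_id_b) == 0;
}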

Comment 6 Yang Yang 2015-07-08 03:19:43 UTC
Thanks Peter. Opened a new Bug 1240877

Verified it with libvirt-1.2.17-1.el7.x86_64

Steps
1. prepare 2 gluster volumes on one host
# gluster volume info
 
Volume Name: gluster-vol1
Type: Distribute
Volume ID: 28c86ac7-eab3-436b-930f-8bfaa8d6559f
Status: Started
Snap Volume: no
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.66.4.164:/br1
Options Reconfigured:
performance.readdir-ahead: on
server.allow-insecure: on
nfs.disable: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
 
Volume Name: gluster-vol2
Type: Distribute
Volume ID: 7b96a8b7-56d4-4e94-bf4e-4fab7db56988
Status: Started
Snap Volume: no
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.66.4.164:/br2
Brick2: 10.66.5.63:/br2
Options Reconfigured:
server.allow-insecure: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

2. define 1st gluster pool using 'gluster-vol1' as source
# cat /tmp/gluster.xml 
<pool type='gluster'>
  <name>gluster</name>
  <uuid>6a65fea0-6546-41be-98f0-1c4180eadca9</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='10.66.4.164'/>
    <dir path='/'/>
    <name>gluster-vol1</name>
  </source>
</pool>

# virsh pool-define /tmp/gluster.xml 
Pool gluster defined from /tmp/gluster.xml

# virsh pool-start gluster
Pool gluster started

3. define 2nd gluster pool using 'gluster-vol2' as source
# cat /tmp/gluster1.xml 
<pool type='gluster'>
  <name>gluster1</name>
  <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='10.66.4.164'/>
    <dir path='/'/>
    <name>gluster-vol2</name>      ----> source name is different from 1st pool
  </source>
</pool>

# virsh pool-define /tmp/gluster1.xml 
Pool gluster1 defined from /tmp/gluster1.xml
# virsh pool-start gluster1
Pool gluster1 started

4. destroy the 2nd gluster pool and edit its XML as follows
# virsh pool-dumpxml gluster1
<pool type='gluster'>
  <name>gluster1</name>
  <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
  <capacity unit='bytes'>75125227520</capacity>
  <allocation unit='bytes'>72399437824</allocation>
  <available unit='bytes'>2725789696</available>
  <source>
    <host name='10.66.4.164'/>
    <dir path='/nfs'/>       ---> dir path is different from 1st pool
    <name>gluster-vol1</name>
  </source>
</pool>

5. start 2nd pool
# virsh pool-start gluster1
Pool gluster1 started

Comment 9 errata-xmlrpc 2015-11-19 06:41:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

