Bug 1236438 - Some source conflict issues for gluster pool
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Yang Yang <yanyang> |
|---|---|---|---|
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Priority: | medium |
| Version: | 7.2 | CC: | dyuan, jferlan, mzhan, pkrempa, pzhang, rbalakri, xuzhang |
| Target Milestone: | rc | Keywords: | Regression |
| Hardware: | x86_64 | OS: | Linux |
| Fixed In Version: | libvirt-1.2.17-1.el7 | Doc Type: | Bug Fix |
| Last Closed: | 2015-11-19 06:41:56 UTC | Type: | Bug |
Description (Yang Yang, 2015-06-29 02:31:23 UTC)
---

Fixed upstream:

    commit ea1c7b652b2b0c6248d03d4b2ed5b3e8afbd8c1b
    Author: Peter Krempa <pkrempa>
    Date:   Tue Jun 30 10:14:17 2015 +0200

        conf: storage: Fix duplicate check for gluster pools

        The pool name has to be the same too to warrant rejecting a pool
        definition as duplicate. This regression was introduced in commit
        2184ade3a0546b915252cb3b6a5dc88e9a8d2ccf.

v1.2.17-rc1-9-gea1c7b6

---

Peter,

As for the issue described in the additional info of comment #0, does it deserve a fix? To quote:

"Another concern about gluster pool is that, given a gluster volume consisting of two or more bricks: define the 1st gluster pool with host 0, then define the 2nd gluster pool with host 1. Both pools have the same source name and dir path (IOW, both pools use the same gluster volume as source), yet the duplicate check cannot detect this conflict."

Repro steps:

1. Prepare a gluster volume consisting of 2 bricks

    # gluster volume info gluster-vol2
    Volume Name: gluster-vol2
    Type: Distribute
    Volume ID: 7b96a8b7-56d4-4e94-bf4e-4fab7db56988
    Status: Started
    Snap Volume: no
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: 10.66.4.164:/br2
    Brick2: 10.66.5.63:/br2
    Options Reconfigured:
    server.allow-insecure: on
    performance.readdir-ahead: on
    auto-delete: disable
    snap-max-soft-limit: 90
    snap-max-hard-limit: 256

2. Define the 1st gluster pool using Brick1 as source

    # virsh pool-define gluster.xml
    Pool gluster defined from gluster.xml

    # virsh pool-dumpxml gluster
    <pool type='gluster'>
      <name>gluster</name>
      <uuid>6a65fea0-6546-41be-98f0-1c4180eadca9</uuid>
      <capacity unit='bytes'>158970347520</capacity>
      <allocation unit='bytes'>107715432448</allocation>
      <available unit='bytes'>51254915072</available>
      <source>
        <host name='10.66.4.164'/>
        <dir path='/'/>
        <name>gluster-vol2</name>
      </source>
    </pool>

3. Define the 2nd gluster pool using Brick2 as source

    # virsh pool-define gluster-pool.xml
    Pool gluster1 defined from gluster-pool.xml

    # virsh pool-dumpxml gluster1
    <pool type='gluster'>
      <name>gluster1</name>
      <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
      <capacity unit='bytes'>158970347520</capacity>
      <allocation unit='bytes'>107715416064</allocation>
      <available unit='bytes'>51254931456</available>
      <source>
        <host name='10.66.5.63'/>
        <dir path='/'/>
        <name>gluster-vol2</name>
      </source>
    </pool>

Regards,
Yang

---

Well, the problem of two hosts serving the same volume is known (but I don't think there's a bugzilla for that). Basically, all the checks that were added to fix the duplicate-definition problem are mostly cosmetic. The proper way to do the check is to compare the volume ID field.

This BZ tracks a regression in the previous behavior, where it would reject distinct pools as identical, so if you want to track the described problem, please file a new bz.

---

Thanks, Peter. Opened a new Bug 1240877.
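---

To make the fixed behavior concrete: per the upstream commit above, a gluster pool definition is only rejected as a duplicate when the source host, the dir path, and the source name all match an existing pool. Below is a minimal C sketch of that comparison; the struct and function names are hypothetical illustrations, not libvirt's actual API.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical, simplified view of a gluster pool <source>:
 * a host, a dir path, and the gluster volume name. */
struct gluster_source {
    const char *host; /* e.g. "10.66.4.164" */
    const char *dir;  /* e.g. "/" */
    const char *name; /* e.g. "gluster-vol2" */
};

/* Regressed check (pre-fix): compared only host and dir path, so two
 * pools backed by *different* volumes on the same host were wrongly
 * rejected as duplicates. */
static bool source_conflicts_regressed(const struct gluster_source *a,
                                       const struct gluster_source *b)
{
    return strcmp(a->host, b->host) == 0 &&
           strcmp(a->dir, b->dir) == 0;
}

/* Fixed check: the source name has to match too before a new pool
 * definition is rejected as a duplicate. */
static bool source_conflicts_fixed(const struct gluster_source *a,
                                   const struct gluster_source *b)
{
    return source_conflicts_regressed(a, b) &&
           strcmp(a->name, b->name) == 0;
}
```

Under the fixed check, the two pools in the verification below (same host and dir path '/', but source names gluster-vol1 vs. gluster-vol2) no longer conflict, while truly identical sources are still rejected.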
---

Verified with libvirt-1.2.17-1.el7.x86_64.

Steps:

1. Prepare 2 gluster volumes on one host

    # gluster volume info
    Volume Name: gluster-vol1
    Type: Distribute
    Volume ID: 28c86ac7-eab3-436b-930f-8bfaa8d6559f
    Status: Started
    Snap Volume: no
    Number of Bricks: 1
    Transport-type: tcp
    Bricks:
    Brick1: 10.66.4.164:/br1
    Options Reconfigured:
    performance.readdir-ahead: on
    server.allow-insecure: on
    nfs.disable: on
    auto-delete: disable
    snap-max-soft-limit: 90
    snap-max-hard-limit: 256

    Volume Name: gluster-vol2
    Type: Distribute
    Volume ID: 7b96a8b7-56d4-4e94-bf4e-4fab7db56988
    Status: Started
    Snap Volume: no
    Number of Bricks: 2
    Transport-type: tcp
    Bricks:
    Brick1: 10.66.4.164:/br2
    Brick2: 10.66.5.63:/br2
    Options Reconfigured:
    server.allow-insecure: on
    performance.readdir-ahead: on
    auto-delete: disable
    snap-max-soft-limit: 90
    snap-max-hard-limit: 256

2. Define the 1st gluster pool using 'gluster-vol1' as source

    # cat /tmp/gluster.xml
    <pool type='gluster'>
      <name>gluster</name>
      <uuid>6a65fea0-6546-41be-98f0-1c4180eadca9</uuid>
      <capacity unit='bytes'>0</capacity>
      <allocation unit='bytes'>0</allocation>
      <available unit='bytes'>0</available>
      <source>
        <host name='10.66.4.164'/>
        <dir path='/'/>
        <name>gluster-vol1</name>
      </source>
    </pool>

    # virsh pool-define /tmp/gluster.xml
    Pool gluster defined from /tmp/gluster.xml

    # virsh pool-start gluster
    Pool gluster started

3. Define the 2nd gluster pool using 'gluster-vol2' as source

    # cat /tmp/gluster1.xml
    <pool type='gluster'>
      <name>gluster1</name>
      <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
      <capacity unit='bytes'>0</capacity>
      <allocation unit='bytes'>0</allocation>
      <available unit='bytes'>0</available>
      <source>
        <host name='10.66.4.164'/>
        <dir path='/'/>
        <name>gluster-vol2</name>    ----> source name is different from the 1st pool
      </source>
    </pool>

    # virsh pool-define /tmp/gluster1.xml
    Pool gluster1 defined from /tmp/gluster1.xml

    # virsh pool-start gluster1
    Pool gluster1 started

4. Destroy the 2nd gluster pool and edit it as follows

    # virsh pool-dumpxml gluster1
    <pool type='gluster'>
      <name>gluster1</name>
      <uuid>1fa4a4c7-0828-436a-a7f4-e5655ab01968</uuid>
      <capacity unit='bytes'>75125227520</capacity>
      <allocation unit='bytes'>72399437824</allocation>
      <available unit='bytes'>2725789696</available>
      <source>
        <host name='10.66.4.164'/>
        <dir path='/nfs'/>    ---> dir path is different from the 1st pool
        <name>gluster-vol1</name>
      </source>
    </pool>

5. Start the 2nd pool

    # virsh pool-start gluster1
    Pool gluster1 started

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html
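---

As a closing note on the issue split off into Bug 1240877: the conflict where the same gluster volume is reachable through two different hosts cannot be caught by comparing host, dir path, and source name, since the host fields legitimately differ. A check along the lines Peter suggests would compare the gluster volume ID instead. Here is a hypothetical C sketch, assuming the volume ID has already been obtained (e.g. the "Volume ID" field shown by gluster volume info); none of these names correspond to actual libvirt code.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical: a pool source annotated with the volume ID reported by
 * the gluster server (e.g. "7b96a8b7-56d4-4e94-bf4e-4fab7db56988").
 * The ID is the same regardless of which brick/host the volume is
 * reached through. */
struct gluster_source_with_id {
    const char *host;      /* configured host; may differ per pool */
    const char *volume_id; /* UUID queried from the gluster server */
};

/* Two sources refer to the same storage iff their volume IDs match,
 * even when the configured hosts differ. */
static bool same_gluster_volume(const struct gluster_source_with_id *a,
                                const struct gluster_source_with_id *b)
{
    return a->volume_id && b->volume_id &&
           strcmp(a->volume_id, b->volume_id) == 0;
}
```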