Bug 1113609
| Summary: | create_vol.sh mounts the volume multiple times if it has more than one brick. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rachana Patel <racpatel> |
| Component: | rhs-hadoop-install | Assignee: | Jeff Vance <jvance> |
| Status: | CLOSED ERRATA | QA Contact: | amainkar |
| Severity: | medium | Priority: | unspecified |
| Version: | rhgs-3.0 | CC: | bchilds, dahorak, eboyd, jrumanov, matt, mkudlej, nlevinki |
| Target Milestone: | Release Candidate | Keywords: | Triaged, UpcomingRelease, ZStream |
| Hardware: | x86_64 | OS: | Linux |
| Fixed In Version: | 2.11 | Doc Type: | Bug Fix |
| Last Closed: | 2014-11-24 11:54:47 UTC | Type: | Bug |
| Bug Blocks: | 1159155 | | |
Description (Rachana Patel, 2014-06-26 13:51:52 UTC)
I can't reproduce on 1.24, and I believe this bug was fixed in 1.23.

Reproduced again on 4 machines with 3 bricks on each machine, with 1 volume created from all bricks on all nodes:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
# cat /etc/redhat-storage-release
Red Hat Storage Server 3.0
# rpm -q rhs-hadoop-install
rhs-hadoop-install-1.34-1.el6rhs.noarch
# mount | grep brick
/dev/mapper/vg_brick-lv_brick1 on /mnt/brick1 type xfs (rw)
/dev/mapper/vg_brick-lv_brick2 on /mnt/brick2 type xfs (rw)
/dev/mapper/vg_brick-lv_brick3 on /mnt/brick3 type xfs (rw)
# ./create_vol.sh -y --debug HadoopVol1 /mnt/glusterfs \
NODE1:/mnt/brick1 \
NODE2:/mnt/brick1 \
NODE3:/mnt/brick1 \
NODE4:/mnt/brick1 \
NODE1:/mnt/brick2 \
NODE2:/mnt/brick2 \
NODE3:/mnt/brick2 \
NODE4:/mnt/brick2 \
NODE1:/mnt/brick3 \
NODE2:/mnt/brick3 \
NODE3:/mnt/brick3 \
NODE4:/mnt/brick3
***
*** create_vol: version 1.34
***
DEBUG: all nodes in storage pool: NODE1 NODE2 NODE3 NODE4
DEBUG: nodes *not* spanned by new volume:
*** Volume : HadoopVol1
*** Nodes : NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4
*** Volume mount : /mnt/glusterfs
*** Brick mounts : /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick3, /mnt/brick3, /mnt/brick3, /mnt/brick3
<< truncated >>
DEBUG: gluster vol start: volume start: HadoopVol1: success
"HadoopVol1" started
--- creating glusterfs-fuse mounts for HadoopVol1...
DEBUG: glusterfs mount on NODE1:
DEBUG: glusterfs mount on NODE2:
DEBUG: glusterfs mount on NODE3:
DEBUG: glusterfs mount on NODE4:
DEBUG: glusterfs mount on NODE1:
DEBUG: glusterfs mount on NODE2:
DEBUG: glusterfs mount on NODE3:
DEBUG: glusterfs mount on NODE4:
DEBUG: glusterfs mount on NODE1:
DEBUG: glusterfs mount on NODE2:
DEBUG: glusterfs mount on NODE3:
DEBUG: glusterfs mount on NODE4:
--- created glusterfs-fuse mounts for HadoopVol1
<< truncated >>
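The repeated "glusterfs mount on NODEn" lines above show the script attempting one fuse mount per node:brick argument instead of one per unique node. A minimal sketch of that buggy pattern and a deduplicated alternative, using hypothetical variable names rather than the actual create_vol.sh code:

```bash
#!/bin/bash
# NODE_SPECS stands in for the node:brick-mount arguments passed to
# create_vol.sh (hypothetical names; the real script differs).
NODE_SPECS=(NODE1:/mnt/brick1 NODE2:/mnt/brick1 NODE1:/mnt/brick2 NODE2:/mnt/brick2)

# Buggy pattern: one fuse mount attempt per node:brick argument, so a node
# with N bricks gets the volume mounted N times.
for spec in "${NODE_SPECS[@]}"; do
    node=${spec%%:*}
    echo "DEBUG: glusterfs mount on $node"    # stand-in for the real ssh/mount
done

# Fixed pattern: collect unique nodes first, then mount once per node.
declare -A SEEN
for spec in "${NODE_SPECS[@]}"; do
    node=${spec%%:*}
    [[ -n ${SEEN[$node]} ]] && continue       # skip nodes already mounted
    SEEN[$node]=1
    echo "DEBUG: glusterfs mount on $node"
done
```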
# mount | grep HadoopVol
NODE1:/HadoopVol1 on /mnt/glusterfs/HadoopVol1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
# gluster volume info
Volume Name: HadoopVol1
Type: Distributed-Replicate
Volume ID: 2201f08f-67cc-4c23-9098-5a6405ff879c
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: NODE1:/mnt/brick1/HadoopVol1
Brick2: NODE2:/mnt/brick1/HadoopVol1
Brick3: NODE3:/mnt/brick1/HadoopVol1
Brick4: NODE4:/mnt/brick1/HadoopVol1
Brick5: NODE1:/mnt/brick2/HadoopVol1
Brick6: NODE2:/mnt/brick2/HadoopVol1
Brick7: NODE3:/mnt/brick2/HadoopVol1
Brick8: NODE4:/mnt/brick2/HadoopVol1
Brick9: NODE1:/mnt/brick3/HadoopVol1
Brick10: NODE2:/mnt/brick3/HadoopVol1
Brick11: NODE3:/mnt/brick3/HadoopVol1
Brick12: NODE4:/mnt/brick3/HadoopVol1
Options Reconfigured:
performance.stat-prefetch: off
performance.quick-read: off
cluster.eager-lock: on
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
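For context, with "replica 2" gluster groups consecutive bricks in the create command into replica sets, so the brick order determines which servers replicate each other. A reconstruction of the create command implied by the brick list above (assumed, not copied from the log):

```bash
# With "replica 2", gluster pairs consecutive bricks into replica sets:
# bricks 1+2 form set 1, bricks 3+4 form set 2, and so on. Each pair below
# spans two different nodes, giving the "6 x 2 = 12" layout shown in the
# volume info.
gluster volume create HadoopVol1 replica 2 \
    NODE1:/mnt/brick1/HadoopVol1 NODE2:/mnt/brick1/HadoopVol1 \
    NODE3:/mnt/brick1/HadoopVol1 NODE4:/mnt/brick1/HadoopVol1 \
    NODE1:/mnt/brick2/HadoopVol1 NODE2:/mnt/brick2/HadoopVol1 \
    NODE3:/mnt/brick2/HadoopVol1 NODE4:/mnt/brick2/HadoopVol1 \
    NODE1:/mnt/brick3/HadoopVol1 NODE2:/mnt/brick3/HadoopVol1 \
    NODE3:/mnt/brick3/HadoopVol1 NODE4:/mnt/brick3/HadoopVol1
```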
Whole log from create_vol.sh attached in next comment.
>> ASSIGNED
Created attachment 922874: Log from the create_vol.sh command.
Version 2.10 now has an associative array for the brick mounts, and check_node accepts a list of brick mounts for the given node. Fixed in version 2.10.

I have tested rhs-hadoop-install-2.10 and found that it does not work with a "2 nodes, 2 bricks per node" configuration. The create_vol.sh script [1] switches the order of the bricks in the "gluster volume create" command [2] that the script runs internally to create the glusterfs volume. Because of this reordering it cannot create a distributed-replicated volume: the two bricks chosen to replicate each other end up on the same machine (see the commands below).

[1] ./create_vol.sh hadoopvol /mnt/glusterfs h21-node1:/mnt/brick1/bricksr1 h21-node2:/mnt/brick1/bricksr2 h21-node1:/mnt/brick2/bricksr1 h21-node2:/mnt/brick2/bricksr2

[2] gluster volume create hadoopvol replica 2 h21-node1:/mnt/brick1/bricksr1/hadoopvol h21-node1:/mnt/brick2/bricksr1/hadoopvol h21-node2:/mnt/brick1/bricksr2/hadoopvol h21-node2:/mnt/brick2/bricksr2/hadoopvol

I have spoken with Jeff and we agreed that this is a bug caused by the fix for this BZ, so I am changing the state back to Assigned.

Fixed again in version 2.11: the code where the list of bricks is created was modified so that the original brick order is maintained. Fixed in 2.11-1. (A sketch of this order-preserving approach follows the verification log below.)

Tested and verified on a cluster with 2 bricks per node (both bricks associated with one gluster volume).
# rpm -q rhs-hadoop-install
rhs-hadoop-install-2.29-1.el6rhs.noarch
>> VERIFIED
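For illustration, a minimal sketch of the approach described above, under stated assumptions: brick mounts are grouped per node for the check_node-style checks (the 2.10 associative array), while the user-supplied brick order is kept intact for the volume-create command (the 2.11 fix). All names here are hypothetical, not taken from create_vol.sh:

```bash
#!/bin/bash
# Hypothetical reconstruction of the fix, not the actual create_vol.sh code.
VOLNAME=hadoopvol
BRICK_SPECS=("$@")        # node:/brick-mnt arguments in user-supplied order

declare -A NODE_BRICKS    # node -> space-separated list of its brick mounts
BRICK_LIST=()             # bricks for "gluster volume create", original order

for spec in "${BRICK_SPECS[@]}"; do
    node=${spec%%:*}
    mnt=${spec#*:}
    NODE_BRICKS[$node]+="$mnt "             # grouped per node for the checks
    BRICK_LIST+=("$node:$mnt/$VOLNAME")     # order preserved for replica pairing
done

# Per-node checks (and a single fuse mount) run once per unique node:
for node in "${!NODE_BRICKS[@]}"; do
    echo "checking $node, bricks: ${NODE_BRICKS[$node]}"
done

# The create command keeps the original brick order so that, with replica 2,
# consecutive bricks land on different nodes:
echo gluster volume create "$VOLNAME" replica 2 "${BRICK_LIST[@]}"
```

Run against the arguments from [1] above, the echoed create command keeps each consecutive brick pair spanning h21-node1 and h21-node2, instead of the same-node pairing shown in [2].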
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2014-1275.html