Bug 1113609 - create_vol.sh mounts the volume multiple times on a node that has more than one brick.
Summary: create_vol.sh mounts the volume multiple times on a node that has more than one brick.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhs-hadoop-install
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: Release Candidate
Assignee: Jeff Vance
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks: 1159155
 
Reported: 2014-06-26 13:51 UTC by Rachana Patel
Modified: 2015-04-20 11:57 UTC
7 users

Fixed In Version: 2.11
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-24 11:54:47 UTC
Embargoed:


Attachments
Log from create_vol.sh command. (11.93 KB, text/x-log)
2014-07-31 10:44 UTC, Daniel Horák


Links
Red Hat Product Errata RHEA-2014:1275 (normal, SHIPPED_LIVE): Red Hat Storage Server 3 Hadoop plug-in enhancement update (last updated 2014-11-24 16:53:36 UTC)

Description Rachana Patel 2014-06-26 13:51:52 UTC
Description of problem:
=======================
After the volume is created it should be mounted on each server at <mnt point>/volname. However, if a server/node has more than one brick, create_vol.sh tries to mount the same volume on the same mountpoint multiple times.

e.g.
/usr/share/rhs-hadoop-install/create_vol.sh vol /mnt/gfs --debug rhs-gp-srv7.lab.eng.blr.redhat.com:/bk1 rhs-gp-srv4.lab.eng.blr.redhat.com:/bk1  rhs-gp-srv7.lab.eng.blr.redhat.com:/bk2 rhs-gp-srv4.lab.eng.blr.redhat.com:/bk2


--- creating glusterfs-fuse mount for vol...
DEBUG: glusterfs mount on rhs-gp-srv7.lab.eng.blr.redhat.com: 
DEBUG: glusterfs mount on rhs-gp-srv4.lab.eng.blr.redhat.com: 
DEBUG: glusterfs mount on rhs-gp-srv7.lab.eng.blr.redhat.com: 
DEBUG: glusterfs mount on rhs-gp-srv4.lab.eng.blr.redhat.com: 




Version-Release number of selected component (if applicable):
=============================================================
gluster - 3.6.0.20-1.el6rhs.x86_64

rhs-hadoop:
rhs-hadoop-install-1_20-1.el6rhs.noarch
rhs-hadoop-2.3.2-2.noarch


How reproducible:
=================
always


Steps to Reproduce:
===================
1. Have 2 RHS servers (RHS 3.0) with 2 bricks each, plus one management server running RHEL 6.5.
2. Do a greenfield setup for a 2x2 volume using the install scripts, run from the management server.
3. After running setup_cluster, execute create_vol as below:

/usr/share/rhs-hadoop-install/create_vol.sh vol /mnt/gfs --debug rhs-gp-srv7.lab.eng.blr.redhat.com:/bk1 rhs-gp-srv4.lab.eng.blr.redhat.com:/bk1  rhs-gp-srv7.lab.eng.blr.redhat.com:/bk2 rhs-gp-srv4.lab.eng.blr.redhat.com:/bk2


Actual results:
===============
--- creating glusterfs-fuse mount for vol...
DEBUG: glusterfs mount on rhs-gp-srv7.lab.eng.blr.redhat.com: 
DEBUG: glusterfs mount on rhs-gp-srv4.lab.eng.blr.redhat.com: 
DEBUG: glusterfs mount on rhs-gp-srv7.lab.eng.blr.redhat.com: 
DEBUG: glusterfs mount on rhs-gp-srv4.lab.eng.blr.redhat.com: 

The script takes each brick and mounts the volume on that brick's server, so if a server has n bricks it appears the mount is attempted n times.

Expected results:
=================
The script should build a node list (rather than iterating the brick list) and mount the volume on each node only once, e.g. as sketched below.
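
For illustration only, a minimal sketch of the expected behaviour (not the actual create_vol.sh code; the variable names and the ssh-based mount are assumptions):

  VOLNAME=vol
  VOLMNT=/mnt/gfs
  BRICKS="node1:/bk1 node2:/bk1 node1:/bk2 node2:/bk2"   # node:/brick pairs

  # Build the unique node list instead of looping over every brick.
  NODES=$(for b in $BRICKS; do echo "${b%%:*}"; done | sort -u)

  for node in $NODES; do
      echo "DEBUG: glusterfs mount on $node"
      ssh "$node" "mkdir -p $VOLMNT/$VOLNAME && mount -t glusterfs $node:/$VOLNAME $VOLMNT/$VOLNAME"
  done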


Additional info:
================

Comment 2 Jeff Vance 2014-06-26 18:58:48 UTC
I can't reproduce this on 1.24, and I believe this bug was fixed in 1.23.

Comment 3 Daniel Horák 2014-07-31 10:43:39 UTC
Reproduced again on 4 machines with 3 bricks on each machine, with 1 volume (created from all bricks on all nodes):

# cat /etc/redhat-release 
  Red Hat Enterprise Linux Server release 6.5 (Santiago)
# cat /etc/redhat-storage-release 
  Red Hat Storage Server 3.0
# rpm -q rhs-hadoop-install
  rhs-hadoop-install-1_34-1.el6rhs.noarch

# mount | grep brick
  /dev/mapper/vg_brick-lv_brick1 on /mnt/brick1 type xfs (rw)
  /dev/mapper/vg_brick-lv_brick2 on /mnt/brick2 type xfs (rw)
  /dev/mapper/vg_brick-lv_brick3 on /mnt/brick3 type xfs (rw)

# ./create_vol.sh -y --debug HadoopVol1 /mnt/glusterfs \
      NODE1:/mnt/brick1 \
      NODE2:/mnt/brick1 \
      NODE3:/mnt/brick1 \
      NODE4:/mnt/brick1 \
      NODE1:/mnt/brick2 \
      NODE2:/mnt/brick2 \
      NODE3:/mnt/brick2 \
      NODE4:/mnt/brick2 \
      NODE1:/mnt/brick3 \
      NODE2:/mnt/brick3 \
      NODE3:/mnt/brick3 \
      NODE4:/mnt/brick3
  ***
  *** create_vol: version 1.34
  ***
  DEBUG: all nodes in storage pool: NODE1 NODE2 NODE3 NODE4
  DEBUG: nodes *not* spanned by new volume:

  *** Volume        : HadoopVol1
  *** Nodes         : NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4, NODE1, NODE2, NODE3, NODE4
  *** Volume mount  : /mnt/glusterfs
  *** Brick mounts  : /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick1, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick2, /mnt/brick3, /mnt/brick3, /mnt/brick3, /mnt/brick3
    << truncated >>
  DEBUG: gluster vol start: volume start: HadoopVol1: success
  "HadoopVol1" started
  --- creating glusterfs-fuse mounts for HadoopVol1...
  DEBUG: glusterfs mount on NODE1:
  DEBUG: glusterfs mount on NODE2:
  DEBUG: glusterfs mount on NODE3:
  DEBUG: glusterfs mount on NODE4:
  DEBUG: glusterfs mount on NODE1:
  DEBUG: glusterfs mount on NODE2:
  DEBUG: glusterfs mount on NODE3:
  DEBUG: glusterfs mount on NODE4:
  DEBUG: glusterfs mount on NODE1:
  DEBUG: glusterfs mount on NODE2:
  DEBUG: glusterfs mount on NODE3:
  DEBUG: glusterfs mount on NODE4:
  --- created glusterfs-fuse mounts for HadoopVol1
    << truncated >>

# mount | grep HadoopVol
  NODE1:/HadoopVol1 on /mnt/glusterfs/HadoopVol1 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

# gluster volume info
  Volume Name: HadoopVol1
  Type: Distributed-Replicate
  Volume ID: 2201f08f-67cc-4c23-9098-5a6405ff879c
  Status: Started
  Snap Volume: no
  Number of Bricks: 6 x 2 = 12
  Transport-type: tcp
  Bricks:
  Brick1: NODE1:/mnt/brick1/HadoopVol1
  Brick2: NODE2:/mnt/brick1/HadoopVol1
  Brick3: NODE3:/mnt/brick1/HadoopVol1
  Brick4: NODE4:/mnt/brick1/HadoopVol1
  Brick5: NODE1:/mnt/brick2/HadoopVol1
  Brick6: NODE2:/mnt/brick2/HadoopVol1
  Brick7: NODE3:/mnt/brick2/HadoopVol1
  Brick8: NODE4:/mnt/brick2/HadoopVol1
  Brick9: NODE1:/mnt/brick3/HadoopVol1
  Brick10: NODE2:/mnt/brick3/HadoopVol1
  Brick11: NODE3:/mnt/brick3/HadoopVol1
  Brick12: NODE4:/mnt/brick3/HadoopVol1
  Options Reconfigured:
  performance.stat-prefetch: off
  performance.quick-read: off
  cluster.eager-lock: on
  performance.readdir-ahead: on
  snap-max-hard-limit: 256
  snap-max-soft-limit: 90
  auto-delete: disable

The whole log from create_vol.sh is attached in the next comment.

>> ASSIGNED

Comment 4 Daniel Horák 2014-07-31 10:44:19 UTC
Created attachment 922874 [details]
Log from create_vol.sh command.

Comment 5 Jeff Vance 2014-08-18 05:52:18 UTC
Version 2.10 now uses an associative array for the brick mounts, and check_node accepts the list of brick mounts for the given node (sketched below). Fixed in version 2.10.
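
A rough sketch of that approach (assumes bash 4 associative arrays; check_node's exact signature and the variable names are assumptions, not the real create_vol.sh code):

  declare -A NODE_BRKMNTS                  # node -> its brick mounts

  for brick in "$@"; do                    # each argument is node:/brick-mnt
      node="${brick%%:*}"
      mnt="${brick#*:}"
      NODE_BRKMNTS[$node]+="$mnt "
  done

  # One call per node, passing all of that node's brick mounts at once.
  # Note: iterating the keys of an associative array does not preserve the
  # original brick order, which is what comment 6 below runs into.
  for node in "${!NODE_BRKMNTS[@]}"; do
      check_node "$node" ${NODE_BRKMNTS[$node]}
  done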

Comment 6 Jakub Rumanovsky 2014-08-20 16:26:48 UTC
I have tested rhs-hadoop-install-2-10 and found that it doesn't work with a "2 nodes, 2 bricks per node" configuration. The create_vol.sh invocation [1] switches the order of the bricks in the "gluster volume create" command [2] that the script runs to create the glusterfs volume. Because of this reordering it can't create a distributed-replicated volume: with replica 2, gluster pairs consecutive bricks, so the two bricks chosen to be replicated end up on the same machine (see the commands below).

[1]./create_vol.sh hadoopvol /mnt/glusterfs h21-node1:/mnt/brick1/bricksr1 h21-node2:/mnt/brick1/bricksr2 h21-node1:/mnt/brick2/bricksr1 h21-node2:/mnt/brick2/bricksr2

[2] gluster volume create hadoopvol replica 2 h21-node1:/mnt/brick1/bricksr1/hadoopvol h21-node1:/mnt/brick2/bricksr1/hadoopvol h21-node2:/mnt/brick1/bricksr2/hadoopvol h21-node2:/mnt/brick2/bricksr2/hadoopvol


I have spoken with Jeff and we agreed that this is a regression caused by the fix for this BZ. That's why I am changing the state to Assigned.

Comment 7 Jeff Vance 2014-08-21 02:03:32 UTC
Fixed again in version 2.11

Comment 8 Jeff Vance 2014-08-21 02:04:37 UTC
Modified the code where the list of bricks is created so that the original brick order is maintained (see the sketch below). Fixed in 2.11-1.
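
A sketch of that idea (illustrative only; the names and the replica count are assumptions based on the examples in this bug, not the real create_vol.sh code):

  VOLNAME=HadoopVol1                       # illustrative volume name
  declare -A SEEN                          # nodes already recorded
  NODES=""                                 # unique nodes, one fuse mount each
  BRICK_LIST=""                            # bricks kept in the caller's original order

  for brick in "$@"; do                    # each argument is node:/brick-mnt
      node="${brick%%:*}"
      BRICK_LIST+="$brick/$VOLNAME "       # order preserved for "gluster volume create"
      if [[ -z "${SEEN[$node]}" ]]; then   # ...but record each node only once
          SEEN[$node]=1
          NODES+="$node "
      fi
  done

  gluster volume create "$VOLNAME" replica 2 $BRICK_LIST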

Comment 10 Daniel Horák 2014-11-07 09:07:01 UTC
Tested and verified on a cluster with 2 bricks per node (both bricks part of one gluster volume).

# rpm -q rhs-hadoop-install
rhs-hadoop-install-2_29-1.el6rhs.noarch

>> VERIFIED

Comment 12 errata-xmlrpc 2014-11-24 11:54:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2014-1275.html

