Bug 1114585 - rhs-node listed as rhs_node in enable_vol.sh help text
Summary: rhs-node listed as rhs_node in enable_vol.sh help text
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhs-hadoop-install
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: Release Candidate
Target Release: ---
Assignee: Jeff Vance
QA Contact: Daniel Horák
URL:
Whiteboard:
Depends On:
Blocks: 1159155
 
Reported: 2014-06-30 12:32 UTC by Anush Shetty
Modified: 2014-11-24 11:54 UTC (History)
6 users

Fixed In Version: 2.27-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-24 11:54:49 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1275 0 normal SHIPPED_LIVE Red Hat Storage Server 3 Hadoop plug-in enhancement update 2014-11-24 16:53:36 UTC

Description Anush Shetty 2014-06-30 12:32:01 UTC
Description of problem: rhs-node option has been listed as rhs_node in the help text in enable_vol.sh


Version-Release number of selected component (if applicable): rhs-hadoop-install-1_25-1.el6rhs


How reproducible: Always


Actual results:

# ./enable_vol.sh --help
***
*** enable_vol: version 1.25
***

enable_vol enables an existing RHS volume for hadoop workloads.

SYNTAX:

enable_vol --version | --help

enable_vol [-y] [--quiet | --verbose | --debug] \
           [--user <ambari-admin-user>] [--pass <ambari-admin-password>] \
           [--port <port-num>] [--hadoop-management-node <node>] \
           [--rhs-node <node>] [--yarn-master <node>] \
           <volname>
where:

<volname>    : the RHS volume to be enabled for hadoop workloads.
--yarn-master: (optional) hostname or ip of the yarn-master server which is
               expected to be outside of the storage pool. Default is localhost.
--rhs_node   : (optional) hostname of any of the storage nodes. This is needed
               in order to access the gluster command. Default is localhost
               which, must have gluster cli access.
--hadoop-mgmt-node: (optional) hostname or ip of the hadoop mgmt server which is
               expected to be outside of the storage pool. Default is localhost.
-y           : (optional) auto answer "yes" to all prompts. Default is to answer
               a confirmation prompt.
--quiet      : (optional) output only basic progress/step messages. Default.
--verbose    : (optional) output --quiet plus more details of each step.
--debug      : (optional) output --verbose plus greater details useful for
               debugging.
--user       : the ambari admin user name. Default: "admin".
--pass       : the password for --user. Default: "admin".
--port       : the port number used by the ambari server. Default: 8080.
--version    : output only the version string.
--help       : this text.

[root@rhshdp11 rhs-hadoop-install]# ./enable_vol.sh --yarn-master rhshdp12.lab.eng.blr.redhat.com --hadoop-mgmt-node rhshdp11.lab.eng.blr.redhat.com --rhs_node rhshdp03.lab.eng.blr.redhat.com HadoopVol
***
*** enable_vol: version 1.25
***
getopt: unrecognized option '--rhs_node'
No RHS storage node specified therefore the localhost (rhshdp11.lab.eng.blr.redhat.com) is assumed
  Continue? [y|N] n
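The `getopt: unrecognized option` error above occurs because GNU getopt only accepts the long options spelled out in its `--longoptions` list; the script declares `rhs-node`, so the underscore variant is rejected. A minimal sketch of the failure mode (this is a hypothetical reduction, not the script's actual getopt invocation):

```shell
#!/bin/sh
# Hypothetical reduction of enable_vol.sh's option parsing: GNU getopt
# accepts only the long options declared in --longoptions, so the
# underscore variant "--rhs_node" is rejected while "--rhs-node" parses.
parse() {
  getopt --options y --longoptions rhs-node:,yarn-master: -- "$@" 2>&1
}

parse --rhs-node node1 HadoopVol   # parses cleanly
parse --rhs_node node1 HadoopVol   # "getopt: unrecognized option '--rhs_node'"
```

Because getopt silently drops the unrecognized option and keeps going, the script falls back to its localhost default instead of failing hard, which is why the run above continues to the confirmation prompt.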



Expected results:  Should be listed as --rhs-node


Additional info:

Comment 2 Jeff Vance 2014-07-07 16:12:06 UTC
fixed in version 1.26

Comment 3 Daniel Horák 2014-07-28 13:08:13 UTC
Same problem for disable_vol.sh:
# ./disable_vol.sh --help | grep rhs_node
  --rhs_node   : (optional) hostname of any of the storage nodes. This is needed in

# rpm -q rhs-hadoop-install
  rhs-hadoop-install-1_32-1.el6rhs.noarch

>> ASSIGNED

Comment 4 Jeff Vance 2014-07-28 17:01:33 UTC
Fixed disable_vol.sh in version 1.33, but I'm not sure if I am supposed to do a brew build of this latest version.

Comment 5 Daniel Horák 2014-07-30 14:46:54 UTC
I found a few other inconsistencies in the help texts for the scripts setup_cluster.sh and {create,enable,disable}_vol.sh - I think they are still related to the context of this BZ, and hopefully I didn't miss anything.
If this is already out of scope for this bug, feel free to move it back to MODIFIED and I'll create a new BZ for the following issues.

setup_cluster.sh:
  * --hadoop-management-node versus --hadoop-mgmt-node
  * <nodes-spec-list> versus <node-spec-list> 
  * Each node is expected to be separate from the management and yarn-master nodes.
    - (and similarly for --yarn-master and --hadoop-mgmt-node) - this is recommended, but scenarios with the management or yarn-master node inside the storage pool are also supported (if I'm right)
    - this also applies to all the following scripts

create_vol.sh:
  * <volume-mnt-prefix> versus <vol-mnt-prefix>
  * <node-list-spec> versus <node-spec-list>

enable_vol.sh:
  * --hadoop-management-node versus --hadoop-mgmt-node

disable_vol.sh:
 * --hadoop-management-node versus --hadoop-mgmt-node
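Naming drift like the above can be caught mechanically by scanning each script's help text for the variant spellings. A rough sketch (the helper below is hypothetical; its patterns are the inconsistent names listed in this report):

```shell
#!/bin/sh
# check_names: count lines of help text (read from stdin) that contain a
# variant spelling reported in this bug. A nonzero count means the help
# text disagrees with the canonical option/placeholder names.
check_names() {
  grep -cE 'rhs_node|hadoop-management-node|node-list-spec|volume-mnt-prefix'
}

# Intended usage against the installed scripts (assumed to be in $PWD);
# a script is reported only when suspect lines are found:
#   for s in setup_cluster create_vol enable_vol disable_vol; do
#     n=$(./"$s".sh --help | check_names) && echo "$s.sh: $n suspect line(s)"
#   done
```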

>> ASSIGNED

Comment 6 Jeff Vance 2014-08-01 16:18:05 UTC
Thanks for being thorough in checking the usage verbiage! I've fixed the above errors in version 2.01, which is potentially post-Denali. The 2.x version supports the HDP 2.1 stack and is opportunistic for the Denali release.

Comment 7 Daniel Horák 2014-08-14 09:14:26 UTC
For rhs-hadoop-install-2_08-1.el6rhs.noarch following issues are still valid:

setup_cluster.sh --help:
  * <nodes-spec-list> versus <node-spec-list> (both versions are mentioned; the other scripts use node-spec-list)

  * Each node is expected to be separate from the management and yarn-master nodes.
    - this was changed only for setup_cluster.sh's <node-spec>; for --yarn-master and --hadoop-mgmt-node, each script still describes it like:
    --yarn-master   : (optional) hostname or ip of the yarn-master server which is
                    *expected to be outside of the storage pool*. Default is localhost.

>> ASSIGNED

Comment 8 Daniel Horák 2014-08-14 10:36:52 UTC
Please also check the following sentence (./setup_cluster.sh --help); the combination of "is" and "be" sounds strange to me, but English is not my strong suit...

  It is recommended that each node is be separate from the mgmt and yarn-master nodes.

Comment 9 Jeff Vance 2014-08-14 21:48:34 UTC
Daniel, your English is fine and you are correct about the typo/grammatical error in setup_cluster's usage(). I've once again attempted to make the nodes-spec-list consistent across setup_cluster and create_vol. Fixed in 2.09.

Comment 10 Jeff Vance 2014-08-14 23:26:55 UTC
fixed in 2.09

Comment 11 Daniel Horák 2014-10-23 15:02:46 UTC
The help message from setup_cluster.sh contains a meaningless line in the --ambari-repo section, copy/pasted from the previous section: "rhs-high-throughput". Default is not set a profile.

# ./setup_cluster.sh --help
  <<truncated>>
  --profile       : (optional) the name of a supported rhs/kernel profile, eg.
                  "rhs-high-throughput". Default is not set a profile.
  --ambari-repo   : (optional) the URL of the ambari repo file. Default is the
                  value hard-coded in the installer.
                  "rhs-high-throughput". Default is not set a profile.
  <<truncated>>

>> ASSIGNED

Comment 12 Jeff Vance 2014-10-24 01:31:03 UTC
fixed in 2.27

Comment 14 Daniel Horák 2014-11-03 15:11:39 UTC
All issues mentioned in this bug are fixed.

# rpm -q rhs-hadoop-install
  rhs-hadoop-install-2_28-1.el6rhs.noarch

# ./create_vol.sh --help
  ***
  *** create_vol: version 2.28
  ***

  create_vol creates and prepares a new volume designated for hadoop workloads.
  The replica factor is hard-coded to 2, per RHS requirements.

  SYNTAX:

  create_vol --version | --help

  create_vol [-y] [--quiet | --verbose | --debug] \
             <volname> <vol-mnt-prefix> <nodes-spec-list>

  where:

  <nodes-spec-list>: a list of two or more <node-spec's>.
  <node-spec>     : a storage node followed by a ':', followed by a brick mount
                    path.  Eg:
                       <node1><:brickmnt1>  <node2>[:<brickmnt2>] ...
                    A volume does not need to span all nodes in the cluster. Only
                    the brick mount path associated with the first node is
                    required. If omitted from the other <nodes-spec-list>'s then
                    each node assumes the value of the first node for the brick
                    mount path.
  <volname>       : name of the new volume.
  <vol-mnt-prefix>: path of the glusterfs-fuse mount point, eg. /mnt/glusterfs.
                    Note: the volume name will be appended to this mount point.
  -y              : auto answer "yes" to all prompts. Default is to be promoted 
                    before the script continues.
  --quiet         : (optional) output only basic progress/step messages. Default.
  --verbose       : (optional) output --quiet plus more details of each step.
  --debug         : (optional) output --verbose plus greater details useful for
                    debugging.
  --version       : output only the version string.
  --help          : this text.

# ./disable_vol.sh --help
  ***
  *** disable_vol: version 2.28
  ***

  disable_vol disables an existing RHS volume from being used for hadoop
  workloads.

  SYNTAX:

  disable_vol --version | --help

  disable_vol [-y] [--quiet | --verbose | --debug] \
              [--user <ambari-admin-user>] [--pass <ambari-admin-password>] \
              [--port <port-num>] [--hadoop-mgmt-node <node>] \
              [--rhs-node <node>] --yarn-master <node> \
              <volname>
  where:

  <volname>    : the RHS volume to be disabled for hadoop workloads.
  --yarn-master: hostname or ip of the yarn-master server which is expected to
                 be outside of the storage pool.
  --rhs-node   : (optional) hostname of any of the storage nodes. This is needed
                 in order to access the gluster command. Default is localhost
                 which, must have gluster cli access.
  --hadoop-mgmt-node : (optional) hostname or ip of the hadoop mgmt server which
                 is expected to be outside of the storage pool. Default is
                 localhost.
  -y : auto answer "yes" to all prompts. Default is to be promoted before the
                 script continues.
  --quiet      : (optional) output only basic progress/step messages. Default.
  --verbose    : (optional) output --quiet plus more details of each step.
  --debug      : (optional) output --verbose plus greater details useful for
                 debugging.
  --user       : the ambari admin user name. Default: "admin".
  --pass       : the password for --user. Default: "admin".
  --port       : the port number used by the ambari server. Default: 8080.
  --version    : output only the version string.
  --help       : this text.

# ./enable_vol.sh --help
  ***
  *** enable_vol: version 2.28
  ***

  enable_vol enables an existing RHS volume for hadoop workloads.

  SYNTAX:

  enable_vol --version | --help

  enable_vol [-y] [--quiet | --verbose | --debug] [--make-default] \
             [--user <ambari-admin-user>] [--pass <ambari-admin-password>] \
             [--port <port-num>] [--hadoop-mgmt-node <node>] \
             [--rhs-node <node>] [--yarn-master <node>] \
             <volname>
  where:

  <volname>    : the RHS volume to be enabled for hadoop workloads.
  --yarn-master: (optional) hostname or ip of the yarn-master server which is
                 expected to be outside of the storage pool. Default is localhost.
  --rhs-node   : (optional) hostname of any of the storage nodes. This is needed
                 in order to access the gluster command. Default is localhost
                 which, must have gluster cli access.
  --hadoop-mgmt-node: (optional) hostname or ip of the hadoop mgmt server which is
                 expected to be outside of the storage pool. Default is localhost.
  --make-default: if specified then the volume is set to be the default volume
                 used when hadoop job URIs are unqualified. Default is to NOT 
                 make this volume the default volume.
  -y           : (optional) auto answer "yes" to all prompts. Default is to answer
                 a confirmation prompt.
  --quiet      : (optional) output only basic progress/step messages. Default.
  --verbose    : (optional) output --quiet plus more details of each step.
  --debug      : (optional) output --verbose plus greater details useful for
                 debugging.
  --user       : the ambari admin user name. Default: "admin".
  --pass       : the password for --user. Default: "admin".
  --port       : the port number used by the ambari server. Default: 8080.
  --version    : output only the version string.
  --help       : this text.

# ./setup_cluster.sh --help
  ***
  *** setup_cluster: version 2.28
  ***

  setup_cluster sets up a storage cluster for hadoop workloads.

  SYNTAX:

  setup_cluster --version | --help

  setup_cluster [-y] [--hadoop-mgmt-node <node>] [--yarn-master <node>] \
                [--profile <profile>] [--ambari-repo <url>] \
                [--force-ambari-update] [--quiet | --verbose | --debug] \
                <nodes-spec-list>
  where:

  <nodes-spec-list>: a list of two or more <node-spec's>.
  <node-spec>   : a storage node followed by a ':', followed by a brick mount
                  path, followed by another ':', followed by a block device path.
                  Eg: <n1><:brkmnt1>:<blkdev1> <n2>[:<brkmnt2>][:<blkdev2>] \
                      [<n3>] ...
                  Only the brick mount path and the block device path associated
                  with the first node are required. If omitted from the other
                  <nodes-spec-list>'s then each node assumes the values of the
                  first node for brick mount path and block device path. If a
                  brick mount path is omitted but a block device path is
                  specified then the block device path is proceded by two ':'s,
                  e.g., "<nodeN>::<blkdevN>". It is recommended that the mgmt
                  and yarn-master nodes are not also storage nodes and are not
                  collocated on the same server.
  --yarn-master : (optional) hostname or ip of the yarn-master server which is
                  expected to be outside of the storage pool. Default is
                  localhost.
  --hadoop-mgmt-node: (optional) hostname or ip of the hadoop mgmt server which
                  is expected to be outside of the storage pool. Default is
                  localhost.
  --profile     : (optional) the name of a supported rhs/kernel profile, e.g.,
                  "rhs-high-throughput". Default is not set a profile.
  --ambari-repo : (optional) the URL of the ambari repo file. Default is the
                  value hard-coded in the installer.
  --force-ambari-update: (optional) force the update of the ambari-server and
                  ambari-agents even if they are already installed and running.
                  If the ambari-server is running, by default, it will not be re-
                  installed. For each node where the agent is already running, by
                  default, the agent will not be re-installed. Note: if the server
                  and/or agents are not installed (or not running) they will be
                  installed and started.
  -y            : (optional) auto answer "yes" to all prompts. Default is to 
                  answer a confirmation prompt.
  --quiet       : (optional) output only basic progress/step messages. Default.
  --verbose     : (optional) output --quiet plus more details of each step.
  --debug       : (optional) output --verbose plus greater details useful for
                  debugging.
  --version     : output only the version string.
  --help        : this text.

>> VERIFIED

Comment 16 errata-xmlrpc 2014-11-24 11:54:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2014-1275.html

