Bug 1551140

Summary: cns-deploy lists firewall port 24006 which was deprecated by bz 1483827

Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Thom Carlin <tcarlin>
Component: cns-deploy-tool
Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA
QA Contact: RamaKasturi <knarra>
Severity: medium
Docs Contact:
Priority: unspecified
Version: cns-3.6
CC: bkunal, hchiramm, jarrpa, jmulligan, jroberts, knarra, madam, rhs-bugs, rtalur
Target Milestone: ---
Keywords: ZStream
Target Release: OCS 3.11.z Batch Update 4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: cns-deploy-7.0.0-12.el7rhgs
Doc Type: Bug Fix
Doc Text: The firewall rules incorrectly reported 24006 as the port used by gluster-block. It has been changed to 24010.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-30 12:33:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1724792

Description Thom Carlin 2018-03-02 21:09:11 UTC
Description of problem:

While documenting https://bugzilla.redhat.com/show_bug.cgi?id=1551121, I found a discrepancy with https://bugzilla.redhat.com/show_bug.cgi?id=1483827.

Version-Release number of selected component (if applicable):

OCP 3.7
cns-deploy-5.0.0-59.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. cns-deploy (per Section 8.2.1)
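
For reference, a minimal invocation sketch (the namespace, topology file, and heketi keys below are placeholders, and the exact flags can differ between cns-deploy versions):

  # run from the client machine that has admin access to the cluster
  cns-deploy -n storage-project -g \
      --admin-key <admin-key> --user-key <user-key> \
      topology.json

The tool prints its prerequisite summary, including the firewall port list, before asking for confirmation; the relevant line is shown under Actual results below.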

Actual results:

[...]
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
[...]
 * 24006 - glusterblockd
[...]

Expected results:

[...]
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
[...]
 * 24010 - glusterblockd
[...]


Additional info:

Verified by checking /usr/bin/cns-deploy and the port actually used by gluster-blockd.

I believe this was fixed in https://review.gluster.org/#/c/18112/
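
A quick way to cross-check this (a sketch; it assumes the packaged script path above and that gluster-blockd is running on the node being checked):

  # which port does the cns-deploy preamble mention for glusterblockd?
  grep -n 'glusterblockd' /usr/bin/cns-deploy

  # which port is gluster-blockd actually listening on?
  ss -tlnp | grep gluster-block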

Comment 15 RamaKasturi 2019-06-29 16:40:49 UTC
Verified in build cns-deploy-7.0.0-13.el7rhgs.x86_64. The cns-deploy output no longer lists port 24006 for gluster-blockd; it now lists 24010 for gluster-blockd.

Below is the output from the latest cns-deploy tool when performing an installation.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 111   - rpcbind (for glusterblock)
 * 2222  - sshd (if running GlusterFS in a pod)
 * 3260  - iSCSI targets (for glusterblock)
 * 24010 - glusterblockd
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool
 * dm_multipath
 * target_core_user

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
   remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?
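
For nodes using firewalld, the listed ports could be opened along these lines (a sketch assuming TCP and the default zone; adjust to your environment, and note that rpcbind on port 111 is commonly needed over UDP as well):

  firewall-cmd --permanent --add-port=111/tcp --add-port=2222/tcp \
      --add-port=3260/tcp --add-port=24007/tcp --add-port=24008/tcp \
      --add-port=24010/tcp --add-port=49152-49251/tcp
  firewall-cmd --reload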

Comment 18 errata-xmlrpc 2019-10-30 12:33:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3254