Bug 1551140 - cns-deploy lists firewall port 24006 which was deprecated by bz 1483827
Summary: cns-deploy lists firewall port 24006 which was deprecated by bz 1483827
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cns-deploy-tool
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: OCS 3.11.z Batch Update 4
Assignee: Raghavendra Talur
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: 1724792
 
Reported: 2018-03-02 21:09 UTC by Thom Carlin
Modified: 2019-10-30 12:33 UTC
CC List: 9 users

Fixed In Version: cns-deploy-7.0.0-12.el7rhgs
Doc Type: Bug Fix
Doc Text:
The firewall rules incorrectly reported 24006 as the port used by gluster-block. The listed port has been corrected to 24010.
Clone Of:
Environment:
Last Closed: 2019-10-30 12:33:52 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1483827 0 unspecified CLOSED Avoid using 24006 port as it is registered. 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1551121 0 unspecified CLOSED [Docs] Section 6.1: Firewall list doesn't match list for cns-deploy 2021-11-18 15:43:33 UTC
Red Hat Product Errata RHBA-2019:3254 0 None None None 2019-10-30 12:33:59 UTC

Internal Links: 1483827 1551121

Description Thom Carlin 2018-03-02 21:09:11 UTC
Description of problem:

While documenting https://bugzilla.redhat.com/show_bug.cgi?id=1551121, I found a discrepancy with https://bugzilla.redhat.com/show_bug.cgi?id=1483827.

Version-Release number of selected component (if applicable):

OCP 3.7
cns-deploy-5.0.0-59.el7rhgs.x86_64

How reproducible:

100%

Steps to Reproduce:
1. cns-deploy (per Section 8.2.1)

Actual results:

[...]
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
[...]
 * 24006 - glusterblockd
[...]

Expected results:

[...]
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
[...]
 * 24010 - glusterblockd
[...]


Additional info:

Verified by checking /usr/bin/cns-deploy and the port actually used by gluster-blockd.

I believe this was fixed in https://review.gluster.org/#/c/18112/
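
For anyone re-checking this on a node, a minimal sketch of that verification (assuming gluster-blockd runs with its default configuration; the exact grep hits depend on the cns-deploy build installed):

  # Port(s) hard-coded in the deploy script's firewall message
  grep -nE '24006|24010' /usr/bin/cns-deploy

  # Port gluster-blockd is actually listening on
  ss -tlnp | grep gluster-blockd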

Comment 15 RamaKasturi 2019-06-29 16:40:49 UTC
Verified in build cns-deploy-7.0.0-13.el7rhgs.x86_64. The description no longer lists port 24006 for gluster-blockd; it now lists 24010.

Below is what we see with the latest cns-deploy tool when trying to perform an installation; a sketch for opening the listed ports follows the output.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 111   - rpcbind (for glusterblock)
 * 2222  - sshd (if running GlusterFS in a pod)
 * 3260  - iSCSI targets (for glusterblock)
 * 24010 - glusterblockd
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool
 * dm_multipath
 * target_core_user

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
   remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?
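
For reference, a minimal sketch of satisfying the firewall and kernel module requirements listed above on a single node (assuming firewalld is the active firewall and its default zone; adjust the zone and the brick port range to fit the deployment):

  firewall-cmd --permanent --add-port=111/tcp --add-port=2222/tcp \
      --add-port=3260/tcp --add-port=24010/tcp --add-port=24007/tcp \
      --add-port=24008/tcp --add-port=49152-49251/tcp
  firewall-cmd --reload

  # Load the required kernel modules (add them to /etc/modules-load.d/ to persist across reboots)
  for m in dm_snapshot dm_mirror dm_thin_pool dm_multipath target_core_user; do
      modprobe "$m"
  done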

Comment 18 errata-xmlrpc 2019-10-30 12:33:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3254

