Description of problem:
While documenting https://bugzilla.redhat.com/show_bug.cgi?id=1551121, found a discrepancy with https://bugzilla.redhat.com/show_bug.cgi?id=1483827: the cns-deploy output lists port 24006 for gluster-blockd, but the daemon actually listens on 24010.

Version-Release number of selected component (if applicable):
OCP 3.7
cns-deploy-5.0.0-59.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Run cns-deploy (per Section 8.2.1)

Actual results:
[...]
Each of the nodes that will host GlusterFS must also have appropriate firewall rules for the required GlusterFS ports:
[...]
 * 24006 - glusterblockd
[...]

Expected results:
[...]
Each of the nodes that will host GlusterFS must also have appropriate firewall rules for the required GlusterFS ports:
[...]
 * 24010 - glusterblockd
[...]

Additional info:
Verified by checking /usr/bin/cns-deploy and the port actually used by gluster-blockd. I believe this was fixed upstream in https://review.gluster.org/#/c/18112/
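For anyone wanting to confirm the mismatch on their own nodes, a minimal check on a node running gluster-blockd (this assumes the port list appears as literal text in the /usr/bin/cns-deploy script, which matches how it was verified above):

  # Port the deploy script advertises for gluster-blockd
  grep -n 'glusterblockd' /usr/bin/cns-deploy

  # Port gluster-blockd is actually listening on (expect 24010, not 24006)
  ss -tlnp | grep gluster-blockd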
Verified in build cns-deploy-7.0.0-13.el7rhgs.x86_64: the description no longer lists port 24006 for gluster-blockd and instead lists 24010. Below is what we see with the latest cns-deploy tool when performing an installation:

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall rules for the required GlusterFS ports:
 * 111 - rpcbind (for glusterblock)
 * 2222 - sshd (if running GlusterFS in a pod)
 * 3260 - iSCSI targets (for glusterblock)
 * 24010 - glusterblockd
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own port. For every new brick, one new port will be used starting at 49152. We recommend a default range of 49152-49251 on each host, though you can adjust this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool
 * dm_multipath
 * target_core_user

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services running in the cluster

Do you wish to proceed with deployment?
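For reference, a minimal sketch of preparing a node to meet the prerequisites printed above. This assumes firewalld is the active firewall (adjust for plain iptables) and is not an official procedure from the product docs; 111/udp may also be needed for rpcbind depending on the environment:

  # Open the required GlusterFS / gluster-block ports
  firewall-cmd --permanent --add-port=111/tcp --add-port=2222/tcp \
    --add-port=3260/tcp --add-port=24007-24008/tcp --add-port=24010/tcp \
    --add-port=49152-49251/tcp
  firewall-cmd --reload

  # Load the required kernel modules now and on every boot
  for m in dm_snapshot dm_mirror dm_thin_pool dm_multipath target_core_user; do
    modprobe "$m"
    echo "$m" > "/etc/modules-load.d/$m.conf"
  done

  # SELinux: allow containers to write to remote GlusterFS volumes
  setsebool -P virt_sandbox_use_fusefs on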
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:3254