Description of problem:
=======================
After configuring NFS Ganesha and HA using ingress, the HAProxy daemon version is shown as "<unknown>" in the `ceph orch ps` command output.

Version-Release number of selected component (if applicable):
============================================================
RHCS 7.0 - 18.2.0-43.el9cp
IBM 7.0  - 18.2.0-20.el9cp

# dnf info cephadm
Updating Subscription Management repositories.
IBM-CEPH-7.0-20230911.ci.0                                4.2 kB/s | 3.0 kB  00:00
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)   31 MB/s |  25 MB  00:00
Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)     6.8 MB/s |  14 MB  00:02
Installed Packages
Name         : cephadm
Epoch        : 2
Version      : 18.2.0
Release      : 20.el9cp
Architecture : noarch
Size         : 227 k
Source       : cephadm-18.2.0-20.el9cp.src.rpm
Repository   : @System
From repo    : IBM-CEPH-7.0-20230911.ci.0
Summary      : Utility to bootstrap Ceph clusters
URL          : https://ceph.io
License      : LGPL-2.1
Description  : Utility to bootstrap a Ceph cluster and manage Ceph daemons deployed
             : with systemd and podman.

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Set up an NFS Ganesha cluster by following the steps in
   https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html-single/operations_guide/index#creating-the-nfs-ganesha-cluster-using-the-ceph-orchestrator_ops
2. Deploy HA using the steps in
   https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html-single/operations_guide/index#deploying-ha-for-cephfs-nfs-using-a-specification-file_ops
3.
Once the ingress service is up and running, check the daemons for the service using the `ceph orch ps` command.

Actual results:
===============
On RHCS 7.0 -

# ceph orch ps | grep nfs
haproxy.nfs.cephfs.argo018.chuvdz        argo018  *:2049,9049  host is offline  -  37h  15.7M  -  <unknown>  bda92490ac6c  3edf91f71fdc
haproxy.nfs.cephfs.argo019.oxihrw        argo019  *:2049,9049  running (36h)    -  37h  29.5M  -  <unknown>  bda92490ac6c  c0454751c02f
keepalived.nfs.cephfs.argo018.qxkinu     argo018               host is offline  -  37h  1765k  -  2.2.4      b79b516c07ed  9f24ae3c5260
keepalived.nfs.cephfs.argo019.xrsvvy     argo019               running (37h)    -  37h  1761k  -  2.2.4      b79b516c07ed  444b88efec5d
nfs.cephfs.0.0.argo019.ksicti            argo019  *:12049      running (37h)    -  37h  96.5M  -  5.5        1f29ade573da  43c8aa88ba78
nfs.cephfs.1.0.argo018.uezwss            argo018  *:12049      host is offline  -  36h  15.6M  -  5.5        1f29ade573da  9f3709f0cae4

On IBM 7.0 -

# ceph orch ps | grep nfs
haproxy.nfs.nfsganesha.clara004.qozngr      clara004  *:2050,9000  running (53m)   -  53m   10.0M  -  <unknown>  2bb64a680c36  2abe82b80a4b
haproxy.nfs.nfsganesha.clara006.kfkbih      clara006  *:2050,9000  running (53m)   -  53m   8124k  -  <unknown>  2bb64a680c36  39f5a4fd3bcd
keepalived.nfs.nfsganesha.clara004.tcompt   clara004               running (52m)   -  55m   1765k  -  2.2.4      5366a49ebd2b  2bdf5c0d2617
keepalived.nfs.nfsganesha.clara006.ogmxmd   clara006               running (52m)   -  55m   1765k  -  2.2.4      5366a49ebd2b  670d9d7cb617
nfs.nfsganesha.0.0.clara004.onkrxm          clara004  *:2049       running (113m)  -  113m  72.1M  -  5.1        48c75ff3dd69  ae31810fc890
nfs.nfsganesha.1.0.clara005.bteggj          clara005  *:2049       running (113m)  -  113m  75.4M  -  5.1        48c75ff3dd69  08b08656e462
nfs.nfsganesha.2.0.clara006.pkmybx          clara006  *:2049       running (113m)  -  113m  74.1M  -  5.1        48c75ff3dd69  cd71179799df

As observed in the above outputs, the version for haproxy is missing.

Expected results:
================
The version for haproxy should be displayed correctly in the `ceph orch ps` command output.
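For reference, the ingress specification applied in step 2 follows the shape documented for HA for CephFS/NFS; this is a minimal sketch, not the exact spec file from this cluster — the service IDs, placement hosts, and virtual IP below are placeholders, while the frontend/monitor ports match those seen in the `ceph orch ps` output:

```yaml
service_type: ingress
service_id: nfs.cephfs            # placeholder; must match the backing NFS service
placement:
  count: 2                        # one haproxy + keepalived pair per host
spec:
  backend_service: nfs.cephfs     # the NFS Ganesha service being fronted
  frontend_port: 2049             # port clients mount; haproxy listens here
  monitor_port: 9049              # haproxy monitoring/stats port
  virtual_ip: 192.168.122.100/24  # placeholder floating IP managed by keepalived
```

The spec is applied with `ceph orch apply -i <spec-file>`, after which the haproxy and keepalived daemons shown in the outputs below are deployed.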
Verified this with:

# ceph --version
ceph version 18.2.0-74.el9cp (4130b8a3077f517f6a6b1da0a3e642bfc59ed96e) reef (stable)

# dnf info cephadm
Updating Subscription Management repositories.
Last metadata expiration check: 0:30:00 ago on Mon Oct 9 14:23:25 2023.
Installed Packages
Name         : cephadm
Epoch        : 2
Version      : 18.2.0
Release      : 74.el9cp
Architecture : noarch
Size         : 211 k
Source       : ceph-18.2.0-74.el9cp.src.rpm
Repository   : @System
From repo    : ceph-Tools
Summary      : Utility to bootstrap Ceph clusters
URL          : http://ceph.com/
License      : LGPL-2.1 and LGPL-3.0 and CC-BY-SA-3.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT
Description  : Utility to bootstrap a Ceph cluster and manage Ceph daemons deployed
             : with systemd and podman.

# ceph orch ps | grep nfs
haproxy.nfs.ganesha.ceph-mani-b04gdn-node2.qafghx     ceph-mani-b04gdn-node2  *:2049,9049  running (8m)  6m ago  8m  5096k  -  2.4.17-9f97155  bda92490ac6c  f527e5989dcd
keepalived.nfs.ganesha.ceph-mani-b04gdn-node2.relgkj  ceph-mani-b04gdn-node2               running (8m)  6m ago  8m  1770k  -  2.2.4           b79b516c07ed  e8e708e2cb29
nfs.ganesha.0.0.ceph-mani-b04gdn-node2.xrhtgo         ceph-mani-b04gdn-node2  *:12049      running (8m)  6m ago  8m  50.6M  -  5.5             f77bb00396c9  380a0f213925

The HAProxy version (2.4.17-9f97155) is now reflected in the `ceph orch ps` output. Moving this BZ to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780