See bug #2092220 for a detailed description of the issue in ODF.

Description of problem (please be as detailed as possible and provide log snippets):
Running the `ganesha-rados-grace` command on IBM Power (ppc64le) always shows the usage message; commands are not recognized as valid.

Version of all relevant components (if applicable):
- whatever nfs-ganesha is bundled with "ceph version 16.2.8-5.el8cp" containers

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Rook, as part of ODF, is unable to configure NFS-Ganesha because of this issue.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1 - Deploy ODF with support for NFS on a ppc64le cluster

Can this issue be reproduced?
Yes!

If this is a regression, please provide more details to justify this:
Works on x86_64, so it is a regression of sorts when running on ppc64le.

Steps to Reproduce:
1. Run a command like Rook does; no Ceph cluster is needed:
   `ganesha-rados-grace -p .nfs -n ocs-storagecluster-cephnfs add ocs-storagecluster-cephnfs.a`
2. The usage text of the command is shown; the same command works on x86_64.

Actual results:
$ ganesha-rados-grace -p .nfs -n ocs-storagecluster-cephnfs add ocs-storagecluster-cephnfs.a
Usage:
/usr/bin/ganesha-rados-grace [ --userid ceph_user ] [ --cephconf /path/to/ceph.conf ] [ --ns namespace ] [ --oid obj_id ] [ --pool pool_id ] dump|add|start|join|lift|remove|enforce|noenforce|member [ nodeid ... ]

Expected results:
The node is successfully added to the NFS-Ganesha configuration in RADOS, or a RADOS failure is reported if no Ceph cluster is available.
Bug 2092220, comment 8, contains a small C program that reproduces the issue, along with a log of a gdb session with the ganesha-rados-grace command showing the problem. https://github.com/nfs-ganesha/nfs-ganesha/commit/3db6bc0cb75fa85ffcebeda1276d195915b84579 appears to be the upstream change that addresses the problem.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997