Description of problem:
gdeploy console message says "NFS Ganesha status is HEALTHY" even if nfs-ganesha setup fails.

Version-Release number of selected component (if applicable):
gdeploy-2.0.1-3.el7rhgs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Set up nfs-ganesha using gdeploy.

Actual results:
gdeploy console message says "NFS Ganesha status is HEALTHY" even if nfs-ganesha setup fails.

Expected results:
gdeploy should log an error message in the console output if there is any failure during nfs-ganesha cluster creation.

Additional info:
TASK [Report NFS Ganesha status] ***********************************************
ok: [dhcp46-111.lab.eng.blr.redhat.com] => {
    "msg": "-- NFS Ganesha status is HEALTHY"
}
This is a bit tricky, because we currently don't know all the possible console outputs when setup fails, since Ganesha involves quite a few services. We have captured two error cases so far, and now pcs has cropped up as well. I propose moving this beyond 3.2.0. Any concerns? For now, I suggest disabling this particular health-check status; users can determine from the console output whether anything failed during the process.
I have raised a bug against "/usr/libexec/ganesha/ganesha-ha.sh --status" to have it check that all services used by nfs-ganesha are running, along with a few more status checks. Bug info: https://bugzilla.redhat.com/show_bug.cgi?id=1394815
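For illustration, the kind of classification such a status check could feed into can be sketched in shell. This is a hypothetical sketch, not gdeploy's actual code; the matched strings are assumptions taken from the console messages quoted in this bug, and the real ganesha-ha.sh output may differ.

```shell
#!/bin/sh
# Hypothetical helper: classify the output of
# "/usr/libexec/ganesha/ganesha-ha.sh --status".
# The patterns below are assumptions based on the messages
# quoted in this bug report.
classify_ganesha_status() {
    case "$1" in
        *HEALTHY*)
            echo "healthy" ;;
        *"cluster is not currently running"*)
            echo "not-running" ;;
        *)
            echo "unknown" ;;
    esac
}

# Demo against the error string seen in this bug's console output:
classify_ganesha_status "Error: cluster is not currently running on this node"
# prints "not-running"
```

A caller (for example, a deployment playbook) could turn anything other than "healthy" into a non-zero exit code instead of unconditionally printing a HEALTHY message.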
Commit: https://github.com/gluster/gdeploy/commit/dcdc29deb39 fixes the issue.
Verified the fix in the following builds:
gdeploy-2.0.1-6.el7rhgs.noarch
nfs-ganesha-2.4.1-2.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-8.el7rhgs.x86_64

Current Output Status Message:
-------------------------------

If nfs-ganesha installation succeeds:

TASK [Report NFS Ganesha status] ***********************************************
ok: [dhcp46-42.lab.eng.blr.redhat.com] => {
    "msg": "Cluster HA Status: HEALTHY"
}

If there are any errors:

TASK [Report NFS Ganesha status (If any errors)] *******************************
ok: [dhcp46-42.lab.eng.blr.redhat.com] => {
    "msg": "Error: cluster is not currently running on this node"
}
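The verified behavior above suggests a task structure along these lines. This is a hedged sketch of what such Ansible tasks might look like, not the actual contents of the linked gdeploy commit; the task names mirror the console output quoted above, while the registered-variable name and the HEALTHY substring check are assumptions.

```yaml
# Hypothetical sketch (not the actual gdeploy playbook): run the status
# check, capture its output, and report an error instead of a
# hard-coded HEALTHY message.
- name: Check NFS Ganesha status
  command: /usr/libexec/ganesha/ganesha-ha.sh --status
  register: ganesha_status          # assumed variable name
  ignore_errors: true

- name: Report NFS Ganesha status
  debug:
    msg: "Cluster HA Status: HEALTHY"
  when: "'HEALTHY' in ganesha_status.stdout"

- name: Report NFS Ganesha status (If any errors)
  debug:
    msg: "Error: {{ ganesha_status.stdout }} {{ ganesha_status.stderr }}"
  when: "'HEALTHY' not in ganesha_status.stdout"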
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2017-0482.html