Created attachment 660617 [details]
NFS_status_before_killing

Description of problem:
The Services tab in Cluster does not show the NFS status as DOWN when the service is down.

Version-Release number of selected component (if applicable):
rhsc-2.1-qa18.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Select a cluster and click on the "Services" sub-tab.
2. Note the status of any NFS service (it should be in the UP state).
3. Go to the corresponding backend RHS node and kill the PID of the NFS service.
4. Ideally, the status should now be reflected in the UI and automatically move to the DOWN state.

Actual results:
The status still shows UP, even though the PID has been killed and is no longer running.

Expected results:
The status should change automatically to reflect the current state on the RHS node.

Additional info:
Screenshot attached.

From the backend of RHS node1, killing the NFS PID (2054) on server 10.70.36.53:

# ps aux | grep glusterfs | grep nfs
root      2054  0.2  0.4 326540 76012 ?  Ssl  00:39  0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/59bcb3fbd57e1040f17d5dbc753880dc.socket
# kill -9 2054
# ps aux | grep glusterfs | grep nfs
#
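For reference, the backend liveness check the UI should reflect boils down to: read the NFS daemon's pidfile (the path passed via -p in the glusterfs command line above) and test whether that PID is still alive. A minimal sketch, assuming the standard pidfile location; `nfs_status` is a hypothetical helper, not part of RHSC or vdsm:

```shell
# Report UP/DOWN for the gluster NFS daemon based on its pidfile.
# Pidfile path defaults to the -p argument seen in the process list above.
nfs_status() {
    pidfile="${1:-/var/lib/glusterd/nfs/run/nfs.pid}"
    # kill -0 sends no signal; it only checks that the PID exists
    # and is signalable by the caller.
    if [ -r "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "UP"
    else
        echo "DOWN"
    fi
}

# Example: nfs_status            -> UP or DOWN for the local NFS daemon
#          nfs_status /tmp/x.pid -> check an alternate pidfile
```

After `kill -9 2054` in the transcript above, a check like this would return DOWN, which is what the Services tab is expected to show.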
Created attachment 660618 [details]
NFS_status_after_killing
Does the status change if you go to some other tab (Hosts or Volumes) and come back to the Services tab? Can you please provide the engine and vdsm logs?
It is working fine in my setup: the NFS status is DOWN after killing the nfs process on the storage node. Please attach the relevant log files if this is not working for you.
Please close this bug if the issue no longer exists.
It looks like this got fixed in the latest QA build, or something changed after rhsc-2.1-qa18.el6. I can now see that the actual status of the NFS and SHD services is reflected in the UI as well. But I am sure this was not working in rhsc-2.1-qa18.el6 when I reported this issue. You can close this bug.