Bug 885597

Summary: [RHS-C]: Services tab in Cluster is not showing the NFS status as DOWN when the service is actually down
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Prasanth <pprakash>
Component: rhsc
Assignee: Kanagaraj <kmayilsa>
Status: CLOSED WORKSFORME
QA Contact: Prasanth <pprakash>
Severity: high
Docs Contact:
Priority: medium
Version: 2.0
CC: kmayilsa, mmahoney, pprakash, rhs-bugs, sankarshan, sdharane, shireesh, ssampat, vbellur
Target Milestone: ---
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-06 09:31:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  NFS_status_before_killing (flags: none)
  NFS_status_after_killing (flags: none)

Description Prasanth 2012-12-10 07:52:15 UTC
Created attachment 660617 [details]
NFS_status_before_killing

Description of problem:

Services tab in Cluster is not showing the NFS status as DOWN when the service is down.

Version-Release number of selected component (if applicable): rhsc-2.1-qa18.el6ev.noarch


How reproducible: Always


Steps to Reproduce:
1. Select a cluster and click on the "Services" sub-tab
2. Check the status of any NFS service (it should be in the UP state) and note it down
3. Now go to the corresponding back-end RHS node and kill the PID of the NFS service.
4. Ideally, the status shown in the UI should now automatically move to the DOWN state.
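Step 3 above can be scripted on the storage node; a minimal sketch that reads the PID from the glusterd pid file. The default path below matches the `-p` argument of the glusterfs NFS process shown later in this report; treat it as an assumption for other deployments.

```shell
# Minimal sketch of step 3: kill the Gluster NFS server on the
# storage node via its pid file. The default pid-file path is taken
# from the -p argument in the ps output in this report (an
# assumption for other deployments).
kill_nfs() {
    pidfile="${1:-/var/lib/glusterd/nfs/run/nfs.pid}"
    if [ ! -f "$pidfile" ]; then
        echo "pid file $pidfile not found" >&2
        return 1
    fi
    pid=$(cat "$pidfile")
    if kill -9 "$pid" 2>/dev/null; then
        echo "killed gluster NFS pid $pid"
    else
        echo "no running process for pid $pid" >&2
        return 1
    fi
}
```

After running this on the storage node, `ps aux | grep glusterfs | grep nfs` should return nothing, which is the state in which the Services tab is expected to show NFS as DOWN.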
  
Actual results: The status still shows UP, even though the process has been killed and is no longer running.


Expected results: The status should change automatically to reflect the current state in the RHS node.


Additional info: Screenshot attached.

From the backend of RHS node1:

Here I'm killing the NFS process on server 10.70.36.53, which has PID 2054:

# ps aux |grep glusterfs |grep nfs
root      2054  0.2  0.4 326540 76012 ?        Ssl  00:39   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/59bcb3fbd57e1040f17d5dbc753880dc.socket
 
# kill -9 2054
# ps aux |grep glusterfs |grep nfs
#

Comment 1 Prasanth 2012-12-10 07:53:08 UTC
Created attachment 660618 [details]
NFS_status_after_killing

Comment 3 Kanagaraj 2012-12-18 07:41:52 UTC
Does the status change if you go to some other tab (Hosts or Volumes) and come back to the Services tab?

Can you please provide engine and vdsm logs?

Comment 4 Kanagaraj 2013-01-03 09:55:15 UTC
It is working fine in my setup. NFS status is DOWN after killing the nfs process in the storage node. Please attach the relevant log files if this is not working.

Comment 5 Kanagaraj 2013-02-20 13:32:10 UTC
Please close this bug if the issue no longer exists.

Comment 6 Prasanth 2013-02-27 11:45:16 UTC
Looks like this got fixed in the latest QA build, or something changed after rhsc-2.1-qa18.el6. I can now see that the actual status of the NFS and SHD services is reflected in the UI as well. But I'm sure this was not working in rhsc-2.1-qa18.el6 when I reported the issue.

You can close this bug.