Bug 885597 - [RHS-C]: Services tab in Cluster is not showing the NFS status as DOWN when the service is actually down
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: 2.0
Hardware: All
OS: All
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Kanagaraj
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-10 07:52 UTC by Prasanth
Modified: 2016-04-18 10:05 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-03-06 09:31:00 UTC
Embargoed:


Attachments (Terms of Use)
NFS_status_before_killing (42.59 KB, image/jpeg)
2012-12-10 07:52 UTC, Prasanth
NFS_status_after_killing (37.83 KB, image/jpeg)
2012-12-10 07:53 UTC, Prasanth

Description Prasanth 2012-12-10 07:52:15 UTC
Created attachment 660617 [details]
NFS_status_before_killing

Description of problem:

Services tab in Cluster is not showing the NFS status as DOWN when the service is down.

Version-Release number of selected component (if applicable): rhsc-2.1-qa18.el6ev.noarch


How reproducible: Always


Steps to Reproduce:
1. Select a cluster and click on the "Services" sub-tab
2. See the Status of any NFS service (it should be in the UP state) and note it down
3. Now go to the corresponding back-end RHS node and kill the NFS service process.
4. Ideally, the status should now be reflected in the UI and automatically move to the DOWN state.
  
Actual results: The status still shows as UP, even though the process has been killed and is no longer running.


Expected results: The status should change automatically to reflect the current state in the RHS node.


Additional info: Screenshot attached.

From the backend of RHS node1:

Here I'm killing the NFS process on server 10.70.36.53, which has PID 2054:

# ps aux |grep glusterfs |grep nfs
root      2054  0.2  0.4 326540 76012 ?        Ssl  00:39   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/59bcb3fbd57e1040f17d5dbc753880dc.socket
 
# kill -9 2054
# ps aux |grep glusterfs |grep nfs
#
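The liveness check shown above can be scripted against the pid file that glusterd writes (the `-p /var/lib/glusterd/nfs/run/nfs.pid` argument visible in the ps output). The helper below is only a sketch for verifying the backend state by hand; it is not part of gluster or RHS-C, and the function name is illustrative:

```shell
# Sketch: check whether the glusterfs NFS server is alive via its pid file.
# The pid-file path comes from the -p argument seen in the ps output above.
nfs_alive() {
    pidfile=$1
    # No readable pid file usually means the service was never started.
    [ -r "$pidfile" ] || { echo "DOWN (no pid file)"; return 1; }
    pid=$(cat "$pidfile")
    # kill -0 sends no signal; it only tests whether the pid still exists.
    if kill -0 "$pid" 2>/dev/null; then
        echo "UP (pid $pid)"
    else
        echo "DOWN (stale pid $pid)"
        return 1
    fi
}
```

On a storage node this would be invoked as `nfs_alive /var/lib/glusterd/nfs/run/nfs.pid`; `gluster volume status` should report the same state in its "NFS Server" / Online column, which is what the Services tab is expected to mirror.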

Comment 1 Prasanth 2012-12-10 07:53:08 UTC
Created attachment 660618 [details]
NFS_status_after_killing

Comment 3 Kanagaraj 2012-12-18 07:41:52 UTC
Does the status change if you go to some other tab (Hosts or Volumes) and come back to the Services tab?

Can you please provide engine and vdsm logs?

Comment 4 Kanagaraj 2013-01-03 09:55:15 UTC
It is working fine in my setup. NFS status is DOWN after killing the nfs process in the storage node. Please attach the relevant log files if this is not working.

Comment 5 Kanagaraj 2013-02-20 13:32:10 UTC
Please close this bug if the issue no longer exists.

Comment 6 Prasanth 2013-02-27 11:45:16 UTC
Looks like this got fixed in the latest QA build, or something changed after rhsc-2.1-qa18.el6. Now I can see that the actual status of the NFS and SHD services is reflected in the UI as well. But I'm sure this was not working in rhsc-2.1-qa18.el6 when I reported this issue.

You can close this bug.

