Bug 1116336 - nfs-ganesha: nfs-ganesha process does not get killed with the nfs-ganesha.enable off option
Summary: nfs-ganesha: nfs-ganesha process does not get killed with the nfs-ganesha.enable off option
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1087818
 
Reported: 2014-07-04 08:57 UTC by Saurabh
Modified: 2016-01-27 07:19 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
The nfs-ganesha process remains active after setting nfs-ganesha.enable to off, because executing the kill -s TERM command does not kill nfs-ganesha. Workaround (if any): Run kill -9 on the process ID of the ganesha.nfsd process, and then use the CLI options to export new entries.
Clone Of:
Environment:
Last Closed: 2016-01-27 07:19:51 UTC
Embargoed:
mmadhusu: needinfo+



Description Saurabh 2014-07-04 08:57:37 UTC
Description of problem:

I tried to bring the nfs-ganesha process down by setting nfs-ganesha.enable to off, but the process did not go down; it was still displayed in the "ps" output.

The after-effect is that even if I re-enable nfs-ganesha for the volume, the showmount command does not display the exported volumes, although showmount is supposed to list the exported ones.
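
For clarity, here is a minimal sketch of the sequence described above (the volume name dist-rep1 is taken from the volume info below; the gluster volume set syntax for toggling the option is an assumption based on the option names shown there):

# Disable the ganesha export for the volume; this is expected to stop ganesha.nfsd
gluster volume set dist-rep1 nfs-ganesha.enable off

# Check whether the ganesha.nfsd process actually exited
pgrep -l ganesha.nfsd

# Re-enable the export and check what showmount reports
gluster volume set dist-rep1 nfs-ganesha.enable on
showmount -e localhost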

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.22-1.el6rhs.x86_64
nfs-ganesha-2.1.0.2-4.el6rhs.x86_64

How reproducible:
happens intermittently

Actual results:
After the operations described above, here is the result:
[root@nfs1 ~]# showmount -e localhost
Export list for localhost:
/ (everyone)
[root@nfs1 ~]# ps -eaf | grep nfs
root      2419     1  2 Jul03 ?        00:29:39 /usr/bin/ganesha.nfsd -f /var/lib/glusterfs-ganesha/nfs-ganesha.conf -L /tmp/ganesha.log -N NIV_EVENT -d
root     18802  2083  0 03:22 pts/0    00:00:00 grep nfs
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# gluster volume info dist-rep1
 
Volume Name: dist-rep1
Type: Distributed-Replicate
Volume ID: d0cc61c1-806d-42b7-8cc2-39559d6f187e
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r11
Brick2: 10.70.37.215:/bricks/d1r21
Brick3: 10.70.37.44:/bricks/d2r11
Brick4: 10.70.37.201:/bricks/dr2r21
Brick5: 10.70.37.62:/bricks/d3r11
Brick6: 10.70.37.215:/bricks/d3r21
Brick7: 10.70.37.44:/bricks/d4r11
Brick8: 10.70.37.201:/bricks/dr4r21
Brick9: 10.70.37.62:/bricks/d5r11
Brick10: 10.70.37.215:/bricks/d5r21
Brick11: 10.70.37.44:/bricks/d6r11
Brick12: 10.70.37.201:/bricks/dr6r21
Options Reconfigured:
cluster.self-heal-daemon: on
cluster.data-self-heal: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
performance.readdir-ahead: on
nfs-ganesha.host: 10.70.37.62
nfs-ganesha.enable: on
nfs.disable: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable


Expected results:
showmount should display dist-rep1 as an exported volume

Additional info:

Comment 2 Soumya Koduri 2014-07-07 12:16:41 UTC
This has been traced to an issue with the 'pkill' command not killing ganesha at times.

This is a very intermittent issue and not a blocker for Denali. This will be investigated and fixed in the next release.
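
For reference, a minimal sketch of the manual workaround noted in the doc text, assuming ganesha.nfsd is the process name as in the ps output above:

# SIGTERM is not always honored, so force-kill the ganesha.nfsd process
kill -9 $(pgrep ganesha.nfsd)

# Then re-export the volume through the CLI
gluster volume set dist-rep1 nfs-ganesha.enable on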

Comment 3 Shalaka 2014-09-20 10:08:21 UTC
Please review and sign off the edited doc text.

Comment 5 Jiffin 2016-01-27 07:19:51 UTC
The nfs-ganesha.enable option is deprecated.

