Bug 1396786 - [Ganesha] : destroy_fsals CRIT messages in Ganesha logs [NEEDINFO]
Summary: [Ganesha] : destroy_fsals CRIT messages in Ganesha logs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Kaleb KEITHLEY
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-20 06:36 UTC by Ambarish
Modified: 2018-11-19 06:50 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 06:50:51 UTC
Target Upstream Version:
kkeithle: needinfo? (asoman)



Description Ambarish 2016-11-20 06:36:40 UTC
Description of problem:
-----------------------

4-node Ganesha cluster. Mounted a 2x2 volume on 4 clients via NFSv3.

Ran smallfile creates.

Seeing these messages in ganesha.log:

18/11/2016 10:47:59 : epoch d2110000 : gqas005.sbu.lab.eng.bos.redhat.com : ganesha.nfsd-16239[Admin] destroy_fsals :FSAL :CRIT :Extra references (1) hanging around to FSAL PSEUDO

18/11/2016 10:47:59 : epoch d2110000 : gqas005.sbu.lab.eng.bos.redhat.com : ganesha.nfsd-16239[Admin] destroy_fsals :FSAL :CRIT :Extra references (3) hanging around to FSAL GLUSTER

18/11/2016 10:47:59 : epoch d2110000 : gqas005.sbu.lab.eng.bos.redhat.com : ganesha.nfsd-16239[Admin] destroy_fsals :FSAL :CRIT :Extra references (3) hanging around to FSAL GLUSTER

 
Version-Release number of selected component (if applicable):
-------------------------------------------------------------

glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-2.4.1-1.el7rhgs.x86_64


How reproducible:
----------------

Consistently reproducible, every way I've tried.

Steps to Reproduce:
-------------------

1. Create a new 2x2 (distributed-replicate) volume and mount it via NFSv3 on 4 clients (see the illustrative command sketch below the steps).

2. Run smallfile creates in a distributed, multithreaded way.

3. Check Ganesha logs.
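
As a rough illustration of these steps (hostnames, brick paths, the mount point and the smallfile invocation are assumptions based on the volume info under "Additional info", not the exact commands used):

# on a server: create and start a 2x2 distributed-replicate volume, then export it via Ganesha
gluster volume create testvol replica 2 \
    gqas013.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0 \
    gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1 \
    gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2 \
    gqas011.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
gluster volume start testvol
gluster volume set testvol ganesha.enable on

# on each of the 4 clients: mount over NFSv3 and run smallfile creates
mount -t nfs -o vers=3 <server-or-VIP>:/testvol /mnt/testvol
python smallfile_cli.py --operation create --threads 8 --files 10000 --top /mnt/testvol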

Actual results:
---------------

destroy_fsals :FSAL :CRIT messages reporting extra references to FSAL PSEUDO and FSAL GLUSTER appear in ganesha.log (see Description).

Expected results:
-----------------

Need confirmation from Dev on whether this is expected.

Additional info:
----------------


Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 865c5329-7fa5-4a10-888b-671902b0bca6
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas013.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas011.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.stat-prefetch: off
server.allow-insecure: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable

Comment 3 Jiffin 2016-11-20 07:11:21 UTC
destroy_fsal should be called during a ganesha stop or when unexporting the volume. Did you perform any operations like that?
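
For reference, the two operations mentioned above would normally be something like the following on an RHGS/Ganesha setup (illustrative commands, not taken from the reporter's session):

# stop the ganesha service on a node
systemctl stop nfs-ganesha
# or unexport the volume from ganesha
gluster volume set testvol ganesha.enable off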

Comment 4 Ambarish 2016-11-20 07:55:53 UTC
I never restarted the ganesha service during my tests, nor did I restart the volume.

Comment 5 Ambarish 2016-11-20 07:59:17 UTC
Jiffin,

Sorry, I take that back.
I did restart ganesha services after making changes to ganesha-<volname>.conf.
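
For illustration only (the exact commands used are not recorded in this bug), such a restart would typically be done via the systemd unit on each node:

systemctl restart nfs-ganesha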

Comment 9 Kaleb KEITHLEY 2017-08-23 12:35:21 UTC
Please see if you can reproduce this in rhgs-3.3.0.

