Description of problem:
-----------------------
4-node Ganesha cluster. Mounted a 2*2 volume on 4 clients via NFSv3. Ran smallfile creates.

Seeing these messages in ganesha.log:

18/11/2016 10:47:59 : epoch d2110000 : gqas005.sbu.lab.eng.bos.redhat.com : ganesha.nfsd-16239[Admin] destroy_fsals :FSAL :CRIT :Extra references (1) hanging around to FSAL PSEUDO
18/11/2016 10:47:59 : epoch d2110000 : gqas005.sbu.lab.eng.bos.redhat.com : ganesha.nfsd-16239[Admin] destroy_fsals :FSAL :CRIT :Extra references (3) hanging around to FSAL GLUSTER
18/11/2016 10:47:59 : epoch d2110000 : gqas005.sbu.lab.eng.bos.redhat.com : ganesha.nfsd-16239[Admin] destroy_fsals :FSAL :CRIT :Extra references (3) hanging around to FSAL GLUSTER

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-2.4.1-1.el7rhgs.x86_64

How reproducible:
-----------------
Every which way I try.

Steps to Reproduce:
-------------------
1. Create a new 2*2 volume and mount it via NFSv3 on 4 clients.
2. Run smallfile creates in a distributed, multithreaded way (see the sketch after this report).
3. Check the Ganesha logs.

Actual results:
---------------
"Extra references ... hanging around" CRIT messages from destroy_fsals in ganesha.log.

Expected results:
-----------------
Need confirmation from Dev whether this is expected.

Additional info:
----------------
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 865c5329-7fa5-4a10-888b-671902b0bca6
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas013.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas011.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.stat-prefetch: off
server.allow-insecure: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
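For reference, a rough sketch of the reproduction, assuming the standard gluster CLI and the smallfile benchmark; the server/client names, mount point, VIP placeholder and smallfile options below are illustrative, not the exact ones used in this run:

  # On a server node: create, start and export a 2x2 distributed-replicate volume (illustrative brick paths)
  gluster volume create testvol replica 2 \
      server1:/bricks/testvol_brick0 server2:/bricks/testvol_brick1 \
      server3:/bricks/testvol_brick2 server4:/bricks/testvol_brick3
  gluster volume start testvol
  gluster volume set testvol ganesha.enable on

  # On each of the 4 clients: mount the Ganesha export over NFSv3
  mount -t nfs -o vers=3 <ganesha-vip>:/testvol /mnt/testvol

  # From one client: drive multithreaded smallfile creates across all 4 clients
  python smallfile_cli.py --operation create --threads 8 --files 10000 \
      --file-size 64 --top /mnt/testvol/smf \
      --host-set client1,client2,client3,client4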
destroy_fsal should be called only during a ganesha stop or when unexporting the volume. Do you perform any operations like that?
I never restarted the ganesha service during my tests, nor did I restart the volume.
Jiffin, sorry, I take that back. I did restart the ganesha services after making changes to ganesha-<volname>.conf.
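For context, the sequence that would exercise that shutdown path looks roughly like this (a sketch based on the comments above; the editor and the unexport step are illustrative, not necessarily what was run here):

  # Edit the per-volume export config (ganesha-<volname>.conf in this setup)
  vi ganesha-<volname>.conf

  # Restarting the service tears down the exports; per the previous comment,
  # destroy_fsals runs during this shutdown and logs "Extra references ... hanging around"
  # when FSAL reference counts are still nonzero.
  systemctl restart nfs-ganesha

  # Unexporting a single volume is the other operation that releases its FSAL_GLUSTER handles:
  gluster volume set testvol ganesha.enable off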
Please see if you can reproduce this on rhgs-3.3.0.