Bug 1476563 - [Stress] : Ganesha v4 mounts timed out during MTSH
Status: ON_QA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: 3.3
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Kaleb KEITHLEY
QA Contact: Manisha Saini
Depends On:
Blocks: 1503134
Reported: 2017-07-30 07:05 EDT by Ambarish
Modified: 2018-04-19 03:35 EDT (History)
13 users

See Also:
Fixed In Version: nfs-ganesha-2.5.4-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ambarish 2017-07-30 07:05:26 EDT
Description of problem:
-----------------------

4-node cluster, with MTSH in progress and continuous I/O from 3 mounts.


The v4 mount timed out on one of my clients:

[root@gqac007 /]# mount -t nfs -o vers=4.0 192.168.97.161:/testvol /gluster-mount/ -v
mount.nfs: timeout set for Sun Jul 30 06:36:57 2017
mount.nfs: trying text-based options 'vers=4.0,addr=192.168.97.161,clientaddr=192.168.97.147'
mount.nfs: mount(2): Connection timed out
mount.nfs: Connection timed out
[root@gqac007 /]# 

v3 succeeded, though:

[root@gqac007 /]# mount -t nfs -o vers=3 192.168.97.161:/testvol /gluster-mount/ -v
mount.nfs: timeout set for Sun Jul 30 06:59:38 2017
mount.nfs: trying text-based options 'vers=3,addr=192.168.97.161'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.97.161 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.97.161 prog 100005 vers 3 prot UDP port 20048
mount.nfs: portmap query retrying: RPC: Timed out
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 192.168.97.161 prog 100005 vers 3 prot TCP port 20048
[root@gqac007 /]# 


Mounts from other servers succeeded as well:

[root@gqac007 /]# mount -t nfs -o vers=4.0 192.168.97.162:/testvol /gluster-mount/ -v
mount.nfs: timeout set for Sun Jul 30 07:02:06 2017
mount.nfs: trying text-based options 'vers=4.0,addr=192.168.97.162,clientaddr=192.168.97.147'
[root@gqac007 /]# 
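
When a v4 mount on one address times out while v3 and other servers work, a first check is whether anything is accepting connections on the NFS TCP port (2049) of the failing address at all. A minimal sketch (the check_nfs_port helper is illustrative, not from this report; 192.168.97.161 is the failing address from the logs above):

```shell
# Illustrative helper: test whether a TCP port accepts connections,
# using bash's /dev/tcp redirection with a short timeout.
check_nfs_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} open"
    else
        echo "${host}:${port} closed or not responding"
    fi
}

# The failing address from this report; adjust for your cluster.
check_nfs_port 192.168.97.161 2049
```

If the port is open but the v4 mount still times out, the problem is likely inside ganesha's request processing rather than in networking; rpcinfo -p 192.168.97.161 can additionally confirm which RPC services are registered.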


Version-Release number of selected component (if applicable):
---------------------------------------------------------------

nfs-ganesha-gluster-2.4.4-16.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-36.el7rhgs.x86_64



How reproducible:
----------------

1/1


Additional info:
----------------
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 41c5aa32-ec60-4591-ae6d-f93a0b13b47c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas013.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas008.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
cluster.shd-wait-qlength: 655536
cluster.shd-max-threads: 64
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
ganesha.enable: on
features.cache-invalidation: on
server.allow-insecure: on
performance.stat-prefetch: off
transport.address-family: inet
nfs.disable: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
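
For reference, options like the ones reconfigured above are applied per-volume with the gluster CLI; a sketch against a live cluster (volume name and values taken from the listing above):

```
# Reapply two of the reconfigured options
gluster volume set testvol cluster.shd-max-threads 64
gluster volume set testvol client.event-threads 4

# Verify a single option's effective value
gluster volume get testvol cluster.lookup-optimize
```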
Comment 4 Ambarish 2017-07-30 07:17:28 EDT
Each time I try to mount, I see this in messages:

Jul 30 07:16:53 gqas013 lrmd[19750]:  notice: gqas013.sbu.lab.eng.bos.redhat.com-nfs_unblock_monitor_10000:28728:stderr [ 0+1 records in ]
Jul 30 07:16:53 gqas013 lrmd[19750]:  notice: gqas013.sbu.lab.eng.bos.redhat.com-nfs_unblock_monitor_10000:28728:stderr [ 0+1 records out ]
Jul 30 07:16:53 gqas013 lrmd[19750]:  notice: gqas013.sbu.lab.eng.bos.redhat.com-nfs_unblock_monitor_10000:28728:stderr [ 390 bytes (390 B) copied, 0.00407114 s, 95.8 kB/s ]
Comment 5 Ambarish 2017-07-30 08:50:29 EDT
I also have gluster v heal info running periodically on that server, to see when the heal completes.
Comment 9 Frank Filz 2017-07-31 16:52:00 EDT
What are the other threads doing?

I assume this is a deadlock; it may already be fixed by patches in 2.5.
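
One way to start answering "what are the other threads doing" is to look at the per-thread states of the ganesha.nfsd process in /proc. A minimal sketch (the thread_states helper is illustrative, not from this report; on the affected node the PID would come from pidof ganesha.nfsd):

```shell
# Illustrative helper: print "<tid> <state>" for every thread of a PID,
# read from /proc. Many threads stuck in state D (uninterruptible sleep)
# is a classic sign of the kind of deadlock suspected here.
thread_states() {
    local pid=$1
    local stat
    for stat in /proc/"$pid"/task/*/stat; do
        # /proc/<tid>/stat fields: tid (comm) state ...
        awk '{ print $1, $3 }' "$stat"
    done
}

# Demo against the current shell; on the server it would be:
#   thread_states "$(pidof ganesha.nfsd)"
thread_states "$$"
```

For full stacks, attaching with gdb -p on the ganesha.nfsd PID and running "thread apply all bt" shows where each thread is blocked.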
Comment 10 Frank Filz 2017-07-31 16:55:08 EDT
Are both this bug and https://bugzilla.redhat.com/show_bug.cgi?id=1476559 occurring at the same time? If so, I think it's one deadlock bug...
Comment 11 Jiffin 2017-08-02 01:17:10 EDT
(In reply to Frank Filz from comment #10)
> Are both this bug and https://bugzilla.redhat.com/show_bug.cgi?id=1476559
> occurring at the same time? If so, I think it's one deadlock bug...

Yes, both bugs are occurring on the same server, for two different clients.
Comment 16 Kaleb KEITHLEY 2017-10-05 07:25:57 EDT
POST with rebase to nfs-ganesha-2.5.x
