Bug 1330218 - Shutting down the I/O-serving node takes ~9 minutes for I/O to resume from the failed-over node in heterogeneous client scenarios [NEEDINFO]
Status: NEW
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: 3.1
Hardware/OS: x86_64 Linux
Priority: unspecified  Severity: urgent
Assigned To: Kaleb KEITHLEY
QA Contact: Manisha Saini
Keywords: Reopened, Triaged, ZStream
Depends On: 1278336 1302545 1303037 1354439 1363722
Blocks: 1351530
Reported: 2016-04-25 12:20 EDT by Shashank Raj
Modified: 2018-10-18 13:34 EDT
CC: 12 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
If a volume is accessed by heterogeneous clients (that is, both NFSv3 and NFSv4 clients), NFSv4 clients take longer to recover after a virtual-IP failover caused by a node shutdown. Workaround: Use different VIPs for NFSv3 and NFSv4 access.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-16 14:18:25 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Flags: jthottan: needinfo? (msaini)


Attachments: None
Description Shashank Raj 2016-04-25 12:20:23 EDT
Description of problem:

Shutting down the I/O-serving node takes 15-20 minutes for I/O to resume from the failed-over node.

Version-Release number of selected component (if applicable):

ganesha-2.3.1-4

How reproducible:

Always

Steps to Reproduce:

1. Create a 4-node cluster and configure ganesha on it.
2. Create a distributed-replicated 6x2 volume and mount it with NFS vers=3 on one client and vers=4 on another.
3. Start creating I/O (100 KB files in my case) from both mount points (example commands are sketched below).
4. Shut down the node that is serving the I/O.
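
A minimal sketch of what steps 2-4 might look like on the command line; the VIP (10.70.44.154) and volume name (ganeshaVol1) follow the ones used elsewhere in this bug, while the mount points and the dd loop are illustrative choices, not the exact commands used:

    # Step 2 (client side): mount the same volume through the ganesha VIP,
    # NFSv3 on one client and NFSv4 on the other
    mkdir -p /mnt/ganesha-v3 /mnt/ganesha-v4
    mount -t nfs -o vers=3 10.70.44.154:/ganeshaVol1 /mnt/ganesha-v3   # client A
    mount -t nfs -o vers=4 10.70.44.154:/ganeshaVol1 /mnt/ganesha-v4   # client B

    # Step 3: keep writing ~100 KB files from the mount point
    # (run the same loop against /mnt/ganesha-v4 on the other client)
    i=0; while true; do
        dd if=/dev/zero of=/mnt/ganesha-v3/file_$i bs=100K count=1 2>/dev/null
        i=$((i+1))
    done

    # Step 4: on each server, check whether it currently hosts the VIP,
    # then shut that node down
    ip addr | grep 10.70.44.154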

I performed the above scenario three times; the observations are as below:

1st attempt:

with vers 4, IO stopped and resumed after ~17 mins.
with vers 3, IO continued uninterrupted.

2nd attempt:

with vers 4, IO stopped during grace period and started after that.
with vers 3, IO stopped and started after ~15 mins.

3rd attempt:

with vers 4, IO stopped and resumed after ~20 mins.
with vers 3, IO continued uninterrupted.


Actual results:

Shutting down the I/O-serving node takes 15-20 minutes for I/O to resume from the failed-over node.

Expected results:

IO should resume as soon as the grace period finishes.

Additional info:
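For background on the expected behaviour above: NFSv4 is stateful, so after a virtual-IP failover the surviving ganesha node enforces a grace period in which clients reclaim their opens and locks, and the expectation here is that I/O resumes as soon as that grace period ends. One quick way to check whether a non-default grace period is configured on the servers is sketched below; the config path is the usual NFS-Ganesha default, and the block/parameter names mentioned in the comment are assumptions that can vary by version:

    # look for an NFSv4 block (e.g. Grace_Period / Lease_Lifetime, if set)
    # in the ganesha configuration on each cluster node
    grep -i -A 5 'NFSv4' /etc/ganesha/ganesha.conf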
Comment 2 Niels de Vos 2016-06-20 08:31:57 EDT
This is most likely the same as bug 1278336.
Comment 9 Manisha Saini 2016-11-28 04:38:46 EST
With glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64

While rebooting the I/O-serving node, it takes around 9 minutes for IO to resume from the failover node in the case of an NFSv4 mount.


1. Create a 4-node cluster and configure ganesha on it.
2. Create a distributed-replicated 6x2 volume and mount it with NFS vers=3 on one client and vers=4 on another.
3. Start creating I/O (100 KB files in my case) from both mount points.
4. Shut down the node that is serving the I/O.

I performed the above scenario three times; the observations are as below:

1st attempt:

with vers 4, IO stopped and resumed after ~9 minutes.
with vers 3, IO stopped for ~1 minute and resumed within the grace period itself.

2nd attempt:

with vers 4, IO stopped and resumed after ~9 minutes.
with vers 3, IO stopped for ~1 minute and resumed within the grace period itself.

3rd attempt:

with vers 4, IO stopped and resumed after ~8 minutes.
with vers 3, IO stopped for ~1 minute and resumed within the grace period itself.



I tried swapping the clients used for NFSv3 and NFSv4; the observation was the same (with NFSv4, it takes around 9 minutes for IO to resume on both clients).


Expected Result:
IO should resume as soon as the grace period finishes.
Comment 12 Manisha Saini 2016-11-29 04:45:02 EST
Soumya,


1. Tried mounting the volume on a single client with NFSv4:
        IO resumed after ~2 minutes

2. Tried setting a timeout while mounting the volume on a single client with NFSv4:

        mount -t nfs -o vers=4,timeo=200 10.70.44.154:/ganeshaVol1 /mnt/ganesha1/
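        # note: timeo is in tenths of a second, so timeo=200 sets a ~20 s retransmit timeout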
        IO resumed after ~2 minutes
Comment 13 Manisha Saini 2016-11-29 09:23:31 EST
Based on comment 9, since it takes around 9 minutes for IO to resume from the failover node in the case of an NFSv4 mount, I am reopening this bug.
Comment 14 Soumya Koduri 2016-11-29 10:15:58 EST
Thanks for retesting Manisha.

Frank/Dan/Matt,

Do you have any comments with respect to the updates in comment #11 and comment #12?
Comment 15 Daniel Gryniewicz 2016-11-29 14:30:53 EST
Without some form of logs from the failover time, I'm not sure I can say anything.
Comment 18 Bhavana 2017-03-13 21:14:26 EDT
Hi Soumya,

I have edited the doc text for the release notes. Can you please take a look at it and let me know if it needs anything more?
Comment 19 Soumya Koduri 2017-03-14 01:01:12 EDT
Hi Bhavana,

This bug was FAILED_QA as there was one outstanding issue. I changed the doc_text to reflect that. Please check the same.

<<<<
If a volume is being accessed by heterogeneous clients (that is, both NFSv3 and NFSv4 clients), it was observed that NFSv4 clients take longer to recover after a virtual-IP failover caused by a node shutdown.

Workaround:
To avoid this, use different VIPs for NFSv3 and NFSv4 access.
>>>
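
For illustration only (not part of the doc text above): assuming the HA cluster exposes at least two virtual IPs, with 10.70.44.155 as a purely hypothetical second VIP alongside 10.70.44.154, the workaround amounts to pinning each protocol to its own VIP, e.g.:

    # NFSv3 clients mount through one VIP ...
    mount -t nfs -o vers=3 10.70.44.154:/ganeshaVol1 /mnt/ganesha-v3
    # ... while NFSv4 clients mount through a different VIP
    mount -t nfs -o vers=4 10.70.44.155:/ganeshaVol1 /mnt/ganesha-v4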
Comment 20 Bhavana 2017-03-14 06:12:44 EDT
Thanks Soumya.

Added the doc text for the release notes.
