Back to bug 1257548
| Who | When | What | Removed | Added |
|---|---|---|---|---|
| Red Hat Bugzilla Rules Engine | 2015-08-27 10:12:57 UTC | Keywords | ZStream | |
| Soumya Koduri | 2015-08-27 10:15:25 UTC | CC | | jthottan, kkeithle, mmadhusu, ndevos, saujain, skoduri |
| Soumya Koduri | 2015-08-27 10:35:29 UTC | Blocks | | 1255689 |
| John Skeoch | 2015-09-01 02:56:23 UTC | CC | mmadhusu | vagarwal |
| Vivek Agarwal | 2015-09-15 12:12:17 UTC | Doc Type | Bug Fix | Known Issue |
| Red Hat Bugzilla | 2015-09-15 12:12:17 UTC | Doc Type | Known Issue | Bug Fix |
| Soumya Koduri | 2015-09-15 12:56:17 UTC | Doc Text | | Cause: nfs-ganesha service monitor script runs periodically every 10sec where as gluster ping timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42sec by default. Consequence: Since the locks may not have got flushed by the GlusterFS server process, after IP failover, NFS clients lock state reclaim may fail. Workaround (if any): Its recommended to have nfs-ganesha service monitor period interval at least as twice as the gluster server ping timout. Hence either decrease the network ping timeout using the following command #gluster v set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: #pcs resource op remove nfs-mon monitor #pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> Result: This shall ensure that in case of nfs-ganesha service going down, IP gets failed over to new node only after all the locks have been flushed taken by the old instance on the glusterFS brick processes. |
| | | Doc Type | Bug Fix | Known Issue |
| Niels de Vos | 2015-09-15 13:25:18 UTC | Doc Text | Cause: nfs-ganesha service monitor script runs periodically every 10sec where as gluster ping timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42sec by default. Consequence: Since the locks may not have got flushed by the GlusterFS server process, after IP failover, NFS clients lock state reclaim may fail. Workaround (if any): Its recommended to have nfs-ganesha service monitor period interval at least as twice as the gluster server ping timout. Hence either decrease the network ping timeout using the following command #gluster v set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: #pcs resource op remove nfs-mon monitor #pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> Result: This shall ensure that in case of nfs-ganesha service going down, IP gets failed over to new node only after all the locks have been flushed taken by the old instance on the glusterFS brick processes. | Cause: nfs-ganesha service monitor script which triggers IP failover runs periodically every 10sec. The ping-timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42sec by default. Consequence: Since the locks may not have got flushed by the GlusterFS server process, after IP failover, reclaiming the lock state by NFS clients may fail. Workaround (if any): It is recommended to have the nfs-ganesha service monitor period interval (default 10sec) at least as twice as the Gluster server ping-timout (default 42sec). Hence either decrease the network ping-timeout using the following command # gluster volume set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: # pcs resource op remove nfs-mon monitor # pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> Result: This shall ensure that in case of nfs-ganesha service going down, IP gets failed over to new node only after all the locks have been flushed taken by the old instance on the GlusterFS brick processes. |
| | | Flags | | needinfo?(skoduri) |
| Soumya Koduri | 2015-09-15 13:54:57 UTC | Flags | needinfo?(skoduri) | |
| Anjana Suparna Sriram | 2015-09-18 07:45:08 UTC | CC | | asriram |
| | | Doc Text | Cause: nfs-ganesha service monitor script which triggers IP failover runs periodically every 10sec. The ping-timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42sec by default. Consequence: Since the locks may not have got flushed by the GlusterFS server process, after IP failover, reclaiming the lock state by NFS clients may fail. Workaround (if any): It is recommended to have the nfs-ganesha service monitor period interval (default 10sec) at least as twice as the Gluster server ping-timout (default 42sec). Hence either decrease the network ping-timeout using the following command # gluster volume set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: # pcs resource op remove nfs-mon monitor # pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> Result: This shall ensure that in case of nfs-ganesha service going down, IP gets failed over to new node only after all the locks have been flushed taken by the old instance on the GlusterFS brick processes. | nfs-ganesha service monitor script which triggers IP failover runs periodically every 10sec. The ping-timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42sec by default. After an IP failover, some locks may get cleaned by the GlusterFS server process, hence reclaiming the lock state by NFS clients fails. Workaround (if any): It is recommended to set the nfs-ganesha service monitor period interval (default 10sec) at least as twice as the Gluster server ping-timout (default 42sec). Hence, either decrease the network ping-timeout using the following command: # gluster volume set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: # pcs resource op remove nfs-mon monitor # pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> |
| | | Flags | | needinfo?(skoduri) |
| Soumya Koduri | 2015-09-18 08:48:21 UTC | Flags | needinfo?(skoduri) | |
| Anjana Suparna Sriram | 2015-09-18 09:59:22 UTC | Doc Text | nfs-ganesha service monitor script which triggers IP failover runs periodically every 10sec. The ping-timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42sec by default. After an IP failover, some locks may get cleaned by the GlusterFS server process, hence reclaiming the lock state by NFS clients fails. Workaround (if any): It is recommended to set the nfs-ganesha service monitor period interval (default 10sec) at least as twice as the Gluster server ping-timout (default 42sec). Hence, either decrease the network ping-timeout using the following command: # gluster volume set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: # pcs resource op remove nfs-mon monitor # pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> | nfs-ganesha service monitor script which triggers IP failover runs periodically every 10 seconds. The ping-timeout of the GlusterFS server (after which the locks of the unreachable client gets flushed) is 42 seconds by default. After an IP failover, some locks may not get cleaned by the GlusterFS server process, hence reclaiming the lock state by NFS clients may fail Workaround (if any): It is recommended to set the nfs-ganesha service monitor period interval (default 10sec) at least as twice as the Gluster server ping-timout (default 42sec). Hence, either decrease the network ping-timeout using the following command: # gluster volume set <volname> network.ping-timeout <ping_timeout_value> or increase nfs-service monitor interval time using the following commands: # pcs resource op remove nfs-mon monitor # pcs resource op add nfs-mon monitor interval=<interval_period_value> timeout=<timeout_value> |
| John Skeoch | 2016-01-19 06:16:10 UTC | CC | saujain | mzywusko |
| Soumya Koduri | 2016-01-28 11:08:20 UTC | Sub Component | | nfs |
| | | Status | NEW | ASSIGNED |
| | | Component | nfs-ganesha | glusterfs |
| | | QA Contact | storage-qa-internal | saujain |
| John Skeoch | 2016-02-18 00:09:09 UTC | CC | vagarwal | sankarshan |
| Kaleb KEITHLEY | 2016-06-15 13:37:03 UTC | Keywords | | RFE |
| | | Assignee | rhs-bugs | kkeithle |
| Niels de Vos | 2016-06-16 15:13:54 UTC | Keywords | | FutureFeature |
| Atin Mukherjee | 2016-08-03 05:05:58 UTC | Sub Component | nfs | |
| | | CC | | amukherj, rhs-bugs |
| | | Component | glusterfs | nfs-ganesha |
| | | QA Contact | saujain | storage-qa-internal |
| Soumya Koduri | 2017-08-23 13:29:19 UTC | Assignee | kkeithle | skoduri |
| PnT Account Manager | 2018-05-24 21:31:45 UTC | CC | mzywusko | |
| Pasi Karkkainen | 2018-11-19 10:06:42 UTC | CC | | pasik |
| Soumya Koduri | 2019-05-06 12:42:57 UTC | Keywords | | Triaged |
| Jiffin | 2019-05-20 12:40:29 UTC | Status | ASSIGNED | CLOSED |
| | | Resolution | --- | WONTFIX |
| | | Last Closed | | 2019-05-20 12:40:29 UTC |
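
The workaround recorded in the Doc Text entries above amounts to keeping the nfs-ganesha service monitor period at least twice the Gluster server ping-timeout, either by lowering network.ping-timeout or by lengthening the nfs-mon monitor operation. A minimal sketch of both options, assuming a volume named testvol; the 4-second ping-timeout and the 90-second/20-second monitor values are illustrative only and are not taken from the bug:

```sh
# Option A: lower the server-side ping-timeout so the default 10s monitor
# period is at least twice the ping-timeout (10s >= 2 * 4s).
gluster volume set testvol network.ping-timeout 4

# Option B: keep the default 42s ping-timeout and lengthen the nfs-mon
# monitor operation instead (90s >= 2 * 42s).
pcs resource op remove nfs-mon monitor
pcs resource op add nfs-mon monitor interval=90s timeout=20s
```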