Back to bug 1330365
| Who | When | What | Removed | Added |
|---|---|---|---|---|
| Red Hat Bugzilla Rules Engine | 2016-04-26 03:32:51 UTC | Keywords | ZStream | |
| Kaleb KEITHLEY | 2016-04-28 12:52:14 UTC | Status | NEW | ASSIGNED |
| | | Assignee | rhs-bugs | jthottan |
| Shashank Raj | 2016-04-29 07:42:05 UTC | Hardware | Unspecified | x86_64 |
| | | OS | Unspecified | Linux |
| | | Severity | unspecified | high |
| Jiffin | 2016-05-05 09:35:13 UTC | Blocks | 1333319 | |
| Jiffin | 2016-05-05 11:55:47 UTC | Status | ASSIGNED | POST |
| Rahul Hinduja | 2016-05-05 13:35:00 UTC | CC | rhinduja | |
| | | Blocks | 1311817 | |
| Jiffin | 2016-05-05 18:21:08 UTC | Blocks | 1333528 | |
| Jiffin | 2016-05-06 05:58:40 UTC | CC | asrivast | |
| | | Flags | needinfo?(asrivast) | |
| Alok | 2016-05-06 06:12:59 UTC | Flags | needinfo?(asrivast) | |
| Soumya Koduri | 2016-05-06 06:34:22 UTC | Status | POST | MODIFIED |
| Red Hat Bugzilla Rules Engine | 2016-05-06 06:34:27 UTC | Target Release | --- | RHGS 3.1.3 |
| Rahul Hinduja | 2016-05-09 15:01:43 UTC | QA Contact | storage-qa-internal | sraj |
| errata-xmlrpc | 2016-05-10 11:26:44 UTC | Status | MODIFIED | ON_QA |
| Milind Changire | 2016-05-10 12:04:35 UTC | Fixed In Version | glusterfs-3.7.9-4 | |
| Shashank Raj | 2016-05-12 08:59:29 UTC | Status | ON_QA | ASSIGNED |
| Rejy M Cyriac | 2016-05-12 16:07:23 UTC | CC | rcyriac | |
| | | Fixed In Version | glusterfs-3.7.9-4 | |
| Soumya Koduri | 2016-05-15 15:56:09 UTC | CC | sraj | |
| | | Flags | needinfo?(sraj) | |
| Soumya Koduri | 2016-05-16 08:15:11 UTC | Blocks | 1336331 | |
| Jiffin | 2016-05-17 13:14:04 UTC | Blocks | 1336798 | |
| Shashank Raj | 2016-05-18 14:57:34 UTC | Flags | needinfo?(sraj) | |
| Rejy M Cyriac | 2016-05-20 11:26:06 UTC | Target Release | RHGS 3.1.3 | --- |
| Rahul Hinduja | 2016-05-20 11:27:21 UTC | Blocks | 1311817 | 1311843 |
| Rejy M Cyriac | 2016-05-30 20:41:13 UTC | Doc Text | Cause: Consequence: Fix: Result: | |
| | | Doc Type | Bug Fix | Known Issue |
| Jiffin | 2016-06-12 18:41:46 UTC | Doc Text | Cause: Consequence: Fix: Result: | Cause: Rarely gluster v set <volume> ganesha.enable off fails with "Dynamic export addition/deletion failed ip address". This is due to the dbus message send to remove a export entry from ganesha process enters into hung state Consequence: Both volume information for ganesha.enale will be wrong failed node and volume won't be unexported properly but configuration file will be deleted Fix: This can be solved using following steps On the failed node <ip/hostname can be obtained from stdout> restart the nfs-ganesha process execute the following /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> gluster v set <volume> ganesha.enable off Result: |
| Jiffin | 2016-06-12 18:46:58 UTC | Doc Text | Cause: Rarely gluster v set <volume> ganesha.enable off fails with "Dynamic export addition/deletion failed ip address". This is due to the dbus message send to remove a export entry from ganesha process enters into hung state Consequence: Both volume information for ganesha.enale will be wrong failed node and volume won't be unexported properly but configuration file will be deleted Fix: This can be solved using following steps On the failed node <ip/hostname can be obtained from stdout> restart the nfs-ganesha process execute the following /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> gluster v set <volume> ganesha.enable off Result: | Cause: Rarely gluster v set <volume> ganesha.enable off fails with "Dynamic export addition/deletion failed ip address". This is due to the dbus message send to remove a export entry from ganesha process enters into hung state Consequence: Both volume information for ganesha.enale will be wrong failed node and volume won't be unexported properly but configuration file will be deleted Fix: This can be solved using following steps On the failed node <ip/hostname can be obtained from stdout> restart the nfs-ganesha process execute the following /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> gluster v set <volume> ganesha.enable off Result: It may fail on some node.So make sure the volume information and showmount list contains expected values in all the nodes in the cluster |
| Soumya Koduri | 2016-06-14 06:09:34 UTC | Doc Text | Cause: Rarely gluster v set <volume> ganesha.enable off fails with "Dynamic export addition/deletion failed ip address". This is due to the dbus message send to remove a export entry from ganesha process enters into hung state Consequence: Both volume information for ganesha.enale will be wrong failed node and volume won't be unexported properly but configuration file will be deleted Fix: This can be solved using following steps On the failed node <ip/hostname can be obtained from stdout> restart the nfs-ganesha process execute the following /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> gluster v set <volume> ganesha.enable off Result: It may fail on some node.So make sure the volume information and showmount list contains expected values in all the nodes in the cluster | Cause: Rarely the command 'gluster v set <volume> ganesha.enable off' fails with the error "Dynamic export addition/deletion failed". This happens when the dbus message sent to remove a export entry from ganesha process on any of the nodes enters into hung state. Consequence: The volume wouldn't have got unexported on that node. This leads to inconsistency across the cluster. Fix: This can be solved using following steps: First identify the node where unexport failed from the glusterd logs. On that failed node: 1) restart the nfs-ganesha process 2) execute the following /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> 3) /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> 4) gluster v set <volume> ganesha.enable off Result: The volume gets re-exported and unexported on that node where the command had initially failed. Please note that step (4) mentioned above may result in some errors if there were any nodes where the volume was successfully unexported at the beginning. So make sure the volume information and the output of the command "showmount -e localhost" contains expected values on all the nodes in the cluster. |
| Soumya Koduri | 2016-06-20 12:33:50 UTC | Blocks | 1247450 | |
| Bhavana | 2016-06-21 08:41:54 UTC | CC | bmohanra | |
| | | Doc Text | Cause: Rarely the command 'gluster v set <volume> ganesha.enable off' fails with the error "Dynamic export addition/deletion failed". This happens when the dbus message sent to remove a export entry from ganesha process on any of the nodes enters into hung state. Consequence: The volume wouldn't have got unexported on that node. This leads to inconsistency across the cluster. Fix: This can be solved using following steps: First identify the node where unexport failed from the glusterd logs. On that failed node: 1) restart the nfs-ganesha process 2) execute the following /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> 3) /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> 4) gluster v set <volume> ganesha.enable off Result: The volume gets re-exported and unexported on that node where the command had initially failed. Please note that step (4) mentioned above may result in some errors if there were any nodes where the volume was successfully unexported at the beginning. So make sure the volume information and the output of the command "showmount -e localhost" contains expected values on all the nodes in the cluster. | Sometimes the command 'gluster v set <volume> ganesha.enable off' fails with the error "Dynamic export addition/deletion failed". This happens when the dbus message is sent to remove a export entry from ganesha process and the message is timed out. Due to this, the volume is not unexported on that failed node which leads to inconsistency across the cluster. Workaround: Identify the node where the unexport failed from the glusterd logs. On that failed node execute the following steps: 1) Restart the nfs-ganesha process Red Hat Enterprise Linux 7: # systemctl restart nfs-ganesha Red Hat Enterprise Linux 6: # service nfs-ganesha restart 2) Execute the following command to re-export the volume: # /usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ <volname> # /usr/libexec/ganesha/dbus-send.sh /etc/ganesha on <volname> 3) Execute the following command to set ganesha.enable to off: # gluster v set <volname> ganesha.enable off |
| Shashank Raj | 2016-09-19 06:32:36 UTC | Priority | unspecified | high |
| Niels de Vos | 2016-09-29 13:01:44 UTC | Blocks | 1333319 | |
| | | Depends On | 1333319 | |
| Jiffin | 2016-09-29 13:05:55 UTC | Flags | needinfo?(sraj) | |
| Jiffin | 2016-09-29 13:06:11 UTC | Fixed In Version | 3.2.0 | |
| Jiffin | 2016-10-13 09:44:09 UTC | Keywords | Triaged | |
| Jiffin | 2016-10-14 05:21:01 UTC | Fixed In Version | 3.2.0 | |
| Shashank Raj | 2016-10-19 09:36:18 UTC | Flags | needinfo?(sraj) | |
| Jiffin | 2016-10-24 13:55:39 UTC | Flags | needinfo?(sraj) | |
| Shashank Raj | 2016-10-24 14:08:06 UTC | Status | ASSIGNED | CLOSED |
| | | Resolution | --- | WORKSFORME |
| | | Flags | needinfo?(sraj) | |
| | | Last Closed | 2016-10-24 10:08:06 UTC | |
| John Skeoch | 2016-11-08 03:53:34 UTC | CC | sashinde |
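The workaround recorded in the final Doc Text boils down to four commands run on the node where the unexport failed. The sketch below is a dry-run helper, not part of the bug record: the function name and sample volume name are illustrative, it only prints the commands (so it can be reviewed before anything touches a cluster), and it assumes the RHEL 7 `systemctl` form of the restart step.

```shell
#!/bin/sh
# Dry-run sketch of the workaround from the final Doc Text.
# ganesha_unexport_workaround is a hypothetical helper name; it echoes
# each command instead of executing it.
ganesha_unexport_workaround() {
    volname="$1"
    # 1) Restart the nfs-ganesha process on the failed node
    #    (RHEL 6 would use: service nfs-ganesha restart).
    echo "systemctl restart nfs-ganesha"
    # 2) Re-create the export config and re-export the volume over dbus.
    echo "/usr/libexec/ganesha/create-export-ganesha.sh /etc/ganesha/ $volname"
    echo "/usr/libexec/ganesha/dbus-send.sh /etc/ganesha on $volname"
    # 3) Retry the unexport.
    echo "gluster volume set $volname ganesha.enable off"
}

ganesha_unexport_workaround testvol
```

To apply it for real, drop the `echo` wrappers and run the commands directly on the failed node, then check `gluster volume info` and `showmount -e localhost` on every node in the cluster, as the Doc Text advises.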