Bug 1335090
Summary: | Shared volume doesn't get mounted on a few nodes after rebooting all nodes in the cluster. | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Shashank Raj <sraj> | |
Component: | glusterfs | Assignee: | Jiffin <jthottan> | |
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> | |
Severity: | high | Docs Contact: | ||
Priority: | unspecified | |||
Version: | rhgs-3.1 | CC: | amukherj, asengupt, asrivast, jthottan, kkeithle, msaini, mzywusko, ndevos, nlevinki, rcyriac, rhinduja, skoduri, vbellur | |
Target Milestone: | --- | |||
Target Release: | RHGS 3.3.0 | |||
Hardware: | x86_64 | |||
OS: | Linux | |||
Whiteboard: | ||||
Fixed In Version: | glusterfs-3.8.4-34 | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | ||
Clone Of: | ||||
: | 1452527 (view as bug list) | Environment: | ||
Last Closed: | 2017-09-21 04:28:23 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | 1452527 | |||
Bug Blocks: | 1417147, 1451981 |
Description
Shashank Raj
2016-05-11 10:45:12 UTC
sosreports can be found at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1335090

This is expected behaviour. We need to understand that the shared volume itself is hosted on these nodes, and all nodes mount it using one particular node. When all nodes are down, the shared storage volume is also essentially down. When the nodes come back up, none of them can connect to the shared storage until the node whose entry is listed in /etc/fstab is up and serving. That node itself will never connect to the shared storage on reboot, because by the time its /etc/fstab entry is replayed, the volume is not yet being served.

Why can't we close this bug then?

upstream patch: https://review.gluster.org/17339

For the above change to work, the following service needs to be enabled:

systemctl enable glusterfssharedstorage.service

Verified this bug on:

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-29.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-10.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.4.4-10.el7rhgs.x86_64
nfs-ganesha-2.4.4-10.el7rhgs.x86_64

Steps:
1. Create a 4-node cluster.
2. Run systemctl enable glusterfssharedstorage.service on all the nodes.
3. Reboot all the nodes.

Shared storage is mounted on all the nodes post reboot.

Will move this bug to verified once Jiffin opens the doc bug for enabling the shared storage service post ganesha setup creation.

(In reply to Manisha Saini from comment #13)
> Verified this bug on
>
> # rpm -qa | grep ganesha
> glusterfs-ganesha-3.8.4-29.el7rhgs.x86_64
> nfs-ganesha-gluster-2.4.4-10.el7rhgs.x86_64
> nfs-ganesha-debuginfo-2.4.4-10.el7rhgs.x86_64
> nfs-ganesha-2.4.4-10.el7rhgs.x86_64
>
> Steps:
> 1. Create a 4-node cluster.
> 2. Run systemctl enable glusterfssharedstorage.service on all the nodes.
> 3. Reboot all the nodes.
>
> Shared storage is mounted on all the nodes post reboot.
>
> Will move this bug to verified once Jiffin opens the doc bug
> for enabling the shared storage service post ganesha setup creation.

Thanks, Manisha.
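The workaround described above amounts to enabling one systemd unit on every node and confirming the mount after reboot. A minimal command sketch of those steps, assuming the default shared storage mount point /run/gluster/shared_storage (the path may differ on older releases, so verify against your own /etc/fstab):

```shell
# On every node in the cluster, enable the unit shipped with the fix
# so the shared volume is remounted once glusterd is serving it:
systemctl enable glusterfssharedstorage.service

# After rebooting, confirm the unit ran and the volume is mounted.
systemctl status glusterfssharedstorage.service

# The gluster_shared_storage volume should appear in the mount table;
# /run/gluster/shared_storage is an assumed default mount point here.
grep gluster_shared_storage /proc/mounts
df -h /run/gluster/shared_storage
```

These are administrative commands to be run on the cluster nodes themselves; they are shown here only to make the verification steps in the comments concrete.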
I have opened a doc bug for this issue: https://bugzilla.redhat.com/show_bug.cgi?id=1464342

Opened a bug to add this in Gdeploy as well while setting up the ganesha cluster: https://bugzilla.redhat.com/show_bug.cgi?id=1464375

Verified this bug on:

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-34.el7rhgs.x86_64
nfs-ganesha-2.4.4-16.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-16.el7rhgs.x86_64

Tested on a 6-node ganesha cluster. After rebooting all 6 nodes, shared storage is mounted on all the nodes.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774