Bug 1435170
| Summary: | Gluster-client no failover | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Attila Pinter <apinter.it> |
| Component: | glusterd | Assignee: | Atin Mukherjee <amukherj> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.10 | CC: | apinter.it, bugs, rtalur |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-08-09 07:47:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Attila Pinter
2017-03-23 10:20:23 UTC
Ok, so I made a little booboo mounting on the client side. The correct mount is:

`sudo mount -t glusterfs -o backupvolfile-server=gfs2,transport=tcp gfs1:/gfs /mnt/bitWafl/`

This works, and the client connects to the second server at gfs2. Now the question remains: what happens if I want to mount the volume during boot? By the look of it, the volfile is not being transferred properly. Interestingly enough, after I disabled server 2 (gfs2) and enabled all the rest, the connection still broke :/ What am I doing wrong? Or is this really a bug in 3.10?

Moved forward on the issue and started to play with heal. SHD is not running and is marked as not available, no idea why, and I got an error saying: "Not able to fetch volfile from glusterd."

The original topic, that there is no failover, is solved by mounting via the actual volfile in fstab. Mounting from only one node does transfer the volfile, but there is no failover once that server goes down. This issue, I think, is glusterfs related. As for the SHD failure: should I open a new ticket for that, or can it be handled here? I think it is related to the volfile issue, though.

Refer to https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/start/post/S29CTDBsetup.sh#L54 for an example of how to specify multiple volfile servers in fstab for boot-time mounts. Failover to a different volfile server should happen if the current volfile server goes down; the patch for that was merged before the 3.10 release: https://review.gluster.org/#/c/13002/

Regarding SHD, it could be related to the same issue. Please share the logs as attachments once again; the previous pastebin logs have been deleted.

Since the needinfo hasn't been addressed for more than a month now, closing this bug. Please reopen if the issue persists.
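For illustration, a minimal sketch of what such a boot-time fstab entry could look like, reusing the gfs1/gfs2 hostnames and /mnt/bitWafl mount point from this report. The `backup-volfile-servers` option (the plural form used in the S29CTDBsetup.sh script linked above) and the `_netdev` flag come from standard mount.glusterfs usage, not from this bug's attachments:

```
# /etc/fstab -- illustrative example based on the hostnames in this report.
# backup-volfile-servers lists fallback volfile servers (colon-separated if
# more than one), so the mount can still fetch the volfile if gfs1 is down.
# _netdev delays the mount until the network is up at boot.
gfs1:/gfs  /mnt/bitWafl  glusterfs  defaults,_netdev,backup-volfile-servers=gfs2,transport=tcp  0 0
```

An ad-hoc mount can achieve the same thing by passing the fallback server with `-o`, as the reporter's corrected mount command above does with the older singular `backupvolfile-server` spelling.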