Bug 1267752 - Unable to mount a replicated volume with 2 servers, one brick per server when only 1 server is powered on from having both off [NEEDINFO]
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: core
Version: 3.7.4
Hardware: x86_64 Linux
Priority: unspecified Severity: unspecified
Assigned To: Kaushal
: Triaged
Depends On:
Blocks:
Reported: 2015-09-30 16:51 EDT by Julien Langlois
Modified: 2017-03-08 06:03 EST (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Hypervisor: VirtualBox 5.0.4 Guest OS: Ubuntu 14.04 (Trusty) 64bit
Last Closed: 2017-03-08 06:03:12 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ndevos: needinfo? (anekkunt)


Attachments: None
Description Julien Langlois 2015-09-30 16:51:48 EDT
Description of problem:
I have 2 identical Ubuntu 14.04 x64 virtual machines running on a LAN. Both systems act as both server and client from the GlusterFS point of view.
The GlusterFS setup has 1 replicated volume with one brick per server. Both systems mount the volume.
When I shut down both systems and start only one back up, I am unable to mount the volume. The volume is not started.

The problem exists in versions >= 3.6.

Tested versions:
* 3.4.2 Official upstart OK
* 3.4.7 PPA  upstart OK
* 3.5.6 PPA  upstart OK
* 3.6.6 PPA  upstart NOK
* 3.7.4 PPA  upstart, initscript NOK

How reproducible: always


Steps to Reproduce:

1. Install server 1
    # add-apt-repository -y ppa:gluster/glusterfs-3.7
    # apt-get update -y
    # apt-get install -y glusterfs-server glusterfs-client
    # mkdir -p /mnt/{data/gluster,gluster-volume1}
    # gluster vol create vol1 node1:/mnt/data/gluster/vol1 force
    # gluster vol start vol1
    # echo "127.0.0.1:vol1 /mnt/gluster-volume1 glusterfs defaults,_netdev,noauto 0 0" >> /etc/fstab

2. Install server 2
    # add-apt-repository -y ppa:gluster/glusterfs-3.7
    # apt-get update -y
    # apt-get install -y glusterfs-server glusterfs-client
    # mkdir -p /mnt/{data/gluster,gluster-volume1}
    # gluster peer probe node1
    # gluster volume add-brick vol1 replica 2 node2:/mnt/data/gluster/vol1 force
    # echo "127.0.0.1:vol1 /mnt/gluster-volume1 glusterfs defaults,_netdev,noauto 0 0" >> /etc/fstab

3. Shutdown the 2 servers

4. Boot the server 1 only

5. Try to mount the volume:
    # mount /mnt/gluster-volume1


Actual results: the volume is not mounted, no glusterfsd process is running, and the volume is not started.


Expected results: the volume is mounted (works in 3.4 & 3.5)


Additional info:
Comment 1 SATHEESARAN 2015-09-30 21:50:45 EDT
Have you checked that glusterd is up and running after server 1 came back up?
Comment 2 Julien Langlois 2015-09-30 23:30:08 EDT
Yes, glusterd is up and running, but there is no glusterfsd. "gluster volume status" says that the volume is not started/active.
Comment 3 Anand Nekkunti 2015-10-01 04:52:39 EDT
(In reply to Julien Langlois from comment #2)
> Yes, Glusterd is up and running but there is no glusterfsd. "gluster volume
> status" says that the volume is not started/active.

Hi Julien Langlois
    Thanks for reporting the issue. I have tested it, and it is reproducible.
We will fix this issue in the next release.

Workaround :
    gluster vol start <VOL_Name> force
Comment 5 Anand Nekkunti 2015-10-05 03:15:25 EDT
This is a known issue for a two-node cluster: glusterd needs at least one peer node to sync its metadata.
If no peer is available, glusterd assumes that its local metadata may be stale and does not start the bricks.
We usually recommend a three-node cluster to avoid this scenario.

You can override this by running the following command to start the bricks:
    gluster vol start <VOL_Name> force
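The workaround can be scripted. A minimal sketch, assuming the "Volume Name:" / "Status:" line format that `gluster volume info` prints in the 3.x series (the `stopped_volumes` helper name is ours; verify the output format against your release):

```shell
#!/bin/sh
# Sketch: force-start every volume that glusterd reports as not started.

stopped_volumes() {
    # Reads `gluster volume info` output on stdin and prints the name of
    # every volume whose Status line is not "Started".
    awk '/^Volume Name:/ { name = $3 }
         /^Status:/ && $2 != "Started" { print name }'
}

# Only attempt the force-start when the gluster CLI is actually present.
if command -v gluster >/dev/null 2>&1; then
    for vol in $(gluster volume info | stopped_volumes); do
        gluster volume start "$vol" force
    done
fi
```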
Comment 6 Niels de Vos 2015-11-17 07:15:33 EST
Anand, can you make sure that this gets documented somewhere? Once that is done, and no code change is needed, please pass a link to the doc and close this bug.

Thanks!
Comment 8 Patrick Monnerat 2016-05-09 07:55:40 EDT
I have tried the command "gluster vol start <VOL_Name> force" and it works.
However, this is not entirely satisfactory, because it cannot be performed automatically, and that is really needed in a fault-tolerant context.

Could we have something like a volume option to force the volume to start at boot even if no peer is accessible? I did not find such a config parameter in the code. This is a feature request, of course.

Thanks for considering it.
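Until such an option exists, one way to approximate it on Ubuntu 14.04 (which uses upstart) is a small custom job that fires the force-start after glusterd comes up. This is only a sketch: the `gluster-force-start` job name is hypothetical, the `glusterfs-server` job it chains on comes from the Ubuntu/PPA packaging, the volume name `vol1` is taken from this report, and the fixed sleep is a crude stand-in for a real readiness check.

```
# /etc/init/gluster-force-start.conf  (hypothetical job name)
description "force-start gluster volumes even when no peer is reachable"

start on started glusterfs-server
task

script
    sleep 5                                  # crude wait for glusterd to initialise
    gluster volume start vol1 force || true  # harmless if the volume is already started
end script
```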
Comment 9 Kaushal 2017-03-08 06:03:12 EST
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
