Bug 862033 - quorum does not work (Possible regression)
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Assigned To: Kaushal
QA Contact: Sudhir D
Blocks: 840122, 874018
Reported: 2012-10-01 12:52 EDT by Sachidananda Urs
Modified: 2013-09-23 18:43 EDT

Fixed In Version: glusterfs-3.3.0.5rhs-36
Doc Type: Bug Fix
Last Closed: 2013-09-23 18:39:15 EDT
Type: Bug

Description Sachidananda Urs 2012-10-01 12:52:33 EDT
Description of problem:

With cluster.server-quorum-type set to "server" and cluster.server-quorum-ratio set to 100%, server quorum should be lost as soon as any peer disconnects, and glusterd should then stop the volume's bricks. With peer rhs-client21 disconnected, however, the bricks on the remaining servers stay online:

[root@rhs-client19 ~]# gluster volume info
 
Volume Name: quo
Type: Distributed-Replicate
Volume ID: 96852dd0-e8f6-48f8-94e2-ef80e8c70778
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-client19.lab.eng.blr.redhat.com:/home/A
Brick2: rhs-client20.lab.eng.blr.redhat.com:/home/B
Brick3: rhs-client21.lab.eng.blr.redhat.com:/home/C
Brick4: rhs-client23.lab.eng.blr.redhat.com:/home/D
Options Reconfigured:
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 100%
[root@rhs-client19 ~]# 
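For reference, a sketch of how this quorum configuration would typically have been applied; the commands below are assumed from standard gluster CLI usage and are not taken from this report:

gluster volume set quo cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 100%

(cluster.server-quorum-ratio is conventionally a cluster-wide option, hence set on "all".)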

[root@rhs-client19 ~]# gluster volume status
Status of volume: quo
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick rhs-client19.lab.eng.blr.redhat.com:/home/A       24010   Y       7374
Brick rhs-client20.lab.eng.blr.redhat.com:/home/B       24011   Y       6834
NFS Server on localhost                                 38467   Y       7380
Self-heal Daemon on localhost                           N/A     Y       7385
NFS Server on rhs-client22.lab.eng.blr.redhat.com       38467   Y       18244
Self-heal Daemon on rhs-client22.lab.eng.blr.redhat.com N/A     Y       18249
NFS Server on rhs-client20.lab.eng.blr.redhat.com       38467   Y       6840
Self-heal Daemon on rhs-client20.lab.eng.blr.redhat.com N/A     Y       6845

[root@rhs-client19 ~]# gluster peer status
Number of Peers: 3

Hostname: rhs-client22.lab.eng.blr.redhat.com
Uuid: 8c743ecc-d9aa-4cb0-a7a7-3c45c5e1284d
State: Peer in Cluster (Connected)

Hostname: rhs-client20.lab.eng.blr.redhat.com
Uuid: b7f33530-25c1-406c-8c76-2c5feabaf7b0
State: Peer in Cluster (Connected)

Hostname: rhs-client21.lab.eng.blr.redhat.com
Uuid: 5b315725-90dd-41f9-abe8-827d27db8210
State: Peer in Cluster (Disconnected)
[root@rhs-client19 ~]#
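A minimal sketch of the expected quorum arithmetic (the connected/total comparison is assumed from the option's documented intent; this is an illustration, not glusterd source):

# From rhs-client19's view, 3 of the 4 pool members are connected
# (rhs-client21 is down) and the configured ratio is 100%.
active=3; total=4; ratio=100
if [ $((active * 100)) -ge $((total * ratio)) ]; then
    echo "quorum met: bricks may keep running"
else
    echo "quorum lost: glusterd should stop the local bricks"
fi
# This prints "quorum lost", yet the volume status above still shows the
# bricks on rhs-client19 and rhs-client20 as Online.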
Comment 2 Sachidananda Urs 2012-10-01 13:07:34 EDT
And when the server comes up the brick does not come up:


[root@rhs-client21 ~]# gluster volume status
Status of volume: quo
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick rhs-client19.lab.eng.blr.redhat.com:/home/A       24010   Y       7374
Brick rhs-client20.lab.eng.blr.redhat.com:/home/B       24011   Y       6834
Brick rhs-client21.lab.eng.blr.redhat.com:/home/C       24010   N       4625
NFS Server on localhost                                 38467   Y       3165
Self-heal Daemon on localhost                           N/A     Y       3171
NFS Server on rhs-client22.lab.eng.blr.redhat.com       38467   Y       18244
Self-heal Daemon on rhs-client22.lab.eng.blr.redhat.com N/A     Y       18249
NFS Server on rhs-client20.lab.eng.blr.redhat.com       38467   Y       6840
Self-heal Daemon on rhs-client20.lab.eng.blr.redhat.com N/A     Y       6845
NFS Server on 10.70.36.43                               38467   Y       7380
Self-heal Daemon on 10.70.36.43                         N/A     Y       7385
 
[root@rhs-client21 ~]# gluster volume info
 
Volume Name: quo
Type: Distributed-Replicate
Volume ID: 96852dd0-e8f6-48f8-94e2-ef80e8c70778
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-client19.lab.eng.blr.redhat.com:/home/A
Brick2: rhs-client20.lab.eng.blr.redhat.com:/home/B
Brick3: rhs-client21.lab.eng.blr.redhat.com:/home/C
Brick4: rhs-client23.lab.eng.blr.redhat.com:/home/D
Options Reconfigured:
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 100%
[root@rhs-client21 ~]# 
[root@rhs-client21 ~]# gluster peer status
Number of Peers: 4

Hostname: rhs-client23.lab.eng.blr.redhat.com
Uuid: 230ae9f2-310e-49a6-b9f6-440bb5962da3
State: Peer Rejected (Connected)

Hostname: rhs-client22.lab.eng.blr.redhat.com
Uuid: 8c743ecc-d9aa-4cb0-a7a7-3c45c5e1284d
State: Peer in Cluster (Connected)

Hostname: 10.70.36.43
Uuid: 772396e0-ccae-4b64-99f9-84f7e836d101
State: Peer in Cluster (Connected)

Hostname: rhs-client20.lab.eng.blr.redhat.com
Uuid: b7f33530-25c1-406c-8c76-2c5feabaf7b0
State: Peer in Cluster (Connected)
[root@rhs-client21 ~]#
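A likely workaround while the brick refuses to start, assuming standard gluster CLI behaviour (not a step recorded in this report), is to force-start the volume, which respawns any brick processes that are not running:

gluster volume start quo force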
Comment 3 Amar Tumballi 2012-10-08 03:07:42 EDT
It seems release 3.3.0.3-32rhs should work fine.
Comment 4 Amar Tumballi 2012-10-11 03:08:29 EDT
Pranith, can you please help Kaushal on debugging these issues? (if it is still relevant)
Comment 5 Pranith Kumar K 2012-10-11 03:39:59 EDT
A fix for this has already been sent.
Comment 7 Sachidananda Urs 2012-11-07 06:52:47 EST
Tested with the latest update; the server now comes up when the machine is brought back up.
Comment 8 Scott Haines 2013-09-23 18:39:15 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
