Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1511339

Summary: In a 2x2 replicate volume with server quorum set, the NFS server comes up instead of the self-heal daemon after a glusterd restart
Product: [Community] GlusterFS
Reporter: Sanju <srakonde>
Component: glusterd
Assignee: Sanju <srakonde>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: low
Docs Contact:
Priority: low
Version: mainline
CC: akrai, amukherj, bmekala, bugs, rhs-bugs, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: Reopened, Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1507933
Clones: 1511768, 1511782
Environment:
Last Closed: 2019-03-25 16:30:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1507933, 1511768, 1511782

Comment 1 Worker Ant 2017-11-09 07:51:58 UTC
REVIEW: https://review.gluster.org/18703 (glusterd: gluster volume status displaying NFS server instead of self heal daemon) posted (#1) for review on master by Sanju Rakonde

Comment 2 Atin Mukherjee 2017-11-09 12:08:37 UTC
Description of problem:
Instead of the self-heal daemon, the NFS server comes up after a glusterd restart.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
2/2

Steps to Reproduce:
1. Create a 2x2 replicate volume on a 3-node cluster, set cluster.server-quorum-type to server, and start the volume.
2. Stop glusterd on the other two nodes.
3. Check the volume status on the node where glusterd is still running, then restart glusterd on that node.
4. Check the volume status again (a reproduction sketch follows this list).
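
A hedged shell sketch of these steps; the volume name, host names, and brick paths below are illustrative assumptions, not taken verbatim from the report:

# On node1: create a 2x2 replicate volume and enable server quorum.
# (Host names and brick paths are made up for illustration.)
gluster volume create replica_vol replica 2 \
    node1:/bricks/brick0/replica_vol node2:/bricks/brick0/replica_vol \
    node1:/bricks/brick1/replica_vol node3:/bricks/brick1/replica_vol
gluster volume set replica_vol cluster.server-quorum-type server
gluster volume start replica_vol

# On node2 and node3: stop glusterd so server quorum is lost.
systemctl stop glusterd

# Back on node1: compare the status before and after a glusterd restart.
gluster volume status replica_vol
systemctl restart glusterd
gluster volume status replica_vol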

Actual results:
Before glusterd restart

Status of volume: replica_vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/replica_vo
l                                           N/A       N/A        N       N/A  
Brick 10.70.37.52:/bricks/brick1/replica_vo
l                                           N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       4062 
 
Task Status of Volume replica_vol
------------------------------------------------------------------------------
There are no active volume tasks

After glusterd restart

Status of volume: replica_vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/replica_vo
l                                           N/A       N/A        N       N/A  
Brick 10.70.37.52:/bricks/brick1/replica_vo
l                                           N/A       N/A        N       N/A  
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume replica_vol
------------------------------------------------------------------------------
There are no active volume tasks


Expected results:
The self-heal daemon should come up, not the NFS server daemon.

Comment 3 Worker Ant 2017-11-09 19:00:28 UTC
COMMIT: https://review.gluster.org/18703 committed in master by  

------------- glusterd: display gluster volume status, when quorum type is server

Problem: when server-quorum-type is server, after restarting glusterd
on the node which is up, gluster volume status gives incorrect
information.

Fix: check whether the server field is blank before adding other keys
into the dictionary.

Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511339
Signed-off-by: Sanju Rakonde <srakonde>
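
For readers unfamiliar with the glusterd internals, a minimal C sketch of the guard the commit describes. This is not the actual patch: the helper name and calling context are hypothetical; only dict_t and dict_set_str come from the real libglusterfs dict API.

#include <glusterfs/dict.h>   /* dict_t, dict_set_str (glusterfs-devel) */

/* Hypothetical helper, not the actual patch: populate status keys for a
 * service only when its server field is non-blank, so a quorum-lost node
 * does not report a phantom "NFS Server" row in volume status. */
static int
add_service_status (dict_t *dict, const char *server, char *key, char *value)
{
        if (!server || server[0] == '\0')
                return 0;               /* blank server: add no keys */

        return dict_set_str (dict, key, value);
}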

Comment 4 Shyamsundar 2018-03-15 11:20:54 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 5 Worker Ant 2018-11-19 10:21:11 UTC
REVIEW: https://review.gluster.org/21675 (glusterd: volume status should not show NFS daemon) posted (#1) for review on master by Sanju Rakonde

Comment 6 Worker Ant 2018-11-25 13:18:09 UTC
REVIEW: https://review.gluster.org/21675 (glusterd: volume status should not show NFS daemon) posted (#3) for review on master by Atin Mukherjee

Comment 7 Shyamsundar 2019-03-25 16:30:19 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/