Bug 1255694 - glusterd: volume status backward compatibility
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware/OS: Unspecified / Unspecified
Priority/Severity: unspecified / unspecified
Assigned To: hari gowtham
Whiteboard: Triaged
Depends On: (none)
Blocks: 1260858
Reported: 2015-08-21 06:58 EDT by hari gowtham
Modified: 2016-06-16 09:33 EDT

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Cloned To: 1260858
Last Closed: 2016-06-16 09:33:12 EDT
Type: Bug

Description hari gowtham 2015-08-21 06:58:40 EDT
Description of problem:
In a mixed-version cluster, the volume status command does not display all the bricks: bricks on nodes running the higher Gluster version (3.8) are shown, but bricks on nodes running the lower version (3.6) are not.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a mixed cluster with 3.6 and 3.8 nodes.
2. Create a volume with bricks on both nodes.
3. Issue gluster v status.
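The steps above can be sketched as the following CLI session. This is a hypothetical transcript (hostnames node1/node2 and the brick paths are placeholders based on the report), run from the 3.8 node of a live two-node cluster; it is not runnable standalone.

```shell
# node1 runs glusterfs 3.8, node2 runs glusterfs 3.6 (placeholder hostnames)
gluster peer probe node2

# Create a volume with one brick on each node, then start it
gluster volume create vol1 node1:/data/gluster/tier/cbr2 node2:/data/gluster/tier/cbr2
gluster volume start vol1

# With the bug present, the brick on the 3.6 node is missing from this output
gluster volume status vol1
```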

Actual results:
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.171:/data/gluster/tier/cbr2  49153     0          Y       13608
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on dhcp42-203.lab.eng.blr.redhat
.com                                        N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks


Expected results:
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.171:/data/gluster/tier/cbr2  49153     0          Y       31879
Brick 10.70.42.203:/data/gluster/tier/cbr2  49154     0          Y       27686
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on dhcp42-203.lab.eng.blr.redhat
.com                                        N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks


Additional info:
The brick on host 10.70.42.203 (running 3.6) was missing from the output.
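The symptom suggests the status aggregator drops replies from older peers outright. A minimal sketch of the kind of backward-compatible aggregation the fix implies is below. This is illustrative Python, not the actual glusterd C code; the reply/brick dict keys are assumptions chosen to mirror the status columns (TCP Port, RDMA Port, Online, Pid). The idea: when a peer running an older op-version omits newer fields (e.g. the RDMA port, introduced later), fill in defaults instead of discarding that peer's bricks.

```python
def aggregate_status(replies):
    """Merge per-peer brick status dicts, tolerating older reply formats.

    Each reply is a dict with a "bricks" list; older peers may omit
    newer keys such as "rdma_port". Hypothetical structure for
    illustration only.
    """
    bricks = []
    for reply in replies:
        for brick in reply.get("bricks", []):
            bricks.append({
                "path": brick["path"],
                "tcp_port": brick.get("tcp_port", 0),
                # Older peers never report an RDMA port; default to 0
                # rather than dropping the brick from the output.
                "rdma_port": brick.get("rdma_port", 0),
                "online": brick.get("online", False),
                "pid": brick.get("pid"),
            })
    return bricks
```

With this shape, a 3.6 peer's brick survives aggregation alongside a 3.8 peer's, matching the expected two-brick output above.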
Comment 1 Anand Avati 2015-08-21 08:57:48 EDT
REVIEW: http://review.gluster.org/11986 (glusterd: volume status backward compatibility) posted (#1) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 2 Anand Avati 2015-08-24 01:58:00 EDT
REVIEW: http://review.gluster.org/11986 (glusterd: volume status backward compatibility) posted (#2) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 3 Anand Avati 2015-08-24 02:00:06 EDT
REVIEW: http://review.gluster.org/11986 (glusterd: volume status backward compatibility) posted (#3) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 4 Anand Avati 2015-08-28 03:25:41 EDT
REVIEW: http://review.gluster.org/11986 (glusterd: volume status backward compatibility) posted (#4) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 5 Anand Avati 2015-08-28 03:43:05 EDT
REVIEW: http://review.gluster.org/11986 (glusterd: volume status backward compatibility) posted (#5) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 6 Vijay Bellur 2015-09-04 03:49:41 EDT
REVIEW: http://review.gluster.org/11986 (glusterd: volume status backward compatibility) posted (#7) for review on master by hari gowtham (hari.gowtham005@gmail.com)
Comment 7 Niels de Vos 2016-06-16 09:33:12 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
