+++ This bug was initially created as a clone of Bug #1382277 +++

Description of problem:
=======================
An incorrect volume type is shown in the glusterd state dump:

[Volumes]
Volume1.name: Dis-Rep
Volume1.id: fd13329e-35c9-476f-a5d0-ca2be1c488c0
Volume1.type: Replicate                  <==============
Volume1.transport_type: tcp
Volume1.status: Started
Volume1.brickcount: 4

Volume info details:
~]# gluster volume info

Volume Name: Dis-Rep
Type: Distributed-Replicate              <=============
Volume ID: fd13329e-35c9-476f-a5d0-ca2be1c488c0
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
......

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-2

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have 2 or more cluster nodes.
2. Create a 2 x 2 (distributed-replicate) volume and start it.
3. Take the glusterd state dump using the command "gluster get-state" (see the reproduction sketch after the comment thread below).
4. Check the volume type in the dump file.

Actual results:
===============
Incorrect volume type in the glusterd state dump.

Expected results:
=================
The volume type should be shown correctly.

Additional info:

--- Additional comment from Atin Mukherjee on 2016-10-06 04:50:16 EDT ---

Byreddy - please do not use the term "statedump" here, as that indicates something different. Please change the subject/summary to mention cluster state.

--- Additional comment from Atin Mukherjee on 2016-10-06 04:51:50 EDT ---

Correction, it should be local state.

--- Additional comment from Byreddy on 2016-10-14 04:54:00 EDT ---

Some other issues related to this one:

1) The replica count entry is not shown; it should be there.
2) Correct the peer-related info as discussed // similar to the CLI peer status output.
3) Remove the duplicate entries below:
   Volume1.rebalance.data: 0
   Volume1.rebalance.data: 0
4) The entry "Volume1.Brick1.signedin:" shows false for a running brick process.
5) Remove the entries that are not needed, e.g.:
   Volume1.Brick1.filesystem_type:
   Volume1.Brick1.mount_options:
   I did not see these entries filled in during my testing.

--- Additional comment from Byreddy on 2016-10-14 07:08:52 EDT ---

+
6) The server quorum ratio entry is missing; it is needed.

--- Additional comment from Samikshan Bairagya on 2016-10-14 07:46:51 EDT ---

(In reply to Byreddy from comment #7)
> 6) The server quorum ratio entry is missing; it is needed.

cluster.server-quorum-ratio will be present in the local state data under the global options section if it is explicitly set to a value other than the default.

--- Additional comment from Byreddy on 2016-10-17 00:58:09 EDT ---

(In reply to Samikshan Bairagya from comment #9)
> cluster.server-quorum-ratio will be present in the local state data under
> the global options section if it is explicitly set to a value other than
> the default.

Yes Samikshan, I am seeing it under the global options section when I set it explicitly.
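For reference, a minimal reproduction sketch for steps 1-3 above (hostnames, brick paths, and the output file name are placeholders, not values from the report):

# On a node of a 2-node trusted storage pool, create and start a 2 x 2 volume:
gluster volume create Dis-Rep replica 2 \
    node1:/bricks/brick1/b1 node2:/bricks/brick1/b1 \
    node1:/bricks/brick2/b2 node2:/bricks/brick2/b2
gluster volume start Dis-Rep

# Dump the local glusterd state and inspect the recorded volume type:
gluster get-state glusterd odir /tmp file glusterd_state
grep 'Volume1.type' /tmp/glusterd_state       # buggy output: "Replicate"

# Compare with the type reported by volume info:
gluster volume info Dis-Rep | grep '^Type'    # "Distributed-Replicate"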
--- Additional comment from Byreddy on 2016-10-17 01:03:50 EDT ---

For the tiered volume type, we have to separate the cold and hot bricks based on the info below in the file:

Volume4.tier_info.cold_tier_type: Replicate
Volume4.tier_info.cold_brick_count: 3
Volume4.tier_info.cold_replica_count: 3
Volume4.tier_info.cold_disperse_count: 0
Volume4.tier_info.cold_dist_leaf_count: 3
Volume4.tier_info.cold_redundancy_count: 0
Volume4.tier_info.hot_tier_type: Distribute
Volume4.tier_info.hot_brick_count: 2
Volume4.tier_info.hot_replica_count: 1
Volume4.tier_info.promoted: 0
Volume4.tier_info.demoted: 0

--- Additional comment from Byreddy on 2016-10-17 01:52:10 EDT ---

The rebalanced-data entry is shown without a data unit:

Volume1.rebalance.id: e7227829-1357-4153-8b8f-78e05aec5fe1
Volume1.rebalance.status: started
Volume1.rebalance.failures: 0
Volume1.rebalance.skipped: 0
Volume1.rebalance.lookedup: 988
Volume1.rebalance.files: 684
Volume1.rebalance.data: 93433255         <======= NO DATA UNIT

I think a data unit is needed here.
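A quick way to sanity-check the scale of that value (assuming Volume1.rebalance.data is a raw byte count, which is an assumption rather than something stated in the report):

# Convert the raw value to a human-readable IEC unit with coreutils numfmt:
echo 93433255 | numfmt --to=iec    # prints roughly "90M", i.e. about 89 MiB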
REVIEW: http://review.gluster.org/15662 (cli, glusterd: Address issues in get-state cli output) posted (#1) for review on master by Samikshan Bairagya (samikshan)
REVIEW: http://review.gluster.org/15662 (cli, glusterd: Address issues in get-state cli output) posted (#2) for review on master by Samikshan Bairagya (samikshan)
COMMIT: http://review.gluster.org/15662 committed in master by Atin Mukherjee (amukherj)
------
commit daea58a51b70f80ab04f115e49f8bf8790b6046a
Author: Samikshan Bairagya <samikshan>
Date:   Thu Oct 13 17:13:54 2016 +0530

    cli, glusterd: Address issues in get-state cli output

    This fixes the following data points:
    1. Volume type
    2. Peer state
    3. List of other hostnames for a peer
    4. Data unit information for rebalance

    The following data points are removed:
    1. Mount options and filesystem types for bricks
    2. global-option-version from list of global options

    The following data points are added:
    1. Replica Count
    2. Tier type for bricks belonging to hot/cold tier

    Change-Id: I5011250e863fdc4929b203cdb345d79b2f16c6a5
    BUG: 1385839
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: http://review.gluster.org/15662
    Reviewed-by: mohammed rafi kc <rkavunga>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
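With the fix applied, re-running the check from the reproduction sketch above should report the composite volume type along with the new replica count entry. The key names below are illustrative, inferred from the data points listed in the commit message, not verbatim output:

# Re-dump the local state on a build containing the fix:
gluster get-state glusterd odir /tmp file glusterd_state_fixed
grep -E 'Volume1\.(type|replica_count)' /tmp/glusterd_state_fixed
# Expected (illustrative):
#   Volume1.type: Distributed-Replicate
#   Volume1.replica_count: 2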
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/