Bug 1277043

Summary: Upgrading to 3.7.5-5 has changed volume type to distributed disperse
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Bhaskarakiran <byarlaga>
Component: tier
Assignee: hari gowtham <hgowtham>
Status: CLOSED ERRATA
QA Contact: Neha <nerawat>
Severity: urgent
Docs Contact:
Priority: urgent
Version: rhgs-3.1
CC: asrivast, dlambrig, mzywusko, nchilaka, rhs-bugs, rkavunga, sankarshan, sashinde, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.7.5-6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1277481 (view as bug list)
Environment:
Last Closed: 2016-03-01 05:49:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1260783, 1260923, 1277481, 1277984

Description Bhaskarakiran 2015-11-02 07:35:13 UTC
Description of problem:
=======================

After upgrading the build from 3.7.5.0-3 to 3.7.5-5, `gluster volume info` shows the plain disperse cold tier as Distributed-Disperse, with a distribute count of 12.

[root@transformers ~]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: ed22b68c-b982-40ca-86c0-d85cd127dbc1
Status: Started
Number of Bricks: 18
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 3 x 2 = 6
Brick1: ninja:/rhs/brick3/vol1-tier6
Brick2: vertigo:/rhs/brick3/vol1-tier5
Brick3: ninja:/rhs/brick2/vol1-tier4
Brick4: vertigo:/rhs/brick2/vol1-tier3
Brick5: ninja:/rhs/brick1/vol1-tier2
Brick6: vertigo:/rhs/brick1/vol1-tier1
Cold Tier:
Cold Tier Type : Distributed-Disperse <==========================>
Number of Bricks: 12 x (8 + 4) = 12 <==========================>
Brick7: transformers:/rhs/brick1/b1
Brick8: interstellar:/rhs/brick1/b2
Brick9: transformers:/rhs/brick2/b3
Brick10: interstellar:/rhs/brick2/b4
Brick11: transformers:/rhs/brick3/b5
Brick12: interstellar:/rhs/brick3/b6
Brick13: transformers:/rhs/brick4/b7
Brick14: interstellar:/rhs/brick4/b8
Brick15: transformers:/rhs/brick5/b9
Brick16: interstellar:/rhs/brick5/b10
Brick17: transformers:/rhs/brick6/b11
Brick18: interstellar:/rhs/brick6/b12
Options Reconfigured:
features.quota-deem-statfs: on
ganesha.enable: off
cluster.self-heal-daemon: on
features.ctr-enabled: on
cluster.disperse-self-heal-daemon: enable
server.event-threads: 4
client.event-threads: 4
features.inode-quota: on
features.quota: on
features.uss: on
nfs.disable: on
performance.readdir-ahead: on
cluster.enable-shared-storage: disable
nfs-ganesha: disable
[root@transformers ~]# 

For a freshly created volume, the type is shown correctly.

Version-Release number of selected component (if applicable):
=============================================================
3.7.5-5

[root@transformers ~]# gluster --version
glusterfs 3.7.5 built on Oct 29 2015 10:11:53
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@transformers ~]# 

How reproducible:
=================
100%

Steps to Reproduce:
1. Install the 3.7.5.0-3 build and create an 8+4 EC (disperse) volume. Attach a distributed-replicate hot tier to it.
2. Upgrade to 3.7.5-5 and run `gluster volume info`.
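The steps above correspond roughly to the following commands. This is a sketch, not the reporter's exact session: host names and brick paths are taken from the `volume info` output above, and CLI syntax may vary slightly between 3.7.x builds.

```shell
# Create a plain 8+4 disperse (EC) volume across 12 bricks
gluster volume create vol1 disperse 12 redundancy 4 \
    transformers:/rhs/brick1/b1  interstellar:/rhs/brick1/b2 \
    transformers:/rhs/brick2/b3  interstellar:/rhs/brick2/b4 \
    transformers:/rhs/brick3/b5  interstellar:/rhs/brick3/b6 \
    transformers:/rhs/brick4/b7  interstellar:/rhs/brick4/b8 \
    transformers:/rhs/brick5/b9  interstellar:/rhs/brick5/b10 \
    transformers:/rhs/brick6/b11 interstellar:/rhs/brick6/b12
gluster volume start vol1

# Attach a replica-2 distributed-replicate hot tier
gluster volume attach-tier vol1 replica 2 \
    vertigo:/rhs/brick1/vol1-tier1 ninja:/rhs/brick1/vol1-tier2 \
    vertigo:/rhs/brick2/vol1-tier3 ninja:/rhs/brick2/vol1-tier4 \
    vertigo:/rhs/brick3/vol1-tier5 ninja:/rhs/brick3/vol1-tier6

# After upgrading glusterd to 3.7.5-5, check the reported cold tier type
gluster volume info vol1
```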


Actual results:
===============
The plain disperse (cold tier) volume is shown as a distributed-disperse volume.

Expected results:
=================
`gluster volume info` should list the correct volume type.

Additional info:
================
sosreports will be attached.

Comment 3 hari gowtham 2015-11-03 13:05:35 UTC
The upstream patch has been posted:
url: http://review.gluster.org/#/c/12495/

Comment 7 errata-xmlrpc 2016-03-01 05:49:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html