Bug 963168 - Volume info is not synced to the peers which were brought online from offline state
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: Amar Tumballi
QA Contact: Sudhir D
Depends On:
Blocks:
 
Reported: 2013-05-15 06:04 EDT by Rahul Hinduja
Modified: 2013-12-18 19:09 EST
CC: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-05-15 15:27:31 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rahul Hinduja 2013-05-15 06:04:01 EDT
Description of problem:
=======================

Powered down two of the servers (using shutdown/poweroff) and then performed a graph change by setting "write-behind: on" on the volume from the servers that were still up. Once the powered-down servers were brought back online, the volume info was not synced to them.

Servers:
========

rhs-client11
rhs-client12
rhs-client13
rhs-client14

rhs-client11 and rhs-client13 were brought offline and then brought back online.
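
A minimal shell sketch of the scenario (hostnames, volume name, and option taken from this report; the exact poweroff method may differ):

# On rhs-client11 and rhs-client13: take the peers offline
shutdown -h now    # or: poweroff

# On one of the servers that stayed up (rhs-client12 or rhs-client14):
# change the volume graph while the two peers are down
gluster volume set vol-dis-rep performance.write-behind on

# Power rhs-client11 and rhs-client13 back on, then compare
# the output of the following command across all four servers
gluster volume info vol-dis-rep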

Output:
=======

[root@rhs-client11 ~]# gluster volume info
 
Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 15a17dd8-affb-4a78-b7ec-ab19c679107c
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
[root@rhs-client11 ~]# 


[root@rhs-client12 ~]# gluster volume info
 
Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 15a17dd8-affb-4a78-b7ec-ab19c679107c
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
Options Reconfigured:
performance.write-behind: on
[root@rhs-client12 ~]# 


[root@rhs-client13 ~]# gluster volume info
 
Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 15a17dd8-affb-4a78-b7ec-ab19c679107c
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
[root@rhs-client13 ~]# 


[root@rhs-client14 ~]# gluster volume info
 
Volume Name: vol-dis-rep
Type: Distributed-Replicate
Volume ID: 15a17dd8-affb-4a78-b7ec-ab19c679107c
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.36.35:/rhs/brick1/b1
Brick2: 10.70.36.36:/rhs/brick1/b2
Brick3: 10.70.36.35:/rhs/brick1/b3
Brick4: 10.70.36.36:/rhs/brick1/b4
Brick5: 10.70.36.35:/rhs/brick1/b5
Brick6: 10.70.36.36:/rhs/brick1/b6
Brick7: 10.70.36.37:/rhs/brick1/b7
Brick8: 10.70.36.38:/rhs/brick1/b8
Brick9: 10.70.36.37:/rhs/brick1/b9
Brick10: 10.70.36.38:/rhs/brick1/b10
Brick11: 10.70.36.37:/rhs/brick1/b11
Brick12: 10.70.36.38:/rhs/brick1/b12
Options Reconfigured:
performance.write-behind: on
[root@rhs-client14 ~]# 



Version-Release number of selected component (if applicable):
=============================================================

[root@rhs-client11 ~]# rpm -qa | grep gluster | grep 3.4.0
glusterfs-fuse-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-devel-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-server-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.8rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.8rhs-1.el6rhs.x86_64
[root@rhs-client11 ~]# 


Steps Carried:
==============
1. Created and started a 6x2 distributed-replicate volume from 4 servers (rhs-client11 to rhs-client14)
2. Mounted the volume over NFS and FUSE on the client (darrel)
3. Created directories/files from the FUSE mount.
4. Brought down rhs-client11 and rhs-client13
5. Performed a graph change on the volume by setting "write-behind: on"
6. Brought the servers back online.
7. Compared the "gluster volume info" output across the servers (see the comparison sketch after this list).
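
A quick way to do the comparison in step 7 (a sketch, assuming passwordless SSH from one node to the others):

for h in rhs-client11 rhs-client12 rhs-client13 rhs-client14; do
    echo "== $h =="
    # nodes that are missing the reconfigured option print nothing here
    ssh root@$h "gluster volume info vol-dis-rep | grep -A1 'Options Reconfigured'"
done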
  
Actual results:
===============

The new graph (volume configuration) is not updated on the peers that were brought back online: rhs-client11 and rhs-client13 do not show "Options Reconfigured: performance.write-behind: on" in their volume info, while rhs-client12 and rhs-client14 do.


Expected results:
=================

The graph should be updated: once the peers come back online, the volume info (including performance.write-behind: on) should be synced to them.
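
For reference, the gluster CLI has a "volume sync" command that pulls volume configuration from a named peer; it may serve as a manual workaround on the out-of-sync nodes, though whether it is allowed can depend on the release and on whether the volume has bricks on the local host (untested here, just a sketch):

# run on rhs-client11 / rhs-client13, pulling from an in-sync peer
gluster volume sync rhs-client12 vol-dis-rep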

Additional info:
================

The vol-dis-rep/info file shows a difference in the version field between the servers, which confirms that the graphs are not synced.
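
The mismatch can be checked directly from glusterd's on-disk volume store (a sketch, assuming the default /var/lib/glusterd working directory and passwordless SSH between the nodes):

for h in rhs-client11 rhs-client12 rhs-client13 rhs-client14; do
    echo -n "$h: "
    ssh root@$h "grep '^version=' /var/lib/glusterd/vols/vol-dis-rep/info"
done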
