Bug 1235202

Summary: tiering: tier daemon not restarting during volume/glusterd restart
Product: [Community] GlusterFS
Reporter: Mohammed Rafi KC <rkavunga>
Component: tiering
Assignee: Mohammed Rafi KC <rkavunga>
Status: CLOSED CURRENTRELEASE
QA Contact: bugs <bugs>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.7.1
CC: bugs, dlambrig, josferna, nchilaka, rkavunga, vagarwal
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version: glusterfs-3.7.4
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1225330
Environment:
Last Closed: 2015-09-09 09:38:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 994405, 1225330, 1233151, 1265890, 1273354
Bug Blocks: 1229270, 1229271, 1260923

Description Mohammed Rafi KC 2015-06-24 10:01:27 UTC
+++ This bug was initially created as a clone of Bug #1225330 +++

Description of problem:

The tier daemon should always be running on each node to promote/demote files. When the volume is stopped, the daemon is stopped as well, but when the volume is started again the daemon should also be restarted. The same applies to a glusterd restart after the tier daemon has gone offline.

Version-Release number of selected component (if applicable):


How reproducible:

100%

Steps to Reproduce:
1. Create a tiered volume.
2. Stop the volume.
3. Start the volume.
4. Check for the tier process.
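The reproduction steps above can be sketched as a shell script. This is a minimal sketch, not taken from the report: it assumes a working GlusterFS 3.7.x cluster, and the volume name `tiervol`, the `HOST1`/`HOST2` names, and the brick paths are placeholders invented for illustration.

```shell
#!/bin/sh
# Reproduction sketch for the tier-daemon restart bug.
# Guarded so it is a no-op on machines without the gluster CLI.
if ! command -v gluster >/dev/null 2>&1; then
    repro_status="skipped: gluster CLI not available"
else
    # 1. Create and start a volume, then attach a hot tier to make it tiered.
    gluster volume create tiervol HOST1:/bricks/cold1 HOST2:/bricks/cold2 force
    gluster volume start tiervol
    gluster volume attach-tier tiervol HOST1:/bricks/hot1 HOST2:/bricks/hot2

    # 2-3. Stop and start the volume again (non-interactive).
    gluster volume stop tiervol --mode=script
    gluster volume start tiervol

    # 4. Check for the tier process; with the bug present, the rebalance
    # daemon is missing (or "failed" on the local node) after the restart.
    gluster volume rebalance tiervol tier status
    pgrep -f "rebalance.*tiervol" || echo "tier daemon not running"
    repro_status="steps issued"
fi
echo "$repro_status"
```

On a host without gluster installed, the script simply reports that it skipped the steps; on an affected cluster, step 4 is where the missing daemon shows up.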

Actual results:

The tier daemon was not running.

Expected results:

Restarting the volume should start the tier rebalance daemon again.

Additional info:

--- Additional comment from Anand Avati on 2015-05-27 03:14:14 EDT ---

REVIEW: http://review.gluster.org/10933 (glusterd/tier: configure tier daemon during volume restart) posted (#1) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-27 03:17:59 EDT ---

REVIEW: http://review.gluster.org/10933 (glusterd/tier: configure tier daemon during volume restart) posted (#2) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-05-29 03:42:45 EDT ---

REVIEW: http://review.gluster.org/10933 (glusterd/tier: configure tier daemon during volume restart) posted (#3) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Mohammed Rafi KC on 2015-06-03 10:52:28 EDT ---

Apart from http://review.gluster.org/10933, this requires one more fix.

--- Additional comment from Anand Avati on 2015-06-10 10:41:06 EDT ---

REVIEW: http://review.gluster.org/10933 (glusterd/tier: configure tier daemon during volume restart) posted (#4) for review on master by mohammed rafi  kc (rkavunga)

--- Additional comment from Anand Avati on 2015-06-16 03:05:09 EDT ---

REVIEW: http://review.gluster.org/10933 (glusterd/tier: configure tier daemon during volume restart) posted (#5) for review on master by Joseph Fernandes

--- Additional comment from Anand Avati on 2015-06-16 22:01:02 EDT ---

REVIEW: http://review.gluster.org/10933 (glusterd/tier: configure tier daemon during volume restart) posted (#6) for review on master by Joseph Fernandes

Comment 1 Anand Avati 2015-06-24 11:14:15 UTC
REVIEW: http://review.gluster.org/11376 (glusterd/tier: configure tier daemon during volume restart) posted (#2) for review on release-3.7 by mohammed rafi  kc (rkavunga)

Comment 2 Nag Pavan Chilakam 2015-08-29 09:18:36 UTC
On a 2-node cluster, when we restart a volume, `gluster v status` shows the tier daemon as in progress/running, but the tier rebalance daemon, which is supposed to restart on both nodes, does not restart on the local node.
It always fails to restart on localhost, but starts successfully on the other node.
Hence marking as failed.


[root@nag-manual-node1 ~]# gluster v rebalance vol4 tier status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            0                    0                    in progress         
10.70.46.36          0                    0                    in progress         
volume rebalance: vol4: success: 
[root@nag-manual-node1 ~]# gluster v stop vol3
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) n
[root@nag-manual-node1 ~]# gluster v stop vol4
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol4: success
[root@nag-manual-node1 ~]# gluster v rebalance vol4 tier status
volume rebalance: vol4: failed: Volume vol4 needs to be started to perform rebalance
[root@nag-manual-node1 ~]# gluster v start vol4
volume start: vol4: success
[root@nag-manual-node1 ~]# gluster v info vol4
 
Volume Name: vol4
Type: Tier
Volume ID: c7a41a54-ad9b-40a3-9e76-4f80c5a8345e
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.36:/rhs/brick3/vol4hot
Brick2: 10.70.46.84:/rhs/brick3/vol4hot
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.46.84:/rhs/brick1/vol4
Brick4: 10.70.46.36:/rhs/brick1/vol4
Brick5: 10.70.46.84:/rhs/brick2/vol4
Brick6: 10.70.46.36:/rhs/brick2/vol4
Options Reconfigured:
performance.readdir-ahead: on
[root@nag-manual-node1 ~]# gluster v rebalance vol4 tier status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            0                    0                    failed              
10.70.46.36          0                    0                    in progress         
volume rebalance: vol4: success:

Comment 3 Nag Pavan Chilakam 2015-08-29 15:52:10 UTC
sosreport logs @ rhsqe-repo.lab.eng.blr.redhat.com:/home/repo/sosreports/bug.1235202

Comment 4 Kaushal 2015-09-09 09:38:01 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user