Bug 1386185 - [Eventing]: 'gluster volume tier <volname> start force' does not generate a TIER_START event
Summary: [Eventing]: 'gluster volume tier <volname> start force' does not generate a TIER_START event
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Milind Changire
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On: 1386247 1387981
Blocks: 1351528
 
Reported: 2016-10-18 11:23 UTC by Sweta Anandpara
Modified: 2017-03-23 06:12 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.8.4-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1386247 1387981
Environment:
Last Closed: 2017-03-23 06:12:05 UTC
Embargoed:




Links:
Red Hat Product Errata RHSA-2017:0486 (normal, SHIPPED_LIVE): Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update (last updated 2017-03-23 09:18:45 UTC)

Description Sweta Anandpara 2016-10-18 11:23:55 UTC
Description of problem:
=======================
Running 'gluster volume tier <volname> start force' does not result in any event being generated. The command attaches the tier (if anything is disconnected) and restarts the tier process, which implies a state change in the gluster cluster and should therefore generate an event. However, no event is seen.
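
For reference, TIER_START would be delivered (like all gluster events) as a JSON POST to webhooks registered with the eventing framework; a webhook can be registered and checked with the gluster-eventsapi CLI (the URL below is a placeholder for the listener used in this test):

# gluster-eventsapi webhook-add http://<webhook-host>:9000/listen
# gluster-eventsapi status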

Version-Release number of selected component (if applicable):
============================================================
3.8.4-2


How reproducible:
================
Always


Steps to Reproduce:
=====================
1. Create a 1*(4+2) disperse volume 'disp'. 
2. Attach a 1*4 distribute hot tier.
3. Execute a 'gluster volume tier disp start force'
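
For reference, the steps above correspond to commands along these lines (hostnames are placeholders; brick paths mirror the 'gluster v info disp' output below, and 'force' may additionally be needed if bricks sit on the root partition):

# gluster volume create disp disperse-data 4 redundancy 2 \
      <host1>:/bricks/brick0/disp1 <host2>:/bricks/brick0/disp2 <host3>:/bricks/brick0/disp3 \
      <host4>:/bricks/brick0/disp4 <host1>:/bricks/brick1/disp5 <host2>:/bricks/brick1/disp6
# gluster volume start disp
# gluster volume tier disp attach \
      <host1>:/bricks/brick2/disp_tier1 <host2>:/bricks/brick2/disp_tier2 \
      <host3>:/bricks/brick2/disp_tier3 <host4>:/bricks/brick2/disp_tier4
# gluster volume tier disp start force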


Actual results:
===============
Step 3 does not generate any event.


Expected results:
=================
Step 3 should have generated a TIER_START event.
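
Judging from the event format later captured on the webhook (comment 7), the expected payload would presumably be of the form (timestamp and node UUID vary):

{"event": "TIER_START", "message": {"vol": "disp"}, "ts": <timestamp>, "nodeid": "<node-uuid>"}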


Additional info:
=================

[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-1.el7rhgs.x86_64
glusterfs-libs-3.8.4-2.el7rhgs.x86_64
glusterfs-api-3.8.4-2.el7rhgs.x86_64
python-gluster-3.8.4-2.el7rhgs.noarch
glusterfs-geo-replication-3.8.4-2.el7rhgs.x86_64
glusterfs-rdma-3.8.4-2.el7rhgs.x86_64
glusterfs-fuse-3.8.4-2.el7rhgs.x86_64
glusterfs-cli-3.8.4-2.el7rhgs.x86_64
glusterfs-server-3.8.4-2.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-2.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-2.el7rhgs.x86_64
glusterfs-devel-3.8.4-2.el7rhgs.x86_64
glusterfs-events-3.8.4-2.el7rhgs.x86_64
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.242
Uuid: 1e8967ae-51b2-4c27-907e-a22a83107fd0
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster v list
disp
vol1
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster v info disp
 
Volume Name: disp
Type: Tier
Volume ID: a9999464-b094-4213-a422-c11fed555674
Status: Started
Snapshot Count: 0
Number of Bricks: 10
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 4
Brick1: 10.70.46.218:/bricks/brick2/disp_tier4
Brick2: 10.70.46.242:/bricks/brick2/disp_tier3
Brick3: 10.70.46.240:/bricks/brick2/disp_tier2
Brick4: 10.70.46.239:/bricks/brick2/disp_tier1
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick5: 10.70.46.239:/bricks/brick0/disp1
Brick6: 10.70.46.240:/bricks/brick0/disp2
Brick7: 10.70.46.242:/bricks/brick0/disp3
Brick8: 10.70.46.218:/bricks/brick0/disp4
Brick9: 10.70.46.239:/bricks/brick1/disp5
Brick10: 10.70.46.240:/bricks/brick1/disp6
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
transport.address-family: inet
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]#

Comment 2 Atin Mukherjee 2016-10-20 05:14:37 UTC
upstream mainline patch http://review.gluster.org/15675 posted for review.

Comment 5 Atin Mukherjee 2016-10-26 07:02:16 UTC
upstream mainline : http://review.gluster.org/15675
upstream 3.9 : http://review.gluster.org/15708
downstream : https://code.engineering.redhat.com/gerrit/#/c/88233

Comment 7 Sweta Anandpara 2016-11-14 09:54:57 UTC
Tested and verified this on the build 3.8.4-5

On a 4-node cluster with a 1 x (4+2) volume as cold tier and a 2 x 2 volume as hot tier, executed 'gluster volume tier <volname> start force', and that generated a 'TIER_START_FORCE' event on the webhook.

Moving this BZ to verified in 3.2. Detailed logs are pasted below:

[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# rpm -qa | grep gluster
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-api-3.8.4-5.el7rhgs.x86_64
python-gluster-3.8.4-5.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-5.el7rhgs.x86_64
glusterfs-server-3.8.4-5.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
glusterfs-devel-3.8.4-5.el7rhgs.x86_64
glusterfs-libs-3.8.4-5.el7rhgs.x86_64
glusterfs-fuse-3.8.4-5.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-5.el7rhgs.x86_64
glusterfs-rdma-3.8.4-5.el7rhgs.x86_64
glusterfs-3.8.4-5.el7rhgs.x86_64
glusterfs-cli-3.8.4-5.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-5.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-4.el7rhgs.x86_64
glusterfs-events-3.8.4-5.el7rhgs.x86_64
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.240
Uuid: 72c4f894-61f7-433e-a546-4ad2d7f0a176
State: Peer in Cluster (Connected)

Hostname: 10.70.46.242
Uuid: 1e8967ae-51b2-4c27-907e-a22a83107fd0
State: Peer in Cluster (Connected)

Hostname: 10.70.46.218
Uuid: 0dea52e0-8c32-4616-8ef8-16db16120eaa
State: Peer in Cluster (Connected)
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster v info
 
Volume Name: ozone
Type: Disperse
Volume ID: 376cdde0-194f-460a-b273-3904a704a7dd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.239:/bricks/brick0/ozone0
Brick2: 10.70.46.240:/bricks/brick0/ozone2
Brick3: 10.70.46.242:/bricks/brick0/ozone2
Brick4: 10.70.46.239:/bricks/brick1/ozone3
Brick5: 10.70.46.240:/bricks/brick1/ozone4
Brick6: 10.70.46.242:/bricks/brick1/ozone5
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.enable-shared-storage: disable
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# 
[root@dhcp46-239 ~]# gluster v tier ozone attach replica 2 10.70.46.218:/bricks/brick2/ozone_tier0 10.70.46.218:/bricks/brick2/ozone_tier1 10.70.46.218:/bricks/brick2/ozone_tier2 10.70.46.218:/bricks/brick2/ozone_tier3
volume attach-tier: success
Tiering Migration Functionality: ozone: success: Attach tier is successful on ozone. use tier status to check the status.
ID: 8445eaef-a53e-4e31-b921-afb4bd9dae26

[root@dhcp46-239 ~]# gluster v tier ozone
Usage: volume tier <VOLNAME> status
volume tier <VOLNAME> start [force]
volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
volume tier <VOLNAME> detach <start|stop|status|commit|force>

Tier command failed
[root@dhcp46-239 ~]# gluster v tier ozone start force
Tiering Migration Functionality: ozone: success: Attach tier is successful on ozone. use tier status to check the status.
ID: 11cbc25a-6f0b-4db5-aa6d-fec728bb7e90

[root@dhcp46-239 ~]# 


EVENTS SEEN
------------
{u'message': {u'vol': u'ozone'}, u'event': u'TIER_START_FORCE', u'ts': 1479117084, u'nodeid': u'ed362eb3-421c-4a25-ad0e-82ef157ea328'}
10.70.46.239 - - [09/Nov/2016 11:35:54] "POST /listen HTTP/1.1" 200 -
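
The webhook that produced the access log line above is just an HTTP endpoint accepting POSTs; a minimal sketch in Python (port 9000 and the /listen path are assumptions matching the webhook URL registered with gluster-eventsapi) could look like:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Listener(BaseHTTPRequestHandler):
    # glustereventsd delivers each event as a JSON body in an HTTP POST
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        payload = self.rfile.read(length)
        try:
            print(json.loads(payload))   # e.g. {'event': 'TIER_START_FORCE', ...}
        except ValueError:
            print(payload)               # fall back to the raw body if not JSON
        self.send_response(200)          # 200 acknowledges receipt, as in the log above
        self.end_headers()

if __name__ == '__main__':
    # BaseHTTPRequestHandler's default logging prints access lines like the one above
    HTTPServer(('0.0.0.0', 9000), Listener).serve_forever()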

Comment 9 errata-xmlrpc 2017-03-23 06:12:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

