Bug 1492077

Summary: Provide brick list as part of VOLUME_CREATE event.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Darshan <dnarayan>
Component: eventsapi
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED ERRATA
QA Contact: Sweta Anandpara <sanandpa>
Severity: unspecified
Priority: unspecified
Version: rhgs-3.3
CC: amukherj, asrivast, bugs, rcyriac, rhinduja, rhs-bugs
Keywords: ZStream
Target Release: RHGS 3.3.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.8.4-46
Doc Type: If docs needed, set a value
Clone Of: 1491292
Last Closed: 2017-11-29 03:30:36 UTC
Type: Bug
Bug Depends On: 1491292, 1492109
Bug Blocks: 1475688

Description Darshan 2017-09-15 12:06:20 UTC
+++ This bug was initially created as a clone of Bug #1491292 +++

Description of problem:
Currently, only the volume name is sent as part of the VOLUME_CREATE event. It would be very helpful if the event also carried the list of bricks with which the volume was created.
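The events framework delivers events like VOLUME_CREATE as JSON POSTed to registered webhooks. A minimal receiver sketch is below; the port, the `bricks_from_event` helper, and the assumption that `bricks` arrives as a single space-separated string (as seen in the verification log later in this bug) are illustrative, not part of glusterfs itself:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def bricks_from_event(payload):
    """Extract the brick list from a VOLUME_CREATE event payload.

    Assumes 'message.bricks' is one space-separated string, as the
    events API appears to emit it; split() also absorbs the leading
    space seen in captured payloads.
    """
    if payload.get("event") != "VOLUME_CREATE":
        return []
    return payload.get("message", {}).get("bricks", "").split()

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The events daemon POSTs one JSON object per event.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.loads(body)
        print(payload.get("event"), bricks_from_event(payload))
        self.send_response(200)
        self.end_headers()

def serve(port=9000):
    # Register the endpoint with, e.g.:
    #   gluster-eventsapi webhook-add http://<host>:9000
    HTTPServer(("", port), EventHandler).serve_forever()
```

`serve()` is left uncalled here; in practice it would run alongside `gluster-eventsapi webhook-add` pointing at this host and port.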

Version-Release number of selected component (if applicable):
3.12


Comment 2 Atin Mukherjee 2017-09-15 13:33:48 UTC
upstream patch : https://review.gluster.org/#/c/18306

Comment 5 Atin Mukherjee 2017-09-21 02:20:07 UTC
upstream mainline : https://review.gluster.org/#/c/18306
upstream 3.12 : https://review.gluster.org/#/c/18313/
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/118559/

Comment 7 Sweta Anandpara 2017-10-11 11:40:13 UTC
Tested and verified this on the build 3.8.4-48.

Brick-related information is now present in the VOLUME_CREATE event. Moving this bug to verified in RHGS 3.3.1.

{u'message': {u'bricks': u' 10.70.46.36:/bricks/brick2/test-vol2 10.70.46.127:/bricks/brick2/test-vol2', u'name': u'test-vol2'}, u'event': u'VOLUME_CREATE', u'ts': 1507720417, u'nodeid': u'0c719d96-0d11-4b0a-a4b6-1ba5bd0702b1'}
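The payload above is a Python 2 dict repr as captured by the test webhook. If one needs to post-process such logs, `ast.literal_eval` turns the repr back into a dict (the `u''` prefixes are still valid literals in Python 3); this is a log-parsing convenience, not part of the events API:

```python
import ast

# Verbatim copy of the logged VOLUME_CREATE payload from this bug.
logged = ("{u'message': {u'bricks': u' 10.70.46.36:/bricks/brick2/test-vol2 "
          "10.70.46.127:/bricks/brick2/test-vol2', u'name': u'test-vol2'}, "
          "u'event': u'VOLUME_CREATE', u'ts': 1507720417, "
          "u'nodeid': u'0c719d96-0d11-4b0a-a4b6-1ba5bd0702b1'}")

event = ast.literal_eval(logged)             # safe eval of the dict literal
bricks = event["message"]["bricks"].split()  # split() drops the leading space
print(event["event"], bricks)
```

The two entries in `bricks` match Brick1 and Brick2 in the `gluster v info test-vol2` output below.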

[root@dhcp46-36 ~]# rpm -qa | grep gluster
python-gluster-3.8.4-48.el7rhgs.noarch
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
glusterfs-api-3.8.4-48.el7rhgs.x86_64
glusterfs-3.8.4-48.el7rhgs.x86_64
glusterfs-cli-3.8.4-48.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-48.el7rhgs.x86_64
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-48.el7rhgs.x86_64
glusterfs-server-3.8.4-48.el7rhgs.x86_64
glusterfs-rdma-3.8.4-48.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
glusterfs-fuse-3.8.4-48.el7rhgs.x86_64
glusterfs-libs-3.8.4-48.el7rhgs.x86_64
glusterfs-events-3.8.4-48.el7rhgs.x86_64
[root@dhcp46-36 ~]# 
[root@dhcp46-36 ~]# gluster v info test-vo2
Volume test-vo2 does not exist
[root@dhcp46-36 ~]# gluster v info test-vol2
 
Volume Name: test-vol2
Type: Replicate
Volume ID: d65f38c5-b402-4733-90ce-966130d46ce6
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.46.36:/bricks/brick2/test-vol2
Brick2: 10.70.46.127:/bricks/brick2/test-vol2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@dhcp46-36 ~]# 
[root@dhcp46-36 ~]# gluster peer status
Number of Peers: 3

Hostname: dhcp46-108.lab.eng.blr.redhat.com
Uuid: 6c2aa661-6074-43e0-8979-31eba7e4329d
State: Peer in Cluster (Connected)

Hostname: dhcp46-128.lab.eng.blr.redhat.com
Uuid: 3cedc928-59cf-4379-b149-e9abd156465e
State: Peer in Cluster (Connected)

Hostname: dhcp46-127.lab.eng.blr.redhat.com
Uuid: ab7e9c3f-9825-46b5-b655-ad18ed02fa22
State: Peer in Cluster (Connected)
[root@dhcp46-36 ~]# 
[root@dhcp46-36 ~]# gluster v list
ozone
test-vol
test-vol2
[root@dhcp46-36 ~]#

Comment 10 errata-xmlrpc 2017-11-29 03:30:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3276