Bug 1518276

Summary: Incorrect format of host reported when geo replication status changed
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Filip Balák <fbalak>
Component: web-admin-tendrl-notifier    Assignee: Nishanth Thomas <nthomas>
Status: CLOSED ERRATA QA Contact: Filip Balák <fbalak>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: rhgs-3.3    CC: dahorak, fbalak, nthomas, rhs-bugs, sankarshan
Target Milestone: ---   
Target Release: RHGS 3.4.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: tendrl-ansible-1.6.1-2.el7rhgs.noarch.rpm, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm, Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-09-04 06:59:21 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1578716, 1598345, 1603175    
Bug Blocks: 1503134    

Description Filip Balák 2017-11-28 14:36:14 UTC
Description of problem:
I have two volumes forming a geo-replication session. The status of some nodes became faulty and I received a few notifications like this:

```
georep status of pair: fbalak-usm1-gl5.usmqe.hostname.com-_mnt_brick_gama_disperse_2_2 of volume volume_gama_disperse_4_plus_2x2 is faulty
```

The host is fbalak-usm1-gl5.usmqe.hostname.com and the brick is /mnt/brick/gama_disperse_2_2.

The message should report them in the proper host:brick format.

Version-Release number of selected component (if applicable):
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-ui-1.5.4-4.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-4.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-node-agent-1.5.4-5.el7rhgs.noarch
tendrl-notifier-1.5.4-3.el7rhgs.noarch


How reproducible:
60%

Steps to Reproduce:
1. Create volumes with geo replication.
2. Import clusters with geo replication into web admin.
3. Try to change the geo-replication status to Faulty (change some configuration, stop the glusterd service on a node, or cut the connection between some nodes); see the sketch after this list.
4. Look at notifications.
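
A minimal sketch of one way to carry out step 3: stop glusterd on a node that participates in the session and then check the geo-replication status from the gluster CLI. The master volume name is taken from this report; the slave host and slave volume names are placeholders, and this is not an exact reproduction script.

```python
# Hypothetical helper for step 3: stop glusterd on one participating node
# and print the geo-replication session status.  The slave host and slave
# volume below are placeholders, not values from this report.
import subprocess

def force_faulty_and_check(master_vol, slave_host, slave_vol):
    # Stopping glusterd on a node in the session is one way to push some
    # session pairs into the Faulty state.
    subprocess.run(["systemctl", "stop", "glusterd"], check=True)
    # Show the session status so the resulting web admin notifications
    # can be compared against the gluster CLI output.
    subprocess.run(
        ["gluster", "volume", "geo-replication",
         master_vol, f"{slave_host}::{slave_vol}", "status"],
        check=True,
    )

force_faulty_and_check("volume_gama_disperse_4_plus_2x2",
                       "slave.example.com", "slave_vol")
```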

Actual results:
Notifications report hosts and bricks in an incorrect and misleading format, e.g.:
fbalak-usm1-gl5.usmqe.hostname.com-_mnt_brick_gama_disperse_2_2

Expected results:
The notification should show fbalak-usm1-gl5.usmqe.hostname.redhat.com:/mnt/brick/gama_disperse_2_2.
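
For illustration, a minimal sketch (not the tendrl-notifier implementation) of how the alert could be composed from separate host and brick-path fields. Reversing the mangled pair string is ambiguous because brick directory names may themselves contain underscores (e.g. gama_disperse_2_2), so the message should be built from the original fields; the function and parameter names here are assumptions.

```python
# Illustrative sketch only: build the geo-replication alert from separate
# host and brick-path fields instead of the mangled
# "<host>-_mnt_brick_..." pair identifier.  Field names are assumptions.

def format_georep_alert(host, brick_path, volume, status):
    """Return an alert in the expected '<host>:<brick path>' form."""
    return (f"georep status of pair: {host}:{brick_path} "
            f"of volume {volume} is {status}")

# Reversing the '_' substitution in the pair identifier cannot reliably
# recover the path, because directory names may contain underscores, so
# the original host and brick path should be used directly.
print(format_georep_alert(
    "fbalak-usm1-gl5.usmqe.hostname.com",
    "/mnt/brick/gama_disperse_2_2",
    "volume_gama_disperse_4_plus_2x2",
    "faulty",
))
```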

Additional info:

Comment 3 Nishanth Thomas 2017-11-29 14:30:29 UTC
Not critical to address at this point in time. Proposing to move this out to a future release.

Comment 7 Filip Balák 2018-08-03 11:42:41 UTC
The following alerts related to geo-replication are currently raised:
Geo-replication between <server>:/mnt/brick/path and <volume> is Active
Geo-replication between <server>:/mnt/brick/path and <volume> is Passive
Geo-replication between <server>:/mnt/brick/path and <volume> is faulty
--> VERIFIED

Tested with:
tendrl-ansible-1.6.3-6.el7rhgs.noarch
tendrl-api-1.6.3-5.el7rhgs.noarch
tendrl-api-httpd-1.6.3-5.el7rhgs.noarch
tendrl-commons-1.6.3-11.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-8.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-8.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-9.el7rhgs.noarch

Comment 9 errata-xmlrpc 2018-09-04 06:59:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616