Bug 1518276 - Incorrect format of host reported when geo replication status changed
Summary: Incorrect format of host reported when geo replication status changed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-notifier
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Nishanth Thomas
QA Contact: Filip Balák
URL:
Whiteboard:
Depends On: 1578716 1598345 1603175
Blocks: 1503134
 
Reported: 2017-11-28 14:36 UTC by Filip Balák
Modified: 2018-09-04 07:00 UTC
CC List: 5 users

Fixed In Version: tendrl-ansible-1.6.1-2.el7rhgs.noarch.rpm, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm,
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 06:59:21 UTC
Embargoed:


Attachments


Links
Github Tendrl gluster-integration issues 540 (last updated 2018-01-10 08:58:12 UTC)
Red Hat Product Errata RHSA-2018:2616 (last updated 2018-09-04 07:00:23 UTC)

Description Filip Balák 2017-11-28 14:36:14 UTC
Description of problem:
I have two volumes in a geo-replication session. The status of some nodes became faulty and I received several notifications like this one:

```
georep status of pair: fbalak-usm1-gl5.usmqe.hostname.com-_mnt_brick_gama_disperse_2_2 of volume volume_gama_disperse_4_plus_2x2 is faulty
```

The host is fbalak-usm1-gl5.usmqe.hostname.com and the brick is /mnt/brick/gama_disperse_2_2.

The message should report them in a proper format.
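
The following is a hypothetical sketch, not the actual tendrl-notifier code: it only contrasts the flattened pair string seen in the notification with the expected host:brick form, assuming the notifier has the host and the brick path available as separate values.

```python
# Hypothetical sketch (not the actual tendrl-notifier code).

def flattened_pair(host, brick_path):
    # Mimics the reported message: path separators are replaced by underscores
    # and the result is glued to the host with a hyphen, which is misleading.
    return "{0}-{1}".format(host, brick_path.replace("/", "_"))


def expected_pair(host, brick_path):
    # Expected format: keep the brick path intact and join it to the host with ':'.
    return "{0}:{1}".format(host, brick_path)


host = "fbalak-usm1-gl5.usmqe.hostname.com"
brick = "/mnt/brick/gama_disperse_2_2"

print(flattened_pair(host, brick))
# -> fbalak-usm1-gl5.usmqe.hostname.com-_mnt_brick_gama_disperse_2_2
print(expected_pair(host, brick))
# -> fbalak-usm1-gl5.usmqe.hostname.com:/mnt/brick/gama_disperse_2_2
```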

Version-Release number of selected component (if applicable):
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-ui-1.5.4-4.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-4.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-node-agent-1.5.4-5.el7rhgs.noarch
tendrl-notifier-1.5.4-3.el7rhgs.noarch


How reproducible:
60%

Steps to Reproduce:
1. Create volumes with geo replication.
2. Import clusters with geo replication into web admin.
3. Try to change the geo-replication status to faulty (change some configuration, stop the glusterd service on a node, or cut the connection between some nodes); a small status-watch sketch follows this list.
4. Look at notifications.
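
For step 3, a minimal helper like the sketch below can be used to watch the session while breaking it. It assumes the gluster CLI is available on the node; the volume name and slave specification are placeholders to adjust to the actual setup.

```python
# Minimal watch loop for step 3; assumes the gluster CLI is installed and that
# MASTER_VOL and SLAVE are adjusted to the actual geo-replication session.
import subprocess
import time

MASTER_VOL = "volume_gama_disperse_4_plus_2x2"  # example master volume
SLAVE = "slave-host::slave_volume"              # placeholder slave specification

while True:
    status = subprocess.check_output(
        ["gluster", "volume", "geo-replication", MASTER_VOL, SLAVE, "status"]
    ).decode()
    print(status)
    if "Faulty" in status:
        # A worker went Faulty; now check the web admin notifications (step 4).
        break
    time.sleep(10)
```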

Actual results:
Notifications report hosts and bricks in an incorrect and misleading format, e.g.:
fbalak-usm1-gl5.usmqe.hostname.com-_mnt_brick_gama_disperse_2_2

Expected results:
The notification should contain fbalak-usm1-gl5.usmqe.hostname.redhat.com:/mnt/brick/gama_disperse_2_2.

Additional info:

Comment 3 Nishanth Thomas 2017-11-29 14:30:29 UTC
Not critical to address at this point in time. Proposing to move this out to a future release.

Comment 7 Filip Balák 2018-08-03 11:42:41 UTC
The following alerts related to geo-replication are currently raised:
Geo-replication between <server>:/mnt/brick/path and <volume> is Active
Geo-replication between <server>:/mnt/brick/path and <volume> is Passive
Geo-replication between <server>:/mnt/brick/path and <volume> is faulty
--> VERIFIED

Tested with:
tendrl-ansible-1.6.3-6.el7rhgs.noarch
tendrl-api-1.6.3-5.el7rhgs.noarch
tendrl-api-httpd-1.6.3-5.el7rhgs.noarch
tendrl-commons-1.6.3-11.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-8.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-8.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-9.el7rhgs.noarch

Comment 9 errata-xmlrpc 2018-09-04 06:59:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

