Bug 1559364 - The flow ExpandClusterWithDetectedPeers should be targeted to provisioner node in cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-commons
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Shubhendu Tripathi
QA Contact: Filip Balák
URL:
Whiteboard:
Depends On:
Blocks: 1503137
 
Reported: 2018-03-22 12:00 UTC by Shubhendu Tripathi
Modified: 2018-09-04 07:02 UTC (History)
5 users

Fixed In Version: tendrl-commons-1.6.1-3.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 07:00:53 UTC
Embargoed:




Links
Github: https://github.com/Tendrl/commons/pull/873 (last updated 2018-03-28 12:19:08 UTC)
Red Hat Product Errata: RHSA-2018:2616 (last updated 2018-09-04 07:02:03 UTC)

Description Shubhendu Tripathi 2018-03-22 12:00:06 UTC
Description of problem:
Currently the job is targeted at a specific node. It should instead be targeted at the provisioner node in the cluster.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:
The job is not picked up and processed properly

Expected results:
The job should be picked up and processed by the provisioner node in the cluster

Additional info:

Comment 2 Martin Bukatovic 2018-03-28 09:33:39 UTC
Could you provide:

* more details that would help us come up with a reproducer scenario, and
* a link to the upstream merge request?

Comment 3 Shubhendu Tripathi 2018-03-28 12:19:09 UTC
Martin,

So earlier, cluster expansion was triggered automatically the moment a new peer was detected in the system. This caused issues: tendrl-node-agent might not yet be running on the new node, so the import was likely to time out because there was no node agent to pick up and process the task. To fix this, we came up with the solution for BZ#1559368, which makes expand cluster a user-triggered action.

So earlier, when expansion was automatic, the logic was to look up the node-id of the provisioner node and target the task at the node with tag `tendrl/node_{node-id}`. The same approach does not work for a user-triggered action, as the API would need to figure out the provisioner's node-id and then target the job at that specific node. To simplify this, the flow definition now targets the task at the node with tag `provisioner/{integration-id}`.
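The tag-based targeting described above can be sketched roughly as follows. This is an illustrative Python sketch, not actual tendrl-commons code; the helper names (`target_tag`, `node_should_pick_job`) and the sample tag values are assumptions for demonstration only.

```python
# Illustrative sketch of tag-based job targeting (not actual tendrl code).

def target_tag(integration_id):
    """Tag a flow definition can use to address the provisioner node
    of a cluster, without knowing that node's node-id."""
    return "provisioner/%s" % integration_id


def node_should_pick_job(node_tags, job_tags):
    """A node picks up a job only if it carries every tag the job
    is targeted at."""
    return set(job_tags).issubset(set(node_tags))


# The provisioner node carries a provisioner/<integration-id> tag in
# addition to its own tendrl/node_<node-id> tag (hypothetical values).
provisioner_tags = ["tendrl/node_abc123", "provisioner/cluster-42"]
other_node_tags = ["tendrl/node_def456"]

job_tags = [target_tag("cluster-42")]

assert node_should_pick_job(provisioner_tags, job_tags)
assert not node_should_pick_job(other_node_tags, job_tags)
```

The point of the change is visible here: the job is addressed by the cluster-level `provisioner/{integration-id}` tag, so neither the API nor the flow definition needs to resolve the provisioner's node-id first.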

This BZ covers only this fix. I have added a link to the upstream PR that fixes the issue.

Comment 4 Martin Bukatovic 2018-03-29 08:26:07 UTC
I'm going to provide a conditional qe_ack under the following assumptions:

* the dev team has done a code walkthrough or sanity unit validation of
  the change described in this BZ
* the QE team will verify this BZ by running the functional testing for
  BZ 1559368 and checking that the import task is targeted at the
  provisioner node

Does the dev team agree?

Comment 5 Nishanth Thomas 2018-03-29 08:27:42 UTC
Ack

Comment 9 Filip Balák 2018-05-23 10:28:10 UTC
Seems OK. The job is correctly locked by the provisioner node. --> VERIFIED

Tested with:
tendrl-ansible-1.6.3-4.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-5.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-3.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-3.el7rhgs.noarch
tendrl-node-agent-1.6.3-5.el7rhgs.noarch
tendrl-notifier-1.6.3-3.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-2.el7rhgs.noarch

Comment 11 errata-xmlrpc 2018-09-04 07:00:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

