Bug 1028317
| Summary: | district.info on node should use DISTRICTS_FIRST_UID instead of GEAR_MIN_UID while adding a node to district | | |
|---|---|---|---|
| Product: | OpenShift Online | Reporter: | zhaozhanqi <zzhao> |
| Component: | Pod | Assignee: | Abhishek Gupta <abhgupta> |
| Status: | CLOSED NOTABUG | QA Contact: | libra bugs <libra-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.x | CC: | abhgupta, xtian |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-11-13 02:00:35 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

zhaozhanqi 2013-11-08 07:48:53 UTC

The broker uses the DISTRICTS_FIRST_UID value defined in the mcollective plugin. But since this powers the Rails configuration, the broker needs to be restarted as well. I currently don't see the step to restart the broker in the bug description. Note to QE: can you please confirm that the broker was restarted?

Hi Abhishek Gupta, I restarted the broker and still got the same result; please confirm, thanks. I added the following step after step 2:

`service rhc-broker restart`

I followed the steps specified and wasn't able to reproduce this on devenv_4017. I even created a 2-node district and tested this on both nodes. In step 2, I restarted both the mcollective and broker services:

`service ruby193-mcollective restart`
`service rhc-broker restart`

One important thing to note here is that the broker and mcollective services need to be restarted before the district is created. Also, after you create a district (and before adding the node), please check the district (using `oo-admin-ctl-district`) to make sure that the max_uuid attribute is correct.

Hi Abhishek, I found the reason: we need to change the file /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf. Should that be done on the node or on the broker? If on the broker, this issue could not be reproduced, and we need to update the case as well. If on the node, this issue could be reproduced. Please help to confirm. Sorry to trouble you again.

Thank you for debugging this. This is the configuration file for the broker mcollective client plugin: /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf. So, yes, it needs to be modified on the broker, and the broker needs to be restarted to pick up the configuration.
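The resolution above boils down to a small broker-side workflow. A minimal sketch, assuming the file path, service names, and `oo-admin-ctl-district` tool named in the thread (the `grep` check is illustrative, not a documented step):

```shell
# On the BROKER host (not the node): this is the broker's mcollective
# client plugin configuration, read by the Rails application.
CONF=/etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf

# Illustrative check: confirm the UID setting is present in the file.
grep DISTRICTS_FIRST_UID "$CONF"

# Restart mcollective and the broker BEFORE creating the district,
# so the broker picks up the new configuration.
service ruby193-mcollective restart
service rhc-broker restart

# After creating the district (and before adding the node), inspect it
# to verify attributes such as max_uuid are correct.
oo-admin-ctl-district
```

The key point from the thread: editing this file on the node has no effect, because it configures the broker's mcollective client, and the broker only reads it at startup.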