Bug 1028317 - district.info on node should use DISTRICTS_FIRST_UID instead of GEAR_MIN_UID while adding a node to district
Product: OpenShift Online
Classification: Red Hat
Component: Pod
Hardware: All  OS: All
Priority: medium  Severity: medium
Assigned To: Abhishek Gupta
QA Contact: libra bugs
Depends On:
Reported: 2013-11-08 02:48 EST by zhaozhanqi
Modified: 2015-05-14 20:22 EDT
CC: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-11-12 21:00:35 EST
Type: Bug
Regression: ---

Attachments: None
Description zhaozhanqi 2013-11-08 02:48:53 EST
Description of problem:
district.info on node should use DISTRICTS_FIRST_UID instead of GEAR_MIN_UID

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. ssh into the node instance and modify the file /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf
2. restart mcollective
3. ssh into the broker and create one district
4. add the node to the district
5. check the file '/var/lib/openshift/.settings/district.info' on the node
6. run oo-accept-node -v
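For reference, the change made in step 1 presumably looks like the fragment below. DISTRICTS_FIRST_UID is the only key this report confirms, and the value 8000 is inferred from the expected uid range; treat the fragment as a sketch, not the full plugin configuration.

```ini
# /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf
# Sketch only: DISTRICTS_FIRST_UID is the setting at issue in this bug;
# 8000 matches the expected "node uid range: 8000 to 13999" result.
DISTRICTS_FIRST_UID=8000
```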

Actual results:
step 5:
cat /var/lib/openshift/.settings/district.info
#Do not modify manually!

step 6:

INFO: find district uuid: 70b25bd4484311e3a95422000aec8282
INFO: determining node uid range: 1000 to 6999
INFO: checking presence of tc qdisc

Expected results:
step 5 should return:

step 6 should report:

INFO: determining node uid range: 8000 to 13999
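For context, the uid range reported by oo-accept-node appears to be the district's first uid plus a fixed pool of 6000 uids (1000 to 6999 by default). A minimal sketch of that arithmetic follows; the function name and the pool-size constant are assumptions for illustration, not actual OpenShift code.

```python
def district_uid_range(first_uid, pool_size=6000):
    """Return the inclusive (min, max) gear uid range for a district.

    Illustrative only: pool_size=6000 is inferred from the default
    range 1000-6999; with DISTRICTS_FIRST_UID=8000 the expected
    range becomes 8000-13999.
    """
    return (first_uid, first_uid + pool_size - 1)

print(district_uid_range(1000))  # default GEAR_MIN_UID -> (1000, 6999)
print(district_uid_range(8000))  # DISTRICTS_FIRST_UID=8000 -> (8000, 13999)
```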

Additional info:
Comment 1 Abhishek Gupta 2013-11-08 14:05:18 EST
The broker uses the DISTRICTS_FIRST_UID value defined in the mcollective plugin. But since this powers the Rails configuration, the broker needs to be restarted as well. I currently don't see the step to restart the broker in the bug description.
Comment 2 Abhishek Gupta 2013-11-08 14:46:31 EST
Note to QE: Can you please confirm that the broker was restarted?
Comment 3 zhaozhanqi 2013-11-11 00:22:57 EST
Hi Abhishek Gupta,

I restarted the broker and still see the same result; please confirm, thanks.

I added the following step after step 2:

service rhc-broker restart
Comment 4 Abhishek Gupta 2013-11-11 14:20:26 EST
I followed the steps specified and wasn't able to reproduce this on devenv_4017. I even created a 2-node district and tested this on both nodes. 

In step 2, I restarted both the mcollective and broker services:

service ruby193-mcollective restart
service rhc-broker restart

One important thing to note here is that the broker and mcollective services need to be restarted before the district is created. Also, after you create a district (and before adding the node), please check the district (using oo-admin-ctl-district) to make sure that the max_uuid attribute is correct.
Comment 5 zhaozhanqi 2013-11-11 22:06:46 EST
Hi Abhishek,

I found the cause of the discrepancy: should the file /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf be changed on the NODE or on the BROKER?

If it is changed on the broker, this issue cannot be reproduced, and we need to update the test case as well.
If it is changed on the node, this issue can be reproduced.

Please help confirm. Sorry to trouble you again.
Comment 6 Abhishek Gupta 2013-11-12 13:09:45 EST
Thank you for debugging this.

This is the configuration file for the broker mcollective client plugin: /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf

So, yes, it needs to be modified on the broker and the broker needs to be restarted to pick up the configuration.
Comment 7 zhaozhanqi 2013-11-12 21:00:35 EST
According to comment 6, this is not a bug; QE needs to update the test case. Closing this bug.
