Bug 1028317

Summary: district.info on node should use DISTRICTS_FIRST_UID instead of GEAR_MIN_UID when adding a node to a district
Product: OpenShift Online
Reporter: zhaozhanqi <zzhao>
Component: Pod
Assignee: Abhishek Gupta <abhgupta>
Status: CLOSED NOTABUG
QA Contact: libra bugs <libra-bugs>
Severity: medium
Priority: medium
Version: 2.x
CC: abhgupta, xtian
Hardware: All
OS: All
Doc Type: Bug Fix
Last Closed: 2013-11-13 02:00:35 UTC
Type: Bug

Description zhaozhanqi 2013-11-08 07:48:53 UTC
Description of problem:
district.info on node should use DISTRICTS_FIRST_UID instead of GEAR_MIN_UID

Version-Release number of selected component (if applicable):
devenv_4003

How reproducible:
always

Steps to Reproduce:
1. ssh into the node instance and set the following in /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf:
   DISTRICTS_FIRST_UID=8000
2. restart mcollective
3. ssh into the broker and create a district
4. add the node to the district (see the command sketch after this list)
5. check the file '/var/lib/openshift/.settings/district.info' on the node
6. run oo-accept-node -v on the node
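
For reference, steps 1-4 correspond roughly to the shell session below (a sketch only: the hostnames and district name are made up, and the exact oo-admin-ctl-district flags may vary across 2.x builds):

# on the node
ssh root@node.example.com
vi /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf   # set DISTRICTS_FIRST_UID=8000
service ruby193-mcollective restart

# on the broker
ssh root@broker.example.com
oo-admin-ctl-district -c create -n sample_district -p small
oo-admin-ctl-district -c add-node -n sample_district -i node.example.com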

Actual results:
step 5 (district.info contents):
cat /var/lib/openshift/.settings/district.info
#Do not modify manually!
uuid='70b25bd4484311e3a95422000aec8282'
active='true'
first_uid=1000
max_uid=6999

step 6 (oo-accept-node output):

<--snip-->
INFO: find district uuid: 70b25bd4484311e3a95422000aec8282
INFO: determining node uid range: 1000 to 6999
INFO: checking presence of tc qdisc
<--snip-->

Expected results:
step 5 should return:
first_uid=8000
max_uid=13999

step 6 should show:
INFO: determining node uid range: 8000 to 13999

Additional info:

Comment 1 Abhishek Gupta 2013-11-08 19:05:18 UTC
The broker uses the DISTRICTS_FIRST_UID value defined in the mcollective plugin configuration. Since this value feeds the Rails configuration, the broker needs to be restarted as well. I don't see a step to restart the broker in the bug description.
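
A minimal sketch of the missing restart on the broker, using the same service name that appears later in this bug (the grep is only a sanity check that the value is present in the file):

# on the broker, after editing the plugin configuration
grep DISTRICTS_FIRST_UID /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf
service rhc-broker restart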

Comment 2 Abhishek Gupta 2013-11-08 19:46:31 UTC
Note to QE: Can you please confirm that the broker was restarted?

Comment 3 zhaozhanqi 2013-11-11 05:22:57 UTC
Hi Abhishek Gupta,

I restarted the broker and got the same result; please confirm, thanks.

I added the following step after step 2:

service rhc-broker restart

Comment 4 Abhishek Gupta 2013-11-11 19:20:26 UTC
I followed the steps specified and wasn't able to reproduce this on devenv_4017. I even created a 2-node district and tested this on both nodes. 

In step 2, I restarted both the mcollective and broker services:

service ruby193-mcollective restart
service rhc-broker restart

One important thing to note here is that the broker and mcollective services need to be restarted before the district is created. Also, after you create a district (and before adding the node), please check the district (using oo-admin-ctl-district) to make sure that the max_uid attribute is correct.
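
A sketch of that pre-check (illustrative; the listing format of oo-admin-ctl-district differs between builds, and the 8000-13999 range assumes DISTRICTS_FIRST_UID=8000 with the same district size as in this bug):

# on the broker: inspect the district before adding the node
oo-admin-ctl-district
# verify the district's uid range reflects DISTRICTS_FIRST_UID=8000 (8000-13999, not 1000-6999)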

Comment 5 zhaozhanqi 2013-11-12 03:06:46 UTC
Hi Abhishek,

I found the cause:

Should we change the file /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf on the NODE or on the BROKER?

If on the broker, this issue cannot be reproduced, and we need to update the test case as well.
If on the node, this issue can be reproduced.

Please help confirm. Sorry to take up your time.

Comment 6 Abhishek Gupta 2013-11-12 18:09:45 UTC
Thank you for debugging this.

This is the configuration file for the broker mcollective client plugin --> /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf 

So, yes, it needs to be modified on the broker and the broker needs to be restarted to pick up the configuration.
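
In other words, the corrected first steps of the repro would be (sketch; same file path and service names as above):

# on the broker, not the node
vi /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf   # set DISTRICTS_FIRST_UID=8000
service rhc-broker restart
# then create the district and add the node as before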

Comment 7 zhaozhanqi 2013-11-13 02:00:35 UTC
According to comment 6, this is not a bug; QE needs to update the test case. Closing this bug.