Description of problem:
district.info on the node should use DISTRICTS_FIRST_UID instead of GEAR_MIN_UID

Version-Release number of selected component (if applicable):
devenv_4003

How reproducible:
always

Steps to Reproduce:
1. ssh into the node instance and modify the file /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf:
   DISTRICTS_FIRST_UID=8000
2. restart mcollective
3. ssh into the broker and create one district
4. add the node to the district
5. check the file '/var/lib/openshift/.settings/district.info' on the node
6. oo-accept-node -v

Actual results:
step 5:
# cat /var/lib/openshift/.settings/district.info
#Do not modify manually!
uuid='70b25bd4484311e3a95422000aec8282'
active='true'
first_uid=1000
max_uid=6999

step 6:
<--snip-->
INFO: find district uuid: 70b25bd4484311e3a95422000aec8282
INFO: determining node uid range: 1000 to 6999
INFO: checking presence of tc qdisc
<--snip-->

Expected results:
step 5 should return:
first_uid=8000
max_uid=13999

step 6 should show:
INFO: determining node uid range: 8000 to 13999

Additional info:
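The expected range above can be sketched with a bit of shell arithmetic. Both the actual range (1000 to 6999) and the expected range (8000 to 13999) span 6000 uids, so the district uid pool size of 6000 below is an assumption inferred from those numbers, not a value read from the config.

```shell
# Sketch: derive the expected max_uid from DISTRICTS_FIRST_UID,
# assuming a district uid pool of 6000 uids (inferred from the
# ranges in this report, not from the plugin config).
first_uid=8000
pool_size=6000
max_uid=$((first_uid + pool_size - 1))
echo "expected uid range: ${first_uid} to ${max_uid}"
```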
The broker uses the DISTRICTS_FIRST_UID value defined in the mcollective plugin. But since this powers the Rails configuration, the broker needs to be restarted as well. I currently don't see the step to restart the broker in the bug description.
Note to QE: Can you please confirm that the broker was restarted?
Hi Abhishek Gupta, I restarted the broker and still got the same result. Please confirm, thanks. I added the following step after step 2:
service rhc-broker restart
I followed the steps specified and wasn't able to reproduce this on devenv_4017. I even created a 2-node district and tested this on both nodes. In step 2, I restarted both the mcollective and broker services:
service ruby193-mcollective restart
service rhc-broker restart
One important thing to note here is that the broker and mcollective services need to be restarted before the district is created. Also, after you create a district (and before adding the node), please check the district (using oo-admin-ctl-district) to make sure that the max_uid attribute is correct.
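The post-creation check suggested above could look like the following. The sample output format is an assumption for illustration; it is embedded here so the filter is self-contained, whereas on a real broker the input would come from running oo-admin-ctl-district.

```shell
# Sketch: verify the district's uid attributes before adding any node.
# The text below stands in for oo-admin-ctl-district output; its exact
# format is assumed, not captured from a live broker.
oo_admin_output='uuid: 70b25bd4484311e3a95422000aec8282
first_uid: 8000
max_uid: 13999'
echo "$oo_admin_output" | grep -E '^(first_uid|max_uid):'
```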
Hi Agupta, I found the reason. We need to change the file /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf, but on the node or on the broker? If it is changed on the broker, this issue cannot be reproduced, and we need to update the test case as well. If it is changed on the node, this issue can be reproduced. Please help confirm. Sorry to trouble you again.
Thank you for debugging this. This is the configuration file for the broker mcollective client plugin --> /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf So, yes, it needs to be modified on the broker and the broker needs to be restarted to pick up the configuration.
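A quick way to confirm the setting on the broker side could look like the sketch below. The sample line is written to a temp file so the check is self-contained; on a real broker you would grep the actual plugin config file named above.

```shell
# Sketch: confirm DISTRICTS_FIRST_UID in the BROKER's mcollective
# plugin config. /tmp path and sample contents are stand-ins for
# /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf.
conf=/tmp/msg-broker-mcollective.conf
echo 'DISTRICTS_FIRST_UID=8000' > "$conf"
grep '^DISTRICTS_FIRST_UID=' "$conf"
# then restart the broker so the change is picked up:
#   service rhc-broker restart
```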
According to comment 6, this is not a bug; QE needs to update the test case. Closing this bug.