Bug 1118417 - Unable to add new nodes to old districts
Summary: Unable to add new nodes to old districts
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Pod
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Abhishek Gupta
QA Contact: libra bugs
Depends On:
Blocks: 1118862
Reported: 2014-07-10 16:19 UTC by Stefanie Forrester
Modified: 2015-05-15 00:29 UTC (History)
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1118862 (view as bug list)
Last Closed: 2014-10-10 00:48:30 UTC


Description Stefanie Forrester 2014-07-10 16:19:53 UTC
Description of problem:
We have some old districts that used to contain at least 3 nodes each. They haven't been used in some time, since we retired some nodes. (But they still contain 1-2 nodes from back when they were created.)

Now, when we try to put some new nodes into the old districts, it gives us an error and fails to add the nodes to the districts.

[tmcgonag@ex-srv1.prod ~]$ sudo oo-admin-ctl-district -c add-node -n std95 -i ex-std-node102.prod.rhcloud.com

/usr/sbin/oo-admin-ctl-district:215:in `casecmp': can't convert nil into String (TypeError)
	from /usr/sbin/oo-admin-ctl-district:215:in `block in <main>'
	from /usr/sbin/oo-admin-ctl-district:178:in `block in collate_errors'
	from /usr/sbin/oo-admin-ctl-district:176:in `each'
	from /usr/sbin/oo-admin-ctl-district:176:in `collate_errors'
	from /usr/sbin/oo-admin-ctl-district:213:in `<main>'
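The `TypeError` in this traceback is what `String#casecmp` raises when it is handed `nil` instead of a `String`, which fits a district record that has no `platform` attribute. A minimal sketch of the failure mode (hypothetical variable names; this is not the actual oo-admin-ctl-district code):

```ruby
# Hypothetical sketch: old district records lack a 'platform' attribute,
# so the platform comparison receives nil.
district_platform = nil            # as stored for pre-"platform" districts
node_platform     = "linux"

result = begin
  # On Ruby < 2.4 this raises TypeError ("can't convert nil into String"),
  # matching the traceback above; on Ruby >= 2.4 casecmp returns nil instead
  # of raising.
  node_platform.casecmp(district_platform)
rescue TypeError
  :type_error
end

puts result.inspect
```

Newly created districts get a platform value at creation time, which is consistent with add-node working fine against new districts.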

However, when we create a *new* district for the new nodes, we can add them just fine. We're currently creating new districts and then adding 3 nodes to each one.

Version-Release number of selected component (if applicable):

How reproducible:
Every time.

Steps to Reproduce:
1. Attempt to add nodes to one of the old districts (which contain 1-2 nodes).

Actual results:
Add-node fails.

Expected results:
Add-node should succeed in adding the node to the old district.

Additional info:

Comment 2 openshift-github-bot 2014-07-11 19:17:04 UTC
Commit pushed to master at https://github.com/openshift/origin-server

Bug 1118417: Using default district platform when missing
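The commit title suggests the fix falls back to a default platform when the attribute is missing from the district record. A hedged sketch of that pattern (`DEFAULT_PLATFORM` and `effective_platform` are illustrative names, not the actual code from the commit):

```ruby
# Illustrative only: fall back to a default platform when a district
# record predates the 'platform' attribute. Names are hypothetical.
DEFAULT_PLATFORM = "linux"

def effective_platform(district_platform)
  district_platform.nil? ? DEFAULT_PLATFORM : district_platform
end

puts effective_platform(nil)        # missing attribute falls back to default
puts effective_platform("windows")  # explicit value is preserved
```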

Comment 3 Jianwei Hou 2014-07-16 03:31:26 UTC
Looked into the PR and the bug description; I think this is raised because the old districts do not have a 'platform' attribute. Verified this bug on devenv-stage_910 with the following steps:

1. Create a district
2. Remove the 'platform' field from the districts collection in mongo:
ip-10-111-162-53(mongod-2.4.6)[PRIMARY] openshift_broker_dev> db.districts.update({name:'test'},{$unset:{platform:1}})
Updated 1 existing record(s) in 1ms
ip-10-111-162-53(mongod-2.4.6)[PRIMARY] openshift_broker_dev> db.districts.find({},{available_uids:0})
{
  "_id": ObjectId("53c629535b990c78e5000001"),
  "active_servers_size": 0,
  "available_capacity": 6000,
  "created_at": ISODate("2014-07-16T07:27:15.266Z"),
  "gear_size": "small",
  "max_capacity": 6000,
  "max_uid": 6999,
  "name": "test",
  "updated_at": ISODate("2014-07-16T07:27:15.266Z"),
  "uuid": "9c1fac2c0cba11e4af80cafa18121884"
}
Fetched 1 record(s) in 1ms
3. Add a new node:
[root@ip-10-111-162-53 ~]# oo-admin-ctl-district -n test -c add-node -i ip-10-111-162-53
Success for node 'ip-10-111-162-53'!

 "available_uids"=>"<6000 uids hidden>",
 "created_at"=>2014-07-16 07:27:15 UTC,
 "updated_at"=>2014-07-16 07:27:15 UTC,
