Bug 822186 - rhc-admin-ctl-district errors trying to destroy an empty district...
Status: CLOSED CURRENTRELEASE
Product: OpenShift Origin
Classification: Red Hat
Component: Pod
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Dan McPherson
QA Contact: libra bugs
Keywords: Triaged
Depends On:
Blocks:
Reported: 2012-05-16 11:09 EDT by Thomas Wiest
Modified: 2015-05-14 21:54 EDT
CC List: 6 users

See Also:
Fixed In Version: devenv_1779
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-06-08 13:59:04 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Thomas Wiest 2012-05-16 11:09:07 EDT
Description of problem:
When I try to destroy an empty district, it errors saying:
Couldn't destroy district 'f390eba2d1514932aaff24652df864c6' because it still contains applications and/or nodes


Here's the district info:

{"available_capacity"=>5998,
 "available_uids"=>"<5998 uids hidden>",
 "node_profile"=>"small",
 "externally_reserved_uids_size"=>0,
 "uuid"=>"f390eba2d1514932aaff24652df864c6",
 "server_identities"=>{},
 "max_capacity"=>6000,
 "creation_time"=>"2012-05-12T23:25:10-04:00",
 "max_uid"=>6999,
 "name"=>"std1",
 "active_server_identities_size"=>0}

Notice that there are no server_identities because it allowed me to remove a node that didn't have any gears on it (as it should).

Notice that the district is effectively reporting 2 allocated uids. I think that's why it's erroring.
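That figure isn't printed anywhere directly; it falls out of the capacity fields in the dump. A minimal Ruby sketch of the arithmetic (the hash keys match the dump above; the variable names are mine):

district = {
  "available_capacity" => 5998,
  "max_capacity"       => 6000,
  "server_identities"  => {},
}

# UIDs the broker would treat as allocated: total capacity minus
# whatever is still available.
allocated_uids = district["max_capacity"] - district["available_capacity"]

puts allocated_uids                        # => 2
puts district["server_identities"].empty?  # => true, no nodes at all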


Version-Release number of selected component (if applicable):
rhc-broker-0.92.8-1.el6_2.noarch


How reproducible:
Not sure how to get a district into this state, but once it's there, it errors every time.


Steps to Reproduce:
1. Unsure.

  
Actual results:
Can't destroy an empty district.


Expected results:
rhc-admin-ctl-district should notice that the district has no server_identities and therefore allow it to be destroyed.
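A hedged sketch of that expected check in Ruby (destroyable? is a hypothetical name, not the actual broker code):

# Hypothetical guard for the destroy path; names are illustrative only.
def destroyable?(district)
  # A district with no registered nodes cannot be running applications,
  # so an empty server_identities map should be enough to allow destroy,
  # regardless of stale capacity bookkeeping like available_capacity.
  district["server_identities"].empty?
end

puts destroyable?({ "server_identities" => {} })  # => true for the district above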
Comment 1 Johnny Liu 2012-05-18 01:47:59 EDT
Verified this bug with devenv_1780, and it passes.

1. Create a district.
2. Log into mongo and make the following change to set up the test scenario (lowering available_capacity to 5998 makes the district appear to have 2 allocated uids, as in the original report):
PRIMARY> a = db.district.findOne({ "uuid" : "0bcd0f2359e74273bc2ec0787d73c89d" })
PRIMARY> a['available_capacity'] = 5998
5998
PRIMARY> db.district.update( { "uuid" : "0bcd0f2359e74273bc2ec0787d73c89d" }, a )
3. Check the district info; available_capacity is now 5998 against a max_capacity of 6000, i.e. the district appears to have 2 allocated uids:
{"max_uid"=>6999,
 "available_capacity"=>5998,
 "externally_reserved_uids_size"=>0,
 "server_identities"=>{"ip-10-118-103-144"=>{"active"=>true}},
 "creation_time"=>"2012-05-17T23:03:11-04:00",
 "available_uids"=>"<6000 uids hidden>",
 "uuid"=>"0bcd0f2359e74273bc2ec0787d73c89d",
 "active_server_identities_size"=>1,
 "node_profile"=>"small",
 "max_capacity"=>6000,
 "name"=>"d1"}
4. Destroy this district.
# rhc-admin-ctl-district -n d1 -c destroy
!!!! WARNING !!!! WARNING !!!! WARNING !!!!
You are about to destroy the d1 district.

This is NOT reversible, all remote data for this district will be removed.
Do you want to destroy this district (y/n): y
Successfully destroyed district: d1

# rhc-admin-ctl-district
No districts created yet.  Use 'rhc-admin-ctl-district -c create' to create one.
