Bug 902690 - Shouldn't create scalable application in different profile nodes when specifying the gear size
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Pod
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Dan McPherson
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-01-22 09:36 UTC by Rony Gong 🔥
Modified: 2015-05-15 02:12 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-13 22:56:27 UTC
Target Upstream Version:
Embargoed:


Attachments
development.log when create small scalable application (42.74 KB, text/plain)
2013-01-23 07:34 UTC, Rony Gong 🔥
development.log when create medium application (5.80 KB, text/plain)
2013-01-23 07:42 UTC, Rony Gong 🔥

Description Rony Gong 🔥 2013-01-22 09:36:23 UTC
Description of problem:
Shouldn't create a scalable application on nodes of a different profile when the gear size is specified

Version-Release number of selected component (if applicable):
devenv_2705

How reproducible:
always

Steps to Reproduce:
1. Create a multi-node env with 3 nodes A, B, C: A is the broker; A and B are in one small district, C is in another, medium district.
2. Set the user's allowed gear sizes: small, medium.
3. Create lots of non-scalable small applications; all of these apps will be created on A and B (this is intended to give A and B a higher active_capacity than C).
4. Create a small scalable application, and find that it has one gear on A while another gear is created on C (the medium node).
5. In mongodb this gear shows a small node_profile, but the gear actually exists on the medium node C.
  
Actual results:
1.Check the cgroup resource in C:
[root@ip-10-151-16-40 /]# cat /cgroup/all/openshift/d2b55ef04b7f4b70ac7f21452f05d350/memory.limit_in_bytes
1073741824     <-----> 1G means medium profile

2.Copy data from mongodb for this gear:

           "gears": {
             "0": {
               "uuid": "c9a944a1466841958b344725bb4487ed",
               "name": "qsjbossas1",
               "group_instance_name": "@@app\/comp-proxy\/cart-jbossas-7\/group-app-servers",
               "node_profile": "small",
               "configured_components": {
                 "0": "@@app\/comp-proxy\/cart-jbossas-7\/comp-jbossas-server",
                 "1": "@@app\/comp-proxy\/cart-haproxy-1.4" 
              },
               "uid": 1017,
               "server_identity": "ip-10-151-16-40" 
            } 
          },
           "addtl_fs_gb": 0,
           "node_profile": "small",
           "supported_min": 1,
           "supported_max": 1,
           "min": 1,
           "max": 1 

Expected results:
All application gears should be created on the small nodes A and B.

Additional info:
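For reference, the cgroup check above can be scripted. The byte values below are assumptions taken from what this report observed (1073741824 bytes = 1 GB on the medium node C; 512 MB for small is assumed), not authoritative profile definitions:

```python
# Map an observed memory.limit_in_bytes value back to a gear profile.
# The limits are assumptions based on the values seen in this report;
# adjust them for your environment's resource_limits.conf settings.
LIMIT_TO_PROFILE = {
    536870912: "small",    # 512 MB (assumed small limit)
    1073741824: "medium",  # 1 GB, as observed on node C
}

def profile_for_limit(limit_bytes):
    """Return the gear profile implied by a cgroup memory limit."""
    return LIMIT_TO_PROFILE.get(limit_bytes, "unknown")
```

Feeding in the value read from /cgroup/all/openshift/&lt;uuid&gt;/memory.limit_in_bytes makes the mismatch explicit: the mongo record says "small" while the limit maps to "medium".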

Comment 1 Dan McPherson 2013-01-22 14:49:25 UTC
Did you set NODE_PROFILE_ENABLED=true in openshift-origin-msg-broker-mcollective-dev.conf?  Without that node_profile isn't part of the logic when selecting nodes.
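A rough sketch of the gating Dan describes (function and field names here are hypothetical; the real logic lives in the broker's node-selection code):

```python
def eligible_nodes(nodes, requested_profile, node_profile_enabled):
    """Filter candidate nodes by gear profile.

    When NODE_PROFILE_ENABLED is false, the profile is ignored and any
    node may be picked -- which is what can put a small gear on the
    medium node C, as seen in this bug.
    """
    if not node_profile_enabled:
        return list(nodes)
    return [n for n in nodes if n["node_profile"] == requested_profile]

nodes = [
    {"name": "A", "node_profile": "small"},
    {"name": "B", "node_profile": "small"},
    {"name": "C", "node_profile": "medium"},
]
# Flag off: C remains a candidate for a small gear.
# Flag on: only A and B qualify.
```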

Comment 2 Dan McPherson 2013-01-22 22:35:14 UTC
I went ahead and tried to replicate this without success.

Comment 3 Rony Gong 🔥 2013-01-23 07:28:15 UTC
Reproduced in devenv_2709
Steps:
1. Create a multi-node env with 3 nodes: A and B are small nodes, C is a medium node.
On node C, run "service rhc-broker restart" after the modification below.
[root@domU-12-31-39-14-75-78 openshift]# cat /var/www/openshift/broker/conf/openshift-origin-msg-broker-mcollective-dev.conf
MCOLLECTIVE_DISCTIMEOUT=2
MCOLLECTIVE_TIMEOUT=180
MCOLLECTIVE_VERBOSE="false"
MCOLLECTIVE_PROGRESS_BAR="false"
MCOLLECTIVE_CONFIG="/etc/mcollective/client.cfg"
DISTRICTS_ENABLED="true"
DISTRICTS_REQUIRE_FOR_APP_CREATE="false"
DISTRICTS_MAX_CAPACITY=6000
DISTRICTS_FIRST_UID=1000
NODE_PROFILE_ENABLED=true
[root@domU-12-31-39-14-75-78 openshift]# ls -al /etc/openshift/
total 108
....
lrwxrwxrwx.   1 root root          27 Jan 22 21:42 resource_limits.conf -> resource_limits.conf.medium
....
2.Create 2 district, dist1 is small, dist2 is medium, add nodes to them
[root@ip-10-70-82-147 openshift]# oo-admin-ctl-district -n dist1


{"active_server_identities_size"=>2,
 "available_capacity"=>5978,
 "available_uids"=>"<5978 uids hidden>",
 "creation_time"=>"2013-01-22T21:44:00-05:00",
 "externally_reserved_uids_size"=>0,
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"dist1",
 "node_profile"=>"small",
 "server_identities"=>
  {"ip-10-70-82-147"=>{"active"=>true}, "ip-10-46-135-80"=>{"active"=>true}},
 "uuid"=>"0267864eed22466d9e26c22de2c0402b"}
[root@ip-10-70-82-147 openshift]# oo-admin-ctl-district -n dist2
{"active_server_identities_size"=>1,
 "available_capacity"=>5999,
 "available_uids"=>"<5999 uids hidden>",
 "creation_time"=>"2013-01-22T21:46:00-05:00",
 "externally_reserved_uids_size"=>0,
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"dist2",
 "node_profile"=>"medium",
 "server_identities"=>{"domU-12-31-39-14-75-78"=>{"active"=>true}},
 "uuid"=>"e76b6252a1734171ab1b4637f5553bb9"}
3. Create lots of small applications, so that active_capacity on A and B is higher than on C.
[root@ip-10-70-82-147 ~]# mco facts active_capacity
Report for fact: active_capacity

        0.0                                     found 1 times
        12.0                                    found 1 times
        14.000000000000002                      found 1 times

4. Create a small scalable application.
[qgong@localhost dev]$ rhc app create qsjbossas jbossas-7 -s
Application Options
-------------------
  Namespace:  qgong19
  Cartridges: jbossas-7
  Gear Size:  default
  Scaling:    yes

Creating application 'qsjbossas' ... The supplied application name 'qsjbossas' already exists
[qgong@localhost dev]$ crejbo qs2jbossas -s
Application Options
-------------------
  Namespace:  qgong19
  Cartridges: jbossas-7
  Gear Size:  default
  Scaling:    yes

Creating application 'qs2jbossas' ... An error occurred while communicating with the server. This problem may only be temporary. Check that you have correctly specified your OpenShift server
'https://ec2-174-129-148-73.compute-1.amazonaws.com/broker/rest/domains/qgong19/applications'.


5. Found that the gears are created on node C, and check that memory.limit_in_bytes = 1 GB.
[root@domU-12-31-39-14-75-78 openshift]# ls
3e309e5cfa4e40609d9fa8c9e31b8d01  3e309e5cfa-qgong19  b96ac35dfa24408aaa49dba3668f05ba  last_access.log  qs2jbossas-qgong19

[root@domU-12-31-39-14-75-78 openshift]# cat /cgroup/all/openshift/b96ac35dfa24408aaa49dba3668f05ba/memory.limit_in_bytes
1073741824
[root@domU-12-31-39-14-75-78 openshift]# cat /cgroup/all/openshift/3e309e5cfa4e40609d9fa8c9e31b8d01/memory.limit_in_bytes
1073741824

Please check the attached log from creating this small scalable application.

Comment 4 Rony Gong 🔥 2013-01-23 07:33:30 UTC
From this log, it seems that when creating the scalable application, the broker chose the right district but the wrong server identity.

Comment 5 Rony Gong 🔥 2013-01-23 07:34:09 UTC
Created attachment 685687 [details]
development.log when create small scalable application

Comment 6 Rony Gong 🔥 2013-01-23 07:41:09 UTC
Following comment 3:
If you can't reproduce the comment 3 issue, you may hit another problem:
1. Keep the same environment.
2. Create a medium application.
[qgong@localhost dev]$ crejbo q4jbossas -g medium
Application Options
-------------------
  Namespace:  qgong19
  Cartridges: jbossas-7
  Gear Size:  medium
  Scaling:    no

Creating application 'q4jbossas' ... Node execution failure (invalid exit code from node).  If the problem persists please contact Red Hat support.
Reference ID: 28ea56c9e223422dac50be290fdf114f


From the development.log, it seems the broker used the right district but the wrong server identity here too.

Comment 7 Rony Gong 🔥 2013-01-23 07:42:02 UTC
Created attachment 685688 [details]
development.log when create medium application

Comment 8 Dan McPherson 2013-01-23 15:12:34 UTC
NODE_PROFILE_ENABLED=true and the rhc-broker restart need to be done on the broker (A), not on node C.

Comment 9 Dan McPherson 2013-01-23 18:48:49 UTC
Okay, I dug through the logs and this is making a little more sense. Some of your issues are still due to NODE_PROFILE_ENABLED, and the others are addressed in:

https://github.com/openshift/origin-server/pull/1200
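The commit title in comment 10 ("Cant use direct addressing mode when facts are required") summarizes the fix: when selection depends on a fact like node_profile, the broker cannot address a server identity directly and must filter by discovered facts. A simplified, hypothetical model of that decision (names are illustrative, not the actual origin-server code):

```python
def select_server(district, requested_profile, facts_by_server):
    """Pick a server identity from a district, honoring fact filters.

    Hypothetical sketch: direct addressing (taking a server_identity
    straight from the district record) skips fact checks, so when a
    fact such as node_profile is required, selection must go through
    the discovered facts instead.
    """
    candidates = [
        server for server, facts in facts_by_server.items()
        if server in district["server_identities"]
        and facts.get("node_profile") == requested_profile
    ]
    return candidates[0] if candidates else None

# Example data shaped loosely after this report's environment:
district = {"server_identities": {"ip-A": {"active": True},
                                  "ip-C": {"active": True}}}
facts = {"ip-A": {"node_profile": "small"},
         "ip-C": {"node_profile": "medium"}}
```

With this check in place, a small gear request can only land on ip-A, even when ip-C is in the candidate set.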

Comment 10 openshift-github-bot 2013-01-23 20:53:08 UTC
Commits pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/c8d38da4d8178ea16aa349e6520ed3cbb5954966
Bug 902690
Cant use direct addressing mode when facts are required

https://github.com/openshift/origin-server/commit/29b6e408ab8c2ee41b886fc9a62e8dc1688b962b
Merge pull request #1200 from danmcp/master

Bug 902690

Comment 11 Rony Gong 🔥 2013-01-24 09:48:39 UTC
Verified on devenv_stage_277
1. Could create a medium application in the multi-district environment.
[qgong@localhost ~]$ crephp q2php -g medium
Application Options
-------------------
  Namespace:  qgong2
  Cartridges: php-5.3
  Gear Size:  medium
  Scaling:    no

Creating application 'q2php' ... done

Waiting for your DNS name to be available ... done

Downloading the application Git repository ...
Initialized empty Git repository in /home/qgong/q2php/.git/
Warning: Permanently added 'q2php-qgong2.dev.rhcloud.com' (RSA) to the list of known hosts.

Your application code is now in 'q2php'

q2php @ http://q2php-qgong2.dev.rhcloud.com/ (uuid: 4c0ff607d5734f09ac3e68702c2ae949)
-------------------------------------------------------------------------------------
  Created: 1:13 AM
  Gears:   1 (defaults to medium)
  Git URL: ssh://4c0ff607d5734f09ac3e68702c2ae949.rhcloud.com/~/git/q2php.git/
  SSH:     4c0ff607d5734f09ac3e68702c2ae949.rhcloud.com

  php-5.3 (PHP 5.3)
  -----------------
    Gears: 1 medium

RESULT:
Application q2php was created

2. Could create a small scalable application in the small district.

