Description of problem:
It's a rare situation, but it makes the min and max gears of a cartridge look strange and inconsistent. After setting the Max gears in haproxy's manifest.yml, create a scalable application of any cartridge and set its min/max gears with cartridge-scale via rhc: the min gears of the web cartridge can be set to a value greater than the max gears.

Version-Release number of selected component (if applicable):
On devenv_3580

How reproducible:
Always

Steps to Reproduce:
1. On the node, edit haproxy's manifest.yml in /var/lib/openshift/.cartridge_repository/redhat-haproxy/0.0.4/metadata, update the value of Max, save, and clear the broker cache. The resulting Scaling section:

Scaling:
  Min: 1
  Max: 2
  Multiplier: 1

2. Create a scalable application:

rhc app create r19s ruby-1.9 -s --no-git --no-dns

3. Set min gears to 3 and max gears to -1 for the ruby-1.9 cartridge:

rhc cartridge-scale ruby-1.9 -a r19s --min 3 --max -1

4. Show the app and its gears:

rhc app-show r19s
rhc app-show r19s --gears

Actual results:
The app cannot be scaled up via the REST API, but its gear count can be increased by raising the min gears of the ruby-1.9 cartridge, and the min gears can even end up greater than the max gears. The app is consuming 3 gears when it only allows 2.

# rhc app-show r19s
r19s @ http://r19s-jhou.dev.rhcloud.com/ (uuid: 907252348802326916497408)
-------------------------------------------------------------------------
  Domain:  jhou
  Created: 3:13 PM
  Gears:   3 (defaults to small)
  Git URL: ssh://907252348802326916497408.rhcloud.com/~/git/r19s.git/
  SSH:     907252348802326916497408.rhcloud.com

  ruby-1.9 (Ruby 1.9)
  -------------------
    Scaling: x3 (minimum: 3, maximum: 2) on small gears

  haproxy-1.4 (OpenShift Web Balancer)
  ------------------------------------
    Scaling: x3 (minimum: 3, maximum: 2) on small gears

# rhc app-show --gears r19s
ID                        State    Cartridges            Size   SSH URL
------------------------  -------  --------------------  -----  -------------------------------------
907252348802326916497408  started  ruby-1.9 haproxy-1.4  small  907252348802326916497408.rhcloud.com
860993208439630341341184  started  ruby-1.9 haproxy-1.4  small  860993208439630341341184.rhcloud.com
51f767d84495fd1615000001  started  ruby-1.9 haproxy-1.4  small  51f767d84495fd1615000001.rhcloud.com

Expected results:
The min gears cannot be greater than the max gears.

Additional info:
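A minimal sketch of the missing guard, with assumed names (validate_scale_range! and its parameters are illustrative, not the actual broker API); it follows the rhc convention that -1 means unlimited:

# Minimal sketch (assumed names, not the actual broker code) of the check the
# broker should apply when handling a cartridge-scale request.
def validate_scale_range!(min, max, manifest_max)
  # Effective ceiling is the tighter of the user-requested max and the
  # manifest's Max; -1 (unlimited) never tightens the ceiling.
  limits  = [max, manifest_max].reject { |v| v == -1 }
  ceiling = limits.empty? ? -1 : limits.min
  if ceiling != -1 && min > ceiling
    raise ArgumentError, "minimum scale #{min} cannot exceed maximum scale #{ceiling}"
  end
end

validate_scale_range!(3, -1, 2)  # raises: minimum scale 3 cannot exceed maximum scale 2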
By setting the multiplier to 1, haproxy is no longer a sparse cartridge. If it then has to co-locate with the framework cartridge, both scaling policies must match. The broker should not allow two cartridges with different scaling min/max to sit with each other unless one of them is a sparse cartridge (i.e. multiplier != 1). Lowering the severity as this does not affect the current release (with the unmodified haproxy cartridge).
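A minimal sketch of that rule, with assumed names (Cart and can_colocate? are illustrative, not the actual origin-server API): cartridges may share a gear group only if their scaling ranges match, or if one of them is sparse.

Cart = Struct.new(:name, :min, :max, :multiplier) do
  def sparse?
    multiplier != 1  # multiplier of 1 means the cartridge is not sparse
  end
end

def can_colocate?(a, b)
  return true if a.sparse? || b.sparse?
  a.min == b.min && a.max == b.max
end

haproxy = Cart.new("haproxy-1.4", 1, 2, 1)  # Multiplier forced to 1 => not sparse
ruby19  = Cart.new("ruby-1.9", 1, -1, 1)
can_colocate?(haproxy, ruby19)  # => false; the broker should reject this grouping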
Proposed fix --> https://github.com/openshift/origin-server/pull/5087
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/eb3bc411d161f3033593bb40604b8416ad5e915e
Bug 989941: preventing colocation of cartridges that independently scale
Verified on devenv_4594; the logs have been moved to /var/log/openshift/broker.

[root@ip-10-231-17-137 haproxy]# oo-admin-upgrade upgrade-node --version 2.0.42
Upgrader started with options: {:version=>"2.0.42", :ignore_cartridge_version=>false, :target_server_identity=>nil, :upgrade_position=>1, :num_upgraders=>1, :max_threads=>12, :gear_whitelist=>[], :num_tries=>2}
Building new upgrade queues and cluster metadata
Getting all active gears...
Getting all logins...
Writing 5 entries to gear queue for node ip-10-231-17-137 at /var/log/openshift/broker/upgrade/gear_queue_ip-10-231-17-137
Writing node queue to /var/log/openshift/broker/upgrade/node_queue
Writing cluster metadata to /var/log/openshift/broker/upgrade/cluster_metadata
Loading cluster metadata from /var/log/openshift/broker/upgrade/cluster_metadata
Loading node queue from /var/log/openshift/broker/upgrade/node_queue
Upgrading node ip-10-231-17-137 from gear queue file at /var/log/openshift/broker/upgrade/gear_queue_ip-10-231-17-137
1 of 1 nodes completed
Writing updated node queue to /var/log/openshift/broker/upgrade/node_queue
#####################################################
Summary:
# of users: 6
# of gears: 5
# of failures: 0
# of leftovers: 0
Gear counts per thread: [5]
Timings: start=1395972372.997s total=45.226s
Additional timings:
  gather_active_gears_total_time=20.863s
  gather_users_total_time=0.03s

[root@ip-10-231-17-137 upgrade]# pwd
/var/log/openshift/broker/upgrade
[root@ip-10-231-17-137 upgrade]# ls
cluster_metadata  node_queue  node_queue-2014-03-27-220658  upgrade_log_ip-10-231-17-137  upgrade_results_ip-10-231-17-137
Oops! Sorry for the wrong update. This bug is verified on devenv-stage_775:

1. Set Max to 2 in haproxy's manifest, restart ruby193-mcollective, and re-import the cartridge using oo-admin-ctl-cartridge:

service ruby193-mcollective restart
oo-broker oo-admin-broker-cache -c
oo-broker oo-admin-ctl-cartridge -c delete haproxy-1.4
oo-broker oo-admin-ctl-cartridge -c import_node
oo-broker oo-admin-ctl-cartridge -c activate --name haproxy-1.4

2. Create a scaled php-5.3 application. After creation, the scaling info of the app shows the max gears for php-5.3 is not limited, while the max haproxy gears is 2:

php1s @ http://php1s-jhou.dev.rhcloud.com/ (uuid: 5335382d599e2ea80f000058)
---------------------------------------------------------------------------
  Domain:     jhou
  Created:    4:51 PM
  Gears:      1 (defaults to small)
  Git URL:    ssh://5335382d599e2ea80f000058.rhcloud.com/~/git/php1s.git/
  SSH:        5335382d599e2ea80f000058.rhcloud.com
  Deployment: auto (on git push)

  haproxy-1.4 (Web Load Balancer)
  -------------------------------
    Scaling: x1 (minimum: 1, maximum: 2) on small gears

  php-5.3 (PHP 5.3)
  -----------------
    Scaling: x1 (minimum: 1, maximum: available) on small gears

3. Set the min scale for the php cartridge:

% rhc cartridge-scale php-5.3 -a php1s --min 3 --max -1
Please sign in to start a new session to ec2-107-21-89-147.compute-1.amazonaws.com.
Password: ******

This operation will run until the application is at the minimum scale and may take several minutes.
Setting scale range for php-5.3 ... done

4. Show the app scaling info:

php1s @ http://php1s-jhou.dev.rhcloud.com/ (uuid: 5335382d599e2ea80f000058)
---------------------------------------------------------------------------
  Domain:     jhou
  Created:    4:51 PM
  Gears:      3 (defaults to small)
  Git URL:    ssh://5335382d599e2ea80f000058.rhcloud.com/~/git/php1s.git/
  SSH:        5335382d599e2ea80f000058.rhcloud.com
  Deployment: auto (on git push)

  haproxy-1.4 (Web Load Balancer)
  -------------------------------
    Scaling: x1 (minimum: 1, maximum: 2) on small gears

  php-5.3 (PHP 5.3)
  -----------------
    Scaling: x3 (minimum: 3, maximum: available) on small gears

This is working as expected; marking as verified.
Note to QE: Did you set the multiplier in the haproxy cartridge to 1 as well? That is what determines that the cartridge is no longer sparse and hence is now blocked from being co-located with the web_framework gear.
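For reference, the Scaling block in the modified manifest.yml would look roughly like this (values taken from the datastore dump in the next comment; any Multiplier value other than 1 would mark the cartridge as sparse again):

Components:
  web_proxy:
    Scaling:
      Min: 1
      Max: 2
      Multiplier: 1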
Thanks for the note, I'll add a test case for it as well.

Verified on devenv-stage_779. After setting the multiplier to 1, haproxy can no longer be co-located with the web_framework gear.

<------------- cartridge detail from cartridge_types in the datastore ------------->
"text": "{\"Name\":\"haproxy\",\"Display-Name\":\"Web Load Balancer\",\"Version\":\"1.4\",\"Description\":\"Acts as a load balancer for your web cartridge and will automatically scale up to handle incoming traffic. Is automatically added to scaled applications when they are created and cannot be removed or added to an application after the fact.\",\"License\":\"GPLv2+\",\"License-Url\":\"http://www.gnu.org/licenses/gpl-2.0.html\",\"Categories\":[\"web_proxy\",\"scales\",\"embedded\"],\"Website\":\"http://haproxy.1wt.eu/\",\"Cartridge-Version\":\"0.0.14\",\"Provides\":[\"haproxy-1.4\",\"haproxy\",\"web_proxy\"],\"Vendor\":\"http://haproxy.1wt.eu/\",\"Cartridge-Vendor\":\"redhat\",\"Endpoints\":[{\"Private-IP-Name\":\"IP\",\"Private-Port-Name\":\"PORT\",\"Private-Port\":8080,\"Public-Port-Name\":\"PROXY_PORT\",\"Mappings\":[{\"Frontend\":\"\",\"Backend\":\"\",\"Options\":{\"target_update\":true,\"connections\":-1,\"websocket\":true}},{\"Frontend\":\"/health\",\"Backend\":\"/configuration/health\",\"Options\":{\"file\":true}}]},{\"Private-IP-Name\":\"STATUS_IP\",\"Private-Port-Name\":\"STATUS_PORT\",\"Private-Port\":8080,\"Mappings\":[{\"Frontend\":\"/haproxy-status\",\"Backend\":\"/\"},{\"Frontend\":\"/health\",\"Backend\":\"\",\"Options\":{\"health\":true}}]}],\"Configure-Order\":[\"web_framework\",\"web_proxy\"],\"Components\":{\"web_proxy\":{\"Publishes\":{\"get-balancer-connection-info\":{\"Type\":\"NET_TCP:http:http\",\"Required\":false},\"publish-haproxy-status-url\":{\"Type\":\"NET_TCP:haproxy-status-info\",\"Required\":false}},\"Subscribes\":{\"set-db-connection-info\":{\"Type\":\"NET_TCP:db:connection-info\",\"Required\":false},\"set-haproxy-status-url\":{\"Type\":\"NET_TCP:haproxy-status-info\",\"Required\":false}},\"Scaling\":{\"Min\":1,\"Max\":2,\"Min-Managed\":0,\"Multiplier\":1,\"Required\":null}}},\"Group-Overrides\":[{\"components\":[\"web_framework\",\"web_proxy\"]}]}",
<------------------- end -------------------->

0 % create-scalable-php-app php1s
Application Options
-------------------
  Domain:     jhou
  Cartridges: php-5.3
  Gear Size:  default
  Scaling:    yes

Creating application 'php1s' ...
Cartridges ["haproxy-1.4", "php-5.3"] cannot be grouped together as they scale individually