Bug 1123302 - Can't scale up or add cartridge when using the Gear_Placement plugin
Summary: Can't scale up or add cartridge when using the Gear_Placement plugin
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 2.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Luke Meyer
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-25 09:25 UTC by Anping Li
Modified: 2014-10-30 15:10 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-30 15:10:33 UTC
Target Upstream Version:
Embargoed:



Description Anping Li 2014-07-25 09:25:44 UTC
Description of problem:
OpenShift validates the node returned by the gear-placement plugin (refer to Bug 1093804). There appears to be something wrong with the validation function.

Version-Release number of selected component (if applicable):
OSE-2.1z-2014-07-24.1

How reproducible:
sometimes

Steps to Reproduce:
1. Customize the Gear Placement Algorithm according to http://docbuilder.usersys.redhat.com/20822/#Customizing_the_Gear_Placement_Algorithm
2. Modify NodeSelectionPluginTest to return a node name.

For example, in gear_placement_plugin.rb, return one valid node name:

def self.select_best_fit_node_impl(server_infos, app_props, current_gears, comp_list, user_props, request_time)
  # Ignore server_infos and always return a fixed node
  #return server_infos.first
  return NodeProperties.new("nd218.osegeo-20140722.com.cn")
end

3. Create a scaled app.
4. Add a db cartridge and scale up the web cartridge.

Actual results:
Cannot scale up or add the db cartridge.
[auto@anli ~]$ rhc cartridge scale php-5.3 -a php1 --min 2
This operation will run until the application is at the minimum scale and may take several minutes.
Setting scale range for php-5.3 ... 
Unable to complete the requested operation due to: Invalid node selected
Reference ID: 338b42e745dd1f8f19918f7418f6bd99

production.log:
2014-07-25 05:00:49.948 [ERROR] Invalid node selected (pid:5303)
2014-07-25 05:00:49.949 [ERROR] /opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.23.10.7/lib/openshift/application_container_proxy.rb:75:in `select_best_fit_node'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.23.10.7/lib/openshift/application_container_proxy.rb:38:in `find_available'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-controller-1.23.10.7/app/models/gear.rb:99:in `reserve_uid'
<snip--->
/opt/rh/ruby193/root/usr/share/gems/gems/passenger-3.0.21/helper-scripts/passenger-spawn-server:102:in `<main>' (pid:5303)
2014-07-25 05:00:49.949 [ERROR] Encountered error during execute 'Invalid node selected' (pid:5303)
2014-07-25 05:00:49.949 [ERROR] Encountered error during execute 'Invalid node selected' (pid:5303)
2014-07-25 05:00:50.135 [INFO ] Completed 500 Internal Server Error in 6218ms (Views: 0.6ms) (pid:5303)


Expected results:
The web cartridge can be scaled up and the db cartridge can be added.

Additional info:

Comment 2 Luke Meyer 2014-08-05 15:14:41 UTC
I suspect this is NOTABUG. Looking at the logic that wraps all this in the "abstract" container class:

https://github.com/openshift/origin-server/blob/master/controller/lib/openshift/application_container_proxy.rb

The select_best_fit_node method: https://github.com/openshift/origin-server/blob/c2264b5c4adad5a8ac91102492484674efba6000/controller/lib/openshift/application_container_proxy.rb#L50
... is given a list of potential nodes that has already been filtered down to acceptable ones. When the _impl is called:
https://github.com/openshift/origin-server/blob/c2264b5c4adad5a8ac91102492484674efba6000/controller/lib/openshift/application_container_proxy.rb#L70
... notice that the node returned is immediately tested to see if it was actually in the pre-filtered list. By returning a static node, you are probably violating that test.

Could you try it again with a valid algorithm, e.g. one that returns the first node in the list of server_infos?
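
For illustration, here is a minimal sketch of such an algorithm: it prefers a specific node but only if discovery actually offered it, falling back to the first pre-filtered candidate so the "Invalid node selected" check cannot fire (assuming, as with NodeProperties above, that each entry in server_infos exposes a name; the hostname is illustrative):

def self.select_best_fit_node_impl(server_infos, app_props, current_gears, comp_list, user_props, request_time)
  # Prefer a specific node, but stay within the pre-filtered list
  preferred = server_infos.find { |s| s.name == "nd218.osegeo-20140722.com.cn" }
  return preferred || server_infos.first
end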

It is good to test this, though, as it may not be clear to implementors that they cannot simply ignore the list of server_infos and return what they want.

Comment 3 Anping Li 2014-08-06 07:08:13 UTC
With server_infos.first, I added a cartridge and scaled up 15 times, and this issue was not seen.
But with the static node set to a valid node (NodeProperties.new("legalnode"), equal to server_infos.first), this issue still appeared a few times (not always).

Before Bug 1093804 was fixed, I could even return a node with a different gear size, and everything (app create/cartridge add/scale/access) worked well. To give customers more flexible ways to write their own logic, I suggest validating only the essential attributes of the returned node.

Comment 4 Luke Meyer 2014-09-11 04:47:10 UTC
Having looked into this fairly extensively today, I have to agree somewhat that customers will likely be surprised by how restricted they still are in choosing a node for gear placement.

The surprise comes from two facts:
1. The list of server_infos handed to the algorithm is severely filtered (see OpenShift::MCollectiveApplicationContainerProxy.rpc_find_all_available)
2. The server returned is currently *required* to be in that list.

One of the things that "rpc_find_all_available" (rather misnamed as it's often much less than "all" available) filters out, aside from obvious things like nodes with the wrong profile, is "least preferred nodes"; in a scaled application, this means nodes where the application already has gears. These are filtered out if possible (meaning if there will be any nodes left afterward).

I'm pretty sure this is why you can't scale with a static node specified. After the app has a gear on this node, it's not likely to be in the list for that app any more.
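
Conceptually, that last filtering step behaves something like this (a paraphrase for illustration only, not the actual rpc_find_all_available code):

def filter_least_preferred(server_infos, least_preferred_names)
  remaining = server_infos.reject { |s| least_preferred_names.include?(s.name) }
  # Drop least preferred nodes only if some candidates would remain;
  # a node the app already occupies usually disappears from the list here.
  remaining.empty? ? server_infos : remaining
end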

It would be nice if the customer could choose from, say, three levels of "all" that would go into the server_infos (sketched after this list):
1. All nodes - "really, I know what I am doing."
2. All eligible nodes (not full/inactive or in the wrong profile) - "just do a little of the work for me"
3. Current list - "I just want to tweak the algorithm a bit"
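
Purely to illustrate that idea (no such setting exists in the shipped gear-placement plugin; the name and values are hypothetical), it could be a single knob in the plugin configuration:

# Hypothetical setting, illustrative only:
# 1 = all nodes, 2 = all eligible nodes, 3 = current filtered list
CANDIDATE_SCOPE=3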

The problem is that the rpc_find_all_available algorithm is already quite complicated, and pulling it apart to provide that level of nuance would be unmanageable. Also, enabling a custom algorithm would involve passing along a bunch of parameters that are currently considered by this routine - region, platform, restricted_servers (due to HA requirements), as well as some other esoteric bits involved in moving gears. For the customer to get this right (by re-implementing most of it) would be extremely difficult.

The one thing that looks easiest to me to change would be the removal of the "least preferred servers". This happens at the end and is probably the most surprising thing to customers as these are nodes that are perfectly valid targets, just not preferred for the app. In the case of a custom algorithm, perhaps these could be left in.

I'll raise this issue and see what can reasonably be done, but I think we will mainly just have to set expectations via documentation.

Comment 5 Luke Meyer 2014-09-18 02:15:30 UTC
I have inquired as to any real-world algorithm customization scenarios that would require reaching back to retrieve the server_infos that were discarded in rpc_find_all_available, and none have been forthcoming, so I can't really justify doing the work just now. However, when and if that opportunity arises, here is how I think I would do it.

1. Modify rpc_find_all_available to build a hash with list entries for all the server_infos it found and all that were removed, keyed by reason (restricted, wrong profile/region, not preferred, etc.). It could record the opts passed as well. Finally, add the resulting filtered list and return the hash.
2. For the standard algorithm or a gear placement plugin with the same method signature as before, call exactly as before (so we don't break any existing implementations).
3. For a gear placement plugin with the method having arity 1, pass it a hash with all of the parameters previously passed plus the hash of categorized server_infos generated during discovery (see the sketch after this list). That way it truly has all of the information available for making its decision.
4. Allow the plugin to choose any server_info given to it, perhaps in any list, or perhaps a plausible subset (perhaps this is configurable).
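
As a rough sketch of steps 2 and 3 (hypothetical code, not anything that exists in origin-server; all names are illustrative):

# server_infos_by_reason is the hash built in step 1, e.g.:
#   { restricted: [...], wrong_profile: [...], not_preferred: [...], filtered: [...] }
def call_placement_plugin(plugin, server_infos_by_reason, app_props, current_gears, comp_list, user_props, request_time)
  impl = plugin.method(:select_best_fit_node_impl)
  if impl.arity == 1
    # New-style plugin: one hash carrying everything discovery knew
    impl.call(server_infos: server_infos_by_reason, app_props: app_props,
              current_gears: current_gears, comp_list: comp_list,
              user_props: user_props, request_time: request_time)
  else
    # Old-style plugin: original signature, so existing implementations keep working
    impl.call(server_infos_by_reason[:filtered], app_props, current_gears,
              comp_list, user_props, request_time)
  end
end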

For now, the existing restrictions are documented under:
https://github.com/openshift/origin-server/tree/master/plugins/gear-placement
... and I believe the formal docs will soon offer this information as well.

Comment 6 Brenton Leanhardt 2014-10-30 15:10:33 UTC
This is working as designed and the 2.2 docs have been updated.

