+++ This bug was initially created as a clone of Bug #1359848 +++

Description of problem:
As of OCP 3.1.1.6 the node-labels kubelet argument has been backported (https://bugzilla.redhat.com/show_bug.cgi?id=1318804). We would like to use this feature so that, in the event we have to stop/start a node, it keeps its node labels.

Additional info: http://kubernetes.io/docs/admin/kubelet/
"--node-labels=: <Warning: Alpha feature> Labels to add when registering the node in the cluster. Labels must be key=value pairs separated by ','."

--- Additional comment from Wesley Hearn on 2016-07-26 13:41:11 EDT ---

We have a workaround for this: put

> openshift_node_kubelet_args={'node-labels': '{{ (openshift_node_labels | default({})).items() | map("join", "=") | list }}'}

in your BYO inventory under [OSEv3:vars]. It would be nice not to have to do this; the node config role should automatically parse openshift_node_labels if it exists and append it to openshift_node_kubelet_args.

--- Additional comment from Scott Dodson on 2016-10-04 12:00:50 EDT ---

openshift_manage_nodes needs to add labels when it's invoked; we may also need to merge openshift_node_labels into openshift_node_kubelet_args.

--- Additional comment from Tim Bielawa on 2016-10-14 15:47:21 EDT ---

@whearn I've written up a patch which I think will fulfill your request. On my existing cluster I begin with a node-config.yaml file that has an empty `kubeletArguments` key. None of my inventory hosts had `openshift_node_labels` set during the initial setup. When I apply this patch and add some openshift_node_labels like this:

> ec2-w-x-y-z.compute-1.amazonaws.com openshift_node_labels="{'region': 'infra', 'foo': 'bar'}"

then, after re-running the playbook, node-config.yaml shows the following for the value of the `kubeletArguments` key:

> kubeletArguments:
>   node-labels:
>   - region=infra,foo=bar

This will also work on the first provisioning run; I was just describing my testing process above. Is this what you were looking for?
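For illustration only (this is not code from openshift-ansible), the Jinja2 filter chain in the workaround above can be sketched in plain Python to show what value ends up in openshift_node_kubelet_args; the label dict is the hypothetical example used later in the thread:

```python
# Sketch: what `(openshift_node_labels | default({})).items() | map("join", "=") | list`
# evaluates to. openshift_node_labels is the per-host dict from the inventory.
openshift_node_labels = {"region": "infra", "foo": "bar"}

# Jinja2's map("join", "=") applies "=".join to each (key, value) pair.
node_labels = ["=".join(kv) for kv in openshift_node_labels.items()]

kubelet_args = {"node-labels": node_labels}
print(kubelet_args)
```

So each label becomes a single "key=value" string, and the list of those strings is what the kubelet argument receives.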
--- Additional comment from Wesley Hearn on 2016-10-14 15:50:55 EDT ---

node-labels is an array, so it should be:

> node-labels:
> - region=infra
> - foo=bar

--- Additional comment from Tim Bielawa on 2016-10-17 11:06:48 EDT ---

Got it. Worked out a fix to make it a list. Presently debugging a failure this causes when there is more than one node label.

--- Additional comment from Tim Bielawa on 2016-10-18 11:34:56 EDT ---

https://github.com/openshift/openshift-ansible/pull/2615

--- Additional comment from Tim Bielawa on 2016-10-20 14:57:57 EDT ---

We've worked out the bugs and this feature is now complete.

Features:
* Existing labels can be modified
* New labels can be added

Limitations:
* Labels cannot be removed

Pending PR review and merge.

--- Additional comment from Tim Bielawa on 2016-10-24 16:09:49 EDT ---

Looking for the +1 from you guys before we merge this. Please take a gander at the GitHub PR. Thanks!

--- Additional comment from Wesley Hearn on 2016-10-25 09:38:24 EDT ---

Looks good.

--- Additional comment from Tim Bielawa on 2016-10-25 10:07:47 EDT ---

Merged into master. Thanks, Wes.
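As a sketch of the list form Wesley asks for (not openshift-ansible code; the labels are the example values from the thread), here is a small Python snippet that emits the desired node-config.yaml fragment with one list entry per label, rather than a single comma-joined entry:

```python
# Hypothetical example: render the kubeletArguments fragment with one
# "key=value" list item per label, as requested in the thread.
labels = {"region": "infra", "foo": "bar"}

lines = ["kubeletArguments:", "  node-labels:"]
lines += ["  - {}={}".format(k, v) for k, v in labels.items()]
print("\n".join(lines))
```

The contrast with the first patch is that the original produced a single item "region=infra,foo=bar", whereas this emits "- region=infra" and "- foo=bar" as separate YAML list items.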
This functionality was described as a known current limitation in the bug where this work was accepted. I'm moving this card to the upcoming release while we focus on blockers.
This should be possible now that we have the oc_label module from ops integrated into openshift-ansible: https://github.com/openshift/openshift-ansible/blob/master/roles/lib_openshift/library/oc_label.py
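As a sketch only (parameter names should be verified against the oc_label module source linked above for your openshift-ansible version; the node name is a placeholder), removing a label from a node might look like this in a playbook task:

```yaml
# Hypothetical task: remove the "region" label from a node using oc_label.
# Verify the state/labels parameters against the module docs in your checkout.
- name: Remove the region label from a node
  oc_label:
    state: absent
    kind: node
    name: node1.example.com
    labels:
      - key: region
```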
@Johnny Liu @Gan Huang This bug is titled "Node labels can not be removed when re-installing the cluster". As noted in my earlier comment, it is now possible to remove node labels. Are you requesting a specific feature or new entry-point to run node-label removing actions? I'm not sure what else to do with this bug report now. Please advise.
Hi Tim,

The problem still persists after testing against openshift-ansible-3.6.3-1.git.0.622449e.el7.noarch. The detailed steps are in https://bugzilla.redhat.com/show_bug.cgi?id=1359848#c12. We're not requesting a new entry-point or a specific feature. My original thought was that we should make sure the node labels on the nodes match what we defined in the inventory hosts file, even on a re-install. Do you think that's reasonable? (Anyway, this issue is not a big deal for me.) Thanks.