Description of problem: As of OCP 3.1.1.6 the node-labels kubelet argument has been backported (https://bugzilla.redhat.com/show_bug.cgi?id=1318804). We would like to use this feature so that, in the event we have to stop/start a node, it keeps its node labels.

Additional info: http://kubernetes.io/docs/admin/kubelet/
"--node-labels=: <Warning: Alpha feature> Labels to add when registering the node in the cluster. Labels must be key=value pairs separated by ','."
We have a workaround for this: put

openshift_node_kubelet_args={'node-labels': '{{ (openshift_node_labels | default({})).items() | map("join", "=") | list }}'}

in your byo inventory under [OSEv3:vars]. It would be nice not to have to do this; the node config role could automatically parse openshift_node_labels if it exists and append it to openshift_node_kubelet_args.
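For reference, here is a minimal Python sketch (not the actual openshift-ansible code) of what the Jinja2 filter chain `items() | map("join", "=") | list` in the workaround above does to an openshift_node_labels dict:

```python
def labels_to_kubelet_args(node_labels):
    """Convert a labels dict into the key=value strings the kubelet's
    --node-labels argument expects, mirroring the Jinja2 filter chain:
    items() yields (key, value) pairs, map("join", "=") joins each pair
    with '=', and list materializes the result."""
    return ["{}={}".format(key, value) for key, value in node_labels.items()]

labels = {"region": "infra", "foo": "bar"}
print(sorted(labels_to_kubelet_args(labels)))  # sorted for a stable display order
```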
openshift_manage_nodes needs to add labels when it's invoked; do we also need to merge openshift_node_labels into openshift_node_kubelet_args?
@whearn I've written up a patch which I think will fulfill your request. On my existing cluster I began with a node-config.yaml file that had an empty `kubeletArguments` key. None of my inventory hosts had `openshift_node_labels` set during initial setup. I applied the patch and added some openshift_node_labels like this:

> ec2-w-x-y-z.compute-1.amazonaws.com openshift_node_labels="{'region': 'infra', 'foo': 'bar'}"

When I re-run the playbook, node-config.yaml shows the following value for the `kubeletArguments` key:

> kubeletArguments:
>   node-labels:
>   - region=infra,foo=bar

This will also work on the first provisioning run; I was just describing my testing process above. Is this what you were looking for?
node-labels is an array, so it should be:

  node-labels:
  - region=infra
  - foo=bar
Got it. I've worked out a fix to make it a list. Presently debugging a failure this causes when there is more than one node label.
https://github.com/openshift/openshift-ansible/pull/2615
We've worked out the bugs and the feature is now complete.

Features:
* Existing labels can be modified
* New labels can be added

Limitations:
* Labels cannot be removed

Pending PR review and merge.
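A minimal sketch of why removal falls outside the merge semantics described above: re-running the playbook effectively updates the node's existing label set with the values from the inventory, and a dict update never deletes keys that are absent from the new values. (This is an illustration of the behavior, not the actual openshift-ansible code.)

```python
def merge_labels(existing, new):
    """Union-style merge: keys in `new` overwrite matching keys in
    `existing` and new keys are added, but keys missing from `new`
    are left untouched -- so labels are never removed."""
    merged = dict(existing)
    merged.update(new)
    return merged

existing = {"router": "enabled", "role": "node", "registry": "enabled"}
new = {"router": "disabled", "app": "test1"}
print(merge_labels(existing, new))
```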
Looking for the +1 from you guys before we merge this. Please take a gander at the github PR. Thanks!
Looks good
Merged into master. Thanks Wes
Verified with openshift-ansible-3.4.13-1.git.0.ff1d588.el7.noarch.rpm

1. Trigger a fresh installation

# cat hosts
<--snip-->
openshift_node_labels="{'registry': 'enabled','router': 'enabled','role': 'node'}"
<--snip-->

# oc get nodes --show-labels=true
NAME                            STATUS                     AGE   LABELS
ip-172-18-10-139.ec2.internal   Ready,SchedulingDisabled   35m   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-139.ec2.internal,role=node
ip-172-18-11-38.ec2.internal    Ready                      1m    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-11-38.ec2.internal,registry=enabled,role=node,router=enabled

On the labeled node:

# cat /etc/origin/node/node-config.yaml
<--snip-->
  node-labels:
  - router=enabled
  - role=node
  - registry=enabled
<--snip-->

2. Modify the inventory hosts, then re-install the above env

openshift_node_labels="{'registry': 'enabled','router': 'disabled','app': 'test1'}"

# oc get nodes --show-labels=true
NAME                            STATUS                     AGE   LABELS
ip-172-18-10-139.ec2.internal   Ready,SchedulingDisabled   2h    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-139.ec2.internal,role=node
ip-172-18-11-38.ec2.internal    Ready                      1h    app=test1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-11-38.ec2.internal,registry=enabled,role=node,router=disabled

On the labeled node:

# cat /etc/origin/node/node-config.yaml
  node-labels:
  - router=disabled
  - app=test1
  - role=node
  - registry=enabled

From the testing above, labels can be updated and added, but cannot be removed. Verifying this bug; the minor issue is tracked in BZ#1389674.
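As a manual workaround for the removal limitation noted above (this is standard oc/kubectl behavior, separate from the playbook change in this bug): a label can be deleted from a node by hand using the `key-` syntax. The node name below is a placeholder for illustration.

```shell
# Remove the 'router' label from a node by hand; the trailing '-' after
# the key tells oc to delete that label rather than set it.
# Requires access to a live cluster, so this is illustrative only.
oc label node ip-172-18-11-38.ec2.internal router-
```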
@gan, I noted the limitation in BZ#1389674. Do you need anything else from me?
No, thanks, Tim!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0066