Bug 1359848 - [RFE] Support for node-labels in the Ansible installer
Summary: [RFE] Support for node-labels in the Ansible installer
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Tim Bielawa
QA Contact: Gan Huang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-25 14:30 UTC by Wesley Hearn
Modified: 2017-03-08 18:43 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Provides the ability to add persistent node-labels to hosts.
Reason: Rebooted hosts (such as in cloud environments) would not have the same labels applied after the reboot.
Result: Node-labels persist across reboots.
Clone Of:
Clones: 1389674
Environment:
Last Closed: 2017-01-18 12:51:45 UTC
Target Upstream Version:
Embargoed:


Links
System ID: Red Hat Product Errata RHBA-2017:0066
  Private: 0
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Red Hat OpenShift Container Platform 3.4 RPM Release Advisory
  Last Updated: 2017-01-18 17:23:26 UTC

Description Wesley Hearn 2016-07-25 14:30:57 UTC
Description of problem:
As of OCP 3.1.1.6, the node-labels kubelet argument has been backported (https://bugzilla.redhat.com/show_bug.cgi?id=1318804). We would like to use this feature so that a node keeps its node labels in the event we have to stop and start it.


Additional info:
http://kubernetes.io/docs/admin/kubelet/
"--node-labels=: <Warning: Alpha feature> Labels to add when registering the node in the cluster.  Labels must be key=value pairs separated by ','."

Comment 1 Wesley Hearn 2016-07-26 17:41:11 UTC
We have a workaround for this:
Put "openshift_node_kubelet_args={'node-labels': '{{ (openshift_node_labels | default({})).items() | map("join", "=") | list  }}'}" in your BYO inventory under [OSEv3:vars]. It would be nice not to have to do this; instead, the node config role could automatically parse openshift_node_labels, if it exists, and append it to openshift_node_kubelet_args.

Comment 2 Scott Dodson 2016-10-04 16:00:50 UTC
openshift_manage_nodes needs to add labels when it's invoked; do we also need to merge openshift_node_labels into openshift_node_kubelet_args?
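
One possible shape for that merge, sketched as a hypothetical set_fact task (not the actual patch), reusing the dict-to-list idiom from comment 1:

- name: Merge openshift_node_labels into the kubelet arguments (illustrative sketch)
  set_fact:
    openshift_node_kubelet_args: >-
      {{ (openshift_node_kubelet_args | default({}))
         | combine({'node-labels': (openshift_node_labels | default({})).items()
                                    | map('join', '=') | list}) }}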

Comment 3 Tim Bielawa 2016-10-14 19:47:21 UTC
@whearn

I've written up a patch which I think will fulfill your request.

On my existing cluster, I began with a node-config.yaml file that had an empty `kubeletArguments` key. None of my inventory hosts had `openshift_node_labels` set during the initial setup.

When I apply this patch and add some openshift_node_labels like this:

> ec2-w-x-y-z.compute-1.amazonaws.com openshift_node_labels="{'region': 'infra', 'foo': 'bar'}" ....

Then, when I re-run the playbook, the node-config.yaml file shows the following for the value of the `kubeletArguments` key:

> kubeletArguments:
>   node-labels:
>   - region=infra,foo=bar

This will also work on the first provisioning run. I was just describing my testing process above.

Is this what you were looking for?

Comment 4 Wesley Hearn 2016-10-14 19:50:55 UTC
node-labels is an array, so it should be:

node-labels:
- region=infra
- foo=bar

Comment 5 Tim Bielawa 2016-10-17 15:06:48 UTC
Got it. I've worked out a fix to make it a list. I'm presently debugging a failure this causes when there is more than one node label.

Comment 7 Tim Bielawa 2016-10-20 18:57:57 UTC
We've worked out the bugs and the feature is complete.

Featureisms:

* Existing labels can be modified
* New labels can be added

Limitations:

* Labels cannot be removed (a manual workaround follows just below)
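
Until removal is supported, a stale label can still be dropped by hand with oc (node name and label key illustrative):

# oc label node ip-172-18-11-38.ec2.internal router-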

Pending PR review and merge.

Comment 8 Tim Bielawa 2016-10-24 20:09:49 UTC
Looking for the +1 from you guys before we merge this. Please take a gander at the GitHub PR. Thanks!

Comment 9 Wesley Hearn 2016-10-25 13:38:24 UTC
Looks good

Comment 10 Tim Bielawa 2016-10-25 14:07:47 UTC
Merged into master. Thanks, Wes.

Comment 12 Gan Huang 2016-10-28 08:12:51 UTC
Verified with openshift-ansible-3.4.13-1.git.0.ff1d588.el7.noarch.rpm

1. Trigger a fresh installation

# cat hosts
<--snip-->
openshift_node_labels="{'registry': 'enabled','router': 'enabled','role': 'node'}"
<--snip-->

# oc get nodes --show-labels=true
NAME                            STATUS                     AGE       LABELS
ip-172-18-10-139.ec2.internal   Ready,SchedulingDisabled   35m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-139.ec2.internal,role=node
ip-172-18-11-38.ec2.internal    Ready                      1m        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-11-38.ec2.internal,registry=enabled,role=node,router=enabled

On the labeled node:
# cat /etc/origin/node/node-config.yaml
<--snip-->
  node-labels:
  - router=enabled
  - role=node
  - registry=enabled
<--snip-->

2. Modify the inventory hosts file, then re-run the installation against the environment above
openshift_node_labels="{'registry': 'enabled','router': 'disabled','app': 'test1'}"

# oc get nodes --show-labels=true
NAME                            STATUS                     AGE       LABELS
ip-172-18-10-139.ec2.internal   Ready,SchedulingDisabled   2h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-139.ec2.internal,role=node
ip-172-18-11-38.ec2.internal    Ready                      1h        app=test1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-11-38.ec2.internal,registry=enabled,role=node,router=disabled

On the labeled node:
# cat /etc/origin/node/node-config.yaml
  node-labels:
  - router=disabled
  - app=test1
  - role=node
  - registry=enabled

From the testing above, labels can be updated and added, but they can't be removed. Marking this bug verified; the minor issue is tracked in BZ#1389674.

Comment 13 Tim Bielawa 2016-11-02 16:35:56 UTC
@gan, I noted the limitation in BZ#1389674. Do you need anything else from me?

Comment 14 Gan Huang 2016-11-03 02:56:10 UTC
No, thanks, Tim!

Comment 16 errata-xmlrpc 2017-01-18 12:51:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066

