Bug 1359848
| Summary: | [RFE] Support for node-labels in the Ansible installer | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Wesley Hearn <whearn> |
| Component: | Installer | Assignee: | Tim Bielawa <tbielawa> |
| Status: | CLOSED ERRATA | QA Contact: | Gan Huang <ghuang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2.1 | CC: | aos-bugs, bleanhar, ghuang, jialiu, jokerman, mmccomas, tbielawa, tdawson, whearn |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | Feature: Provides the ability to add persistent node-labels to hosts. Reason: Hosts that were rebooted (such as in cloud environments) would not have the same labels applied after the reboot. Result: Node-labels persist across reboots. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| | 1389674 (view as bug list) | Environment: | |
| Last Closed: | 2017-01-18 12:51:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Wesley Hearn
2016-07-25 14:30:57 UTC
We have a workaround for this:
put "openshift_node_kubelet_args={'node-labels': '{{ (openshift_node_labels | default({})).items() | map("join", "=") | list }}'}" in your byo inventory under OSEv3:vars. It would be nice to not have to do this and the node config role automatically parse openshift_node_labels if it exists and then append it to openshift_node_kublet_args.
Need openshift_manage_nodes to add labels when it's invoked, also need to merge openshift_node_labels into openshift_node_kubelet_args?

@whearn I've written up a patch which I think will fulfill your request. On my existing cluster I begin with a node-config.yaml file with an empty `kubeletArguments` key. None of my inventory hosts had `openshift_node_labels` set during initial setup. When I apply this patch and add some openshift_node_labels like this:

> ec2-w-x-y-z.compute-1.amazonaws.com openshift_node_labels="{'region': 'infra', 'foo': 'bar'}"
> ....

then the node-config.yaml file shows the following for the value of the `kubeletArguments` key when I re-run the playbook:

> kubeletArguments:
>   node-labels:
>   - region=infra,foo=bar

This will also work on the first provisioning run. I was just describing my testing process above. Is this what you were looking for?

node-labels is an array, so it should be:

> node-labels:
> - region=infra
> - foo=bar

Got it. Worked out a fix to make it a list. Presently debugging a failure this causes when there is more than one node label.

We've worked out the bugs and have made this feature complete.

Featureisms:
* Existing labels can be modified
* New labels can be added

Limitations:
* Labels can not be removed

Pending PR review and merge. Looking for the +1 from you guys before we merge this. Please take a gander at the github PR. Thanks!

Looks good

Merged into master. Thanks Wes

Verified with openshift-ansible-3.4.13-1.git.0.ff1d588.el7.noarch.rpm
1. Trigger a fresh installation
#cat hosts
<--snip-->
openshift_node_labels="{'registry': 'enabled','router': 'enabled','role': 'node'}"
<--snip-->
# oc get nodes --show-labels=true
NAME STATUS AGE LABELS
ip-172-18-10-139.ec2.internal Ready,SchedulingDisabled 35m beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-139.ec2.internal,role=node
ip-172-18-11-38.ec2.internal Ready 1m beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-11-38.ec2.internal,registry=enabled,role=node,router=enabled
On the labeled node:
#cat /etc/origin/node/node-config.yaml
<--snip-->
node-labels:
- router=enabled
- role=node
- registry=enabled
<--snip-->
2. Modify the inventory hosts, then re-install the above env
openshift_node_labels="{'registry': 'enabled','router': 'disabled','app': 'test1'}"
# oc get nodes --show-labels=true
NAME STATUS AGE LABELS
ip-172-18-10-139.ec2.internal Ready,SchedulingDisabled 2h beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-139.ec2.internal,role=node
ip-172-18-11-38.ec2.internal Ready 1h app=test1,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-11-38.ec2.internal,registry=enabled,role=node,router=disabled
On the labeled node:
#cat /etc/origin/node/node-config.yaml
node-labels:
- router=disabled
- app=test1
- role=node
- registry=enabled
From the testing above, labels can be updated and added, but can't be removed. Verifying this bug; the minor removal issue is tracked in BZ#1389674.
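As a possible interim cleanup for that limitation, sketched here only and not something the installer does, a leftover label such as the stale role=node entry seen in the test above could be removed by hand; the commands assume an RPM-based install where the node service is atomic-openshift-node:

> # remove the stale label from the node object (a trailing "-" deletes the label key)
> oc label node ip-172-18-11-38.ec2.internal role-
> # on the node, drop the stale entry from node-labels in node-config.yaml
> sed -i '/- role=node/d' /etc/origin/node/node-config.yaml
> # restart the node service so the change takes effect
> systemctl restart atomic-openshift-node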
@gan, I noted the limitation in BZ#1389674. Do you need anything else from me?

No, thanks, Tim!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066