Bug 1729562 - Cluster certificates redeployment does not update server reference in node kubeconfigs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.11.z
Assignee: Russell Teague
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-12 16:00 UTC by Robert Sandu
Modified: 2019-10-23 06:48 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-12 19:07:47 UTC
Target Upstream Version:


Attachments

Description Robert Sandu 2019-07-12 16:00:10 UTC
Description of problem: When changing `masterPublicURL` and `masterURL`, the openshift-ansible/playbooks/redeploy-certificates.yml playbook updates the server reference in the `/etc/origin/master/*.kubeconfig` files, but not in the `/etc/origin/node/*.kubeconfig` files.
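
For context, the reference in question is the `server:` entry under `clusters[].cluster` in each kubeconfig. An abridged, illustrative excerpt (values taken from the reproduction below):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA>
    server: https://lb-internal.local.lab:443
  name: lb-internal-local-lab:443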

Version-Release number of the following components:
rpm -q openshift-ansible: openshift-ansible-3.11.104-1.git.0.379a011.el7.noarch
rpm -q ansible: ansible-2.6.12-1.el7ae.noarch
ansible --version:

ansible 2.6.12
  config file = /home/rsandu/ansible.cfg
  configured module search path = [u'/home/rsandu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

How reproducible: always

Steps to Reproduce:
1. Change `masterPublicURL` and `masterURL`, following https://access.redhat.com/solutions/2362011:

Before:

openshift_master_cluster_hostname=lb-internal.local.lab
openshift_master_cluster_public_hostname=lb-public.local.lab

After:

openshift_master_cluster_hostname=test-internal.local.lab
openshift_master_cluster_public_hostname=test-public.local.lab

2. Update openshift_master_cluster_hostname and openshift_master_cluster_public_hostname in the inventory, then run openshift-ansible/playbooks/redeploy-certificates.yml (a sample invocation is shown after these steps).
3. Check the kubeconfig files in /etc/origin/master/ and /etc/origin/node/.
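
For step 2, a typical invocation might look like the following (the inventory path is a placeholder; the playbook path assumes the openshift-ansible RPM layout):

$ ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml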

Actual results: the "server:" reference is updated in the `/etc/origin/master/*.kubeconfig` files but not in the `/etc/origin/node/*.kubeconfig` files.

Master node: master-0.local.lab

$ sudo grep -r "server:" /etc/origin/master/
/etc/origin/master/bootstrap.kubeconfig:    server: https://lb-internal.local.lab:443
/etc/origin/master/openshift-master.kubeconfig:    server: https://master-0.local.lab:443
/etc/origin/master/admin.kubeconfig:    server: https://test-internal.local.lab:443
/etc/origin/master/admin.kubeconfig:    server: https://test-public.local.lab:443
/etc/origin/master/aggregator-front-proxy.kubeconfig:    server: https://localhost:8443

$ sudo grep -r "server:" /etc/origin/node/
/etc/origin/node/bootstrap.kubeconfig:    server: https://lb-internal.local.lab:443
/etc/origin/node/bootstrap.kubeconfig:    server: https://lb-public.local.lab:443
/etc/origin/node/node.kubeconfig:    server: https://lb-internal.local.lab:443 <-- Old masterURL
/etc/origin/node/node.kubeconfig:    server: https://lb-public.local.lab:443 <-- Old masterPublicURL

Compute node: node-0.local.lab

$ sudo grep -r "server:" /etc/origin/node/
/etc/origin/node/bootstrap.kubeconfig:    server: https://lb-internal.local.lab:443 <-- Old masterURL 
/etc/origin/node/node.kubeconfig:    server: https://lb-internal.local.lab:443 <-- Old masterURL 

Expected results: the "server:" reference is updated to the new masterURL/masterPublicURL values in both the `/etc/origin/master/*.kubeconfig` and `/etc/origin/node/*.kubeconfig` files.

Additional info: redeploy-certificates.yml verbose output attached.

Comment 3 Scott Dodson 2019-08-12 19:07:47 UTC
Altering the cluster's API hostname is not the intent of the certificate redeploy playbooks. I'm afraid in order to do that you'll have to work around the problem by manually updating the node kubeconfigs.
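
For reference, a minimal sketch of that manual workaround, using the hostnames from the reproduction above (the sed patterns and the service restart are illustrative, not a supported procedure):

# On each host, rewrite the server: entries in the node kubeconfigs
$ sudo sed -i \
    -e 's|https://lb-internal.local.lab:443|https://test-internal.local.lab:443|g' \
    -e 's|https://lb-public.local.lab:443|https://test-public.local.lab:443|g' \
    /etc/origin/node/*.kubeconfig

# Restart the node service so the kubelet picks up the new server reference
$ sudo systemctl restart atomic-openshift-node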

