Bug 1965827 - openshift-ansible lacks individual public certificate redeployment playbook for catalog, but 3.9 has it
Summary: openshift-ansible lacks individual public certificate redeployment playbook for catalog, but 3.9 has it
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Russell Teague
QA Contact: Fan Jia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-30 14:21 UTC by Pablo Alonso Rodriguez
Modified: 2024-10-01 18:23 UTC
1 user

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-30 15:46:04 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Github openshift/openshift-ansible pull 12330 (open): Bug 1965827: Add catalog entrypoint playbook (last updated 2021-06-03 12:30:52 UTC)
Red Hat Product Errata RHSA-2021:2517 (last updated 2021-06-30 15:47:27 UTC)

Description Pablo Alonso Rodriguez 2021-05-30 14:21:35 UTC
Version:

3.11.420 

Platform:

(not relevant)

What happened?

OpenShift 3.9 used to have a publicly runnable `/usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/redeploy-certificates.yml` playbook [1], but OpenShift 3.11 does not have it [2].

Because the private service catalog certificate redeployment playbook is only invoked from the main "/usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml" playbook, a user has to perform a full certificate redeployment just to redeploy the catalog certificates, which is not acceptable on big or heavily loaded environments.

[1] - https://github.com/openshift/openshift-ansible/blob/release-3.9/playbooks/openshift-service-catalog/redeploy-certificates.yml
[2] - https://github.com/openshift/openshift-ansible/tree/release-3.11/playbooks/openshift-service-catalog

What did you expect to happen?

To have /usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/redeploy-certificates.yml available in 3.11.

How to reproduce it (as minimally and precisely as possible)?

If you redeploy only the control plane certificates, and then the certificates of the other components one by one, you end up with expired catalog certificates that cannot be renewed without running "/usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml".
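
For context, 3.11 does ship standalone entry points for other components, which can be run one by one, for example (where <inventory> is a placeholder for your inventory file):

$ ansible-playbook -i <inventory> /usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-certificates.yml
$ ansible-playbook -i <inventory> /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-certificates.yml

but there is no equivalent entry point for the service catalog.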

Anything else we need to know?

It looks like it should be enough to simply re-add the playbook [1] to the 3.11 code base.
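
For reference, a minimal sketch of what such an entry-point playbook could look like, modeled on other openshift-ansible entry points (the exact init import and variables may differ from the actual 3.9 file):

---
# Entry point: run cluster initialization, then the private catalog certificate redeploy
- import_playbook: ../init/main.yml

- import_playbook: private/redeploy-certificates.yml

It would then be runnable on its own with `ansible-playbook -i <inventory> playbooks/openshift-service-catalog/redeploy-certificates.yml`.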

Comment 1 Russell Teague 2021-06-01 15:29:13 UTC
Investigation:
It looks like in [1] the service catalog cert redeploy playbook was only added to 3.9 because it was thought not to be needed in later versions of OCP.  Then in [2] a playbook was created for 3.11 for the service catalog cert redeploy, but an entry-point playbook was not created.  The entry-point playbook was not removed; it simply never existed in 3.10 or 3.11.  It should be possible to add the entry-point playbook as it existed in 3.9 [3].


[1] https://github.com/openshift/openshift-ansible/pull/9585
[2] https://github.com/openshift/openshift-ansible/pull/11681
[3] https://github.com/openshift/openshift-ansible/blob/release-3.9/playbooks/openshift-service-catalog/redeploy-certificates.yml
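
In other words, the 3.10/3.11 layout looks roughly like this (the private/ location follows the usual openshift-ansible convention and is an assumption here):

playbooks/openshift-service-catalog/
    private/redeploy-certificates.yml    # added in [2]; only reachable from playbooks/redeploy-certificates.yml
    redeploy-certificates.yml            # entry point; existed in 3.9 [3] but missing in 3.10/3.11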

Comment 2 Pablo Alonso Rodriguez 2021-06-03 09:26:04 UTC
Opened PR: https://github.com/openshift/openshift-ansible/pull/12330

Please add any comments as needed.

Thanks and regards.

Comment 3 Fan Jia 2021-06-03 12:22:14 UTC
verified.
$ ansible-playbook /home/jfan/projects/src/github.com/openshift/openshift-ansible/playbooks/openshift-service-catalog/redeploy-certificates.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'

[WARNING]: Could not match supplied host pattern, ignoring: oo_masters_to_config

[WARNING]: Could not match supplied host pattern, ignoring: oo_first_master

[WARNING]: Skipping plugin (/home/jfan/projects/src/github.com/openshift/openshift-
ansible/roles/lib_utils/callback_plugins/aa_version_requirement.py) as it seems to be
invalid: The 'cryptography' distribution was not found and is required by ansible


PLAY [Initialization Checkpoint Start] *******************************************************
skipping: no hosts matched

PLAY [Populate config host groups] ***********************************************************

TASK [Load group name mapping variables] *****************************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.029)       0:00:00.029 ********* 
ok: [localhost]

TASK [Evaluate groups - g_nfs_hosts is single host] ******************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.015)       0:00:00.044 ********* 
skipping: [localhost]

TASK [Evaluate oo_all_hosts] *****************************************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.049)       0:00:00.093 ********* 

TASK [Evaluate oo_masters] *******************************************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.047)       0:00:00.141 ********* 

TASK [Evaluate oo_first_master] **************************************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.037)       0:00:00.178 ********* 
skipping: [localhost]

TASK [Evaluate oo_new_etcd_to_config] ********************************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.036)       0:00:00.215 ********* 

TASK [Evaluate oo_masters_to_config] *********************************************************
Thursday 03 June 2021  20:20:36 +0800 (0:00:00.035)       0:00:00.250 ********* 

TASK [Evaluate oo_etcd_to_config] ************************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.035)       0:00:00.286 ********* 

TASK [Evaluate oo_first_etcd] ****************************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.033)       0:00:00.319 ********* 
skipping: [localhost]

TASK [Evaluate oo_etcd_hosts_to_upgrade] *****************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.034)       0:00:00.353 ********* 

TASK [Evaluate oo_etcd_hosts_to_backup] ******************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.032)       0:00:00.386 ********* 

TASK [Evaluate oo_nodes_to_config] ***********************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.033)       0:00:00.420 ********* 

TASK [Evaluate oo_lb_to_config] **************************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.038)       0:00:00.458 ********* 

TASK [Evaluate oo_nfs_to_config] *************************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.037)       0:00:00.495 ********* 

TASK [Evaluate oo_glusterfs_to_config] *******************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.034)       0:00:00.530 ********* 

TASK [Evaluate oo_etcd_to_migrate] ***********************************************************
Thursday 03 June 2021  20:20:37 +0800 (0:00:00.036)       0:00:00.566 ********* 
[WARNING]: Could not match supplied host pattern, ignoring: oo_etcd_to_config

[WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config

[WARNING]: Could not match supplied host pattern, ignoring: oo_nfs_to_config


PLAY [Ensure that all non-node hosts are accessible] *****************************************
skipping: no hosts matched

PLAY [Initialize basic host facts] ***********************************************************
skipping: no hosts matched

PLAY [Retrieve existing master configs and validate] *****************************************
skipping: no hosts matched

PLAY [Initialize special first-master variables] *********************************************
skipping: no hosts matched

PLAY [Disable web console if required] *******************************************************
skipping: no hosts matched

PLAY [Setup yum repositories for all hosts] **************************************************
skipping: no hosts matched
[WARNING]: Could not match supplied host pattern, ignoring: oo_all_hosts


PLAY [Install packages necessary for installer] **********************************************
skipping: no hosts matched

PLAY [Initialize cluster facts] **************************************************************
skipping: no hosts matched

PLAY [Initialize etcd host variables] ********************************************************
skipping: no hosts matched

PLAY [Determine openshift_version to configure on first master] ******************************
skipping: no hosts matched

PLAY [Set openshift_version for etcd, node, and master hosts] ********************************
skipping: no hosts matched

PLAY [Verify Requirements] *******************************************************************
skipping: no hosts matched

PLAY [Verify Node Prerequisites] *************************************************************
skipping: no hosts matched

PLAY [Validate Aci deployment variables] *****************************************************
skipping: no hosts matched

PLAY [Validate certificate configuration] ****************************************************
skipping: no hosts matched

PLAY [Initialization Checkpoint End] *********************************************************
skipping: no hosts matched

PLAY [Update service catalog certificates] ***************************************************
skipping: no hosts matched

PLAY RECAP ***********************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=15   rescued=0    ignored=0   

Thursday 03 June 2021  20:20:37 +0800 (0:00:00.076)       0:00:00.642 ********* 
=============================================================================== 
Evaluate oo_etcd_to_migrate ----------------------------------------------------------- 0.08s
Evaluate groups - g_nfs_hosts is single host ------------------------------------------ 0.05s
Evaluate oo_all_hosts ----------------------------------------------------------------- 0.05s
Evaluate oo_nodes_to_config ----------------------------------------------------------- 0.04s
Evaluate oo_masters ------------------------------------------------------------------- 0.04s
Evaluate oo_lb_to_config -------------------------------------------------------------- 0.04s
Evaluate oo_first_master -------------------------------------------------------------- 0.04s
Evaluate oo_glusterfs_to_config ------------------------------------------------------- 0.04s
Evaluate oo_masters_to_config --------------------------------------------------------- 0.04s
Evaluate oo_new_etcd_to_config -------------------------------------------------------- 0.04s
Evaluate oo_first_etcd ---------------------------------------------------------------- 0.03s
Evaluate oo_nfs_to_config ------------------------------------------------------------- 0.03s
Evaluate oo_etcd_hosts_to_backup ------------------------------------------------------ 0.03s
Evaluate oo_etcd_to_config ------------------------------------------------------------ 0.03s
Evaluate oo_etcd_hosts_to_upgrade ----------------------------------------------------- 0.03s
Load group name mapping variables ----------------------------------------------------- 0.02s

Comment 4 Pablo Alonso Rodriguez 2021-06-03 12:31:41 UTC
@Fan, are you sure that you tested using the right openshift-ansible inventory? I ask mainly because of these lines:

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'

Comment 5 Fan Jia 2021-06-04 03:12:15 UTC
(In reply to Pablo Alonso Rodriguez from comment #4)
> @Fan, are you sure that you tested using the right openshift-ansible inventory?
> I ask mainly because of these lines:
> 
> [WARNING]: provided hosts list is empty, only localhost is available. Note
> that the implicit
> localhost does not match 'all'

Hi Pablo, the openshift-ansible RPM package is not ready yet. We will test it again when the new RPM package (with the new code) is ready. For now I ran the playbook against the code only to make sure the syntax is OK; the functionality will be tested once the new RPM package is ready.
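
For reference, a syntax-only check can also be done with ansible-playbook's built-in flag, which does not need a populated inventory (sketch):

$ ansible-playbook --syntax-check /usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/redeploy-certificates.yml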

Comment 7 Fan Jia 2021-06-22 03:16:40 UTC
verified.
# oc version
oc v3.11.461
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://jfan-debugmaster-etcd-1:8443
openshift v3.11.461
kubernetes v1.11.0+d4cacc0

Before redeploy-certificates:
The catalog CA hash: 27aa0b5fb1b08a096da583b7f670bca596283b81

Run the playbook:
- private-openshift-ansible/playbooks/openshift-service-catalog/redeploy-certificates.yml

...............
06-22 11:05:38.676 
 PLAY [Update service catalog certificates] *************************************
06-22 11:05:38.676 
 
06-22 11:05:38.676 
 TASK [Gathering Facts] *********************************************************
06-22 11:05:39.766 
 ok: [ci-vm-10-0-148-87.hosted.upshift.rdu2.redhat.com]
06-22 11:05:39.766 
 
06-22 11:05:39.766 
 TASK [openshift_service_catalog : Remove TLS secret] ***************************
06-22 11:05:41.198 

..............
06-22 11:09:00.053 
 ok: [ci-vm-10-0-148-87.hosted.upshift.rdu2.redhat.com] => {"attempts": 5, "changed": false, "module_results": {"cmd": "/usr/bin/oc get daemonset apiserver -o json -n openshift-template-service-broker", "results": [{"apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": {"annotations": {"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"apiserver\":\"true\"},\"name\":\"apiserver\",\"namespace\":\"openshift-template-service-broker\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"apiserver\":\"true\"},\"name\":\"apiserver\"},\"spec\":{\"containers\":[{\"command\":[\"/usr/bin/template-service-broker\",\"start\",\"template-service-broker\",\"--secure-port=8443\",\"--audit-log-path=-\",\"--tls-cert-file=/var/serving-cert/tls.crt\",\"--tls-private-key-file=/var/serving-cert/tls.key\",\"--v=0\",\"--config=/var/apiserver-config/apiserver-config.yaml\"],\"image\":\"registry-proxy.engineering.redhat.com/rh-osbs/openshift3-ose-template-service-broker:v3.11\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/healthz\",\"port\":8443,\"scheme\":\"HTTPS\"},\"initialDelaySeconds\":30,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":5},\"name\":\"c\",\"ports\":[{\"containerPort\":8443}],\"readinessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/healthz\",\"port\":8443,\"scheme\":\"HTTPS\"},\"initialDelaySeconds\":30,\"periodSeconds\":5,\"successThreshold\":1,\"timeoutSeconds\":5},\"volumeMounts\":[{\"mountPath\":\"/var/serving-cert\",\"name\":\"serving-cert\"},{\"mountPath\":\"/var/apiserver-config\",\"name\":\"apiserver-config\"}]}],\"nodeSelector\":{\"node-role.kubernetes.io/master\":\"true\"},\"serviceAccountName\":\"apiserver\",\"volumes\":[{\"name\":\"serving-cert\",\"secret\":{\"defaultMode\":420,\"secretName\":\"apiserver-serving-cert\"}},{\"configMap\":{\"defaultMode\":420,\"name\":\"apiserver-config\"},\"name\":\"apiserver-config\"}]}},\"updateStrategy\":{\"type\":\"RollingUpdate\"}}}\n"}, "creationTimestamp": "2021-06-22T02:23:56Z", "generation": 1, "labels": {"apiserver": "true"}, "name": "apiserver", "namespace": "openshift-template-service-broker", "resourceVersion": "11053", "selfLink": "/apis/extensions/v1beta1/namespaces/openshift-template-service-broker/daemonsets/apiserver", "uid": "e54a945b-d300-11eb-92ac-fa163ef851b2"}, "spec": {"revisionHistoryLimit": 10, "selector": {"matchLabels": {"apiserver": "true"}}, "template": {"metadata": {"creationTimestamp": null, "labels": {"apiserver": "true"}, "name": "apiserver"}, "spec": {"containers": [{"command": ["/usr/bin/template-service-broker", "start", "template-service-broker", "--secure-port=8443", "--audit-log-path=-", "--tls-cert-file=/var/serving-cert/tls.crt", "--tls-private-key-file=/var/serving-cert/tls.key", "--v=0", "--config=/var/apiserver-config/apiserver-config.yaml"], "image": "registry-proxy.engineering.redhat.com/rh-osbs/openshift3-ose-template-service-broker:v3.11", "imagePullPolicy": "IfNotPresent", "livenessProbe": {"failureThreshold": 3, "httpGet": {"path": "/healthz", "port": 8443, "scheme": "HTTPS"}, "initialDelaySeconds": 30, "periodSeconds": 10, "successThreshold": 1, "timeoutSeconds": 5}, "name": "c", "ports": [{"containerPort": 8443, "protocol": "TCP"}], "readinessProbe": {"failureThreshold": 3, "httpGet": {"path": "/healthz", "port": 8443, "scheme": "HTTPS"}, "initialDelaySeconds": 30, "periodSeconds": 5, 
"successThreshold": 1, "timeoutSeconds": 5}, "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/var/serving-cert", "name": "serving-cert"}, {"mountPath": "/var/apiserver-config", "name": "apiserver-config"}]}], "dnsPolicy": "ClusterFirst", "nodeSelector": {"node-role.kubernetes.io/master": "true"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "apiserver", "serviceAccountName": "apiserver", "terminationGracePeriodSeconds": 30, "volumes": [{"name": "serving-cert", "secret": {"defaultMode": 420, "secretName": "apiserver-serving-cert"}}, {"configMap": {"defaultMode": 420, "name": "apiserver-config"}, "name": "apiserver-config"}]}}, "templateGeneration": 1, "updateStrategy": {"rollingUpdate": {"maxUnavailable": 1}, "type": "RollingUpdate"}}, "status": {"currentNumberScheduled": 1, "desiredNumberScheduled": 1, "numberAvailable": 1, "numberMisscheduled": 0, "numberReady": 1, "observedGeneration": 1, "updatedNumberScheduled": 1}}], "returncode": 0}, "state": "list"}
06-22 11:09:00.054 
 
06-22 11:09:00.054 
 PLAY RECAP *********************************************************************
06-22 11:09:00.054 
 ci-vm-10-0-148-17.hosted.upshift.rdu2.redhat.com : ok=0    changed=0    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0   
06-22 11:09:00.054 
 ci-vm-10-0-148-87.hosted.upshift.rdu2.redhat.com : ok=82   changed=22   unreachable=0    failed=0    skipped=37   rescued=0    ignored=0   
06-22 11:09:00.054 
 ci-vm-10-0-151-239.hosted.upshift.rdu2.redhat.com : ok=0    changed=0    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0   
06-22 11:09:00.054 
 localhost                  : ok=11   changed=0    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
06-22 11:09:00.054 
 
06-22 11:09:00.054 
 
06-22 11:09:00.054 
 INSTALLER STATUS ***************************************************************
06-22 11:09:00.054 
 Initialization  : Complete (0:00:34)

After redeploy-certificates:
ca_hash: bcaf1bf05fe79714a260a9b34fb7d004de6f0363
The apiserver, controller-manager, template-service-broker, and ansible-service-broker pods restarted.
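
For illustration only (this is not necessarily how the hashes above were obtained), the service catalog CA could be compared before and after by hashing the CA bundle published on the catalog APIService, assuming the standard v1beta1.servicecatalog.k8s.io APIService name:

$ oc get apiservice v1beta1.servicecatalog.k8s.io -o jsonpath='{.spec.caBundle}' | base64 -d | sha1sum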

Comment 10 errata-xmlrpc 2021-06-30 15:46:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 3.11.462 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2517

