Bug 1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
Summary: network kube-rbac-proxy scripts crashloop rather than non-crash looping
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Dan Winship
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1950407
 
Reported: 2020-10-04 12:01 UTC by Dan Winship
Modified: 2021-11-30 03:19 UTC
CC List: 1 user

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:22:35 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-network-operator pull 822 0 None closed Bug 1885002: Fix kube-rbac-proxy startup scripts 2020-11-12 21:56:10 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:23:05 UTC

Description Dan Winship 2020-10-04 12:01:35 UTC
The startup scripts in the kube-rbac-proxy containers in openshift-sdn and ovn-kubernetes are supposed to loop, waiting for the certificate to become available, but they currently end up crashlooping instead.

e.g.,

https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_release/12418/rehearse-12418-pull-ci-openshift-cluster-network-operator-master-e2e-aws-sdn-multi/1312520637901705216/artifacts/e2e-aws-sdn-multi/gather-extra/pods/openshift-sdn_sdn-cv44b_kube-rbac-proxy.log

leading to

                "containerStatuses": [
                    {
                        "containerID": "cri-o://0e599c18a241489f0bfe5e2e346da4ebbee2d7a7650fdcae39d0631a142a0bd8",
                        "image": "registry.build01.ci.openshift.org/ci-op-c4zx7fww/stable@sha256:b081dbc06695312d01f1df26f545ba4374cec97a7a849df3dd825f9608d244be",
                        "imageID": "registry.build01.ci.openshift.org/ci-op-c4zx7fww/stable@sha256:b081dbc06695312d01f1df26f545ba4374cec97a7a849df3dd825f9608d244be",
                        "lastState": {
                            "terminated": {
                                "containerID": "cri-o://0e599c18a241489f0bfe5e2e346da4ebbee2d7a7650fdcae39d0631a142a0bd8",
                                "exitCode": 1,
                                "finishedAt": "2020-10-03T23:14:21Z",
                                "message": "Traceback (most recent call last):\n  File \"\u003cstring\u003e\", line 1, in \u003cmodule\u003e\n  File \"/usr/lib64/python3.6/json/__init__.py\", line 299, in load\n    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)\n  File \"/usr/lib64/python3.6/json/__init__.py\", line 354, in loads\n    return _default_decoder.decode(s)\n  File \"/usr/lib64/python3.6/json/decoder.py\", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File \"/usr/lib64/python3.6/json/decoder.py\", line 357, in raw_decode\n    raise JSONDecodeError(\"Expecting value\", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n",
                                "reason": "Error",
                                "startedAt": "2020-10-03T23:14:18Z"
                            }
                        },
                        "name": "kube-rbac-proxy",
                        "ready": false,
                        "restartCount": 10,
                        "started": false,
                        "state": {
                            "waiting": {
                                "message": "back-off 5m0s restarting failed container=kube-rbac-proxy pod=sdn-cv44b_openshift-sdn(37e1855c-b33e-4d4e-b696-2b84ce5797a0)",
                                "reason": "CrashLoopBackOff"
                            }
                        }
                    },

leading to

                    {
                        "lastTransitionTime": "2020-10-03T22:52:29Z",
                        "message": "DaemonSet \"openshift-sdn/sdn\" rollout is not making progress - pod sdn-cv44b is in CrashLoopBackOff State\nDaemonSet \"openshift-sdn/sdn\" rollout is not making progress - last change 2020-10-03T22:47:56Z",
                        "reason": "RolloutHung",
                        "status": "True",
                        "type": "Degraded"
                    },
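
The JSONDecodeError in the traceback above is what python's json.load raises when it is handed empty input, which is consistent with the startup script piping not-yet-available certificate data into an inline python json parser and exiting non-zero instead of retrying. The error message is easy to reproduce; the command below is illustrative only, not taken from the actual script, and its last line matches the one captured in containerStatuses:

    $ printf '' | python3 -c 'import json, sys; json.load(sys.stdin)'
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

A minimal sketch of the intended "wait instead of crashing" behaviour follows. The certificate paths, listen/upstream addresses, and sleep interval are illustrative assumptions, not the actual change from cluster-network-operator pull 822 linked above:

    #!/bin/bash
    set -euo pipefail

    # Assumed mount point for the metrics serving certificate secret
    # (placeholder paths, not necessarily the operator's real ones).
    TLS_KEY=/etc/pki/tls/metrics-certs/tls.key
    TLS_CERT=/etc/pki/tls/metrics-certs/tls.crt

    # Loop until the certificate files exist and are non-empty, rather
    # than exiting non-zero -- an early "exit 1" is what produces the
    # CrashLoopBackOff shown above.
    while [ ! -s "${TLS_KEY}" ] || [ ! -s "${TLS_CERT}" ]; do
        echo "$(date -Iseconds) waiting for serving certificate to be mounted..."
        sleep 5
    done

    # Placeholder listen address and upstream; the real values come from
    # the DaemonSet definition.
    exec /usr/bin/kube-rbac-proxy \
        --secure-listen-address=:9101 \
        --upstream=http://127.0.0.1:9100/ \
        --tls-cert-file="${TLS_CERT}" \
        --tls-private-key-file="${TLS_KEY}"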

Comment 2 zhaozhanqi 2020-10-12 06:17:36 UTC
Verified this bug on 4.7.0-0.nightly-2020-10-11-135849

Comment 5 errata-xmlrpc 2021-02-24 15:22:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

