Bug 1843319 - daemonsets fail to rollout during upgrade
Summary: daemonsets fail to rollout during upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Tomáš Nožička
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks: 1845242
 
Reported: 2020-06-03 02:31 UTC by Ben Parees
Modified: 2020-10-27 16:05 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The DaemonSet was deleted and recreated.
Consequence: The new DaemonSet could get stuck for up to 5 minutes while the old instance's expectations expired.
Fix: The DaemonSet controller now clears expectations when a DaemonSet is recreated.
Result: Recreated DaemonSets no longer get stuck.
Clone Of:
Cloned As: 1845242 (view as bug list)
Environment:
Last Closed: 2020-10-27 16:04:37 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links:
- GitHub kubernetes/kubernetes pull 91915 (closed): Fix DS expectations on recreate (last updated 2020-11-17 19:23:19 UTC)
- GitHub openshift/origin pull 25208 (closed): Bug 1843319: Fix DS expectations on recreate (last updated 2020-11-17 19:23:20 UTC)
- Red Hat Product Errata RHBA-2020:4196 (last updated 2020-10-27 16:04:59 UTC)

Description Ben Parees 2020-06-03 02:31:41 UTC
Description of problem:

Upgrading from 4.4 to 4.5, monitoring failed to reach the new level because its node-exporter daemonset failed to roll out:

https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-azure-upgrade-4.4-stable-to-4.5-ci/56


The clusteroperator reports:

"message": "Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of node-exporter: daemonset node-exporter is not ready. status: (desired: 6, updated: 6, ready: 0, unavailable: 6)",
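The operator's wait loop is effectively checking the DaemonSet status fields quoted in that message: all six pods are updated but none are ready. A minimal sketch of that readiness predicate (names are illustrative, not the monitoring operator's actual code):

```go
package main

import "fmt"

// DaemonSetStatus mirrors the status fields quoted in the error message.
type DaemonSetStatus struct {
	DesiredNumberScheduled int32
	UpdatedNumberScheduled int32
	NumberReady            int32
	NumberUnavailable      int32
}

// rolledOut reports whether a DaemonSet update can be considered
// complete: every desired pod has been updated and is ready.
func rolledOut(s DaemonSetStatus) bool {
	return s.UpdatedNumberScheduled == s.DesiredNumberScheduled &&
		s.NumberReady == s.DesiredNumberScheduled
}

func main() {
	// The status from the failing upgrade: desired 6, updated 6, ready 0.
	stuck := DaemonSetStatus{
		DesiredNumberScheduled: 6,
		UpdatedNumberScheduled: 6,
		NumberReady:            0,
		NumberUnavailable:      6,
	}
	fmt.Println(rolledOut(stuck)) // false: all pods updated, none ready
}
```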


David (who asked that this bug be opened) referenced:
https://github.com/kubernetes/kubernetes/pull/91008

as a possible cause, and also indicated there might be a GC issue related to the pods being reaped when the daemonset was deleted.

Hopefully he can provide more details about what he saw that made him point this towards workloads rather than the monitoring component itself.

Comment 1 Maciej Szulik 2020-06-03 10:54:58 UTC
Yeah, David already opened bug 1843187, so I'll close this as a duplicate.

*** This bug has been marked as a duplicate of bug 1843187 ***

Comment 2 Tomáš Nožička 2020-06-03 12:22:32 UTC
fyi, I've confirmed the pods are actually ready so this is likely the expectations bug we are tracking

Comment 3 David Eads 2020-06-05 13:09:03 UTC
This is a different bug. The GC controller is not cleaning up six pods in openshift-monitoring that do not have valid owner references.
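Comment 3 points at orphaned pods. When a DaemonSet is deleted and recreated under the same name, the new object gets a new UID, so pods still carrying an ownerReference to the old UID no longer have a live owner. A self-contained sketch of that check, with types simplified from the Kubernetes ones (ownership is matched by UID, not just name):

```go
package main

import "fmt"

// OwnerReference is a simplified stand-in for metav1.OwnerReference.
type OwnerReference struct {
	Name string
	UID  string
}

// hasLiveOwner reports whether any owner reference points at the
// currently-live owner object. A name match with a UID mismatch means
// the owner was deleted and recreated, and the pod is an orphan that
// the garbage collector should clean up.
func hasLiveOwner(refs []OwnerReference, liveName, liveUID string) bool {
	for _, ref := range refs {
		if ref.Name == liveName && ref.UID == liveUID {
			return true
		}
	}
	return false
}

func main() {
	oldRefs := []OwnerReference{{Name: "node-exporter", UID: "uid-old"}}
	// After delete+recreate the live DaemonSet has a new UID.
	fmt.Println(hasLiveOwner(oldRefs, "node-exporter", "uid-new")) // false: pod is orphaned
}
```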

Comment 4 Ben Parees 2020-06-05 13:17:24 UTC
reopening based on https://bugzilla.redhat.com/show_bug.cgi?id=1843319#c3

Comment 5 Ben Parees 2020-06-05 14:59:54 UTC
This is causing upgrade failures in 4.5, what is the basis for deferring it?

(in general comments should always be added to a bug explaining a deferral, when the bug is deferred)

Comment 6 Maciej Szulik 2020-06-08 08:23:44 UTC
(In reply to Ben Parees from comment #5)
> This is causing upgrade failures in 4.5, what is the basis for deferring it?
> 
> (in general comments should always be added to a bug explaining a deferral,
> when the bug is deferred)

After bug 1843187 is fixed, Tomas will need to dig through the logs and identify what is causing the actual problem.
The fix has to land in 4.6 first and only then be back-ported (through a clone of this BZ) to 4.5. It's not that
we are deferring this bug; we're following the process, but it takes a bit of time to nail down the root cause and
find a fix.

Comment 7 Tomáš Nožička 2020-06-08 09:32:49 UTC
I think the DS expectations don't get cleared in the re-create case; working on a fix upstream.
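The shape of the eventual fix (PR 91915): when the controller sees an add event for a key that still has pending expectations, the object must have been deleted and recreated, so the stale entry from the old instance is dropped and the new object syncs immediately instead of waiting out the timeout. A minimal sketch, with illustrative names rather than the actual controller code:

```go
package main

import "fmt"

// expectationsStore is a minimal stand-in for the controller's
// expectations cache, keyed by "namespace/name" and holding the
// number of pod creations still expected.
type expectationsStore map[string]int

// onDaemonSetAdd mirrors the fix: an add event for a key that still
// has pending expectations means the DaemonSet was recreated, so the
// stale expectations from the old instance must be deleted.
func (s expectationsStore) onDaemonSetAdd(key string) {
	if _, stale := s[key]; stale {
		delete(s, key) // clear expectations left by the deleted object
	}
}

func main() {
	s := expectationsStore{}
	s["openshift-monitoring/node-exporter"] = 6 // old instance still expects 6 creates

	// The DaemonSet is recreated; without the fix this entry would
	// linger for up to five minutes and block syncing.
	s.onDaemonSetAdd("openshift-monitoring/node-exporter")
	_, exists := s["openshift-monitoring/node-exporter"]
	fmt.Println(exists) // false: expectations were cleared
}
```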

Comment 8 Ben Parees 2020-06-08 13:55:15 UTC
> The fix has to land in 4.6 first and only then be back-ported (through a clone of this BZ) to 4.5. It's not that
> we are deferring this bug; we're following the process, but it takes a bit of time to nail down the root cause and
> find a fix.

you can still open the 4.5 clone now so we have a complete view of our blocker list for 4.5. Otherwise 4.5 risks going out the door without this being addressed (because no one except you and I is aware it affects 4.5; it doesn't show up on any 4.5 lists).

So by not opening the 4.5 BZ now, you are (implicitly) saying you're ok shipping as is/deferring this bug.

Comment 9 Tomáš Nožička 2020-06-18 09:15:27 UTC
This bug is actively worked on.

Comment 12 zhou ying 2020-07-02 08:42:02 UTC
Checked with the unit test code; the issue has been fixed:

[root@dhcp-140-138 daemon]# go test -v -run TestExpectationsOnRecreate
...
=== RUN   TestExpectationsOnRecreate
I0702 16:40:46.789765    8026 shared_informer.go:223] Waiting for caches to sync for test dsc
I0702 16:40:46.890030    8026 shared_informer.go:230] Caches are synced for test dsc 
--- PASS: TestExpectationsOnRecreate (0.41s)
PASS
ok  	k8s.io/kubernetes/pkg/controller/daemon	0.426s

Comment 14 errata-xmlrpc 2020-10-27 16:04:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

