Bug 1428123 - Failed to list *apps.StatefulSet errors after upgrading from OCP 3.4 to 3.5 and restarting master
Summary: Failed to list *apps.StatefulSet errors after upgrading from OCP 3.4 to 3.5 and restarting master
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: apiserver-auth
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Jordan Liggitt
QA Contact: Chuan Yu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-01 20:36 UTC by Vikas Laad
Modified: 2017-07-24 14:11 UTC
CC List: 9 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-12 19:14:23 UTC
Target Upstream Version:
Embargoed:


Attachments
Message logs showing before/after for restart after update to 3.5 (562.54 KB, application/x-gzip)
2017-03-01 23:09 UTC, Mike Fiedler


Links
Red Hat Product Errata RHBA-2017:0884 (normal, SHIPPED_LIVE): Red Hat OpenShift Container Platform 3.5 RPM Release Advisory, last updated 2017-04-12 22:50:07 UTC

Description Vikas Laad 2017-03-01 20:36:29 UTC
Description of problem:
I upgraded the cluster from OCP 3.4 to OCP 3.5 following this doc:

https://docs.openshift.com/container-platform/3.4/install_config/upgrading/manual_upgrades.html#upgrading-masters

After rebooting the master, I frequently see the following error messages in the logs:

Mar  1 15:19:29 ip-172-31-60-104 atomic-openshift-master: E0301 15:19:29.963222    1135 reflector.go:199] pkg/controller/disruption/disruption.go:329: Failed to list *apps.StatefulSet: User "system:serviceaccount:openshift-infra:disruption-controller" cannot list all apps.statefulsets in the cluster
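
For reference, one way to check this permission directly (assuming cluster-admin access on the master; oadm policy who-can is part of the 3.x admin CLI):

# Show which users/groups/service accounts may list statefulsets cluster-wide;
# the disruption-controller service account does not appear in this output.
oadm policy who-can list statefulsets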


Version-Release number of selected component (if applicable):
Before 
openshift v3.4.1.7
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

After
openshift v3.5.0.37
kubernetes v1.5.2+43a9be4
etcd 3.1.0


How reproducible:
Always

Steps to Reproduce:
1. Install 3.4 cluster
2. Create lots of projects/endpoints/services/routes (see the sketch after this list)
3. Upgrade cluster using docs
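
A minimal sketch of step 2, assuming the oc client is logged in as a user allowed to create projects (the project names are hypothetical):

# Create a batch of throwaway projects to populate the cluster:
for i in $(seq 1 50); do
    oc new-project "load-test-$i"
done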

Actual results:
Frequent errors from atomic-openshift-master in the logs.

Expected results:
There should be no such errors in the logs.

Additional info:

Comment 1 Mike Fiedler 2017-03-01 23:08:46 UTC
The list errors start after restarting atomic-openshift-master following the RPM update, and they recur every 3-5 seconds on a small cluster.

Step 4 here:   https://docs.openshift.com/container-platform/3.4/install_config/upgrading/manual_upgrades.html
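
A rough way to quantify the rate from the attached logs (the /var/log/messages path is an assumption for this setup; adjust for your syslog configuration):

# Count the repeated list failures in the master's message log:
grep -c 'Failed to list \*apps.StatefulSet' /var/log/messages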

Comment 2 Mike Fiedler 2017-03-01 23:09:36 UTC
Created attachment 1258932 [details]
Message logs showing before/after for restart after update to 3.5

Comment 3 Jordan Liggitt 2017-03-02 06:15:43 UTC
Did you reconcile the cluster roles with a 3.5 oadm client?

https://docs.openshift.com/container-platform/3.4/install_config/upgrading/manual_upgrades.html#updating-policy-definitions

Can you include the output of the following?

oadm version
oadm policy reconcile-cluster-roles -o name
oadm policy reconcile-cluster-roles -o name --confirm
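
(For completeness, the manual upgrade docs also describe an additive-only reconcile, which only adds missing permissions rather than resetting the roles:)

oadm policy reconcile-cluster-roles --additive-only=true --confirm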

Comment 4 Jordan Liggitt 2017-03-02 06:20:45 UTC
Oh, I missed that this was the service account.

Fix in https://github.com/openshift/origin/pull/13187

Comment 5 Michal Fojtik 2017-03-02 14:47:26 UTC
And 1.5 PR is here: https://github.com/openshift/origin/pull/13199

Comment 6 Troy Dawson 2017-03-03 17:56:35 UTC
This has been merged into OCP and is in OCP v3.5.0.38 or newer.

Comment 8 Chuan Yu 2017-03-06 01:12:37 UTC
Verified in a clean-install environment with the latest OCP 3.5 puddle; no such error is logged.
# openshift version
openshift v3.5.0.39
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Verb/resource added to the system:disruption-controller ClusterRole:
{
    "apiGroups": [
        "apps"
    ],
    "attributeRestrictions": null,
    "resources": [
        "statefulsets"
    ],
    "verbs": [
        "list",
        "watch"
    ]
},
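
One way to confirm the rule is present after reconciling (assuming cluster-admin access):

# Dump the reconciled role and look for the apps/statefulsets rule above:
oc get clusterrole/system:disruption-controller -o json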
Setting the bug status to VERIFIED now; I will continue to check it in an upgrade environment.

Comment 9 Vikas Laad 2017-03-06 16:31:38 UTC
I also checked this with a scaled cluster upgrade; the problem is gone.

Comment 11 errata-xmlrpc 2017-04-12 19:14:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0884

