Bug 1690101 - openshift-apiserver constantly reporting: "couldn't get resource list for [X]: Unauthorized"
Summary: openshift-apiserver constantly reporting: "couldn't get resource list for [X]...
Keywords:
Status: CLOSED DUPLICATE of bug 1688820
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Master
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Michal Fojtik
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-18 19:18 UTC by Justin Pierce
Modified: 2019-03-19 03:10 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-19 03:10:12 UTC
Target Upstream Version:
Embargoed:



Description Justin Pierce 2019-03-18 19:18:13 UTC
Description of problem:
The OpenShift API is working only sporadically, 48 hours after cluster installation and configuration.

The openshift-apiserver pod logs are filled with messages like:
E0318 18:31:41.332139       1 memcache.go:140] couldn't get resource list for build.openshift.io/v1: Unauthorized
E0318 18:31:41.345938       1 memcache.go:140] couldn't get resource list for quota.openshift.io/v1: Unauthorized
E0318 18:31:41.356445       1 memcache.go:140] couldn't get resource list for user.openshift.io/v1: Unauthorized
E0318 18:31:51.458213       1 memcache.go:140] couldn't get resource list for apps.openshift.io/v1: Unauthorized
E0318 18:31:51.463793       1 memcache.go:140] couldn't get resource list for build.openshift.io/v1: Unauthorized
E0318 18:31:51.472429       1 memcache.go:140] couldn't get resource list for project.openshift.io/v1: Unauthorized
E0318 18:31:51.478258       1 memcache.go:140] couldn't get resource list for security.openshift.io/v1: Unauthorized
E0318 18:32:01.554762       1 memcache.go:140] couldn't get resource list for oauth.openshift.io/v1: Unauthorized
E0318 18:32:01.557236       1 memcache.go:140] couldn't get resource list for project.openshift.io/v1: Unauthorized
E0318 18:32:01.559807       1 memcache.go:140] couldn't get resource list for quota.openshift.io/v1: Unauthorized
...
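
For reference, a loop like the following can be used to pull the same errors from the openshift-apiserver pod logs (a sketch; the --tail value is arbitrary):

# Sketch: collect the Unauthorized discovery errors from every openshift-apiserver pod
for pod in $(oc get pods -n openshift-apiserver -o name); do
  echo "== $pod =="
  oc logs -n openshift-apiserver "$pod" --tail=500 | grep "Unauthorized"
done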

oc get clusteroperators:


[ec2-user us-east-2 ~]$ oc get clusteroperators
NAME                                  VERSION                           AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                        4.0.0-0.alpha-2019-03-16-161625   True        False         False     5s
cluster-autoscaler                    4.0.0-0.alpha-2019-03-16-161625   True        False         False     9h
console                               4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
dns                                   4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
image-registry                        4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
ingress                               4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
kube-apiserver                        4.0.0-0.alpha-2019-03-16-161625   True        False         False     9h
kube-controller-manager               4.0.0-0.alpha-2019-03-16-161625   True        False         False     9h
kube-scheduler                        4.0.0-0.alpha-2019-03-16-161625   True        False         True      2d1h
machine-api                           4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
machine-config                        4.0.0-0.alpha-2019-03-16-161625   True        False         False     17h
marketplace-operator                  4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
monitoring                            4.0.0-0.alpha-2019-03-16-161625   False       False         True      11m
network                               4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
node-tuning                           4.0.0-0.alpha-2019-03-16-161625   True        False         False     17h
openshift-apiserver                   4.0.0-0.alpha-2019-03-16-161625   False       False         False     3m3s
openshift-cloud-credential-operator   4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
openshift-controller-manager          4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
openshift-samples                     4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
operator-lifecycle-manager            4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
service-ca                                                              True        False         False     9h
service-catalog-apiserver             4.0.0-0.alpha-2019-03-16-161625   True        False         False     9h
service-catalog-controller-manager    4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
storage                               4.0.0-0.alpha-2019-03-16-161625   True        False         False     2d1h
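
The openshift-apiserver and monitoring operators report Available=False above; their status conditions can be inspected with something like the following (a sketch, not output captured from this cluster):

# Sketch: dump the status conditions of the unavailable operators
oc describe clusteroperator openshift-apiserver
oc describe clusteroperator monitoring
oc get clusteroperator openshift-apiserver -o jsonpath='{.status.conditions}'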

The apiserver pods have been in the Running state since the initial configuration:
[ec2-user us-east-2 ~]$ oc get pods -n openshift-apiserver
NAME              READY   STATUS    RESTARTS   AGE
apiserver-f9cx9   1/1     Running   0          2d1h
apiserver-wh49t   1/1     Running   0          2d1h
apiserver-wx778   1/1     Running   0          2d1h

[ec2-user us-east-2 ~]$ oc get pods -n openshift-apiserver-operator
NAME                                           READY   STATUS    RESTARTS   AGE
openshift-apiserver-operator-d88456bbd-dczmk   1/1     Running   3          2d1h
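
The operator pod above shows 3 restarts; the previous container's log could be checked with something like the following (pod name taken from the listing above):

# Sketch: inspect the log of the operator container that restarted
oc logs -n openshift-apiserver-operator openshift-apiserver-operator-d88456bbd-dczmk --previous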



Version-Release number of selected component (if applicable):
4.0.0-0.alpha-2019-03-16-161625

How reproducible:
Unknown

Actual results:
- Resources served by openshift-apiserver are returned only sporadically. For example, as system:admin:
[ec2-user us-east-2 ~]$ oc get users
No resources found.
[ec2-user us-east-2 ~]$ oc get users
No resources found.
[ec2-user us-east-2 ~]$ oc get users
No resources found.
[ec2-user us-east-2 ~]$ oc get users
error: You must be logged in to the server (Unauthorized)
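
For reference, the intermittent Unauthorized responses can be quantified with a small loop like the following (a sketch; the request count of 20 is arbitrary and no failure rate was measured here):

# Sketch: count how many of 20 consecutive requests fail
fail=0
for i in $(seq 1 20); do
  oc get users >/dev/null 2>&1 || fail=$((fail+1))
done
echo "$fail of 20 'oc get users' requests failed"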

Additional info:
- Must-gather cannot run successfully against this cluster. 
- pod logs and kubelet journals from masters: http://file.rdu.redhat.com/~jupierce/share/apiserver-outage.tgz

Comment 1 Xingxing Xia 2019-03-19 03:10:12 UTC
Same as bug 1688820 (which, again, duplicates bug 1688147 / bug 1688503)

*** This bug has been marked as a duplicate of bug 1688820 ***

