Bug 1667030 - api-resources should work when hit abnormal apiserver groups
Summary: api-resources should work when hit abnormal apiserver groups
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.1.0
Assignee: Maciej Szulik
QA Contact: Xingxing Xia
Depends On:
Reported: 2019-01-17 09:44 UTC by shahan
Modified: 2019-06-04 10:42 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Discovery errors caused the api-resources command to stop working. Consequence: When one of the apiservers was unreachable, oc api-resources would fail. Fix: Discovery errors are now aggregated and reported, but execution continues. Result: oc api-resources is resilient to discovery errors.
Clone Of:
Last Closed: 2019-06-04 10:42:02 UTC
Target Upstream Version:

Attachments: none

Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:42:09 UTC)

Internal Links: 1656295

Description shahan 2019-01-17 09:44:17 UTC
Description of problem:
When one apiserver group is not working, oc api-resources should still print the resources from the healthy groups instead of aborting.

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.nightly-2019-01-15-184339   True        False         20s       Cluster version is 4.0.0-0.nightly-2019-01-15-184339

oc version: oc v4.0.0-0.123.0

How reproducible:

Steps to Reproduce:
1. $ oc scale --replicas=0 deploy/prometheus-adapter -n openshift-monitoring (to reproduce the issue by making the metrics apiserver unavailable)
2. $ oc api-resources
3. $ oc api-resources --api-group=extensions

Actual results:
Steps 2-3 fail with: error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

Expected results:
Steps 2-3 should still print the API resources from the healthy groups.

Additional info:

Comment 1 Xingxing Xia 2019-01-17 10:26:48 UTC
shahan, I just remembered that the 3.11 bug 1656295 was reported for this. But that one has an additional `oc adm migrate storage` symptom.

Comment 2 Juan Vallejo 2019-01-17 19:01:24 UTC
Origin PR: https://github.com/openshift/origin/pull/21816

Comment 3 Juan Vallejo 2019-02-04 15:33:08 UTC
Origin PR has been merged. Moving to ON_QA.

Comment 4 shahan 2019-02-12 09:19:59 UTC
[hasha@fedora_pc ~]$ oc scale --replicas=0 deploy/prometheus-adapter -n openshift-monitoring
deployment.extensions/prometheus-adapter scaled
[hasha@fedora_pc ~]$ oc api-resources
NAME                                  SHORTNAMES       APIGROUP                                NAMESPACED   KIND
groups                                                 user.openshift.io                       false        Group
identities                                             user.openshift.io                       false        Identity
useridentitymappings                                   user.openshift.io                       false        UserIdentityMapping
users                                                  user.openshift.io                       false        User
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request, packages.apps.redhat.com/v1alpha1: the server is currently unable to handle the request

cluster version: 4.0.0-0.alpha-2019-02-11-201342
Verified this bug.

Comment 7 Peter Larsen 2019-05-17 19:46:25 UTC
Can this be backported to 3.11? Same problem.

Comment 8 Maciej Szulik 2019-05-20 12:00:56 UTC
You can easily use the oc 4.1 binary for that. I'm not sure this is critical enough to backport.

Comment 10 errata-xmlrpc 2019-06-04 10:42:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

