Bug 1525014 - server doesn't have a resource type "scc"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Master
Version: 3.7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Maciej Szulik
QA Contact: Wang Haoran
URL:
Whiteboard:
Duplicates: 1564539 1596546
Depends On:
Blocks:
 
Reported: 2017-12-12 12:44 UTC by Rune Henriksen
Modified: 2023-03-24 13:55 UTC (History)
24 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The client could not read the full discovery results; it hung on the first aggregated server that was temporarily unavailable. Consequence: The client lacked complete information about all the resources that were available. Fix: Introduce a default timeout for discovery actions. Result: If an aggregated server fails, discovery continues on the other servers, and the user can work with the resources that are available.
Clone Of:
Environment:
Last Closed: 2018-07-30 19:09:00 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Snippet from ansible-playbook console output. (3.48 KB, text/plain)
2017-12-12 12:44 UTC, Rune Henriksen


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3353251 0 None None None 2018-05-30 14:00:07 UTC
Red Hat Product Errata RHBA-2018:1816 0 None None None 2018-07-30 19:09:46 UTC

Internal Links: 1596546

Description Rune Henriksen 2017-12-12 12:44:16 UTC
Created attachment 1366639 [details]
Snippet from ansible-playbook console output.

Description of problem:
When using the advanced installer for OSE 3.7, I get an error stating that "scc" isn't a resource type.

Version-Release number of selected component (if applicable):
output of "oc version"
oc v3.7.9
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible:
Happens every time I run the playbooks. Also happens when manually calling the oc command on the masters.

Steps to Reproduce:
1. Using the "Red Hat OpenShift Container Platform 3.7 RPMs x86_64" repo on all nodes
2. Run advanced installer with inventory file
OR 
1. Using the "Red Hat OpenShift Container Platform 3.7 RPMs x86_64" repo on a master
2. Call "oc get scc"


Actual results:
Playbook returns an error on task "Gather OpenShift Logging Facts"
"There was an exception trying to run the command '/usr/local/bin/oc get scc privileged --user=system:admin/master-int-ocp-skat-dk:8443 --config=/etc/origin/master/admin.kubeconfig -o json' the server doesn't have a resource type \"scc\""

oc run directly on a master returns same error 'the server doesn't have a resource type "scc"'

Expected results:
oc should behave the same regardless of whether it's called as "oc get scc" or "oc get securitycontextconstraints"

Additional info:
After digging around on my master servers I found that I can call "oc get securitycontextconstraints" but not "oc get scc", so the oc binary seems to just not have that abbreviation.

Additionally, when typing "oc get" to preview the list of options securitycontextconstraints isn't on the list, causing a bit more confusion.

I've attached the playbook console output, but it's more or less the same info.
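The symptom above — full name works, alias doesn't — follows from short names like "scc" being expanded on the client against the discovery results; when discovery is incomplete, the alias cannot be resolved. A rough Go sketch of that behaviour follows; resolveShortcut and the resource lists are invented for illustration and are not the real oc code.

```go
package main

import "fmt"

// shortcuts maps client-side short names to full resource names.
var shortcuts = map[string]string{"scc": "securitycontextconstraints"}

// resolveShortcut expands a short name and checks it against the
// resources the client learned about via discovery.
func resolveShortcut(arg string, discovered []string) (string, error) {
	full, ok := shortcuts[arg]
	if !ok {
		full = arg
	}
	for _, r := range discovered {
		if r == full {
			return full, nil
		}
	}
	return "", fmt.Errorf("the server doesn't have a resource type %q", arg)
}

func main() {
	complete := []string{"pods", "securitycontextconstraints"}
	truncated := []string{"pods"} // discovery cut short by a hanging aggregated server
	if full, err := resolveShortcut("scc", complete); err == nil {
		fmt.Println("resolved to", full)
	}
	if _, err := resolveShortcut("scc", truncated); err != nil {
		fmt.Println("error:", err)
	}
}
```

This also explains why "securitycontextconstraints" was missing from the "oc get" completion list: both the alias expansion and the preview list are built from discovery, while a request using the full resource name goes straight to the server.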

Comment 3 Maciej Szulik 2018-01-04 14:39:45 UTC
Can you please double-check and report the exact version you're using? I just tried with:

oc v3.7.9-1+7c71a2d
kubernetes v1.7.6+a08f5eeb62

and that seems to be working as expected. Can you please also ensure you're not running a 3.7 client against a 3.6 cluster, which might be causing this problem?

Comment 4 Rune Henriksen 2018-01-05 20:55:57 UTC
I was using oc locally on one of the masters, so the client and server versions were the same.

I can't replicate the issue anymore as I've upgraded the cluster to 3.7.14 where the scc expander works as intended.

Comment 5 Maciej Szulik 2018-01-08 11:25:16 UTC
Thanks, Rune, for the information. In that case I'm moving this to QA so they can double-check and close as needed.

Comment 12 Ed Seymour 2018-01-17 09:35:57 UTC
Workaround that fixes the issue (bear with me; it does appear to deal with something completely different).

1. delete the service catalog API reference
oc delete apiservices.apiregistration.k8s.io/v1beta1.servicecatalog.k8s.io -n kube-service-catalog

2. re-run the service catalog installer
ansible-playbook -i <your inventory> /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml

(This may fail, and you may need to delete the kube-service-catalog project too; but if the goal is just to get "oc get scc" working, don't worry about the failure.)

3. delete any of the existing Pods in the service catalog namespace
oc delete pods --all -n kube-service-catalog

4. Check oc get scc - it should now work.

Comment 13 Maciej Szulik 2018-01-17 16:15:14 UTC
Ed, am I reading this correctly: does installing the service catalog prevent the scc alias from working? Or does that only happen in combination with the cert error? It might be that the service catalog creates a similar alias, which then confuses oc.

Comment 14 Ed Seymour 2018-01-18 07:07:57 UTC
In the environment I was working on (a fresh 3.7.14 deployment), it was an error in the service catalog deployment that caused the problem. Checking the kube-service-catalog project, I noted that the controller pod was failing with a cert error similar to the one reported above.

I found a bug reporting a similar issue here: https://github.com/openshift/origin/issues/17952, with a suggestion that the API registration was incorrect and should be removed and recreated.

Doing this, I discovered that `oc get scc` now worked (in addition to fixing a few other odd issues, such as some projects not deleting).

To answer Maciej's question: the issue was not due to the presence of the service catalog, but rather to a problem with the deployment of the service catalog. Fixing the service catalog fixed the problem.

Comment 15 Muhammad Aizuddin Zali 2018-01-18 07:17:11 UTC
I'm having the same issue on a fresh OSE 3.7 installation with "openshift_enable_service_catalog=false" in the inventory file. Is the service catalog mandatory to make it work?

Comment 16 Maciej Szulik 2018-01-18 09:54:04 UTC
Thanks Ed for the thorough explanation.

Comment 17 Muhammad Aizuddin Zali 2018-01-19 11:28:34 UTC
(In reply to Muhammad Aizuddin Zali from comment #15)
> Having same issue on fresh OSE 3.7 installation with
> "openshift_enable_service_catalog=false" in inventory file. Does service
> catalog is mandatory to make it works?

Ignore this comment; after installing on a new VM, neither the 'false' nor the 'true' flag reproduces the issue mentioned.

Comment 20 jimmy 2018-04-02 11:42:24 UTC
(In reply to Muhammad Aizuddin Zali from comment #17)
> (In reply to Muhammad Aizuddin Zali from comment #15)
> > Having same issue on fresh OSE 3.7 installation with
> > "openshift_enable_service_catalog=false" in inventory file. Does service
> > catalog is mandatory to make it works?
> 
> Ignore this comment, somehow after installed on new VM both 'false' and
> 'true' flag does not produce the issue mentioned.

In v3.7 the service catalog is enabled by default.

Comment 25 Kenjiro Nakayama 2018-04-10 23:52:38 UTC
*** Bug 1564539 has been marked as a duplicate of this bug. ***

Comment 28 Maciej Szulik 2018-04-17 08:46:50 UTC
After some investigation of the log that Dmitry is pointing to, it looks like the problem is similar to the one described in https://github.com/openshift/origin/issues/17159, I'm currently working on a fix for that.

Comment 30 Maciej Szulik 2018-05-07 13:16:00 UTC
PR in flight https://github.com/openshift/origin/pull/19471

Comment 32 Robert Bost 2018-05-18 20:38:09 UTC
Setting status to POST since there's an upstream PR.

Comment 35 Wang Haoran 2018-05-24 08:54:56 UTC
Verified with:
openshift v3.10.0-0.50.0

Comment 40 Maciej Szulik 2018-07-02 14:06:48 UTC
*** Bug 1596546 has been marked as a duplicate of this bug. ***

Comment 42 errata-xmlrpc 2018-07-30 19:09:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

Comment 45 Matthew Robson 2018-09-13 12:21:40 UTC
If you're hitting the 'error: the server doesn't have a resource type' issue on 3.9.x, I would look at my comment #15 here: https://bugzilla.redhat.com/show_bug.cgi?id=1624493#c15

Even with this fix, a degraded or slow service catalog or a down etcd node can trigger the same issue.

