Bug 1653228
Summary: | [Next_gen_installer] Got 'error: unsupported protocol scheme ""' when oc login | ||
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | weiwei jiang <wjiang> |
Component: | apiserver-auth | Assignee: | Standa Laznicka <slaznick> |
Status: | CLOSED ERRATA | QA Contact: | Chuan Yu <chuyu> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 4.1.0 | CC: | aos-bugs, evb, slaznick, wsun, xxia |
Target Milestone: | --- | Keywords: | TestBlocker |
Target Release: | 4.1.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text: |
Cause:
The OAuth .well-known endpoint was advertising wrong values because of a bad default configuration and the general incompleteness of the feature in 4.0.
Consequence:
The code tried to parse an empty string as a URL, causing the error message `error: unsupported protocol scheme ""`.
Fix:
Fixed both by implementing an operator that sets the masterURL properly in the OAuth config and by removing the broken code path.
Result:
The default OAuth configuration in 4.0 no longer causes failures.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2019-06-04 10:41:04 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
weiwei jiang
2018-11-26 10:22:58 UTC
I don't think your expected result is correct. There will always be an error if you try to log in without having an identity provider set for your cluster. The point here is that it should probably be a different error informing you about what's actually wrong (in this case, we're missing URLs in the default OAuth config), and we should definitely fix that. Right now there is no default identity provider configuration in 4.0 clusters.

If you would like a temporary workaround to be able to log in as a random user, you can use the patch commands from https://github.com/openshift/installer/pull/758/files. Importantly, be aware that these commands move your cluster into an unsupported state, so this is by no means a permanent solution.

Note that there is an ongoing effort to improve the situation around bootstrapped OAuth:

- https://github.com/openshift/origin/pull/21580
- https://github.com/openshift/cluster-kube-apiserver-operator/pull/152

Simply running `oc login` should no longer throw errors as of https://github.com/openshift/origin/pull/21621. The current workflow is to log in as the user kubeadmin with the password from auth/kubeadmin-password in the installation directory and then set up your identity providers (which is still under development).

Checked and this has been fixed. Thanks.

```
$ bin/openshift-install version
bin/openshift-install v0.5.0-master-36-gb4f5ceb6bfde8d3dc0e29f708e0494488ea37ee0
Terraform v0.11.8

Your version of Terraform is out of date! The latest version
is 0.11.10. You can update by downloading from www.terraform.io/downloads.html

$ oc version
oc v4.0.0-0.66.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ocp-api.tt.testing:6443
kubernetes v1.11.0+99b66db

$ oc login --server=https://ocp-api.tt.testing:6443 --config=test --insecure-skip-tls-verify=true
Authentication required for https://ocp-api.tt.testing:6443 (openshift)
Username: kubeadmin
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    openshift
    openshift-apiserver
    openshift-cluster-api
    openshift-cluster-dns
    openshift-cluster-dns-operator
    openshift-cluster-kube-apiserver-operator
    openshift-cluster-kube-controller-manager-operator
    openshift-cluster-kube-scheduler-operator
    openshift-cluster-machine-approver
    openshift-cluster-network-operator
    openshift-cluster-node-tuning-operator
    openshift-cluster-openshift-apiserver-operator
    openshift-cluster-openshift-controller-manager-operator
    openshift-cluster-samples-operator
    openshift-cluster-version
    openshift-config
    openshift-config-managed
    openshift-console
    openshift-controller-manager
    openshift-core-operators
    openshift-csi-operator
    openshift-image-registry
    openshift-infra
    openshift-ingress
    openshift-ingress-operator
    openshift-kube-apiserver
    openshift-kube-controller-manager
    openshift-kube-scheduler
    openshift-machine-config-operator
    openshift-monitoring
    openshift-node
    openshift-operator-lifecycle-manager
    openshift-sdn
    openshift-service-cert-signer

Using project "default".
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
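For context on the underlying fix: an OpenShift 4.x API server serves its OAuth discovery metadata at `/.well-known/oauth-authorization-server`, and this bug was the advertised endpoint URLs in that document being empty. On a healthy cluster the document looks roughly like the following sketch (hostnames and the exact field set are illustrative, modeled on the `ocp-api.tt.testing` cluster from the transcript above):

```json
{
  "issuer": "https://oauth.apps.tt.testing",
  "authorization_endpoint": "https://oauth.apps.tt.testing/oauth/authorize",
  "token_endpoint": "https://oauth.apps.tt.testing/oauth/token",
  "response_types_supported": ["code", "token"],
  "grant_types_supported": ["authorization_code", "implicit"],
  "code_challenge_methods_supported": ["plain", "S256"]
}
```

With the operator-managed configuration in place, `authorization_endpoint` and `token_endpoint` are populated, so the client no longer tries to request an empty URL.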