Bug 1972703
Summary: | Subctl fails to join cluster, since it cannot auto-generate a valid cluster id | | |
---|---|---|---|
Product: | Red Hat Advanced Cluster Management for Kubernetes | Reporter: | Noam Manos <nmanos> |
Component: | Submariner | Assignee: | Nir Yechiel <nyechiel> |
Status: | CLOSED ERRATA | QA Contact: | Noam Manos <nmanos> |
Severity: | medium | Docs Contact: | Christopher Dawson <cdawson> |
Priority: | unspecified | | |
Version: | rhacm-2.3 | CC: | skitt, tfreger |
Target Milestone: | --- | Keywords: | Reopened |
Target Release: | rhacm-2.3 | Flags: | ming: rhacm-2.3+ |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2021-08-06 00:52:39 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description (Noam Manos, 2021-06-16 12:56:32 UTC)
Could you attach the kubeconfig used?

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.pkomarov-cluster-a.devcluster.openshift.com:6443
  name: api-pkomarov-cluster-a-devcluster-openshift-com:6443
- cluster:
    certificate-authority-data: ...
    server: https://api.pkomarov-cluster-a.devcluster.openshift.com:6443
  name: pkomarov-cluster-a
contexts:
- context:
    cluster: pkomarov-cluster-a
    namespace: default
    user: admin
  name: pkomarov-cluster-a
current-context: pkomarov-cluster-a
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
- name: master/api-pkomarov-cluster-a-devcluster-openshift-com:6443
  user:
    token: sha256~...
```

Note that the kubeconfig is as in the previous comment, but `oc config view` shows another context, "pkomarov-cluster-a_old":

```console
$ oc config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.pkomarov-cluster-a.devcluster.openshift.com:6443
  name: api-pkomarov-cluster-a-devcluster-openshift-com:6443
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.pkomarov-cluster-a.devcluster.openshift.com:6443
  name: pkomarov-cluster-a
contexts:
- context:
    cluster: api-pkomarov-cluster-a-devcluster-openshift-com:6443
    user: master/api-pkomarov-cluster-a-devcluster-openshift-com:6443
  name: default/api-pkomarov-cluster-a-devcluster-openshift-com:6443/master
- context:
    cluster: pkomarov-cluster-a
    namespace: default
    user: admin
  name: pkomarov-cluster-a
- context:
    cluster: pkomarov-cluster-a
    namespace: default
    user: admin
  name: pkomarov-cluster-a_old
current-context: default/api-pkomarov-cluster-a-devcluster-openshift-com:6443/master
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: master/api-pkomarov-cluster-a-devcluster-openshift-com:6443
  user:
    token: REDACTED
```

What's happening here is that the detected cluster name is "api-pkomarov-cluster-a-devcluster-openshift-com:6443", which isn't a valid cluster ID (it can't contain colons), so subctl asks the user for a valid cluster ID.

This is a valid bug, since the cluster context name is a valid name: "api-pkomarov-cluster-a-devcluster-openshift-com:6443". It was generated by `oc login` with a new user, "master", using an HTPasswd identity provider. Please fix the `subctl join` command to handle such context names.

(In reply to Noam Manos from comment #5)
> This is a valid bug, since the cluster context name is a valid name:
> "api-pkomarov-cluster-a-devcluster-openshift-com:6443"

It's a valid context name, but it's not a valid cluster ID. Cluster IDs must be valid DNS-1123 names, containing only lowercase alphanumerics, '.' or '-' (and the first and last characters must be alphanumerics). We use the context name by default because it's guaranteed to be locally unique; however, if we start converting context names to avoid problematic characters, we will lose that guarantee.

https://github.com/submariner-io/submariner-operator/pull/1424 will at least show a more explicit error message:

```console
$ bin/subctl join --clusterid=test:123 output/broker-info.subm
* output/broker-info.subm says broker is at: https://172.18.0.5:6443
Error: cluster IDs must be valid DNS-1123 names, with only lowercase alphanumerics, '.' or '-' (and the first and last characters must be alphanumerics). test:123 doesn't meet these requirements
? What is your cluster ID?
```

(In reply to Stephen Kitt from comment #6)
> It's a valid context name, but it's not a valid cluster ID.

I did not specify `--clusterid <invalid cluster id>` in the join command; I expected it to be auto-generated (according to https://submariner.io/operations/deployment/subctl/#join).

(In reply to Noam Manos from comment #8)
> I did not specify `--clusterid <invalid cluster id>` in the join command;
> I expected it to be auto-generated (according to
> https://submariner.io/operations/deployment/subctl/#join).

I know you didn't. When a cluster ID is not specified, subctl attempts to auto-generate one based on the context name. If it can't, it asks the user to provide one. The docs have been updated, see https://submariner.io/operations/deployment/subctl/#join-flags-general. Is that sufficient?

The docs look clear now:

> --clusterid <string>
> Cluster ID used to identify the tunnels. Every cluster needs to have a unique cluster ID. If not provided, one will be generated by default based on the cluster name in the kubeconfig file; if the cluster name is not a valid cluster ID, the user will be prompted for one.

Closing issue.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Advanced Cluster Management for Kubernetes version 2.3), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3016
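For reference, the cluster-ID rule discussed in this thread (only lowercase alphanumerics, '.' or '-', with alphanumeric first and last characters) can be sketched as a small Go validator. This is an illustrative sketch, not subctl's actual code: the helper name `isValidClusterID` is ours, and the real check in submariner-operator may also enforce the DNS-1123 length limits omitted here.

```go
package main

import (
	"fmt"
	"regexp"
)

// clusterIDPattern encodes the rule from the error message above:
// only lowercase alphanumerics, '.' or '-', and the first and last
// characters must be alphanumerics. (Full DNS-1123 names also have
// length limits; those are omitted in this sketch.)
var clusterIDPattern = regexp.MustCompile(`^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$`)

// isValidClusterID is a hypothetical helper, not subctl's API.
func isValidClusterID(id string) bool {
	return clusterIDPattern.MatchString(id)
}

func main() {
	// The auto-detected context name from this bug fails because of the colon.
	fmt.Println(isValidClusterID("api-pkomarov-cluster-a-devcluster-openshift-com:6443")) // false
	// The plain cluster name would have been acceptable.
	fmt.Println(isValidClusterID("pkomarov-cluster-a")) // true
	fmt.Println(isValidClusterID("test:123")) // false
}
```

This also illustrates why subctl prompts rather than converting: any conversion that strips or replaces the colon could collide with another context's name, losing the local-uniqueness guarantee the context name provides.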