Description of problem:
The web console is inaccessible when there are no worker nodes.

How reproducible:
Install a cluster with 0 workers, or scale the workers down to 0, on Azure.

Steps to Reproduce:
1. Install a cluster following https://docs.openshift.com/container-platform/4.3/installing/installing_azure/installing-azure-default.html
2. Scale down the workers with: oc scale --replicas=0 machineset NAME_MACHINESET -n openshift-machine-api
3. Try to access the web console

Actual results:
The web console is inaccessible.

Expected results:
The web console continues to work.

Additional info:
I noticed that the LB on Azure doesn't have any backends when I scale down to 0 or install with 0 workers.
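For reference, a minimal reproduction sketch with standard oc commands (machineset names are cluster-specific, so list them first; NAME_MACHINESET above is a placeholder):

# List the worker machinesets for this cluster
oc get machinesets -n openshift-machine-api

# Scale each worker machineset down to zero
oc scale --replicas=0 machineset <machineset-name> -n openshift-machine-api

# Once the worker nodes are gone, confirm only masters remain
oc get nodes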
Investigating this. Both the console and its operator are scheduled to master nodes, so they should not disappear if you eliminate your workers.
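A quick way to confirm where the console pods actually landed (namespace and node-role label are the OpenShift defaults; adjust if your cluster differs):

# Show which nodes the console pods are running on
oc get pods -n openshift-console -o wide

# Confirm the remaining nodes are all masters
oc get nodes -l node-role.kubernetes.io/master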
Console is no longer available at its route on:
- Azure
- AWS

The initial bug report was for Azure, but I've confirmed the same on AWS.

Console route will not display the console:
- https://console-openshift-console.apps.bpetersen.devcluster.openshift.com/

Instead providing:

  Secure Connection Failed
  An error occurred during a connection to console-openshift-console.apps.bpetersen.devcluster.openshift.com.
  PR_END_OF_FILE_ERROR

However, all of the console pods are running:

oc get pods -n openshift-console
NAME                        READY   STATUS    RESTARTS   AGE
console-784db4f5dd-hvvb7    1/1     Running   0          28m
console-784db4f5dd-mll9w    1/1     Running   1          32m
downloads-c9c6f59f6-7hw6j   1/1     Running   0          38m
downloads-c9c6f59f6-qrwrp   1/1     Running   0          38m

Logs are as expected:

2020-03-27T20:46:12Z cmd/main: Binding to [::]:8443...
2020-03-27T20:46:12Z cmd/main: using TLS

Note also that oauth is not responsive either:
- https://oauth-openshift.apps.bpetersen.devcluster.openshift.com/oauth/token

Instead providing:

  Secure Connection Failed
  An error occurred during a connection to oauth-openshift.apps.bpetersen.devcluster.openshift.com.
  PR_END_OF_FILE_ERROR
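For anyone else triaging, checks along these lines (standard oc/curl commands; the route host is specific to this cluster and will differ elsewhere) help separate "console pod broken" from "ingress path broken":

# Route and service endpoints exist and point at the running console pods
oc get route console -n openshift-console
oc get endpoints console -n openshift-console

# The TLS handshake against the route terminates at the router, not the console pod;
# PR_END_OF_FILE_ERROR / an immediate connection close here points at the ingress/LB layer
curl -kvI https://console-openshift-console.apps.bpetersen.devcluster.openshift.com/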
There is a set of operators reporting problems:

oc get clusteroperator
NAME                                       VERSION                        AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.0-0.ci-2020-03-27-170200   True        False         True       30m
cloud-credential                           4.5.0-0.ci-2020-03-27-170200   True        False         False      53m
cluster-autoscaler                         4.5.0-0.ci-2020-03-27-170200   True        False         False      41m
console                                    4.5.0-0.ci-2020-03-27-170200   True        False         True       31m
csi-snapshot-controller                    4.5.0-0.ci-2020-03-27-170200   False       True          False      10m
dns                                        4.5.0-0.ci-2020-03-27-170200   True        False         False      46m
etcd                                       4.5.0-0.ci-2020-03-27-170200   True        False         False      45m
image-registry                             4.5.0-0.ci-2020-03-27-170200   False       True          False      10m
ingress                                    4.5.0-0.ci-2020-03-27-170200   False       True          True       13m
insights                                   4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
kube-apiserver                             4.5.0-0.ci-2020-03-27-170200   True        False         False      45m
kube-controller-manager                    4.5.0-0.ci-2020-03-27-170200   True        False         False      46m
kube-scheduler                             4.5.0-0.ci-2020-03-27-170200   True        False         False      44m
kube-storage-version-migrator              4.5.0-0.ci-2020-03-27-170200   False       False         False      10m
machine-api                                4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
machine-config                             4.5.0-0.ci-2020-03-27-170200   True        False         False      45m
marketplace                                4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
monitoring                                 4.5.0-0.ci-2020-03-27-170200   False       True          True       4m7s
network                                    4.5.0-0.ci-2020-03-27-170200   True        False         False      46m
node-tuning                                4.5.0-0.ci-2020-03-27-170200   True        False         False      48m
openshift-apiserver                        4.5.0-0.ci-2020-03-27-170200   True        False         False      43m
openshift-controller-manager               4.5.0-0.ci-2020-03-27-170200   True        False         False      43m
openshift-samples                          4.5.0-0.ci-2020-03-27-170200   True        False         False      41m
operator-lifecycle-manager                 4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
operator-lifecycle-manager-catalog         4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
operator-lifecycle-manager-packageserver   4.5.0-0.ci-2020-03-27-170200   True        False         False      43m
service-ca                                 4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
service-catalog-apiserver                  4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
service-catalog-controller-manager         4.5.0-0.ci-2020-03-27-170200   True        False         False      48m
storage                                    4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
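A quick way to narrow that output down to just the unhealthy operators (a simple grep over the default column layout, so it assumes that layout hasn't changed):

# Keep only rows that are not Available=True / Progressing=False / Degraded=False
oc get clusteroperators | grep -v 'True *False *False'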
Most notable is ingress:

oc get clusteroperator ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2020-03-27T20:35:25Z"
  generation: 1
  name: ingress
  resourceVersion: "27303"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/ingress
  uid: 3d4bc73e-8f71-4666-b653-923e0c486d94
spec: {}
status:
  conditions:
  - lastTransitionTime: "2020-03-27T21:05:42Z"
    message: 'Some ingresscontrollers are degraded: default'
    reason: IngressControllersDegraded
    status: "True"
    type: Degraded
  - lastTransitionTime: "2020-03-27T21:05:12Z"
    message: Not all ingress controllers are available.
    reason: Reconciling
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-03-27T21:05:12Z"
    message: Not all ingress controllers are available.
    reason: IngressUnavailable
    status: "False"
    type: Available
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-ingress-operator
    resource: namespaces
  - group: operator.openshift.io
    name: ""
    namespace: openshift-ingress-operator
    resource: IngressController
  - group: ingress.operator.openshift.io
    name: ""
    namespace: openshift-ingress-operator
    resource: DNSRecord
  - group: ""
    name: openshift-ingress
    resource: namespaces
  versions:
  - name: operator
    version: 4.5.0-0.ci-2020-03-27-170200
  - name: ingress-controller
    version: registry.svc.ci.openshift.org/ocp/4.5-2020-03-27-170200@sha256:6e67a6a2d067e655494967ae296c739466806b42731e93caec8a0374f6f62161
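Since the clusteroperator only reports that the "default" ingresscontroller is degraded, the underlying condition messages live on the IngressController resource itself. Something like the following shows them (resource, deployment, and container names assumed to be the stock ones):

# Conditions on the default ingresscontroller explain why it is degraded
oc get ingresscontroller default -n openshift-ingress-operator -o yaml

# Operator logs for the reconcile errors
oc logs -n openshift-ingress-operator deployment/ingress-operator -c ingress-operator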
There are no pods running in the openshift-ingress project; the router pods are stuck in Pending:

oc get pods -n openshift-ingress
NAME                             READY   STATUS    RESTARTS   AGE
router-default-ddbbf777b-8mmbj   0/1     Pending   0          14m
router-default-ddbbf777b-xxzgv   0/1     Pending   0          12m
Passing this to networking.
Hi,

Could you attach the events / describe output of those pods? I am assigning this to Routing, as it is not under the "Networking" scope.

-Alex
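For completeness, that information can be gathered along these lines (pod names are the ones from the earlier output and will differ per cluster):

# Scheduling details and events for the Pending router pods
oc describe pod router-default-ddbbf777b-8mmbj -n openshift-ingress
oc get events -n openshift-ingress --sort-by=.lastTimestamp

# With zero workers, the expected finding is a FailedScheduling event, since the
# default router deployment's node selector only matches worker nodes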
Ingress doesn't work on Azure in compact cluster topologies (https://bugzilla.redhat.com/show_bug.cgi?id=1794839). You can either close this one as a duplicate, or keep it open under Console but blocked by 1794839, if you like.
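For anyone hitting this before 1794839 is resolved, a possible workaround sketch (not verified here; it assumes the routers are Pending only because the default node selector targets worker nodes, and that the masters can otherwise run them) is to point the default IngressController at the masters:

# oc edit ingresscontroller default -n openshift-ingress-operator
# then, under spec, add node placement so the router pods land on the masters
# and tolerate the master taint:
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/master: ""
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule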
*** This bug has been marked as a duplicate of bug 1794839 ***