Bug 1818023 - Management Console stops working when workers are set to 0
Summary: Management Console stops working when workers are set to 0
Keywords:
Status: CLOSED DUPLICATE of bug 1794839
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Miciah Dashiel Butler Masters
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On: 1794839
Blocks:
 
Reported: 2020-03-27 13:26 UTC by Rafael Sales
Modified: 2023-09-15 01:29 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-08 20:03:02 UTC
Target Upstream Version:
Embargoed:



Description Rafael Sales 2020-03-27 13:26:27 UTC
Description of problem:
The management console is inaccessible when the cluster has no worker nodes.

How reproducible:
Install a cluster with 0 workers, or scale the workers down to 0, on Azure.

Steps to Reproduce:
1. Install a cluster on Azure following https://docs.openshift.com/container-platform/4.3/installing/installing_azure/installing-azure-default.html
2. Scale the workers down to 0: oc scale --replicas=0 machineset NAME_MACHINESET -n openshift-machine-api (see the sketch after these steps)
3. Try to access the web console
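
For reference, a minimal sketch of step 2, assuming an Azure IPI cluster (the machineset name is cluster-specific; NAME_MACHINESET is a placeholder):

  # List the machinesets to find the worker machineset name
  oc get machineset -n openshift-machine-api
  # Scale the chosen worker machineset down to 0 replicas
  oc scale --replicas=0 machineset NAME_MACHINESET -n openshift-machine-api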

Actual results:
The web console is inaccessible. 

Expected results:
The web console continues to work.

Additional info:
I noticed that the load balancer on Azure doesn't have any backends when I scale down to 0 or install with 0 workers.
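
(A minimal sketch for checking the LB backends, assuming the Azure CLI; MY_RESOURCE_GROUP and MY_LB are hypothetical placeholders for the cluster's resource group and load balancer:)

  # List the load balancers in the cluster's resource group
  az network lb list -g MY_RESOURCE_GROUP -o table
  # Inspect the backend address pools; with 0 workers these come back empty
  az network lb address-pool list -g MY_RESOURCE_GROUP --lb-name MY_LB -o table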

Comment 1 bpeterse 2020-03-27 20:16:00 UTC
Investigating this.  Both the console and its operator are scheduled to master nodes, so they should not disappear if you eliminate your workers.
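
(For reference, a quick sketch to confirm that placement; the NODE column shows where each console pod is scheduled:)

  # Show which nodes the console pods landed on
  oc get pods -n openshift-console -o wide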

Comment 2 bpeterse 2020-03-27 21:18:15 UTC
Console is no longer available at its route on:
- Azure
- AWS

The initial bug report was for Azure, but I've confirmed the same on AWS.  

Console route will not display the console:
- https://console-openshift-console.apps.bpetersen.devcluster.openshift.com/

Instead providing:
  Secure Connection Failed
  An error occurred during a connection to console-openshift-console.apps.bpetersen.devcluster.openshift.com. PR_END_OF_FILE_ERROR 

However, all of the console pods are running:
  oc get pods -n openshift-console
  NAME                        READY   STATUS    RESTARTS   AGE
  console-784db4f5dd-hvvb7    1/1     Running   0          28m
  console-784db4f5dd-mll9w    1/1     Running   1          32m
  downloads-c9c6f59f6-7hw6j   1/1     Running   0          38m
  downloads-c9c6f59f6-qrwrp   1/1     Running   0          38m

Logs are as expected:
  2020-03-27T20:46:12Z cmd/main: Binding to [::]:8443...
  2020-03-27T20:46:12Z cmd/main: using TLS
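
(A sketch of how such logs can be pulled, assuming the deployment is named console as the pod names above suggest:)

  # Tail the console logs via the deployment
  oc logs -n openshift-console deploy/console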


Note also that oauth is not responsive either:
- https://oauth-openshift.apps.bpetersen.devcluster.openshift.com/oauth/token

Instead providing:
  Secure Connection Failed
  An error occurred during a connection to oauth-openshift.apps.bpetersen.devcluster.openshift.com. PR_END_OF_FILE_ERROR
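
(A sketch for probing both routes outside the browser; with no router endpoints behind the LB, an empty reply or connection reset from curl would be the analogue of Firefox's PR_END_OF_FILE_ERROR:)

  # Probe the console and oauth routes, skipping certificate validation
  curl -kv https://console-openshift-console.apps.bpetersen.devcluster.openshift.com/
  curl -kv https://oauth-openshift.apps.bpetersen.devcluster.openshift.com/oauth/token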

Comment 3 bpeterse 2020-03-27 21:18:40 UTC
There is a set of operators reporting problems:

oc get clusteroperator
NAME                                       VERSION                        AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.0-0.ci-2020-03-27-170200   True        False         True       30m
cloud-credential                           4.5.0-0.ci-2020-03-27-170200   True        False         False      53m
cluster-autoscaler                         4.5.0-0.ci-2020-03-27-170200   True        False         False      41m
console                                    4.5.0-0.ci-2020-03-27-170200   True        False         True       31m
csi-snapshot-controller                    4.5.0-0.ci-2020-03-27-170200   False       True          False      10m
dns                                        4.5.0-0.ci-2020-03-27-170200   True        False         False      46m
etcd                                       4.5.0-0.ci-2020-03-27-170200   True        False         False      45m
image-registry                             4.5.0-0.ci-2020-03-27-170200   False       True          False      10m
ingress                                    4.5.0-0.ci-2020-03-27-170200   False       True          True       13m
insights                                   4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
kube-apiserver                             4.5.0-0.ci-2020-03-27-170200   True        False         False      45m
kube-controller-manager                    4.5.0-0.ci-2020-03-27-170200   True        False         False      46m
kube-scheduler                             4.5.0-0.ci-2020-03-27-170200   True        False         False      44m
kube-storage-version-migrator              4.5.0-0.ci-2020-03-27-170200   False       False         False      10m
machine-api                                4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
machine-config                             4.5.0-0.ci-2020-03-27-170200   True        False         False      45m
marketplace                                4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
monitoring                                 4.5.0-0.ci-2020-03-27-170200   False       True          True       4m7s
network                                    4.5.0-0.ci-2020-03-27-170200   True        False         False      46m
node-tuning                                4.5.0-0.ci-2020-03-27-170200   True        False         False      48m
openshift-apiserver                        4.5.0-0.ci-2020-03-27-170200   True        False         False      43m
openshift-controller-manager               4.5.0-0.ci-2020-03-27-170200   True        False         False      43m
openshift-samples                          4.5.0-0.ci-2020-03-27-170200   True        False         False      41m
operator-lifecycle-manager                 4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
operator-lifecycle-manager-catalog         4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
operator-lifecycle-manager-packageserver   4.5.0-0.ci-2020-03-27-170200   True        False         False      43m
service-ca                                 4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
service-catalog-apiserver                  4.5.0-0.ci-2020-03-27-170200   True        False         False      47m
service-catalog-controller-manager         4.5.0-0.ci-2020-03-27-170200   True        False         False      48m
storage                                    4.5.0-0.ci-2020-03-27-170200   True        False         False      42m
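
(A sketch for filtering just the unhealthy entries out of that table; AVAILABLE is column 3 and DEGRADED is column 5:)

  # Print operators that are unavailable or degraded
  oc get clusteroperators --no-headers | awk '$3 != "True" || $5 == "True"'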

Comment 4 bpeterse 2020-03-27 21:19:06 UTC
Most notable is ingress:

oc get clusteroperator ingress -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2020-03-27T20:35:25Z"
  generation: 1
  name: ingress
  resourceVersion: "27303"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/ingress
  uid: 3d4bc73e-8f71-4666-b653-923e0c486d94
spec: {}
status:
  conditions:
  - lastTransitionTime: "2020-03-27T21:05:42Z"
    message: 'Some ingresscontrollers are degraded: default'
    reason: IngressControllersDegraded
    status: "True"
    type: Degraded
  - lastTransitionTime: "2020-03-27T21:05:12Z"
    message: Not all ingress controllers are available.
    reason: Reconciling
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-03-27T21:05:12Z"
    message: Not all ingress controllers are available.
    reason: IngressUnavailable
    status: "False"
    type: Available
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-ingress-operator
    resource: namespaces
  - group: operator.openshift.io
    name: ""
    namespace: openshift-ingress-operator
    resource: IngressController
  - group: ingress.operator.openshift.io
    name: ""
    namespace: openshift-ingress-operator
    resource: DNSRecord
  - group: ""
    name: openshift-ingress
    resource: namespaces
  versions:
  - name: operator
    version: 4.5.0-0.ci-2020-03-27-170200
  - name: ingress-controller
    version: registry.svc.ci.openshift.org/ocp/4.5-2020-03-27-170200@sha256:6e67a6a2d067e655494967ae296c739466806b42731e93caec8a0374f6f62161
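
(For the next level of detail, a sketch for inspecting the ingresscontroller named in the Degraded message above:)

  # The status conditions here explain why the default ingresscontroller is degraded
  oc get ingresscontroller default -n openshift-ingress-operator -o yaml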

Comment 5 bpeterse 2020-03-27 21:20:35 UTC
There are no pods running in the openshift-ingress project:

oc get pods -n openshift-ingress
NAME                             READY   STATUS    RESTARTS   AGE
router-default-ddbbf777b-8mmbj   0/1     Pending   0          14m
router-default-ddbbf777b-xxzgv   0/1     Pending   0          12m
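
(A sketch of how the scheduling failure behind these Pending pods can be surfaced:)

  # The Events section should show why the router pods cannot be scheduled
  oc describe pods -n openshift-ingress
  # Recent namespace events, newest last
  oc get events -n openshift-ingress --sort-by=.lastTimestamp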

Comment 6 bpeterse 2020-03-27 21:23:02 UTC
Passing this to networking.

Comment 7 Alexander Constantinescu 2020-03-30 14:42:32 UTC
Hi 

Could you attach the events / describe output of those pods? 

I am assigning this to Routing, as it is not under the "Networking" scope.

-Alex

Comment 8 Dan Mace 2020-03-30 14:54:09 UTC
Ingress doesn't work on Azure in compact cluster topologies (https://bugzilla.redhat.com/show_bug.cgi?id=1794839). You can either close this one as a duplicate or keep it open under Console but blocked by 1794839 if you like.
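
(For anyone reproducing this, a sketch of how to check where the routers are meant to land. My assumption, not a confirmed detail of 1794839, is that the default router deployment carries a worker node selector, which with 0 workers would leave its pods Pending:)

  # Show the node selector on the default router deployment
  oc get deployment router-default -n openshift-ingress -o jsonpath='{.spec.template.spec.nodeSelector}'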

Comment 9 Ben Bennett 2020-05-08 20:03:02 UTC

*** This bug has been marked as a duplicate of bug 1794839 ***

Comment 10 Red Hat Bugzilla 2023-09-15 01:29:19 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days

