Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1532684

Summary: 3.9.0-0.16.0 install failure verifying web console - curl of console web service fails with connection refused
Product: OpenShift Container Platform
Component: Management Console
Version: 3.9.0
Target Release: 3.9.0
Hardware: x86_64
OS: Linux
Severity: high
Priority: unspecified
Status: CLOSED NOTABUG
Reporter: Mike Fiedler <mifiedle>
Assignee: Samuel Padgett <spadgett>
QA Contact: Johnny Liu <jialiu>
CC: aos-bugs, ekuric, jokerman, mifiedle, mmccomas, mtaru, sdodson, spadgett, sselvan
Keywords: Reopened
Type: Bug
Last Closed: 2018-05-04 11:32:25 UTC

Attachments:
  Ansible log of install failure

Description Mike Fiedler 2018-01-09 15:19:23 UTC
Created attachment 1379141: Ansible log of install failure

Description of problem:

Installing 3.9.0-0.16.0 with openshift-ansible/playbooks/deploy_cluster.yml fails in the following task after exhausting all 120 retries:

TASK [openshift_web_console : Verify that the web console is running]


fatal: [ec2-54-190-196-89.us-west-2.compute.amazonaws.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "-k", "https://webconsole.openshift-web-console.svc/healthz"], "delta": "0:00:01.013538", "end": "2018-01-09 13:12:56.996536", "msg": "non-zero return code", "rc": 7, "start": "2018-01-09 13:12:55.982998", "stderr": "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to webconsole.openshift-web-console.svc:443; Connection refused", "stderr_lines": ["  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current", "                                 Dload  Upload   Total   Spent    Left  Speed", "", "  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0", "  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to webconsole.openshift-web-console.svc:443; Connection refused"], "stdout": "", "stdout_lines": []}
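The failing check can be reproduced by hand from a master node. A minimal session (curl exit code 7 means the TCP connection was refused; a healthy console is expected to answer on /healthz):

# run on a master; -k skips certificate verification, as the playbook does
curl -k https://webconsole.openshift-web-console.svc/healthz
echo $?   # 7 here; 0 once the console pod is up and ready

# check the backing pod directly
oc get pods -n openshift-web-console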

Version-Release number of the following components:
rpm -q openshift-ansible

openshift-ansible-3.9.0-0.16.0.git.0.9f19afc.el7.noarch

ansible --version           
ansible 2.4.2.0                    
  config file = /etc/ansible/ansible.cfg                              
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']                                 
  ansible python module location = /usr/lib/python2.7/site-packages/ansible                                                                  
  executable location = /usr/bin/ansible                              
  python version = 2.7.5 (default, Dec  8 2017, 16:39:59) [GCC 4.8.5 20150623 (Red Hat 4.8.5-25)] 

How reproducible: Always

Steps to Reproduce:
1.  Install 3.9.0-0.16.0 with the inventory below (or similar) on AWS


Actual results:

TASK [openshift_web_console : Verify that the web console is running] **********
Tuesday 09 January 2018  13:07:37 +0000 (0:00:00.953)       0:12:07.503 ******* 
FAILED - RETRYING: Verify that the web console is running (120 retries left).
FAILED - RETRYING: Verify that the web console is running (119 retries left).
[... 116 identical retry messages elided; the count runs down from 118 to 3 retries left ...]
FAILED - RETRYING: Verify that the web console is running (2 retries left).
FAILED - RETRYING: Verify that the web console is running (1 retries left).
fatal: [ec2-54-190-196-89.us-west-2.compute.amazonaws.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "-k", "https://webconsole.openshift-web-console.svc/healthz"], "delta": "0:00:01.013538", "end": "2018-01-09 13:12:56.996536", "msg": "non-zero return code", "rc": 7, "start": "2018-01-09 13:12:55.982998", "stderr": "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to webconsole.openshift-web-console.svc:443; Connection refused", "stderr_lines": ["  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current", "                                 Dload  Upload   Total   Spent    Left  Speed", "", "  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0", "  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0curl: (7) Failed connect to webconsole.openshift-web-console.svc:443; Connection refused"], "stdout": "", "stdout_lines": []}
	to retry, use: --limit @/home/slave3/workspace/Launch Environment Flexy/private-openshift-ansible/playbooks/deploy_cluster.retry

Expected results:

Successful install

Additional info:


Inventory (some info redacted):

[OSEv3:children]
masters
nodes

etcd





[OSEv3:vars]

#The following parameters are used by post-actions
iaas_name=AWS
use_rpm_playbook=true
openshift_playbook_rpm_repos=[{'id': 'aos-playbook-rpm', 'name': 'aos-playbook-rpm', 'baseurl': 'http://download.eng.bos.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift/3.9/latest/x86_64/os', 'enabled': 1, 'gpgcheck': 0}]

update_is_images_url=registry.reg-aws.openshift.com:443

#The following parameters are used by openshift-ansible
ansible_ssh_user=root

openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key=redacted
openshift_cloudprovider_aws_secret_key=redacted

openshift_master_default_subdomain_enable=true
openshift_master_default_subdomain=apps.0109-b1n.qe.rhcloud.com

openshift_auth_type=allowall
openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

openshift_release=v3.9
openshift_deployment_type=openshift-enterprise
openshift_cockpit_deployer_prefix=registry.reg-aws.openshift.com:443/openshift3/
oreg_url=registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}
oreg_auth_user={{ lookup('env','REG_AUTH_USER') }}
oreg_auth_password={{ lookup('env','REG_AUTH_PASSWORD') }}
openshift_docker_additional_registries=registry.reg-aws.openshift.com:443
openshift_docker_insecure_registries=registry.reg-aws.openshift.com:443
openshift_service_catalog_image_prefix=registry.reg-aws.openshift.com:443/openshift3/ose-
ansible_service_broker_image_prefix=registry.reg-aws.openshift.com:443/openshift3/ose-
template_service_broker_prefix=registry.reg-aws.openshift.com:443/openshift3/
openshift_enable_service_catalog=true
osm_cockpit_plugins=['cockpit-kubernetes']
osm_use_cockpit=false
openshift_docker_options=--log-opt max-size=20M --log-opt max-file=5 --signature-verification=false
use_cluster_metrics=true
openshift_master_cluster_method=native
openshift_master_dynamic_provisioning_enabled=true
openshift_hosted_router_registryurl=registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}
openshift_hosted_registry_registryurl=registry.reg-aws.openshift.com:443/openshift3/ose-${component}:latest
osm_default_node_selector=region=primary
openshift_registry_selector="region=infra,zone=default"
openshift_hosted_router_selector="region=infra,zone=default"
openshift_disable_check=disk_availability,memory_availability,package_availability,docker_image_availability,docker_storage,package_version
openshift_master_portal_net=172.24.0.0/14
openshift_portal_net=172.24.0.0/14
osm_cluster_network_cidr=172.20.0.0/14
osm_host_subnet_length=9
openshift_node_kubelet_args={"pods-per-core": ["0"], "max-pods": ["510"], "image-gc-high-threshold": ["80"], "image-gc-low-threshold": ["70"], "cloud-config":["/etc/origin/cloudprovider/aws.conf"]}
debug_level=2
openshift_set_hostname=true
openshift_override_hostname_check=true
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_hosted_router_replicas=1
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey=redacted
openshift_hosted_registry_storage_s3_secretkey=redacted
openshift_hosted_registry_storage_s3_bucket=aoe-svt-test
openshift_hosted_registry_storage_s3_region=us-west-2
openshift_hosted_registry_replicas=1
openshift_metrics_install_metrics=false
openshift_metrics_image_prefix=registry.reg-aws.openshift.com:443/openshift3/
openshift_metrics_image_version=v3.9
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_size=25Gi
openshift_logging_install_logging=false
openshift_logging_image_prefix=registry.reg-aws.openshift.com:443/openshift3/
openshift_logging_image_version=v3.9
openshift_logging_storage_kind=dynamic
openshift_logging_es_pvc_size=50Gi
openshift_logging_es_pvc_dynamic=true
openshift_clusterid=mffiedler-39
openshift_use_system_containers=false
openshift_use_crio=false
system_images_registry=registry.reg-aws.openshift.com:443
openshift_image_tag=v3.9.0-0.16.0

[lb]

[etcd]
ec2-54-190-196-89.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/id_rsa_perf" openshift_public_hostname=ec2-54-190-196-89.us-west-2.compute.amazonaws.com

[masters]
ec2-54-190-196-89.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/id_rsa_perf" openshift_public_hostname=ec2-54-190-196-89.us-west-2.compute.amazonaws.com

[nodes]
ec2-54-190-196-89.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/id_rsa_perf" openshift_public_hostname=ec2-54-190-196-89.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_scheduleable=false

ec2-54-218-127-22.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/id_rsa_perf" openshift_public_hostname=ec2-54-218-127-22.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

ec2-54-186-253-57.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/id_rsa_perf" openshift_public_hostname=ec2-54-186-253-57.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
ec2-34-209-73-58.us-west-2.compute.amazonaws.com ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="/home/slave3/workspace/Launch Environment Flexy/private/config/keys/id_rsa_perf" openshift_public_hostname=ec2-34-209-73-58.us-west-2.compute.amazonaws.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

Comment 1 Scott Dodson 2018-01-09 15:50:11 UTC
Can you describe the deployments in openshift-web-console namespace to see what problems it's having deploying?
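For example (a minimal set of commands, assuming the default deployment name webconsole):

oc describe deployment webconsole -n openshift-web-console
oc get pods -n openshift-web-console
oc get events -n openshift-web-console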

Comment 2 Elvir Kuric 2018-01-09 16:00:49 UTC
I have seen this too. Output from my cluster:

--
# oc describe all
Name:               webconsole
Namespace:          openshift-web-console
CreationTimestamp:  Tue, 09 Jan 2018 14:42:58 +0000
Labels:             app=openshift-web-console
                    webconsole=true
Annotations:        deployment.kubernetes.io/revision=1
                    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"openshift-web-console","webconsole":"true"},"name":"webc...
Selector:           webconsole=true
Replicas:           1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           webconsole=true
  Service Account:  webconsole
  Containers:
   webconsole:
    Image:  registry.access.redhat.com/openshift3/ose-web-console:v3.9
    Port:   8443/TCP
    Command:
      /usr/bin/origin-web-console
      --audit-log-path=-
      --config=/var/webconsole-config/webconsole-config.yaml
    Readiness:    http-get https://:8443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/serving-cert from serving-cert (rw)
      /var/webconsole-config from webconsole-config (rw)
  Volumes:
   serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webconsole-serving-cert
    Optional:    false
   webconsole-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      webconsole-config
    Optional:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded
OldReplicaSets:  <none>
NewReplicaSet:   webconsole-7c6f9fb789 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  1h    deployment-controller  Scaled up replica set webconsole-7c6f9fb789 to 1


Name:           webconsole-7c6f9fb789
Namespace:      openshift-web-console
Selector:       pod-template-hash=3729596345,webconsole=true
Labels:         pod-template-hash=3729596345
                webconsole=true
Annotations:    deployment.kubernetes.io/desired-replicas=1
                deployment.kubernetes.io/max-replicas=1
                deployment.kubernetes.io/revision=1
Controlled By:  Deployment/webconsole
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           pod-template-hash=3729596345
                    webconsole=true
  Service Account:  webconsole
  Containers:
   webconsole:
    Image:  registry.access.redhat.com/openshift3/ose-web-console:v3.9
    Port:   8443/TCP
    Command:
      /usr/bin/origin-web-console
      --audit-log-path=-
      --config=/var/webconsole-config/webconsole-config.yaml
    Readiness:    http-get https://:8443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/serving-cert from serving-cert (rw)
      /var/webconsole-config from webconsole-config (rw)
  Volumes:
   serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webconsole-serving-cert
    Optional:    false
   webconsole-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      webconsole-config
    Optional:  false
Events:
  Type     Reason            Age   From                   Message
  ----     ------            ----  ----                   -------
  Warning  FailedCreate      1h    replicaset-controller  Error creating: pods "webconsole-7c6f9fb789-" is forbidden: error looking up service account openshift-web-console/webconsole: serviceaccount "webconsole" not found
  Normal   SuccessfulCreate  1h    replicaset-controller  Created pod: webconsole-7c6f9fb789-sv6p4


Name:           webconsole-7c6f9fb789-sv6p4
Namespace:      openshift-web-console
Node:           ip-172-31-9-200.us-west-2.compute.internal/172.31.9.200
Start Time:     Tue, 09 Jan 2018 14:42:59 +0000
Labels:         pod-template-hash=3729596345
                webconsole=true
Annotations:    openshift.io/scc=restricted
Status:         Pending
IP:             10.128.0.48
Controlled By:  ReplicaSet/webconsole-7c6f9fb789
Containers:
  webconsole:
    Container ID:  
    Image:         registry.access.redhat.com/openshift3/ose-web-console:v3.9
    Image ID:      
    Port:          8443/TCP
    Command:
      /usr/bin/origin-web-console
      --audit-log-path=-
      --config=/var/webconsole-config/webconsole-config.yaml
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Readiness:      http-get https://:8443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from webconsole-token-pxcnz (ro)
      /var/serving-cert from serving-cert (rw)
      /var/webconsole-config from webconsole-config (rw)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webconsole-serving-cert
    Optional:    false
  webconsole-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      webconsole-config
    Optional:  false
  webconsole-token-pxcnz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webconsole-token-pxcnz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  region=infra
Tolerations:     <none>
Events:
  Type     Reason                 Age                From                                                 Message
  ----     ------                 ----               ----                                                 -------
  Normal   Scheduled              1h                 default-scheduler                                    Successfully assigned webconsole-7c6f9fb789-sv6p4 to ip-172-31-9-200.us-west-2.compute.internal
  Normal   SuccessfulMountVolume  1h                 kubelet, ip-172-31-9-200.us-west-2.compute.internal  MountVolume.SetUp succeeded for volume "webconsole-config"
  Normal   SuccessfulMountVolume  1h                 kubelet, ip-172-31-9-200.us-west-2.compute.internal  MountVolume.SetUp succeeded for volume "webconsole-token-pxcnz"
  Normal   SuccessfulMountVolume  1h                 kubelet, ip-172-31-9-200.us-west-2.compute.internal  MountVolume.SetUp succeeded for volume "serving-cert"
  Normal   Pulling                1h (x2 over 1h)    kubelet, ip-172-31-9-200.us-west-2.compute.internal  pulling image "registry.access.redhat.com/openshift3/ose-web-console:v3.9"
  Warning  Failed                 1h (x2 over 1h)    kubelet, ip-172-31-9-200.us-west-2.compute.internal  Failed to pull image "registry.access.redhat.com/openshift3/ose-web-console:v3.9": rpc error: code = Unknown desc = unknown: Not Found
  Warning  Failed                 1h (x2 over 1h)    kubelet, ip-172-31-9-200.us-west-2.compute.internal  Error: ErrImagePull
  Normal   SandboxChanged         1h (x6 over 1h)    kubelet, ip-172-31-9-200.us-west-2.compute.internal  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff                7m (x318 over 1h)  kubelet, ip-172-31-9-200.us-west-2.compute.internal  Back-off pulling image "registry.access.redhat.com/openshift3/ose-web-console:v3.9"
  Warning  Failed                 1m (x340 over 1h)  kubelet, ip-172-31-9-200.us-west-2.compute.internal  Error: ImagePullBackOff


Name:              webconsole
Namespace:         openshift-web-console
Labels:            app=openshift-web-console
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.alpha.openshift.io/serving-cert-secret-name":"webconsole-serving-cert"},"labels...
                   service.alpha.openshift.io/serving-cert-secret-name=webconsole-serving-cert
                   service.alpha.openshift.io/serving-cert-signed-by=openshift-service-serving-signer@1515508692
Selector:          webconsole=true
Type:              ClusterIP
IP:                172.30.23.247
Port:              https  443/TCP
TargetPort:        8443/TCP
Endpoints:         
Session Affinity:  None
Events:            <none>
root@ip-172-31-9-200: ~ # 
---
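Note the empty Endpoints on the webconsole service above: with the pod stuck in ImagePullBackOff it never passes its readiness probe, so the service has no endpoints to route to, which is what surfaces as "connection refused" in the installer's curl check. A quick way to confirm:

oc get endpoints webconsole -n openshift-web-console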

Comment 3 Mike Fiedler 2018-01-09 16:01:50 UTC
Looks like a registry issue:

  Normal   Pulling                1h (x2 over 1h)     kubelet, ip-172-31-48-201.us-west-2.compute.internal  pulling image "registry.access.redhat.com/openshift3/ose-web-console:v3.9"
  Warning  Failed                 1h (x2 over 1h)     kubelet, ip-172-31-48-201.us-west-2.compute.internal  Failed to pull image "registry.access.redhat.com/openshift3/ose-web-console:v3.9": rpc error: code = Unknown desc = unknown: Not Found
  Warning  Failed                 1h (x2 over 1h)     kubelet, ip-172-31-48-201.us-west-2.compute.internal  Error: ErrImagePull              
  Normal   SandboxChanged         1h (x6 over 1h)     kubelet, ip-172-31-48-201.us-west-2.compute.internal  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed                 5m (x325 over 1h)   kubelet, ip-172-31-48-201.us-west-2.compute.internal  Error: ImagePullBackOff          
  Normal   BackOff                21s (x348 over 1h)  kubelet, ip-172-31-48-201.us-west-2.compute.internal  Back-off pulling image "registry.access.redhat.com/openshift3/ose-web-console:v3.9"
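Whether the tag actually exists on the public registry can be checked directly from any node; either command should reproduce the Not Found error above (skopeo is assumed to be installed for the first form):

skopeo inspect docker://registry.access.redhat.com/openshift3/ose-web-console:v3.9
# or
docker pull registry.access.redhat.com/openshift3/ose-web-console:v3.9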

Comment 6 Mike Fiedler 2018-01-09 19:19:51 UTC
Setting openshift_web_console_prefix works. I still see an issue when the cluster has a default node selector, but I will open that as a separate bug once I test a solution.
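For this inventory that means pointing the console image at the internal registry instead of registry.access.redhat.com; a minimal sketch (the exact prefix value is an assumption, mirroring openshift_service_catalog_image_prefix above):

openshift_web_console_prefix=registry.reg-aws.openshift.com:443/openshift3/ose-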

Comment 9 Samuel Padgett 2019-08-22 18:10:20 UTC
*** Bug 1716788 has been marked as a duplicate of this bug. ***