Bug 1766738 - Application Route does not work when launched via browser
Summary: Application Route does not work when launched via browser
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Multi-Arch
Version: 4.2.z
Hardware: s390x
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: David Benoit
QA Contact: Barry Donahue
URL:
Whiteboard:
Depends On:
Blocks: OCP/Z_4.2
 
Reported: 2019-10-29 18:42 UTC by Vijay Bhadriraju
Modified: 2019-12-09 13:30 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-12-06 02:49:42 UTC
Target Upstream Version:
Embargoed:



Description Vijay Bhadriraju 2019-10-29 18:42:33 UTC
Description of problem:

After deploying the application pod and creating the route for it, the route does not work when launched via the browser. I confirmed that the application pod is deployed and running successfully.


Version-Release number of selected component (if applicable):

Pre-beta OCP 4.2 for Z using RHEL8 KVM

How reproducible:

It can be easily reproduced.

Steps to Reproduce:
1. Install the OCP 4.2 early build for Z using RHEL8 KVM running in a bare metal IBM Z LPAR.
2. Create a new OCP project and deploy a Hello World app in the project.
3. Expose the app by creating a route.
4. Launch the Hello World route URL in a browser on a system that uses the same DNS as the OCP cluster. The route does not display anything. (A sketch of steps 2-3 follows below.)
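
For reference, steps 2-3 correspond roughly to the following (a sketch; the project name and image are hypothetical stand-ins):

# create a project, deploy a sample app, and expose it via a route
oc new-project hello-demo
oc new-app --name=hello-world openshift/hello-openshift
oc expose service/hello-world
oc get route hello-world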

Actual results:

The route does not display the application in the browser

Expected results:

The route should display the application in the browser

Additional info:

Comment 1 Vijay Bhadriraju 2019-11-04 14:22:48 UTC
Trying to follow up on this blocking defect. Is there any progress on it?

Comment 2 David Benoit 2019-11-08 13:58:01 UTC
Hi all,

Sorry for the delay.  In /etc/nginx/nginx.conf, the default server on port 80 should be unbound and a new stream route should be created to the cluster endpoint on port 80.  Additionally, port 80 should be added to the hypervisor's firewalld public zone.  The resulting file will look something like this:

# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

}

# BEGIN ANSIBLE MANAGED BLOCK
stream {
   upstream openshift_console {
      server 192.168.122.28:443 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       443;
       proxy_pass openshift_console;
   }
   upstream openshift_api {
      server 192.168.122.28:6443 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       6443;
       proxy_pass openshift_api;
   }
   upstream openshift_http {
      server 192.168.122.28:80 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       80;
       proxy_pass openshift_http;
   }
}
# END ANSIBLE MANAGED BLOCK


Please let me know if this works for you.

Comment 3 Vijay Bhadriraju 2019-11-08 19:22:53 UTC
Can you be more specific about which pod or node the /etc/nginx/nginx.conf file needs to be edited on, and share instructions for adding port 80 to the hypervisor's firewalld public zone? We are using KVM in this case. Thanks.

Comment 5 David Benoit 2019-11-08 19:53:08 UTC
This change should be made to /etc/nginx/nginx.conf and firewalld directly on the hypervisor.  Sorry for the ambiguity.

You can add the firewalld port using:

firewall-cmd --zone=public --permanent --add-port=80/tcp
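
With --permanent, the rule is not applied to the running firewall until a reload, and nginx does not re-read nginx.conf until it is reloaded or restarted, so (a sketch, assuming a systemd-managed nginx on the hypervisor) you will likely also need:

# apply the permanent firewalld rule to the running configuration
firewall-cmd --reload

# validate the edited config, then reload nginx to pick it up
nginx -t && systemctl reload nginx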

Please let me know if there are any more questions.  We are happy to help however we can.

Comment 6 Vijay Bhadriraju 2019-11-09 04:46:02 UTC
Here is my edited nginx.conf file on the hypervisor. I also ran firewall-cmd --zone=public --permanent --add-port=80/tcp on the hypervisor, which completed successfully. I deleted the old route and created a new route for my application's service, and the route still does not work with these changes. One thing that is not clear is how the changes in nginx.conf take effect without restarting any OCP services. Am I missing something, and are my nginx.conf edits correct?

[root@zrkvmpf2 vbhadrir]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    }
#    server {
#        listen       80 default_server;
#        listen       [::]:80 default_server;
#        server_name  _;
#        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
#       include /etc/nginx/default.d/*.conf;

#        location / {
#        }

#        error_page 404 /404.html;
#            location = /40x.html {
#        }

#        error_page 500 502 503 504 /50x.html;
#            location = /50x.html {
#        }
#    }

# Settings for a TLS enabled server.
#
#    server {
#        listen       443 ssl http2 default_server;
#        listen       [::]:443 ssl http2 default_server;
#        server_name  _;
#        root         /usr/share/nginx/html;
#
#        ssl_certificate "/etc/pki/nginx/server.crt";
#        ssl_certificate_key "/etc/pki/nginx/private/server.key";
#        ssl_session_cache shared:SSL:1m;
#        ssl_session_timeout  10m;
#        ssl_ciphers PROFILE=SYSTEM;
#        ssl_prefer_server_ciphers on;
#
#        # Load configuration files for the default server block.
#        include /etc/nginx/default.d/*.conf;
#
#        location / {
#        }
#
#        error_page 404 /404.html;
#            location = /40x.html {
#        }
#
#        error_page 500 502 503 504 /50x.html;
#            location = /50x.html {
#        }
#    }


# BEGIN ANSIBLE MANAGED BLOCK
stream {
   upstream openshift_console {
      server 192.168.122.28:443 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       443;
       proxy_pass openshift_console;
   }
   upstream openshift_api {
      server 192.168.122.28:6443 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       6443;
       proxy_pass openshift_api;
   }
   upstream openshift_http {
      server 192.168.122.28:80 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       80;
       proxy_pass openshift_http;
   }
}
# END ANSIBLE MANAGED BLOCK

Comment 7 Vijay Bhadriraju 2019-11-11 16:11:30 UTC
This defect is a blocking defect for OCP on Z as any application deployed on OCP running on Linux on Z cannot be used even though it is exposed via a route.

Comment 8 David Benoit 2019-11-11 17:30:18 UTC
Are you able to curl the application endpoint from the hypervisor itself, after the /etc/hosts entry is added to the hypervisor?

Comment 9 David Benoit 2019-11-11 17:32:36 UTC
Sorry, I did not mean to reset the severity

Comment 10 David Benoit 2019-11-11 17:46:14 UTC
One more thing, could you try running `sudo setenforce 0; sudo systemctl restart nginx` on the hypervisor and let me know if that works?  Depending on your RHEL settings, it is possible that SELinux is blocking nginx from binding.

Comment 11 David Benoit 2019-11-11 17:52:15 UTC
Setting SELinux to permissive mode like this would not be recommended in a production environment, but z/KVM is not going to be a supported configuration so it should be fine for internal development.  If this is indeed the issue, it should not be seen in any z/VM deployments.
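
If SELinux does turn out to be the blocker, a more targeted alternative to permissive mode (a sketch, assuming the denial is on the nonstandard 6443 bind or on the proxied connections) would be:

# check for recent AVC denials
ausearch -m avc -ts recent

# let nginx bind the nonstandard API port (80 and 443 are already labeled http_port_t)
semanage port -a -t http_port_t -p tcp 6443

# let nginx open outbound connections to the proxied upstreams
setsebool -P httpd_can_network_connect 1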

Comment 12 Vijay Bhadriraju 2019-11-11 18:31:36 UTC
SELinux is already disabled on my hypervisor; getenforce returns Disabled. I ran sudo systemctl restart nginx, deleted the existing route, and recreated it, and I am still not able to curl the route or launch it in the browser. I am running curl from my client, which has the oc CLI installed and is logged into the cluster.

root@x3650m5-12:/home/vbhadrir# curl https://megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com
curl: (6) Could not resolve host: megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com

Comment 13 David Benoit 2019-11-11 18:34:25 UTC
Can you curl the application URL from the hypervisor after adding the /etc/hosts entry, though?  This step will help us identify whether or not the issue is routing through the hypervisor.

Thanks,
DB

Comment 14 David Benoit 2019-11-11 18:47:23 UTC
Oh, actually your latest post provides some key new insight!  The issue may be with the /etc/hosts file.  Since there is no DNS server resolving these hostnames, you may need to add megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com to the /etc/hosts on your laptop.  This will be the case for all new routes created for the cluster.

Please let me know if this works.  I will make sure this step is clear in the next release of the documentation.

Comment 15 Vijay Bhadriraju 2019-11-12 06:03:35 UTC
I tried the above fix with two different applications and the routes for both applications still do not work. Here is a copy of the /etc/hosts file on my client. megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com and acmeair-megabank.apps.cpo-ocpz-cluster.redhat.com are the two route hostnames I added to /etc/hosts on my client. 192.168.12.117 is the IP address of the RHEL 8 KVM hypervisor running OCP.

root@x3650m5-12:/home/vbhadrir/minishift/minishift-1.34.1-linux-amd64# cat /etc/hosts
127.0.0.1	localhost
#9.76.59.39	x3650m5-12.cpolab.ibm.com	x3650m5-12
192.168.12.154	x3650m5-12.cpolab.ibm.com       x3650m5-12
#192.168.12.155	zlxuicp1.cpolab.ibm.com zlxuicp1
192.168.12.182	zuicp002.ctl.local	zuicp002
192.168.12.183  zuicp003.ctl.local      zuicp003
#9.76.59.36      zlxuicp1.cpolab.ibm.com zlxuicp1
# The following lines are desirable for IPv6 capable hosts
#::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.12.154	mycluster.icp
192.168.12.154 mycluster.icp
192.168.13.117 bastion.cpo-ocpz-cluster.redhat.com api.cpo-ocpz-cluster.redhat.com console-openshift-console.apps.cpo-ocpz-cluster.redhat.com oauth-openshift.apps.cpo-ocpz-cluster.redhat.com megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com acmeair-megabank.apps.cpo-ocpz-cluster.redhat.com

Comment 16 David Benoit 2019-11-12 13:55:08 UTC
Ok, this is good progress though.  According to the email, you are now able to hit the endpoint of the cluster.  The next step is to find out why the route is not working.  Could you please post the list of commands and manifests you used to create the route?

Thanks,
DB

Comment 17 David Benoit 2019-11-12 14:08:26 UTC
It would be good to see the manifest used for the service too.

Comment 18 Vijay Bhadriraju 2019-11-12 20:32:39 UTC
Route Metadata

root@x3650m5-12:~/Downloads# cat route-megaweb.yaml 
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: megaweb
  namespace: megabank
  selfLink: /apis/route.openshift.io/v1/namespaces/megabank/routes/megaweb
  uid: e0f51fdf-04fc-11ea-a579-0a580a810028
  resourceVersion: '5116271'
  creationTimestamp: '2019-11-12T03:31:14Z'
  labels:
    app: megaweb
  annotations:
    openshift.io/host.generated: 'true'
spec:
  host: megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com
  subdomain: ''
  to:
    kind: Service
    name: megaweb
    weight: 100
  port:
    targetPort: 9080-tcp
  wildcardPolicy: None
status:
  ingress:
    - host: megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com
      routerName: default
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: '2019-11-12T03:31:14Z'
      wildcardPolicy: None
      routerCanonicalHostname: apps.cpo-ocpz-cluster.redhat.com

Service Metadata

root@x3650m5-12:~/Downloads# cat service-megaweb.yaml 
kind: Service
apiVersion: v1
metadata:
  name: megaweb
  namespace: megabank
  selfLink: /api/v1/namespaces/megabank/services/megaweb
  uid: b5d7b23c-04fc-11ea-afe5-525400da7041
  resourceVersion: '5115927'
  creationTimestamp: '2019-11-12T03:30:01Z'
  labels:
    app: megaweb
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
spec:
  ports:
    - name: 9080-tcp
      protocol: TCP
      port: 9080
      targetPort: 9080
    - name: 9443-tcp
      protocol: TCP
      port: 9443
      targetPort: 9443
  selector:
    app: megaweb
    deploymentconfig: megaweb
  clusterIP: 172.30.147.68
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}

Comment 19 David Benoit 2019-11-12 21:12:50 UTC
Can you try the following two commands to see if the app is routed as non-TLS?

curl http://megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com

curl -k https://megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com:80
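
If client-side name resolution is in doubt, curl's --resolve flag pins the hostname to an IP without touching /etc/hosts (a sketch, using the hypervisor address 192.168.12.117 from comment 15):

curl --resolve megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com:80:192.168.12.117 http://megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com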

Comment 20 Vijay Bhadriraju 2019-11-12 21:23:04 UTC
Here are the results from the two curl commands

root@x3650m5-12:~/Downloads# curl http://megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com
curl: (7) Failed to connect to megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com port 80: No route to host
root@x3650m5-12:~/Downloads# curl -k https://megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com:80
curl: (7) Failed to connect to megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com port 80: No route to host

Comment 21 David Benoit 2019-11-12 21:37:55 UTC
Can you post the output of:

oc get routes --all-namespaces
oc get svc --all-namespaces

Also, you are able to access the main OpenShift web console GUI, right?

Comment 22 Vijay Bhadriraju 2019-11-12 21:55:43 UTC
root@x3650m5-12:~/Downloads# oc get routes --all-namespaces
NAMESPACE                  NAME                HOST/PORT                                                                 PATH      SERVICES            PORT       TERMINATION            WILDCARD
megabank                   acmeair             acmeair-megabank.apps.cpo-ocpz-cluster.redhat.com                                   acmeair             9080-tcp                          None
megabank                   megaweb             megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com                                   megaweb             9080-tcp                          None
openshift-authentication   oauth-openshift     oauth-openshift.apps.cpo-ocpz-cluster.redhat.com                                    oauth-openshift     6443       passthrough/Redirect   None
openshift-console          console             console-openshift-console.apps.cpo-ocpz-cluster.redhat.com                          console             https      reencrypt/Redirect     None
openshift-console          downloads           downloads-openshift-console.apps.cpo-ocpz-cluster.redhat.com                        downloads           http       edge/Redirect          None
openshift-monitoring       alertmanager-main   alertmanager-main-openshift-monitoring.apps.cpo-ocpz-cluster.redhat.com             alertmanager-main   web        reencrypt/Redirect     None
openshift-monitoring       grafana             grafana-openshift-monitoring.apps.cpo-ocpz-cluster.redhat.com                       grafana             https      reencrypt/Redirect     None
openshift-monitoring       prometheus-k8s      prometheus-k8s-openshift-monitoring.apps.cpo-ocpz-cluster.redhat.com                prometheus-k8s      web        reencrypt/Redirect     None
root@x3650m5-12:~/Downloads# oc get svc --all-namespaces
NAMESPACE                                               NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP                            PORT(S)                      AGE
default                                                 kubernetes                         ClusterIP      172.30.0.1       <none>                                 443/TCP                      15d
default                                                 openshift                          ExternalName   <none>           kubernetes.default.svc.cluster.local   <none>                       15d
kube-system                                             kubelet                            ClusterIP      None             <none>                                 10250/TCP                    15d
megabank                                                acmeair                            ClusterIP      172.30.247.250   <none>                                 9080/TCP,9443/TCP            15h
megabank                                                helloworld                         ClusterIP      172.30.217.130   <none>                                 8080/TCP,8888/TCP            16h
megabank                                                megaweb                            ClusterIP      172.30.147.68    <none>                                 9080/TCP,9443/TCP            18h
openshift-apiserver-operator                            metrics                            ClusterIP      172.30.251.18    <none>                                 443/TCP                      15d
openshift-apiserver                                     api                                ClusterIP      172.30.28.216    <none>                                 443/TCP                      15d
openshift-authentication-operator                       metrics                            ClusterIP      172.30.142.225   <none>                                 443/TCP                      15d
openshift-authentication                                oauth-openshift                    ClusterIP      172.30.247.228   <none>                                 443/TCP                      15d
openshift-cloud-credential-operator                     controller-manager-service         ClusterIP      172.30.250.44    <none>                                 443/TCP                      15d
openshift-cluster-version                               cluster-version-operator           ClusterIP      172.30.172.222   <none>                                 9099/TCP                     15d
openshift-console-operator                              metrics                            ClusterIP      172.30.143.170   <none>                                 443/TCP                      15d
openshift-console                                       console                            ClusterIP      172.30.35.189    <none>                                 443/TCP                      15d
openshift-console                                       downloads                          ClusterIP      172.30.9.231     <none>                                 80/TCP                       15d
openshift-controller-manager-operator                   metrics                            ClusterIP      172.30.166.39    <none>                                 443/TCP                      15d
openshift-controller-manager                            controller-manager                 ClusterIP      172.30.34.221    <none>                                 443/TCP                      15d
openshift-dns                                           dns-default                        ClusterIP      172.30.0.10      <none>                                 53/UDP,53/TCP,9153/TCP       15d
openshift-etcd                                          etcd                               ClusterIP      172.30.120.96    <none>                                 2379/TCP,9979/TCP            15d
openshift-etcd                                          host-etcd                          ClusterIP      None             <none>                                 2379/TCP                     15d
openshift-image-registry                                image-registry                     ClusterIP      172.30.91.205    <none>                                 5000/TCP                     15d
openshift-ingress                                       router-internal-default            ClusterIP      172.30.221.67    <none>                                 80/TCP,443/TCP,1936/TCP      15d
openshift-kube-apiserver-operator                       metrics                            ClusterIP      172.30.98.201    <none>                                 443/TCP                      15d
openshift-kube-apiserver                                apiserver                          ClusterIP      172.30.231.192   <none>                                 443/TCP                      15d
openshift-kube-controller-manager-operator              metrics                            ClusterIP      172.30.224.5     <none>                                 443/TCP                      15d
openshift-kube-controller-manager                       kube-controller-manager            ClusterIP      172.30.222.222   <none>                                 443/TCP                      15d
openshift-kube-scheduler-operator                       metrics                            ClusterIP      172.30.245.74    <none>                                 443/TCP                      15d
openshift-kube-scheduler                                scheduler                          ClusterIP      172.30.107.227   <none>                                 443/TCP                      15d
openshift-machine-api                                   cluster-autoscaler-operator        ClusterIP      172.30.72.32     <none>                                 443/TCP,8080/TCP             15d
openshift-machine-api                                   machine-api-operator               ClusterIP      172.30.218.106   <none>                                 8080/TCP                     15d
openshift-marketplace                                   certified-operators                ClusterIP      172.30.87.124    <none>                                 50051/TCP                    4d5h
openshift-marketplace                                   community-operators                ClusterIP      172.30.128.132   <none>                                 50051/TCP                    8h
openshift-marketplace                                   marketplace-operator-metrics       ClusterIP      172.30.166.103   <none>                                 8383/TCP                     15d
openshift-marketplace                                   redhat-operators                   ClusterIP      172.30.249.170   <none>                                 50051/TCP                    8h
openshift-monitoring                                    alertmanager-main                  ClusterIP      172.30.153.28    <none>                                 9094/TCP                     15d
openshift-monitoring                                    alertmanager-operated              ClusterIP      None             <none>                                 9093/TCP,9094/TCP,9094/UDP   15d
openshift-monitoring                                    cluster-monitoring-operator        ClusterIP      None             <none>                                 8080/TCP                     15d
openshift-monitoring                                    grafana                            ClusterIP      172.30.203.77    <none>                                 3000/TCP                     15d
openshift-monitoring                                    kube-state-metrics                 ClusterIP      None             <none>                                 8443/TCP,9443/TCP            15d
openshift-monitoring                                    node-exporter                      ClusterIP      None             <none>                                 9100/TCP                     15d
openshift-monitoring                                    openshift-state-metrics            ClusterIP      None             <none>                                 8443/TCP,9443/TCP            15d
openshift-monitoring                                    prometheus-adapter                 ClusterIP      172.30.54.153    <none>                                 443/TCP                      15d
openshift-monitoring                                    prometheus-k8s                     ClusterIP      172.30.190.49    <none>                                 9091/TCP,9092/TCP            15d
openshift-monitoring                                    prometheus-operated                ClusterIP      None             <none>                                 9090/TCP                     15d
openshift-monitoring                                    prometheus-operator                ClusterIP      None             <none>                                 8080/TCP                     15d
openshift-multus                                        multus-admission-controller        ClusterIP      172.30.212.141   <none>                                 443/TCP                      15d
openshift-operator-lifecycle-manager                    catalog-operator-metrics           ClusterIP      172.30.46.66     <none>                                 8081/TCP                     15d
openshift-operator-lifecycle-manager                    olm-operator-metrics               ClusterIP      172.30.10.126    <none>                                 8081/TCP                     15d
openshift-operator-lifecycle-manager                    v1-packages-operators-coreos-com   ClusterIP      172.30.232.193   <none>                                 443/TCP                      14d
openshift-sdn                                           sdn                                ClusterIP      None             <none>                                 9101/TCP                     15d
openshift-service-catalog-apiserver-operator            metrics                            ClusterIP      172.30.200.69    <none>                                 443/TCP                      15d
openshift-service-catalog-controller-manager-operator   metrics                            ClusterIP      172.30.132.99    <none>                                 443/TCP                      15d
openshift                                               megaweb                            ClusterIP      172.30.65.152    <none>                                 9080/TCP,9443/TCP            18h
root@x3650m5-12:~/Downloads#

Comment 23 Vijay Bhadriraju 2019-11-12 21:57:36 UTC
Yes, I am able to access the OpenShift web console GUI.

Comment 24 Vijay Bhadriraju 2019-11-12 22:00:55 UTC
One other problem I am having is that I am not able to access the OpenShift internal registry to push my container images. To work around this, I am using the public Docker Hub registry (docker.io) to push my images and deploying the new app from the Docker Hub image. If you have any tips on how to access the OpenShift internal registry, that would be helpful.
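
Not related to the route issue, but on OCP 4.x the integrated registry can usually be exposed by enabling the image registry operator's default route (a sketch, assuming the operator manages the registry):

# expose the integrated registry via a default route
oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"defaultRoute":true}}'

# find the route host and log in with the current session token
HOST=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
podman login -u kubeadmin -p "$(oc whoami -t)" --tls-verify=false "$HOST"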

Comment 25 Vijay Bhadriraju 2019-11-19 15:41:54 UTC
I got on a call with David and cbaus late last week and we have this problem resolved. The resolution has some next steps that require changes in the OCP for Z product; David is going to work on those. This defect can be closed at this point.

Comment 26 David Benoit 2019-11-22 06:29:55 UTC
Hi all,

I have spoken with several architects on the OpenShift team regarding the concerns over ease of configuration of routes.  After walking them through the cluster configuration we have provided you on z/VM and z/KVM, they confirmed that either adding non-standard ports to the load balancer or creating services on ports 80 or 443, as demonstrated during the call, is an expected administrative task for any UPI installation.  This is not specific to s390x, and while perhaps not particularly simple to configure by hand, the OCP team deemed it not a bug.

Vijay, could you post the final yaml configuration of the working service and route?  It may be of help to others who run into this configuration issue in the future.

Thanks,
DB

Comment 27 Vijay Bhadriraju 2019-11-24 18:04:53 UTC
Here is the working yaml of the route and service for the application that had issues with the out-of-the-box route created by OCP. The original service port 9080, on which the application listens, was replaced with service port 443 (https) forwarding to target port 9080; the route then redirects plain HTTP on port 80 to HTTPS.

[root@localhost ~]# oc get svc/megaweb -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: "2019-11-12T03:30:01Z"
  labels:
    app: megaweb
  name: megaweb
  namespace: megabank
  resourceVersion: "6464614"
  selfLink: /api/v1/namespaces/megabank/services/megaweb
  uid: b5d7b23c-04fc-11ea-afe5-525400da7041
spec:
  clusterIP: 172.30.147.68
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 9080
  - name: 9443-tcp
    port: 9443
    protocol: TCP
    targetPort: 9443
  selector:
    app: megaweb
    deploymentconfig: megaweb
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
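
One way to apply that port change to an existing service (a sketch, assuming the 9080 entry is first in spec.ports) is a JSON patch:

# replace the first service port with 443 -> 9080
oc -n megabank patch svc megaweb --type=json \
  -p '[{"op": "replace", "path": "/spec/ports/0", "value": {"name": "https", "port": 443, "protocol": "TCP", "targetPort": 9080}}]'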

For the route, a tls: element was added to mimic how the OCP console route is exposed.

tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge

[root@localhost ~]# oc get route/megaweb -o yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2019-11-12T03:31:14Z"
  labels:
    app: megaweb
  name: megaweb
  namespace: megabank
  resourceVersion: "6463510"
  selfLink: /apis/route.openshift.io/v1/namespaces/megabank/routes/megaweb
  uid: e0f51fdf-04fc-11ea-a579-0a580a810028
spec:
  host: megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com
  port:
    targetPort: https
  subdomain: ""
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: megaweb
    weight: 100
  wildcardPolicy: None
status:
  ingress:
  - conditions:
    - lastTransitionTime: "2019-11-12T03:31:14Z"
      status: "True"
      type: Admitted
    host: megaweb-megabank.apps.cpo-ocpz-cluster.redhat.com
    routerCanonicalHostname: apps.cpo-ocpz-cluster.redhat.com
    routerName: default
    wildcardPolicy: None
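
For new applications, an equivalent edge-terminated route can be created in one step (a sketch, assuming the service already exposes a port named https):

# edge termination with HTTP-to-HTTPS redirect, matching the yaml above
oc -n megabank create route edge megaweb --service=megaweb --port=https --insecure-policy=Redirect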

Comment 29 Jeremy Poulin 2019-12-02 18:16:36 UTC
This is marked as urgent severity, and it appears to have been resolved (in terms of providing a working configuration).

What remains to be done with this before we can complete it?

Sounds like we'd need to document the working configuration and let QA verify the configuration. Dbenoit - any thoughts?

Comment 30 David Benoit 2019-12-06 02:49:42 UTC
Hi all, I will close this now.

To summarize for anyone who runs into this in the future, Vijay's latest post is the correct configuration for this circumstance.  One caveat of the UPI installation is that there is an external load balancer in front of the cluster.  In the latest posted yaml manifest, the service forwards the default https port (443) to the pod's target port.  Port 443 is already expected to be enabled on the external load balancer as part of the install instructions, so the service's route should then be resolvable.

