Bug 1789124 - Router doesn't listen on ipv6 interfaces when cluster network config indicates ipv6 support
Summary: Router doesn't listen on ipv6 interfaces when cluster network config indicates ipv6 support
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.3.z
Assignee: Dan Mace
QA Contact: Marius Cornea
Whiteboard: ipv6
Depends On: 1789121 1796618
Reported: 2020-01-08 19:42 UTC by Dan Mace
Modified: 2020-02-19 05:40 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1789121
Last Closed: 2020-02-19 05:39:53 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Priority Status Summary Last Updated
Github openshift cluster-ingress-operator pull 352 None closed [release-4.3] Bug 1789124: Configure router for IPv6 2020-05-26 14:53:23 UTC
Red Hat Product Errata RHBA-2020:0492 None None None 2020-02-19 05:40:02 UTC

Description Dan Mace 2020-01-08 19:42:41 UTC
+++ This bug was initially created as a clone of Bug #1789121 +++

Description of problem:

When running on an IPv6-enabled cluster (defined by the presence of an IPv6 address in the network.config.openshift.io `.status.clusterNetwork` list), the router does not listen on any IPv6 address.

This has already been fixed in https://github.com/openshift/cluster-ingress-operator/pull/342 but we'd like to backport the fix and document the change with a bug report.
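The detection rule above (an IPv6 CIDR anywhere in `.status.clusterNetwork`) can be sketched as follows. This is an illustrative helper, not the operator's actual Go code; the function name is hypothetical.

```python
import ipaddress

def cluster_has_ipv6(cluster_network_cidrs):
    """Return True if any clusterNetwork entry is an IPv6 CIDR."""
    return any(
        ipaddress.ip_network(cidr).version == 6
        for cidr in cluster_network_cidrs
    )
```

For example, a single-stack IPv6 cluster with `clusterNetwork` of `fd01::/48` would be detected, while an IPv4-only cluster with `10.128.0.0/14` would not.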

Version-Release number of selected component (if applicable):

How reproducible:

Launch a single-stack IPv6-enabled cluster on AWS without the fix.

Actual results:

The router process won't listen on any IPv6 interface.

Expected results:

The router should listen on all available IPv4 and IPv6 interfaces.
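As an illustration of the expected listening mode: on Linux, a single wildcard IPv6 socket with `IPV6_V6ONLY` disabled accepts both IPv4 and IPv6 connections. This is a minimal sketch of that behavior, assuming a dual-stack-capable kernel; it is not the router's actual implementation (HAProxy manages its own binds).

```python
import socket

def open_dual_stack_listener(port=0):
    # Bind "::" with IPV6_V6ONLY off so one socket serves both
    # address families (assumes the kernel allows dual-stack).
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    except OSError:
        pass  # platform forces v6-only; falls back to IPv6-only
    s.bind(("::", port))
    s.listen(1)
    return s
```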

Additional info:

Comment 1 Daneyon Hansen 2020-01-22 18:12:22 UTC
Removed PR 346. Superseded by PR 352.

Comment 3 Dan Winship 2020-02-07 14:25:27 UTC
Assigning all 4.3.z IPv6 bugs to Marius Cornea for QA, as they are not yet QA-able in stock release-4.3 builds.

Comment 4 Marius Cornea 2020-02-11 21:48:33 UTC
Verified on 4.3.0-0.nightly-2020-02-10-055634 (included in 4.3.0-0.nightly-2020-02-10-055634-ipv6.3) on a bare metal deployment.

Image used in local disconnected registry:
[kni@provisionhost-0 ~]$ oc adm release info --image-for=cluster-ingress-operator  -a ~/combined-secret.json   registry.ocp-edge-cluster.qe.lab.redhat.com:5000/localimages/local-release-image:4.3.0-0.nightly-2020-02-10-055634-ipv6.3

Image used in 4.3.0-0.nightly-2020-02-10-055634:
[kni@provisionhost-0 ~]$ oc adm release info --image-for=cluster-ingress-operator  -a ~/combined-secret.json   registry.svc.ci.openshift.org/ocp/release:4.3.0-0.nightly-2020-02-10-055634

[kni@provisionhost-0 ~]$ oc get co/ingress
NAME      VERSION                                    AVAILABLE   PROGRESSING   DEGRADED   SINCE
ingress   4.3.0-0.nightly-2020-02-10-055634-ipv6.3   True        False         False      89m

[kni@provisionhost-0 ~]$ oc get network/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: "2020-02-11T20:04:57Z"
  generation: 2
  name: cluster
  resourceVersion: "1861"
  selfLink: /apis/config.openshift.io/v1/networks/cluster
  uid: 0cb632ca-766c-4959-8c66-187ecbb56579
spec:
  clusterNetwork:
  - cidr: fd01::/48
    hostPrefix: 64
  externalIP:
    policy: {}
  networkType: OVNKubernetes
  serviceNetwork:
  - fd02::/112
status:
  clusterNetwork:
  - cidr: fd01::/48
    hostPrefix: 64
  clusterNetworkMTU: 1400
  networkType: OVNKubernetes
  serviceNetwork:
  - fd02::/112

Ingress hostname is reachable:

[kni@provisionhost-0 ~]$ curl -k https://test.apps.ocp-edge-cluster.qe.lab.redhat.com -I
HTTP/1.0 503 Service Unavailable
Pragma: no-cache
Cache-Control: private, max-age=0, no-cache, no-store
Connection: close
Content-Type: text/html

[kni@provisionhost-0 ~]$ curl -k https://test.apps.ocp-edge-cluster.qe.lab.redhat.com -I -v
* Rebuilt URL to: https://test.apps.ocp-edge-cluster.qe.lab.redhat.com/
*   Trying fd2e:6f44:5dd8:c956::10...
* Connected to test.apps.ocp-edge-cluster.qe.lab.redhat.com (fd2e:6f44:5dd8:c956::10) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*.apps.ocp-edge-cluster.qe.lab.redhat.com
*  start date: Feb 11 20:15:38 2020 GMT
*  expire date: Feb 10 20:15:39 2022 GMT
*  issuer: CN=ingress-operator@1581452136
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> HEAD / HTTP/1.1
> Host: test.apps.ocp-edge-cluster.qe.lab.redhat.com
> User-Agent: curl/7.61.1
> Accept: */*
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
HTTP/1.0 503 Service Unavailable
< Pragma: no-cache
Pragma: no-cache
< Cache-Control: private, max-age=0, no-cache, no-store
Cache-Control: private, max-age=0, no-cache, no-store
< Connection: close
Connection: close
< Content-Type: text/html
Content-Type: text/html

* Excess found in a non pipelined read: excess = 3131 url = / (zero-length body)
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify

Comment 6 errata-xmlrpc 2020-02-19 05:39:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

