Summary: | Creating a second clusteringress causes ingress-operator panic | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Dan Mace <dmace> |
Component: | Networking | Assignee: | Miciah Dashiel Butler Masters <mmasters> |
Networking sub component: | router | QA Contact: | Hongan Li <hongli> |
Status: | CLOSED ERRATA | Docs Contact: | |
Severity: | high | |
Priority: | high | CC: | aos-bugs, dhansen, dmace, mmasters |
Version: | 4.1.0 | |
Target Milestone: | --- | |
Target Release: | 4.1.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Fixed In Version: | | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Last Closed: | 2019-06-04 10:44:14 UTC | Type: | Bug |
Description

Dan Mace, 2019-02-19 13:20:27 UTC
Comment from Hongan Li:

Hi Dan, what's your test version? I cannot reproduce it with 4.0.0-0.nightly-2019-02-18-223936.

```
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-18-223936   True        False         17h     Cluster version is 4.0.0-0.nightly-2019-02-18-223936

$ ./openshift-install version
./openshift-install v4.0.0-0.176.0.0-dirty
```

And this is my yaml file:

```
$ cat clusteringress-new.yaml
apiVersion: ingress.openshift.io/v1alpha1
kind: ClusterIngress
metadata:
  name: new
  namespace: openshift-ingress-operator
spec:
  defaultCertificateSecret: null
  highAvailability:
    type: Cloud
  ingressDomain: router-new
  namespaceSelector: null
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
  replicas: 1
  routeSelector: null
  unsupportedExtensions: null
```

Checking the pods and services after creating the second clusteringress:

```
$ oc get svc -n openshift-ingress
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE
router-default            LoadBalancer   172.30.237.33    ad379ed33342611e98d370acca36f761-707820507.us-east-2.elb.amazonaws.com    80:32747/TCP,443:30334/TCP   17h
router-internal-default   ClusterIP      172.30.100.121   <none>                                                                    80/TCP,443/TCP,1936/TCP      17h
router-internal-new       ClusterIP      172.30.20.160    <none>                                                                    80/TCP,443/TCP,1936/TCP      12m
router-new                LoadBalancer   172.30.183.93    a9fd3147f34b711e99bd9067ea8c2a8c-716371972.us-east-2.elb.amazonaws.com    80:30462/TCP,443:30658/TCP   12m

$ oc get pod -n openshift-ingress
NAME                              READY   STATUS    RESTARTS   AGE
router-default-7f6bdc99bf-5zstr   1/1     Running   0          17h
router-default-7f6bdc99bf-vmr5l   1/1     Running   0          17h
router-new-7b4d78d968-5j9f4       1/1     Running   0          12m

$ oc get deployment -n openshift-ingress
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
router-default   2/2     2            2           17h
router-new       1/1     1            1           12m
```

Comment from Dan Mace:

I authored the following PR for testing multiple ClusterIngress resources and did not observe this bug: https://github.com/openshift/cluster-ingress-operator/pull/131

Please try with the exact resource I provided in the bug description. The reproducers in https://bugzilla.redhat.com/show_bug.cgi?id=1678723#c1 and https://bugzilla.redhat.com/show_bug.cgi?id=1678723#c2 are not the same. It's almost certain that the nil values are significant.

Comment from Hongan Li:

Verified with 4.0.0-0.nightly-2019-02-24-045124; the issue has been fixed. Created a clusteringress with the yaml file below and saw no panic in the operator pod logs.

```
apiVersion: ingress.openshift.io/v1alpha1
kind: ClusterIngress
metadata:
  name: test
  namespace: openshift-ingress-operator
spec:
  highAvailability:
    type: Cloud
```

Closing comment:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
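The operator's code is not quoted anywhere in this bug, but Dan's remark that the nil values are significant points at a familiar failure mode in Go controllers: optional spec fields are pointers, and a reconcile path that dereferences them without a nil check panics precisely when a resource sets those fields to null (or omits them), as the reproducer yaml above does. The sketch below only illustrates that pattern; the type and function names are hypothetical and are not taken from cluster-ingress-operator.

```go
package main

import "fmt"

// LabelSelector and ClusterIngressSpec are trimmed-down, hypothetical
// stand-ins for the v1alpha1 ClusterIngress types. Optional fields are
// pointers, so a field set to null in yaml arrives here as nil.
type LabelSelector struct {
	MatchLabels map[string]string
}

type ClusterIngressSpec struct {
	IngressDomain     *string
	NamespaceSelector *LabelSelector
}

// buggyDomain dereferences the optional field unconditionally. With a
// spec whose ingressDomain is null, this panics with a nil pointer
// dereference, which is the failure mode reported in this bug.
func buggyDomain(spec *ClusterIngressSpec) string {
	return *spec.IngressDomain
}

// safeDomain guards the dereference and falls back to a default value.
func safeDomain(spec *ClusterIngressSpec, def string) string {
	if spec.IngressDomain == nil {
		return def
	}
	return *spec.IngressDomain
}

func main() {
	// All optional fields left nil, mirroring the explicit nulls in the
	// reproducer from the bug description.
	spec := &ClusterIngressSpec{}
	fmt.Println(safeDomain(spec, "apps.example.com")) // prints the fallback
	// buggyDomain(spec) would panic: invalid memory address or nil pointer dereference
}
```

Guarding each optional field before use (or defaulting the spec up front, before the reconcile logic runs) removes the panic regardless of which optional fields a second ClusterIngress leaves nil, which is consistent with the verification above succeeding on a spec that omits every optional field.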