Description of problem:

Creating a new clusteringress in the openshift-ingress-operator namespace causes an ingress-operator panic.

Version-Release number of selected component (if applicable):

How reproducible:

Create the following resource:

```yaml
apiVersion: ingress.openshift.io/v1alpha1
kind: ClusterIngress
metadata:
  name: test
  namespace: openshift-ingress-operator
spec:
  highAvailability:
    type: Cloud
```

Actual results:

```
INFO[0020] queueing test for related /api/v1/namespaces/openshift-ingress/services/router-internal-test
ERROR: logging before flag.Parse: E0219 08:18:53.301923 12392 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/Users/dmace/.go/1.10.3/src/runtime/asm_amd64.s:573
/Users/dmace/.go/1.10.3/src/runtime/panic.go:502
/Users/dmace/.go/1.10.3/src/runtime/panic.go:63
/Users/dmace/.go/1.10.3/src/runtime/signal_unix.go:388
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller_default_certificate.go:69
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller_default_certificate.go:37
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller.go:343
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller.go:164
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller.go:81
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/Users/dmace/.go/1.10.3/src/runtime/asm_amd64.s:2361
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1da0aac]

goroutine 642 [running]:
github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x1f56f60, 0x2c00380)
	/Users/dmace/.go/1.10.3/src/runtime/panic.go:502 +0x229
github.com/openshift/cluster-ingress-operator/pkg/operator/controller.desiredRouterDefaultCertificateSecret(0xc420d8dcc0, 0xc42036ed80, 0xc420f3f4a0, 0xc420e53500, 0x68f, 0x690)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller_default_certificate.go:69 +0x5c
github.com/openshift/cluster-ingress-operator/pkg/operator/controller.(*reconciler).ensureDefaultCertificateForIngress(0xc420736960, 0xc420d8b7c0, 0xc42036ed80, 0xc420f3f4a0, 0x21011f6, 0xa)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller_default_certificate.go:37 +0x265
github.com/openshift/cluster-ingress-operator/pkg/operator/controller.(*reconciler).ensureRouterForIngress(0xc420736960, 0xc420f3f4a0, 0xc420d8b7c0, 0xc420fb2000, 0xc420d8bb80, 0xc420bfda90, 0x7, 0x226eaa0)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller.go:343 +0x80c
github.com/openshift/cluster-ingress-operator/pkg/operator/controller.(*reconciler).reconcile(0xc420736960, 0xc420d5ce20, 0x1a, 0xc420f28660, 0x4, 0xc420736a00, 0x0, 0x0, 0x0)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller.go:164 +0xa3f
github.com/openshift/cluster-ingress-operator/pkg/operator/controller.(*reconciler).Reconcile(0xc420736960, 0xc420d5ce20, 0x1a, 0xc420f28660, 0x4, 0x2c148c0, 0x0, 0x0, 0x0)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/pkg/operator/controller/controller.go:81 +0x69
github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc4203e2c80, 0x0)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215 +0x188
github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1()
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158 +0x36
github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc42092e010)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc42092e010, 0x3b9aca00, 0x0, 0x1, 0xc4204e9320)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc42092e010, 0x3b9aca00, 0xc4204e9320)
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/Users/dmace/Projects/cluster-ingress-operator/src/github.com/openshift/cluster-ingress-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x35b
```

Expected results:

The clusteringress should be materialized.

Additional info:
Hi Dan,

What's your test version? I cannot reproduce it with 4.0.0-0.nightly-2019-02-18-223936.

```
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-18-223936   True        False         17h     Cluster version is 4.0.0-0.nightly-2019-02-18-223936

$ ./openshift-install version
./openshift-install v4.0.0-0.176.0.0-dirty
```

and this is my yaml file:

```yaml
$ cat clusteringress-new.yaml
apiVersion: ingress.openshift.io/v1alpha1
kind: ClusterIngress
metadata:
  name: new
  namespace: openshift-ingress-operator
spec:
  defaultCertificateSecret: null
  highAvailability:
    type: Cloud
  ingressDomain: router-new
  namespaceSelector: null
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
  replicas: 1
  routeSelector: null
  unsupportedExtensions: null
```

Check the pod/svc after creating the second clusteringress:

```
$ oc get svc -n openshift-ingress
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
router-default            LoadBalancer   172.30.237.33    ad379ed33342611e98d370acca36f761-707820507.us-east-2.elb.amazonaws.com   80:32747/TCP,443:30334/TCP   17h
router-internal-default   ClusterIP      172.30.100.121   <none>                                                                   80/TCP,443/TCP,1936/TCP      17h
router-internal-new       ClusterIP      172.30.20.160    <none>                                                                   80/TCP,443/TCP,1936/TCP      12m
router-new                LoadBalancer   172.30.183.93    a9fd3147f34b711e99bd9067ea8c2a8c-716371972.us-east-2.elb.amazonaws.com   80:30462/TCP,443:30658/TCP   12m

$ oc get pod -n openshift-ingress
NAME                              READY   STATUS    RESTARTS   AGE
router-default-7f6bdc99bf-5zstr   1/1     Running   0          17h
router-default-7f6bdc99bf-vmr5l   1/1     Running   0          17h
router-new-7b4d78d968-5j9f4       1/1     Running   0          12m

$ oc get deployment -n openshift-ingress
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
router-default   2/2     2            2           17h
router-new       1/1     1            1           12m
```
I authored the following PR for testing multiple ClusterIngress resources and did not observe this bug: https://github.com/openshift/cluster-ingress-operator/pull/131
Please try with the exact resource I provided in the bug description. The reproducers in https://bugzilla.redhat.com/show_bug.cgi?id=1678723#c1 and https://bugzilla.redhat.com/show_bug.cgi?id=1678723#c2 are not the same. It's almost certain that the nil values are significant.
Verified with 4.0.0-0.nightly-2019-02-24-045124; the issue has been fixed. Created a clusteringress with the yaml file below and saw no panic in the operator pod logs.

```yaml
apiVersion: ingress.openshift.io/v1alpha1
kind: ClusterIngress
metadata:
  name: test
  namespace: openshift-ingress-operator
spec:
  highAvailability:
    type: Cloud
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758