Description of problem:
I have CNV-2.3 deployed, and when creating the HPP CR:

apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "/var/hpvolumes"
    useNamingPrefix: "false"

I see the HPP operator failing with a panic:

panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x12bb60d]

Version-Release number of selected component (if applicable):
hostpath-provisioner-container-v2.3.0-9

How reproducible:
100%

Steps to Reproduce:
1. deploy cnv-2.3
2. create CR for HPP
3.

Actual results:
SIGSEGV error in the operator log

Expected results:
HPP operator deploys HPP on my cluster

Additional info:

[cloud-user@ocp-psi-executor ~]$ oc logs -n openshift-cnv hostpath-provisioner-operator-6b7f858488-5s2dv
{"level":"info","ts":1580994806.099918,"logger":"cmd","msg":"Go Version: go1.12.12"}
{"level":"info","ts":1580994806.1000233,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1580994806.1000373,"logger":"cmd","msg":"Version of operator-sdk: v0.11.0"}
{"level":"info","ts":1580994806.1004868,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1580994808.6743476,"logger":"leader","msg":"Found existing lock with my name. I was likely restarted."}
{"level":"info","ts":1580994808.674385,"logger":"leader","msg":"Continuing as the leader."}
{"level":"info","ts":1580994811.23844,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1580994811.2389398,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.2391994,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.2393537,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.2395244,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.2396758,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.2398303,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.240009,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"hostpathprovisioner-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1580994811.2401798,"logger":"cmd","msg":"Starting the Cmd."}
E0206 13:13:31.258178       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
{"level":"info","ts":1580994811.3410206,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"hostpathprovisioner-controller"}
{"level":"info","ts":1580994811.4412744,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"hostpathprovisioner-controller","worker count":1}
{"level":"info","ts":1580994811.4414306,"logger":"controller_hostpathprovisioner","msg":"Reconciling HostPathProvisioner","Request.Namespace":"","Request.Name":"hostpath-provisioner"}
E0206 13:13:31.441728       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/panic.go:522
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/panic.go:82
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/signal_unix.go:390
/go/src/kubevirt.io/hostpath-provisioner-operator/pkg/controller/hostpathprovisioner/controller.go:202
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/asm_amd64.s:1337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x12bb60d]

goroutine 511 [running]:
kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1488ae0, 0x268bae0)
	/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/panic.go:522 +0x1b5
kubevirt.io/hostpath-provisioner-operator/pkg/controller/hostpathprovisioner.(*ReconcileHostPathProvisioner).Reconcile(0xc000aee9e0, 0x0, 0x0, 0xc00056d040, 0x14, 0x26a69e0, 0xc0004d62d0, 0xc000320d88, 0xc000320db8)
	/go/src/kubevirt.io/hostpath-provisioner-operator/pkg/controller/hostpathprovisioner/controller.go:202 +0x5fd
kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0006aec80, 0x14e5e00, 0xc000811640, 0x14e5e00)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216 +0x146
kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0006aec80, 0xc0000ad400)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 +0xb5
kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0006aec80)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 +0x2b
kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00083e2b0)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00083e2b0, 0x3b9aca00, 0x0, 0x1, 0xc00088b020)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00083e2b0, 0x3b9aca00, 0xc00088b020)
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/src/kubevirt.io/hostpath-provisioner-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x311
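The trace points at Reconcile in pkg/controller/hostpathprovisioner/controller.go:202, i.e. the operator dereferenced something nil while reconciling the CR. A minimal, hypothetical Go sketch of this failure mode and the usual guard follows; the type and field names are illustrative stand-ins, not the operator's actual code:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the operator's CR types.
type PathConfig struct {
	Path            string
	UseNamingPrefix string
}

type HostPathProvisionerSpec struct {
	// A pointer field: it is nil when the corresponding block is
	// omitted from (or not decoded out of) the CR.
	PathConfig *PathConfig
}

// reconcile demonstrates the guard: without the nil check,
// spec.PathConfig.Path would panic with the same "invalid memory
// address or nil pointer dereference" seen in the operator log.
func reconcile(spec *HostPathProvisionerSpec) error {
	if spec == nil || spec.PathConfig == nil {
		return fmt.Errorf("spec.pathConfig is required")
	}
	fmt.Printf("provisioning at %s\n", spec.PathConfig.Path)
	return nil
}

func main() {
	// CR with pathConfig set, as in this report.
	ok := &HostPathProvisionerSpec{PathConfig: &PathConfig{Path: "/var/hpvolumes"}}
	if err := reconcile(ok); err != nil {
		fmt.Println("error:", err)
	}
	// CR missing pathConfig: returns an error instead of panicking.
	if err := reconcile(&HostPathProvisionerSpec{}); err != nil {
		fmt.Println("error:", err)
	}
}
```

Since the reporter's CR does set pathConfig, the nil value in the real code path may be a different optional field, but the guard pattern is the same.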
Alexander, please investigate.
Do we know which upstream version corresponds to hostpath-provisioner-container-v2.3.0-9? I see two signs that it is running an older version: 1. the line number in the exception points at a closing brace in the latest code, and 2. the roles are incorrect, since it is printing a failure on watching storage classes, which was fixed one or two versions before the latest release.
We managed to fully deploy CNV 2.3 and this didn't reproduce.

registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner-rhel8@sha256:6461956d902351e6c3d08dbd5a6097bae9ca4e52404409f6d2cb8b819f2354e9
PULL_POLICY: IfNotPresent
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from hostpath-provisioner-operator-token-xmlbc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:2011