Description of problem:

In the nightly build the packageserver operator will not start; it repeatedly SIGSEGVs.

root@ip-172-31-64-58: ~ # oc logs olm-operators-8jgwm
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x0]

runtime stack:
runtime.throw(0x14e0e4a, 0x2a)
	/opt/rh/go-toolset-1.11/root/usr/lib/go-toolset-1.11-golang/src/runtime/panic.go:608 +0x72
runtime.sigpanic()
	/opt/rh/go-toolset-1.11/root/usr/lib/go-toolset-1.11-golang/src/runtime/signal_unix.go:374 +0x2f2

goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x115f0a0, 0xc00006de48, 0x1377b00)
	/opt/rh/go-toolset-1.11/root/usr/lib/go-toolset-1.11-golang/src/runtime/cgocall.go:128 +0x5e fp=0xc00006de18 sp=0xc00006dde0 pc=0x40baae
crypto/internal/boring._Cfunc__goboringcrypto_DLOPEN_OPENSSL(0x0)
	_cgo_gotypes.go:597 +0x4a fp=0xc00006de48 sp=0xc00006de18 pc=0x60687a
crypto/internal/boring.init.0()
	/opt/rh/go-toolset-1.11/root/usr/lib/go-toolset-1.11-golang/src/crypto/internal/boring/boring.go:37 +0x47 fp=0xc00006de70 sp=0xc00006de48 pc=0x60c607
crypto/internal/boring.init()
	<autogenerated>:1 +0x12a fp=0xc00006dea0 sp=0xc00006de70 pc=0x617b8a
crypto/ecdsa.init()
	<autogenerated>:1 +0x4b fp=0xc00006ded0 sp=0xc00006dea0 pc=0x63529b
crypto/tls.init()
	<autogenerated>:1 +0x55 fp=0xc00006df10 sp=0xc00006ded0 pc=0x69d335
google.golang.org/grpc/credentials.init()
	<autogenerated>:1 +0x50 fp=0xc00006df50 sp=0xc00006df10 pc=0x6e0790
google.golang.org/grpc.init()
	<autogenerated>:1 +0x64 fp=0xc00006df88 sp=0xc00006df50 pc=0x83cb64
main.init()
	<autogenerated>:1 +0x5c fp=0xc00006df98 sp=0xc00006df88 pc=0x10b953c
runtime.main()
	/opt/rh/go-toolset-1.11/root/usr/lib/go-toolset-1.11-golang/src/runtime/proc.go:189 +0x1bd fp=0xc00006dfe0 sp=0xc00006df98 pc=0x4350cd
runtime.goexit()
	/opt/rh/go-toolset-1.11/root/usr/lib/go-toolset-1.11-golang/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc00006dfe8 sp=0xc00006dfe0 pc=0x4604d1

root@ip-172-31-64-58: ~ # oc get clusteroperators operator-lifecycle-manager-packageserver -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-07-09T14:37:00Z"
  generation: 1
  name: operator-lifecycle-manager-packageserver
  resourceVersion: "4335"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/operator-lifecycle-manager-packageserver
  uid: 031e5264-a257-11e9-93a6-06967f37f6f4
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-07-09T14:37:00Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-07-09T14:37:00Z"
    status: "False"
    type: Available
  - lastTransitionTime: "2019-07-09T14:37:00Z"
    message: Working toward 0.10.1
    status: "True"
    type: Progressing
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-operator-lifecycle-manager
    resource: namespaces
  - group: operators.coreos.com
    name: packageserver.v0.10.1
    namespace: openshift-operator-lifecycle-manager
    resource: ClusterServiceVersion

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-07-09-124131

How reproducible:
Always
Same problem is being seen in the nightly builds: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.2/129
This build contains the fix: https://openshift-release.svc.ci.openshift.org/releasestream/4.2.0-0.ci/release/4.2.0-0.ci-2019-07-10-200538a

*** This bug has been marked as a duplicate of bug 1728223 ***