Description of problem:
When "oc status" is run, it panics with the runtime error below. It appears a nil pointer is not handled properly.

https://github.com/openshift/oc/blob/release-4.2/pkg/helpers/describe/projectstatus.go#L950-L953

~~~
func describeJobStatus(job *batchv1.Job) string {
	timeAt := strings.ToLower(formatRelativeTime(job.CreationTimestamp.Time))
	return fmt.Sprintf("created %s ago %d/%d completed %d running", timeAt,
		job.Status.Succeeded, *job.Spec.Completions, job.Status.Active) // <-- runtime error stack points here
}
~~~

* The panic messages are as follows.

~~~
$ oc project openshift-logging
$ oc status
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19e8a28]

goroutine 1 [running]:
github.com/openshift/oc/pkg/helpers/describe.describeJobStatus(0xc001512000, 0xc0017ae660, 0xc001abd700)
	/go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:952 +0xc8
github.com/openshift/oc/pkg/helpers/describe.describeStandaloneJob(0x2e11ca0, 0xc000396ce0, 0xc001adaf30, 0xc001a84498, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:946 +0x61a
github.com/openshift/oc/pkg/helpers/describe.(*ProjectStatusDescriber).Describe.func1(0xc00112a160, 0x2e0bfc0, 0xc000e01c50)
	/go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:442 +0x1b25
github.com/openshift/oc/pkg/helpers/describe.tabbedString(0xc001abf768, 0x4d22808, 0x0, 0x0, 0x2e53c20)
	/go/src/github.com/openshift/oc/pkg/helpers/describe/helpers.go:37 +0xb0
github.com/openshift/oc/pkg/helpers/describe.(*ProjectStatusDescriber).Describe(0xc0012c0000, 0xc000f15a40, 0x11, 0x0, 0x0, 0xc001138720, 0xc001138750, 0xc001112a00, 0x39)
	/go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:266 +0xcec
github.com/openshift/oc/pkg/cli/status.StatusOptions.RunStatus(0xc000f15a40, 0x11, 0x0, 0x0, 0x0, 0xc0012c0000, 0x0, 0xc000663948, 0x7, 0x290533b, ...)
	/go/src/github.com/openshift/oc/pkg/cli/status/status.go:219 +0x40b
github.com/openshift/oc/pkg/cli/status.NewCmdStatus.func1(0xc000d2b680, 0x4d22808, 0x0, 0x0)
	/go/src/github.com/openshift/oc/pkg/cli/status/status.go:86 +0x1c1
github.com/openshift/oc/vendor/github.com/spf13/cobra.(*Command).execute(0xc000d2b680, 0x4d22808, 0x0, 0x0, 0xc000d2b680, 0x4d22808)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:760 +0x2ae
github.com/openshift/oc/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000d2a500, 0x2, 0xc000d2a500, 0x2)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:846 +0x2ec
github.com/openshift/oc/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:794
main.main()
	/go/src/github.com/openshift/oc/cmd/oc/oc.go:103 +0x815
~~~

* "openshift-logging" project information

~~~
$ oc get all
NAME                                                READY   STATUS      RESTARTS   AGE
pod/cluster-logging-operator-7bbbdd6668-xxxxx       1/1     Running     0          1d
pod/curator-1578195068-xxxxx                        0/1     Completed   0          23h
pod/elasticsearch-cdm-gkswpz5r-1-84bf4b7d68-xxxxx   2/2     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/kibana-bc9b88968-xxxxx                          2/2     Running     0          1d

NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/elasticsearch           ClusterIP   172.30.68.10   <none>        9200/TCP    1d
service/elasticsearch-cluster   ClusterIP   172.30.68.11   <none>        9300/TCP    1d
service/elasticsearch-metrics   ClusterIP   172.30.68.12   <none>        60000/TCP   1d
service/fluentd                 ClusterIP   172.30.68.13   <none>        24231/TCP   1d
service/kibana                  ClusterIP   172.30.68.14   <none>        443/TCP     1d

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/fluentd   5         5         5       5            5           kubernetes.io/os=linux   1d

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-logging-operator       1/1     1            1           1d
deployment.apps/elasticsearch-cdm-gkswpz68-1   1/1     1            1           1d
deployment.apps/kibana                         1/1     1            1           1d

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/cluster-logging-operator-7bbbdd6668       1         1         1       1d
replicaset.apps/elasticsearch-cdm-gkswpz5r-1-84bf4b7d68   1         1         1       1d
replicaset.apps/kibana-bc9b88968                          1         1         1       1d

NAME                           COMPLETIONS   DURATION   AGE
job.batch/curator-1578195068   1/1           9s         23h

NAME                    SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/curator   30 3 * * *   False     0        23h             1d

NAME                              HOST/PORT                                   PATH   SERVICES   PORT    TERMINATION          WILDCARD
route.route.openshift.io/kibana   kibana-openshift-logging.apps.example.com          kibana     <all>   reencrypt/Redirect   None
~~~

Version-Release number of selected component (if applicable):
Client Version: openshift-clients-4.2.2-201910250432-4-g4ac90784
Server Version: 4.2.10
Kubernetes Version: v1.14.6+17b1cc6

How reproducible:
This issue happened on a customer's system.

Steps to Reproduce:
1.
2.
3.

Actual results:
"oc status" does not work due to the runtime error.

Expected results:
"oc status" shows all information for "openshift-logging" without any errors.

Additional info:
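The panic happens because `job.Spec.Completions` is a `*int32` that is nil when a Job does not set `spec.completions`, and the 4.2 code dereferences it unconditionally. Below is a minimal, self-contained sketch of a nil-safe variant; the stand-in types and the fallback message are illustrative only (the real code uses `batchv1.Job` and the actual fix shipped in `oc` may format this differently):

```go
package main

import "fmt"

// Stand-in types mirroring only the batchv1.Job fields relevant here.
type jobSpec struct {
	Completions *int32 // nil when spec.completions is not set on the Job
}

type jobStatus struct {
	Succeeded int32
	Active    int32
}

type job struct {
	Spec   jobSpec
	Status jobStatus
}

// describeJobStatus dereferences Spec.Completions only after a nil check,
// falling back to the succeeded count alone when completions is unset.
func describeJobStatus(j *job) string {
	if j.Spec.Completions == nil {
		return fmt.Sprintf("%d completed %d running", j.Status.Succeeded, j.Status.Active)
	}
	return fmt.Sprintf("%d/%d completed %d running",
		j.Status.Succeeded, *j.Spec.Completions, j.Status.Active)
}

func main() {
	n := int32(1)
	fmt.Println(describeJobStatus(&job{Spec: jobSpec{Completions: &n}, Status: jobStatus{Succeeded: 1}})) // 1/1 completed 0 running
	fmt.Println(describeJobStatus(&job{Status: jobStatus{Succeeded: 1}}))                                  // 1 completed 0 running
}
```

The nil case is exactly what a Job created without an explicit `completions` field (such as the `curator` CronJob's child Job here) can produce, which is why the crash appears only in some projects.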
Can't reproduce the issue now:

~~~
[zhouying@dhcp-140-138 ~]$ oc version -o yaml
clientVersion:
  buildDate: "2020-02-12T17:56:47Z"
  compiler: gc
  gitCommit: 1d7211968ceb449852d10d12eae92ed5c33d8b48
  gitTreeState: clean
  gitVersion: v4.3.2
  goVersion: go1.12.12
  major: ""
  minor: ""
  platform: linux/amd64

[zhouying@dhcp-140-138 ~]$ oc status
In project zhouyt on server https://api.yinzhou13.qe.devcluster.openshift.com:6443

job/pi manages openshift/perl-516-centos7

1 info identified, use 'oc status --suggest' to see details.
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0528