Bug 1788016 - When "oc status" run, "panic: runtime error: invalid memory address or nil pointer dereference" is shown
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.3.z
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On: 1788088
Blocks:
Reported: 2020-01-06 07:16 UTC by Daein Park
Modified: 2020-05-12 22:56 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1788088 (view as bug list)
Environment:
Last Closed: 2020-02-25 06:17:59 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Github openshift oc pull 243 None closed Bug 1788016: Suppress nil pointer dereference for job spec 2020-06-17 06:18:51 UTC
Red Hat Product Errata RHBA-2020:0528 None None None 2020-02-25 06:18:12 UTC

Description Daein Park 2020-01-06 07:16:47 UTC
Description of problem:

When "oc status" is run, the following runtime error occurs. It appears that a nil value is not handled properly: job.Spec.Completions is an optional pointer and is dereferenced without a check.

https://github.com/openshift/oc/blob/release-4.2/pkg/helpers/describe/projectstatus.go#L950-L953
~~~
func describeJobStatus(job *batchv1.Job) string {
	timeAt := strings.ToLower(formatRelativeTime(job.CreationTimestamp.Time))   
	return fmt.Sprintf("created %s ago %d/%d completed %d running", timeAt, job.Status.Succeeded, *job.Spec.Completions, job.Status.Active)      <-- panics here when job.Spec.Completions is nil
}
~~~
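The fix merged for this bug (the PR 243 linked above) suppresses the dereference when the pointer is nil. Below is a minimal, self-contained sketch of such a guard; it uses hypothetical stand-in types rather than the real k8s.io/api/batch/v1 structs, drops the relative-time part for brevity, and the "<unset>" placeholder is an illustrative choice, not the wording of the actual patch:

```go
package main

import "fmt"

// Stand-in types for illustration only; the real definitions live in
// k8s.io/api/batch/v1 (batchv1.Job), where Spec.Completions is an
// optional *int32 and may legitimately be nil.
type JobSpec struct {
	Completions *int32
}

type JobStatus struct {
	Succeeded int32
	Active    int32
}

type Job struct {
	Spec   JobSpec
	Status JobStatus
}

// describeJobStatus checks the pointer before dereferencing it, so a Job
// whose .spec.completions is unset (for example, one created from a
// CronJob template that omits the field) no longer causes a segfault.
func describeJobStatus(job *Job) string {
	completions := "<unset>"
	if job.Spec.Completions != nil {
		completions = fmt.Sprintf("%d", *job.Spec.Completions)
	}
	return fmt.Sprintf("%d/%s completed %d running",
		job.Status.Succeeded, completions, job.Status.Active)
}

func main() {
	// A Job with Completions left nil, as in the reported panic.
	fmt.Println(describeJobStatus(&Job{Status: JobStatus{Succeeded: 1}}))
	// → 1/<unset> completed 0 running
}
```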

* The panic output is as follows.
~~~
$ oc project openshift-logging
$ oc status
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x19e8a28]

goroutine 1 [running]:
github.com/openshift/oc/pkg/helpers/describe.describeJobStatus(0xc001512000, 0xc0017ae660, 0xc001abd700)
  /go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:952 +0xc8
github.com/openshift/oc/pkg/helpers/describe.describeStandaloneJob(0x2e11ca0, 0xc000396ce0, 0xc001adaf30, 0xc001a84498, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, ...)
  /go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:946 +0x61a
github.com/openshift/oc/pkg/helpers/describe.(*ProjectStatusDescriber).Describe.func1(0xc00112a160, 0x2e0bfc0, 0xc000e01c50)
  /go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:442 +0x1b25
github.com/openshift/oc/pkg/helpers/describe.tabbedString(0xc001abf768, 0x4d22808, 0x0, 0x0, 0x2e53c20)
  /go/src/github.com/openshift/oc/pkg/helpers/describe/helpers.go:37 +0xb0
github.com/openshift/oc/pkg/helpers/describe.(*ProjectStatusDescriber).Describe(0xc0012c0000, 0xc000f15a40, 0x11, 0x0, 0x0, 0xc001138720, 0xc001138750, 0xc001112a00, 0x39)
  /go/src/github.com/openshift/oc/pkg/helpers/describe/projectstatus.go:266 +0xcec
github.com/openshift/oc/pkg/cli/status.StatusOptions.RunStatus(0xc000f15a40, 0x11, 0x0, 0x0, 0x0, 0xc0012c0000, 0x0, 0xc000663948, 0x7, 0x290533b, ...)
  /go/src/github.com/openshift/oc/pkg/cli/status/status.go:219 +0x40b
github.com/openshift/oc/pkg/cli/status.NewCmdStatus.func1(0xc000d2b680, 0x4d22808, 0x0, 0x0)
  /go/src/github.com/openshift/oc/pkg/cli/status/status.go:86 +0x1c1
github.com/openshift/oc/vendor/github.com/spf13/cobra.(*Command).execute(0xc000d2b680, 0x4d22808, 0x0, 0x0, 0xc000d2b680, 0x4d22808)
  /go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:760 +0x2ae
github.com/openshift/oc/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000d2a500, 0x2, 0xc000d2a500, 0x2)
  /go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:846 +0x2ec
github.com/openshift/oc/vendor/github.com/spf13/cobra.(*Command).Execute(...)
  /go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:794
main.main()
  /go/src/github.com/openshift/oc/cmd/oc/oc.go:103 +0x815
~~~

* "openshift-logging" project information
~~~
$ oc get all
NAME                                                READY   STATUS      RESTARTS   AGE
pod/cluster-logging-operator-7bbbdd6668-xxxxx       1/1     Running     0          1d
pod/curator-1578195068-xxxxx                        0/1     Completed   0          23h
pod/elasticsearch-cdm-gkswpz5r-1-84bf4b7d68-xxxxx   2/2     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/fluentd-xxxxx                                   1/1     Running     0          1d
pod/kibana-bc9b88968-xxxxx                          2/2     Running     0          1d

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/elasticsearch           ClusterIP   172.30.68.10     <none>        9200/TCP    1d
service/elasticsearch-cluster   ClusterIP   172.30.68.11     <none>        9300/TCP    1d
service/elasticsearch-metrics   ClusterIP   172.30.68.12     <none>        60000/TCP   1d
service/fluentd                 ClusterIP   172.30.68.13     <none>        24231/TCP   1d
service/kibana                  ClusterIP   172.30.68.14     <none>        443/TCP     1d

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/fluentd   5         5         5       5            5           kubernetes.io/os=linux   1d

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-logging-operator       1/1     1            1           1d
deployment.apps/elasticsearch-cdm-gkswpz68-1   1/1     1            1           1d
deployment.apps/kibana                         1/1     1            1           1d

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/cluster-logging-operator-7bbbdd6668       1         1         1       1d
replicaset.apps/elasticsearch-cdm-gkswpz5r-1-84bf4b7d68   1         1         1       1d
replicaset.apps/kibana-bc9b88968                          1         1         1       1d

NAME                           COMPLETIONS   DURATION   AGE
job.batch/curator-1578195068   1/1           9s         23h

NAME                    SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/curator   30 3 * * *   False     0        23h             1d

NAME                              HOST/PORT                                         PATH   SERVICES   PORT    TERMINATION          WILDCARD
route.route.openshift.io/kibana   kibana-openshift-logging.apps.example.com                kibana     <all>   reencrypt/Redirect   None
~~~

Version-Release number of selected component (if applicable):

Client Version: openshift-clients-4.2.2-201910250432-4-g4ac90784
Server Version: 4.2.10
Kubernetes Version: v1.14.6+17b1cc6

How reproducible:

This issue occurred on a customer's system; the exact reproduction steps are not known.

Steps to Reproduce:
1.
2.
3.

Actual results:

"oc status" fails with the runtime error shown above.

Expected results:

"oc status" shows all information for the "openshift-logging" project without errors.

Additional info:

Comment 4 zhou ying 2020-02-13 05:13:25 UTC
Can't reproduce the issue now:

[zhouying@dhcp-140-138 ~]$ oc version -o yaml
clientVersion:
  buildDate: "2020-02-12T17:56:47Z"
  compiler: gc
  gitCommit: 1d7211968ceb449852d10d12eae92ed5c33d8b48
  gitTreeState: clean
  gitVersion: v4.3.2
  goVersion: go1.12.12
  major: ""
  minor: ""
  platform: linux/amd64

[zhouying@dhcp-140-138 ~]$ oc status
In project zhouyt on server https://api.yinzhou13.qe.devcluster.openshift.com:6443

job/pi manages openshift/perl-516-centos7
  


1 info identified, use 'oc status --suggest' to see details.

Comment 6 errata-xmlrpc 2020-02-25 06:17:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0528

