Description of problem:
`oc get all` outputs several empty lines in between objects

Version-Release number of selected component (if applicable):
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v4.2.0-alpha.0-39-g911ae06", GitCommit:"911ae06dd", GitTreeState:"clean", BuildDate:"2019-08-29T09:55:19Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0+be7ae76", GitCommit:"be7ae76", GitTreeState:"clean", BuildDate:"2019-08-26T09:22:11Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
OpenShift Version: 4.2.0-0.ci-2019-08-26-110123

How reproducible:
always

Steps to Reproduce:
1. oc get all

Actual results:
$ oc get all
NAME                   READY   STATUS      RESTARTS   AGE
pod/busyapp-1-5xtdf    1/1     Running     0          8m39s
pod/busyapp-1-7fh47    1/1     Running     0          8m39s
pod/busyapp-1-c7g9l    1/1     Running     0          8m39s
pod/busyapp-1-deploy   0/1     Completed   0          8m44s
pod/busyapp-1-m2j6v    1/1     Running     0          8m39s
pod/busyapp-1-pxzpc    1/1     Running     0          8m39s

NAME                              DESIRED   CURRENT   READY   AGE
replicationcontroller/busyapp-1   5         5         5       8m44s




NAME                                         REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/busyapp   1          5         5         config



$

Expected results:
without the newlines

Additional info:
I guess those match the rest of the resources in `all` that had no objects in this namespace.
registry.svc.ci.openshift.org/ocp/release:4.1.0-0.ci-2019-08-28-023007 was fine btw.
Also tried it with an official build (registry.svc.ci.openshift.org/ocp/release:4.2.0-0.ci-2019-08-29-071726) and got the same extra newlines as reported.
Sally, you need to check the status code. I suspect that when resources are missing we don't print them, but somehow empty lines are still printed, which makes the output look bad. This is best verified in the default namespace (`oc get all -n default`), where there are not that many resources.
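To illustrate the hypothesis, here is a minimal Go sketch of the kind of printing loop that would produce this symptom, together with the guard that avoids it. This is hypothetical code, not the actual oc printer; the resourceList type and printAll function exist only for this example:

package main

import "fmt"

// resourceList stands in for the per-type results that `oc get all`
// gathers; in the real client these come back from the API server.
type resourceList struct {
	kind string
	rows []string
}

// printAll mimics the suspected behavior: a separator blank line is
// emitted for every resource type, even when that type returned no
// objects, leaving runs of empty lines in the output. With skipEmpty
// set, empty types are skipped before any separator is printed.
func printAll(lists []resourceList, skipEmpty bool) {
	for _, l := range lists {
		if skipEmpty && len(l.rows) == 0 {
			continue // the fix: no output (and no separator) for empty types
		}
		if len(l.rows) > 0 {
			fmt.Println("NAME")
			for _, name := range l.rows {
				fmt.Println(l.kind + "/" + name)
			}
		}
		fmt.Println() // separator: printed unconditionally in the buggy path
	}
}

func main() {
	lists := []resourceList{
		{kind: "pod", rows: []string{"busyapp-1-5xtdf"}},
		{kind: "service", rows: nil}, // empty in this namespace
		{kind: "daemonset.apps", rows: nil},
		{kind: "deploymentconfig.apps.openshift.io", rows: []string{"busyapp"}},
	}
	fmt.Println("--- buggy ---")
	printAll(lists, false) // a blank line appears for each empty type
	fmt.Println("--- fixed ---")
	printAll(lists, true)
}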
Using oc extracted from 4.2.0-0.nightly-2019-08-29-170426, I could still reproduce this issue:

[zhouying@dhcp-140-138 Downloads]$ oc get all
NAME                               READY   STATUS      RESTARTS   AGE
pod/django-psql-example-1-7497p    1/1     Running     0          11m
pod/django-psql-example-1-build    0/1     Completed   0          13m
pod/django-psql-example-1-deploy   0/1     Completed   0          12m
pod/postgresql-1-8vc9d             1/1     Running     0          13m
pod/postgresql-1-deploy            0/1     Completed   0          13m

NAME                                          DESIRED   CURRENT   READY   AGE
replicationcontroller/django-psql-example-1   1         1         1       12m
replicationcontroller/postgresql-1            1         1         1       13m

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/django-psql-example   ClusterIP   172.30.147.200   <none>        8080/TCP   14m
service/postgresql            ClusterIP   172.30.43.134    <none>        5432/TCP   13m




NAME                                                     REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/django-psql-example   1          1         1         config,image(django-psql-example:latest)
deploymentconfig.apps.openshift.io/postgresql            1          1         1         config,image(postgresql:10)

NAME                                                 TYPE     FROM   LATEST
buildconfig.build.openshift.io/django-psql-example   Source   Git    1



But the oc installed from the yum repo (4.2.0-201908291419.git.1.f30753c.el7) works fine:

[zhouying@dhcp-140-138 Downloads]$ oc get all
NAME                               READY   STATUS      RESTARTS   AGE
pod/django-psql-example-1-7497p    1/1     Running     0          65s
pod/django-psql-example-1-build    0/1     Completed   0          3m5s
pod/django-psql-example-1-deploy   0/1     Completed   0          68s
pod/postgresql-1-8vc9d             1/1     Running     0          2m54s
pod/postgresql-1-deploy            0/1     Completed   0          3m2s

NAME                                          DESIRED   CURRENT   READY   AGE
replicationcontroller/django-psql-example-1   1         1         1       69s
replicationcontroller/postgresql-1            1         1         1       3m3s

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/django-psql-example   ClusterIP   172.30.147.200   <none>        8080/TCP   3m7s
service/postgresql            ClusterIP   172.30.43.134    <none>        5432/TCP   3m5s

NAME                                                     REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/django-psql-example   1          1         1         config,image(django-psql-example:latest)
deploymentconfig.apps.openshift.io/postgresql            1          1         1         config,image(postgresql:10)
This looks like a version discrepancy to me; please use the latest available build.
Confirmed with oc extracted from payload 4.2.0-0.nightly-2019-09-02-172410; the issue has been fixed:

[root@dhcp-140-138 ~]# oc get all -n openshift-sdn
NAME                       READY   STATUS    RESTARTS   AGE
pod/ovs-crq94              1/1     Running   1          21h
pod/ovs-dptxw              1/1     Running   2          21h
pod/ovs-kq64m              1/1     Running   1          21h
pod/ovs-lgpk6              1/1     Running   1          21h
pod/ovs-s8gt9              1/1     Running   2          21h
pod/sdn-4nfp2              1/1     Running   1          21h
pod/sdn-controller-24vtc   1/1     Running   1          21h
pod/sdn-controller-d5dtb   1/1     Running   2          21h
pod/sdn-controller-lrtkb   1/1     Running   2          21h
pod/sdn-ft9d7              1/1     Running   1          21h
pod/sdn-gbv77              1/1     Running   6          21h
pod/sdn-p9t74              1/1     Running   4          21h
pod/sdn-tgc84              1/1     Running   6          21h

NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/sdn   ClusterIP   None         <none>        9101/TCP   23h

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/ovs               5         5         5       5            5           kubernetes.io/os=linux            23h
daemonset.apps/sdn               5         5         5       5            5           kubernetes.io/os=linux            23h
daemonset.apps/sdn-controller    3         3         3       3            3           node-role.kubernetes.io/master=   23h
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922