Bug 1608448 - It's really hard to determine the status of a DC or deployment or stateful set or daemonset from 'kubectl|oc get'
Summary: It's really hard to determine the status of a DC or deployment or stateful set or daemonset from 'kubectl|oc get'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Master
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.1.0
Assignee: Michal Fojtik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-25 14:38 UTC by Clayton Coleman
Modified: 2019-06-04 10:40 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:40:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID                                Private   Priority   Status   Summary   Last Updated
Red Hat Product Errata RHBA-2019:0758    0         None       None     None      2019-06-04 10:40:28 UTC

Description Clayton Coleman 2018-07-25 14:38:14 UTC
Figuring out whether a deployment, dc, statefulset, or ds is "ok", "not ok", or "being updated" is too hard from kubectl/oc get.

I think we should add two columns immediately to the right of NAME, as pods have, called "Ready" and "Status", which roll up the condition, readyReplicas, replicas, and unavailableReplicas info from status.

We can then potentially drop some of the other count columns.

This would massively improve an admin's ability to check workloads at a glance. Right now it sucks.
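
For illustration, the proposed rollup can be approximated today with kubectl's custom-columns output (a sketch only; the column set here is illustrative, and the real fix belongs in the built-in printers):

kubectl get deploy -o custom-columns=NAME:.metadata.name,READY:.status.readyReplicas,DESIRED:.spec.replicas,UNAVAILABLE:.status.unavailableReplicas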

Comment 2 Michal Fojtik 2018-08-13 09:15:32 UTC
Not a 3.11 blocker; moving to the 4.0 bucket.

Comment 3 Juan Vallejo 2018-09-13 17:53:57 UTC
Upstream issue tracking this: https://github.com/kubernetes/kubernetes/issues/68623

Comment 4 Juan Vallejo 2019-01-29 16:21:55 UTC
Upstream PR https://github.com/kubernetes/kubernetes/pull/70466

Comment 6 Maciej Szulik 2019-04-24 14:08:54 UTC
We got this with 1.13 rebase, moving to qa.

Comment 7 zhou ying 2019-04-25 02:27:54 UTC
Confirmed with the latest OCP; the issue has been fixed.
oc version:

Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.0", GitCommit:"74c534b60", GitTreeState:"", BuildDate:"2019-04-21T21:13:18Z", GoVersion:"", Compiler:"", Platform:""} 

kubectl version

[root@dhcp-140-138 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

[root@dhcp-140-138 ~]# kubectl get sts
NAME                READY   AGE
hello-statefulset   2/2     72s
[root@dhcp-140-138 ~]# kubectl get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
busybox   0/1     1            0           101m
[root@dhcp-140-138 ~]# oc get sts
NAME                READY   AGE
hello-statefulset   2/2     9m16s
[root@dhcp-140-138 ~]# oc get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
busybox   0/1     1            0           109m
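
The new READY column is the rollup of the underlying status fields; for reference, the same ratio can be read directly with jsonpath (a sketch, using the statefulset from the run above):

[root@dhcp-140-138 ~]# kubectl get sts hello-statefulset -o jsonpath='{.status.readyReplicas}/{.spec.replicas}{"\n"}'
2/2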

Comment 9 errata-xmlrpc 2019-06-04 10:40:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

