Bug 1540560 - [trello Z96KTq9O] oc status should display deployment/rs/sts consistently with dc/rc
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.3.0
Assignee: Maciej Szulik
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-01-31 12:05 UTC by Xingxing Xia
Modified: 2020-01-23 11:04 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Missing DaemonSets describe code. Consequence: DaemonSets were not printed properly in oc status. Fix: Add DaemonSets, Deployments and Deployment Configs in status code. Result: DaemonSets, Deployments and Deployment Configs are printed properly in oc status.
Clone Of:
Environment:
Last Closed: 2020-01-23 11:03:45 UTC
Target Upstream Version:




Links
Github openshift oc pull 117 (last updated 2019-10-01 16:40:58 UTC)
Red Hat Product Errata RHBA-2020:0062 (last updated 2020-01-23 11:03:59 UTC)

Description Xingxing Xia 2018-01-31 12:05:29 UTC
Description of problem:
oc status should display deployment/rs/sts in the same consistent format as dc/rc

Version-Release number of selected component (if applicable):
v3.9.0-0.33.0

How reproducible:
Always

Steps to Reproduce:
1. Create dc, rc of multiple replicas
$ oc run mydc --image=openshift/hello-openshift --replicas=4
$ oc run myrc --image=openshift/hello-openshift --replicas=4 --generator=run-controller/v1

2. Create deployment/replicaset/statefulset of multiple replicas
$ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/tc536600/hello-deployment-1.yaml

$ oc create -f - <<API
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: frontend
        image: openshift/hello-openshift
API

$ oc create -f - <<API
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: aosqe/hello-openshift
        name: hello
API

3. When all pods are running, check oc status

Actual results:
3. It outputs dc and rc with summary lines, while it lists every pod of deployment/rs/sts individually without summarizing:
In project xxia-proj on server https://localhost:8443

dc/mydc deploys docker.io/openshift/hello-openshift:latest
  deployment #1 deployed 13 minutes ago - 4 pods

rc/myrc runs openshift/hello-openshift
  rc/myrc created 2 minutes ago - 4 pods

pod/hello-1 runs aosqe/hello-openshift

pod/hello-openshift-d944866b4-6nnt8 runs openshift/hello-openshift

pod/hello-openshift-d944866b4-x6895 runs openshift/hello-openshift

pod/hello-openshift-d944866b4-h2s9c runs openshift/hello-openshift

pod/frontend-hbkrd runs openshift/hello-openshift

pod/hello-0 runs aosqe/hello-openshift

pod/frontend-jcg99 runs openshift/hello-openshift

pod/hello-openshift-d944866b4-sk5nv runs openshift/hello-openshift

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Expected results:
3. It should display deployment/rs/sts like dc/rc with consistent format:
$controller-type/$controller-name runs $image-name
  $controller-type-related-words $age - $count pods
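The requested behavior can be sketched as follows (a hypothetical Python illustration, not the actual oc status implementation; `summarize` and its parameters are invented for this example): every controller type is rendered through the same two-line template.

```python
# Hypothetical sketch of the requested two-line summary format;
# this is NOT the actual oc status code.

def summarize(kind, name, image, related_words, age_words, pod_count):
    """Render any controller (dc/rc/deployment/rs/sts) in the same
    two-line format: a header naming the controller and image, and an
    indented detail line with age and pod count."""
    header = f"{kind}/{name} runs {image}"
    detail = f"  {related_words} {age_words} - {pod_count} pods"
    return header + "\n" + detail

# Mirrors the rc output shown in the actual results above:
print(summarize("rc", "myrc", "openshift/hello-openshift",
                "rc/myrc created", "2 minutes ago", 4))
# rc/myrc runs openshift/hello-openshift
#   rc/myrc created 2 minutes ago - 4 pods
```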

Additional info:

Comment 1 Juan Vallejo 2018-02-06 19:53:46 UTC
Origin PR: https://github.com/openshift/origin/pull/18439

Comment 2 Xingxing Xia 2018-02-09 08:09:09 UTC
The PR has just been merged. Will verify when the upcoming OCP 3.9 version is available.

Comment 3 Xingxing Xia 2018-02-09 09:36:25 UTC
Launched an Origin env in advance to test the trello card. Now, when a deployment (k8s) has an associated bc/svc/route, `oc status` displays the nested format well:
http://hello-openshift-xxia-proj.router.default.svc.cluster.local to pod port 80 (svc/hello-openshift)
  deployment/hello-openshift deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
    deployment #3 running for 43 seconds - 4 pods
    deployment #2 deployed 2 minutes ago
    deployment #1 deployed 6 minutes ago

However, there are several problems:
A. When there is no svc, the display is untested: the beginning word is "rc" (it should be "rs"), and old numbered RSs are seen. Steps to reproduce:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/tc536600/hello-deployment-1.yaml
oc expose deployment hello-openshift
oc set env deploy hello-openshift ENV1=VAL1 # trigger new deploy
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
oc set triggers deploy hello-openshift --from-image ruby-ex:latest --containers hello-openshift # trigger new deploy
oc delete svc hello-openshift # below will have no svc
oc status # get below output
rc/hello-openshift-86dfb4f8fb runs openshift/hello-openshift
  rs/hello-openshift-86dfb4f8fb created 18 minutes ago

rc/hello-openshift-7b667cc46f runs 172.30.230.8:5000/xxia-proj/ruby-ex@sha256:71d054b8c27cd916c895a477b343fffc96944ce0916fd6d42b41107ab62f8a56
  rs/hello-openshift-7b667cc46f created 16 minutes ago - 4 pods

rc/hello-openshift-d944866b4 runs openshift/hello-openshift
  rs/hello-openshift-d944866b4 created 22 minutes ago

B. When an HPA is created and its deployment is deleted, oc status does not show "hpa/... is attempting to scale Deployment... which doesn't exist", as in the result at https://bugzilla.redhat.com/show_bug.cgi?id=1532289#c7

C. sts/ds are still not nested in `oc status`. Will you support nesting them?

Comment 4 Xingxing Xia 2018-02-09 09:45:14 UTC
(In reply to Xingxing Xia from comment #3)
> However, there are several problems:
> A. when no svc, the display is not tested, and the beginning word is "rc",
Supplement:
D. A standalone ReplicaSet has the same "rc" beginning-word issue:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/replicaSet/tc536601/replicaset.yaml
oc status # get output:
...
rc/frontend runs openshift/hello-openshift
  rs/frontend created 6 seconds ago - 0/3 pods

Comment 5 Michal Fojtik 2018-02-09 15:14:10 UTC
Fixed: https://github.com/openshift/origin/pull/18549

I need to dig deeper into the HPA issue. Can you please verify the original bug and create a new one for the HPA? That one is probably not a 3.9 blocker.

Comment 6 Xingxing Xia 2018-02-11 06:26:03 UTC
Because the OCP puddle v3.9.0-0.43.0 is not built yet, I again launched an Origin env in advance (version v3.9.0-alpha.4+ffa6a47-270); points A and D above are now solved.
Will verify the bug when the new OCP puddle is ready, per the bug workflow.
For HPA issue, opened separate bug https://bugzilla.redhat.com/show_bug.cgi?id=1544183

Comment 7 XiaochuanWang 2018-02-23 03:41:15 UTC
Tested on oc v3.9.0-0.48.0
Follow comment 3.C: sts/ds still not nested in `oc status`, will you support nesting them?

The others are not reproduced.

Follow original steps: (not reproduced)
dc/mydc deploys docker.io/openshift/hello-openshift:latest 
  deployment #1 deployed 2 minutes ago - 4 pods

rc/myrc runs openshift/hello-openshift
  rc/myrc created 2 minutes ago - 4 pods

rs/frontend runs openshift/hello-openshift
  rs/frontend created 2 minutes ago - 2 pods

pod/hello-0 runs aosqe/hello-openshift

Follow comment 3.A: (not reproduced)
$ oc status
In project xiaocwan-p on server https://host-8-243-107.host.centralci.eng.rdu2.redhat.com:8443

svc/ruby-ex - 172.31.5.247:8080
  dc/ruby-ex deploys istag/ruby-ex:latest <-
    bc/ruby-ex source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest 
    deployment #1 deployed 18 seconds ago - 1 pod


Follow comment 3.B: (not reproduced)
Errors:
  * hpa/hello-openshift is attempting to scale DeploymentConfig/hello-openshift, which doesn't exist

Follow comment 3.C: (REPRODUCED)
# oc status -n kube-service-catalog
Expected ds (apiserver and controller-manager) but they are not displayed.

$ oc get sts
NAME      DESIRED   CURRENT   AGE
hello     2         2         1m
But `oc status` does not display the sts.

Follow comment 4: (not reproduced)
rs/frontend runs openshift/hello-openshift
  rs/frontend created 8 seconds ago - 3 pods

Comment 8 Juan Vallejo 2018-02-24 01:00:36 UTC
Comment 7 addressed in https://github.com/openshift/origin/pull/18723

Comment 9 XiaochuanWang 2018-02-27 02:48:59 UTC
Need to wait for version > v3.9.0-0.53.0

Comment 10 Xingxing Xia 2018-02-28 09:01:29 UTC
Verified in v3.9.1; deploy/rs/sts can now all nest well. Below is the result for sts brought by the comment 8 PR:
http://$ROUTE to pod port 8080 (svc/hello)
  statefulset/hello manages aosqe/hello-openshift
    created 30 minutes ago - 2 pods

However, only one minor issue remains: when a DS has a svc, the DS is not nested under its svc:
$ oc status # data see [1]
svc/hello-daemonset - 172.31.226.61:8080
  pod/hello-daemonset-rkwlm runs openshift/hello-openshift
  pod/hello-daemonset-75bnt runs openshift/hello-openshift
  pod/hello-daemonset-bbzhh runs openshift/hello-openshift
  pod/hello-daemonset-jt766 runs openshift/hello-openshift
  pod/hello-daemonset-x9s4f runs openshift/hello-openshift
  pod/hello-daemonset-qpcw4 runs openshift/hello-openshift

daemonset/hello-daemonset manages openshift/hello-openshift
  generation #1 running for 4 seconds - 0/7 pods growing to 7

[1] data
$ cat ds_svc.yaml
kind: List
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    labels:
      name: hello-daemonset
    name: hello-daemonset
  spec:
    selector:
      matchLabels:
        name: hello-daemonset
    template:
      metadata:
        labels:
          name: hello-daemonset
      spec:
        containers:
        - image: openshift/hello-openshift
          name: hello-openshift
          ports:
          - containerPort: 8080
            protocol: TCP
- apiVersion: v1
  kind: Service
  metadata:
    name: hello-daemonset
  spec:
    ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      name: hello-daemonset

$ oc create -f ds_svc.yaml

Comment 11 Juan Vallejo 2018-03-05 22:55:19 UTC
Origin PR for comment 10: https://github.com/openshift/origin/pull/18848

Comment 12 Xingxing Xia 2018-03-09 01:11:23 UTC
The commit lands in OCP v3.10.0-0.5.0; will verify when a 3.10 puddle is available.

Comment 13 Xingxing Xia 2018-03-12 09:06:21 UTC
Per comment 12, move to MODIFIED.
Because the major issues of the bug are fixed and the issue in comments 10~12 is only a minor display issue for ds associated with a svc, I'd like to lower the severity so that the card can move to Accepted.

Comment 14 Xingxing Xia 2018-04-08 07:05:49 UTC
Checked comment 10's issue in v3.10.0-0.15.0; the display for ds is now:
# oc status
In project xxia-proj on server https://172.16.120.39:8443

svc/hello-daemonset - 172.30.172.17:8080
  daemonset/hello-daemonset manages openshift/hello-openshift
    generation #1 running for 4 minutes - 1/2 pods growing to 2
  pod/hello-daemonset-rhnbr runs openshift/hello-openshift
  pod/hello-daemonset-ptfjm runs openshift/hello-openshift

Above displays lines for pod replicas, this is unlike DC. The lines for pod replicas are better to be removed, to be consistent with DC.

Comment 15 Juan Vallejo 2018-04-09 17:48:05 UTC
> Above displays lines for pod replicas, this is unlike DC. The lines for pod replicas are better to be removed, to be consistent with DC.

Why? I think displaying pod replicas might be useful to know for a daemonset

Comment 16 Xingxing Xia 2018-04-10 02:58:32 UTC
(In reply to Juan Vallejo from comment #15)
> Why? I think displaying pod replicas might be useful to know for a daemonset

My "lines for pod replicas...better to be removed" does not mean THIS line:
    generation #1 running for 4 minutes - 1/2 pods growing to 2
This line is indeed useful and needs to be kept. Rather, it means THESE lines:
  pod/hello-daemonset-rhnbr runs openshift/hello-openshift
  pod/hello-daemonset-ptfjm runs openshift/hello-openshift
THESE lines don't appear for DC. Besides, if there are many nodes, there will be many ds pods, so these lines will multiply, as in comment 10. Currently the fix just MOVES all the lines displayed in comment 10 into the nest; it still needs to REMOVE the per-pod lines of "pod/hello-daemonset-...runs openshift/hello-openshift"
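The point can be sketched as follows (a hypothetical Python illustration, not the actual oc code; `collapse_ds_pods` and the pod dicts are invented for this example): the per-pod lines of a daemonset collapse into a single aggregate count, the way dc output summarizes its pods.

```python
# Hypothetical sketch, not the actual oc implementation: the per-pod
# "pod/... runs ..." lines of a daemonset are dropped, and only one
# aggregate summary with a ready/total count is printed. The
# "generation #1" text is fixed here purely for illustration.

def collapse_ds_pods(ds_name, image, pods, age_words):
    """Return only the daemonset summary lines; individual pod lines
    are replaced by a ready/total count."""
    ready = sum(1 for p in pods if p["ready"])
    return (f"daemonset/{ds_name} manages {image}\n"
            f"  generation #1 running for {age_words} - {ready}/{len(pods)} pods")

pods = [{"name": "hello-daemonset-rhnbr", "ready": True},
        {"name": "hello-daemonset-ptfjm", "ready": False}]
print(collapse_ds_pods("hello-daemonset", "openshift/hello-openshift",
                       pods, "4 minutes"))
# daemonset/hello-daemonset manages openshift/hello-openshift
#   generation #1 running for 4 minutes - 1/2 pods
```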

Comment 19 Xingxing Xia 2019-10-25 09:00:37 UTC
Verified in oc of openshift-clients-4.3.0-201910240917.git.1.265278a.el7.x86_64
$ oc status
In project xxia-proj on server https://$SERVER:6443

svc/hello-daemonset - 172.30.10.215:8080
  daemonset/hello-daemonset manages openshift/hello-openshift
    generation #1 running for about a minute - 2 pods
...

Comment 21 errata-xmlrpc 2020-01-23 11:03:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0062

