Bug 1386452

Summary: [userinterface_public_714] After a Deployment is deleted, its Replica Sets are treated as standalone Replica Sets
Product: OKD Reporter: Yadan Pei <yapei>
Component: Management Console Assignee: Samuel Padgett <spadgett>
Status: CLOSED CURRENTRELEASE QA Contact: XiaochuanWang <xiaocwan>
Severity: medium Docs Contact:
Priority: low    
Version: 3.x CC: aos-bugs, jforrest, mmccomas, spadgett, wmeng, wsun, xiaocwan
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-09-20 13:37:51 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Attachments:
Description Flags
BeforeDeletingDeployment
none
AfterDeletingDeployment
none
replicasets shows after deployment deleted
none
Delete options required for cascade
none
http request for deleting deployment
none
request payload for DELETE deployment
none

Description Yadan Pei 2016-10-19 02:08:07 UTC
Description of problem:
After a Deployment is deleted, the deployment is no longer shown on the Applications -> Deployments page, and its Replica Sets are shown in the Replica Sets table

Version-Release number of selected component (if applicable):
latest origin-web-console based at latest commit at 74bd3c4, pull request #671

How reproducible:
Always

Steps to Reproduce:
1. Create a Deployment and wait for all pods to start
# cat >> hello-deployment-1.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift
        ports:
        - containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
    type: RollingUpdate
EOF
# oc create -f hello-deployment-1.yaml
# oc get rs
NAME                         DESIRED   CURRENT   READY     AGE
hello-openshift-2049296760   4         4         4         10m
2. Go to the Deployments page: Applications -> Deployments -> hello-openshift
3. Delete deployment/hello-openshift by clicking Actions -> Delete, and confirm the deletion in the dialog
4. Access the deleted deployment/hello-openshift via the URL /console/project/yapei-us/browse/deployment/hello-openshift?tab=details after deletion

Actual results:
4. Gives the error message "This deployment can not be found, it may have been deleted. Any remaining deployment history for this deployment will be shown", but its Replica Sets are not shown on this page.
Instead, its Replica Sets are shown in the Replica Sets table on the Applications -> Deployments page.

Expected results:
4. The deleted deployment should be shown on the Deployments page with a warning icon, and its Replica Sets should be grouped under the deployment instead of shown in the Replica Sets table

Additional info:

Comment 1 Yadan Pei 2016-10-19 02:08:49 UTC
Created attachment 1211952 [details]
BeforeDeletingDeployment

Comment 2 Yadan Pei 2016-10-19 02:09:11 UTC
Created attachment 1211953 [details]
AfterDeletingDeployment

Comment 3 Samuel Padgett 2016-10-19 12:33:55 UTC
Currently there is no way for us to know which deployment a replica set belonged to once the deployment is deleted. We will need upstream support before we can add that. See

https://github.com/kubernetes/kubernetes/issues/33845

Comment 4 Yadan Pei 2016-10-20 08:56:23 UTC
Aren't the selectors used to determine which deployment an RS belongs to? When I create a deployment whose selector matches an RS, the RS is grouped under that deployment.

Comment 5 Samuel Padgett 2016-10-24 19:03:10 UTC
We match deployment selector to the labels in the replica set pod template. But once the deployment is deleted, we no longer have the deployment selector to match.
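As a sketch of the matching described above (the function and data names are illustrative assumptions, not the actual origin-web-console code): the console can only group a replica set under a deployment while the deployment's selector is still available to compare against the replica set's pod template labels.

```python
# Hypothetical sketch of selector-based grouping. The deployment's selector
# is matched against the labels in the replica set's pod template; once the
# deployment is deleted, there is no selector left to match against.

def selector_matches(selector, pod_template_labels):
    """True when every key/value pair in the selector appears in the labels."""
    return all(pod_template_labels.get(k) == v for k, v in selector.items())

deployment_selector = {"app": "hello-openshift"}
rs_pod_template_labels = {"app": "hello-openshift", "pod-template-hash": "2049296760"}

print(selector_matches(deployment_selector, rs_pod_template_labels))  # True
```

Deleting the deployment removes `deployment_selector` from the picture entirely, which is why the replica set falls back to the standalone table.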

Comment 6 Samuel Padgett 2016-10-24 19:43:23 UTC
Marking upcoming release since there is nothing we can do until

https://github.com/kubernetes/kubernetes/issues/33845

is fixed.

Comment 7 Yadan Pei 2016-10-31 09:41:32 UTC
Changing priority to low since the fix will go in an upcoming release

Comment 8 Jessica Forrester 2017-01-03 14:46:45 UTC
https://github.com/kubernetes/kubernetes/pull/35676 went into kube 1.6 so this can't be fixed until 3.6

Comment 9 Jessica Forrester 2017-03-23 13:31:11 UTC
Still waiting on 1.6 rebase

Comment 10 Samuel Padgett 2017-05-10 19:06:30 UTC
With `oc delete --cascade=false`, the owner reference is removed from the replica set. So there is still no good way to know that this replica set was once part of a specific deleted deployment, even after the 1.6 rebase.

We have updated the web console to cascade the delete, however. See

https://github.com/openshift/origin-web-common/pull/64
https://github.com/openshift/origin-web-console/pull/1534

This means that the only time you should have orphaned replica sets is if you explicitly asked for them by using `--cascade=false` in the CLI. In these cases, I don't think it's unreasonable to treat them as standalone replica sets.

To restate, the fix we've made is to correctly support cascading delete in the web console so that the replica sets are not orphaned. If you use `oc delete --cascade=false`, however, you will still see the standalone replica sets.
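The decision described in this comment can be sketched as follows (a minimal illustration assuming the post-1.6 `metadata.ownerReferences` shape; the helper name is hypothetical, not the console source): a replica set with no controller owner reference of kind Deployment is treated as standalone, which is exactly what `--cascade=false` produces.

```python
# Sketch: decide whether a replica set should be shown as standalone based
# on its owner references. With cascading delete enabled in the console,
# only replica sets explicitly orphaned via `oc delete --cascade=false`
# should ever reach the standalone branch.

def is_standalone(replica_set):
    refs = replica_set.get("metadata", {}).get("ownerReferences", [])
    return not any(r.get("controller") and r.get("kind") == "Deployment" for r in refs)

owned = {"metadata": {"ownerReferences": [
    {"kind": "Deployment", "name": "hello-openshift", "controller": True}]}}
orphaned = {"metadata": {}}  # owner reference removed by --cascade=false

print(is_standalone(owned))     # False
print(is_standalone(orphaned))  # True
```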

Comment 11 Jessica Forrester 2017-05-10 19:10:51 UTC
origin bug, moving directly to ON_QA

Comment 12 XiaochuanWang 2017-05-11 05:29:29 UTC
Reproduced on latest Origin with a manual vendor:
openshift/oc v3.6.0-alpha.1+561ef98-461-dirty
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

Merged PR:
[origin-web-console]# git log | grep "origin-web-common to v0.0."
    Bump origin-web-common to v0.0.20

Please refer to the screenshot "rs-after-deployment-deleted.png"
Is there any other related PR to be merged except for https://github.com/openshift/origin-web-console/pull/1534?

Comment 13 XiaochuanWang 2017-05-11 05:31:11 UTC
Created attachment 1277746 [details]
replicasets shows after deployment deleted

Comment 14 Samuel Padgett 2017-05-11 11:57:36 UTC
You shouldn't need to manually vendor. origin is updated after any PR merges in origin-web-console.

I can't reproduce what you're seeing. Is there any chance the deployment was created before you updated origin? If it's an old deployment, the replica set might not have the owner reference.

To check: before deleting the deployment, edit the replica set YAML and see if there is a metadata.ownerReferences entry that points back to the deployment. If there isn't, cascading delete won't work.

Comment 15 XiaochuanWang 2017-05-12 05:33:19 UTC
This is still reproduced on latest origin: 
OpenShift Master:    v3.6.0-alpha.1+561ef98-461
Kubernetes Master:    v1.6.1+5115d708d7 

The metadata.ownerReferences exists in the RS:
    ownerReferences:
    - apiVersion: extensions/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: hello-openshift
      uid: fbe7ab95-36c2-11e7-ad3e-fa163eaefe1d

Also checked that the PR (origin-web-console/pull/1534) has been merged into OCP v3.6.74, but the issue still reproduces there.
OpenShift Master:     v3.6.74
Kubernetes Master:    v1.6.1+5115d708d7

Comment 16 Samuel Padgett 2017-05-12 18:14:09 UTC
I noticed the vendor failed for this change (and everything since). Is it possible you don't have the fix?

https://ci.openshift.redhat.com/jenkins/job/vendor_origin_web_console/514/

Can you check if `propagationPolicy: "Foreground"` is being passed in the delete options? You will need to use the browser developer tools.
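For reference, a minimal sketch of the DELETE request body the console is expected to send for a foreground cascading delete; DeleteOptions with propagationPolicy is the standard Kubernetes API shape, though the exact set of fields the console sends is not shown here and may differ.

```python
import json

# Kubernetes DeleteOptions for a foreground cascading delete. Valid
# propagationPolicy values are "Orphan", "Background", and "Foreground".
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Foreground",
}

body = json.dumps(delete_options)
print(body)
```

If the network tab shows a DELETE request with no payload resembling this, the fix is likely not present in the build under test.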

Comment 17 Samuel Padgett 2017-05-12 18:16:19 UTC
Created attachment 1278294 [details]
Delete options required for cascade

Comment 18 Samuel Padgett 2017-05-12 18:17:16 UTC
Can you verify you're seeing the delete options in the attached screenshot using the network tab in Chrome developer tools?

Comment 19 XiaochuanWang 2017-05-15 06:19:39 UTC
Created attachment 1278787 [details]
http request for deleting deployment

Comment 20 XiaochuanWang 2017-05-15 06:25:13 UTC
Please see the attachment "http request for deleting deployment"; no request payload was found.

Tested on Origin both without and with the manual vendor version:
OpenShift Master: v3.6.0-alpha.1+561ef98-461-dirty
Kubernetes Master: v1.6.1+5115d708d7

The origin-web-console includes the code:
# git  log |grep "origin-web-common to v0.0."
    Bump origin-web-common to v0.0.20

Comment 21 Samuel Padgett 2017-05-15 15:44:03 UTC
I strongly suspect there was a problem with the manual vendor if there's no DELETE request body. It should always be there.

The Jenkins vendor job is working again. Please try again with latest origin master (without vendoring). The changes should be there now:

https://github.com/openshift/origin/commit/a740a1ac844b492947ad56d19ec6e4bd1ad83265

Comment 22 XiaochuanWang 2017-05-16 02:08:35 UTC
Created attachment 1279194 [details]
request payload for DELETE deployment

Comment 23 XiaochuanWang 2017-05-16 02:09:18 UTC
Thanks! Verified on latest Origin with AMI:
devenv-centos7_6170
openshift v3.6.0-alpha.1+97a9fcf-575

The replica set is gone when the deployment is deleted.
Attached the screenshot: attachment 1279194 [details]