Bug 1383208

Summary: [userinterface_public_714]When removing volume info attached to k8s deployment, old Replica Sets will be deleted
Product: OKD
Reporter: Yadan Pei <yapei>
Component: Management Console
Assignee: Samuel Padgett <spadgett>
Status: CLOSED NOTABUG
QA Contact: Yadan Pei <yapei>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.x
CC: aos-bugs, mmccomas
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-10 13:43:18 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Yadan Pei 2016-10-10 08:07:19 UTC
Description of problem:
For a k8s Deployment object, removing volume info attached to it also deletes old Replica Sets. However, for DeploymentConfigurations, removing volume info does not delete old ReplicationControllers.

Version-Release number of selected component (if applicable):
openshift v1.4.0-alpha.0+e76e0e8 (latest origin-web-console, the latest commit is 6a78401, manually vendored)
kubernetes v1.4.0+776c994
etcd 3.1.0-alpha.1

How reproducible:
Always

Steps to Reproduce:
1. Create PV and PVC
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/nfs/nfs-recycle-rwo.json -n default
persistentvolume "nfs" created
# oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/persistent-volumes/nfs/claim-rwo.json
persistentvolumeclaim "nfsc" created
2. Create deployment
# cat >> hello-deployment-1.yaml << EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift
        ports:
        - containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
    type: RollingUpdate
EOF
# oc create -f hello-deployment-1.yaml
deployment "hello-openshift" created
3. Wait for the #1 update to complete and all pods to be ready
4. Go to Applications -> Deployments -> select 'hello-openshift' -> Actions -> Attach Storage page
Select persistent volume claim: nfsc
Click Attach
5. The #2 update will be triggered and a new Replica Set will be created
6. Go to Applications -> Deployments -> select 'hello-openshift' page, check all versions of deployment
7. Remove the volume attached to deployment/hello-openshift
# oc volume deployment/hello-openshift --all
deployments/hello-openshift
  pvc/nfsc (waiting for 5GiB allocation) as volume-wz6yg
# oc volume deployment/hello-openshift --remove --name=volume-wz6yg
deployment "hello-openshift" updated
8. The #3 update will be triggered and a new Replica Set will be created
9. Go to Applications -> Deployments -> select 'hello-openshift' page, check all versions of deployment

Actual results:
6. Versions #1 and #2, including their Replica Sets, are both shown correctly
9. Only the #2 and #3 Replica Sets are shown; the #1 Replica Set is removed and not shown in the table


Expected results:
9. All versions of Replica Sets should be shown: #1, #2 and #3
The #1 Replica Set should not be deleted

Additional info:
When using 'oc get rs' to check replica sets, only 2 Replica Sets remain:
# oc get rs
NAME                         DESIRED   CURRENT   READY     AGE
hello-openshift-2049296760   4         4         4         8m
hello-openshift-807487853    0         0         0         6m

Comment 1 Samuel Padgett 2016-10-10 13:43:18 UTC
Replica sets are reused when possible, so this is working as expected. The first replica set was reused for revision #3. Nothing was actually deleted.

There are upstream changes coming that make the full revision history visible with commands like `oc rollout history`. See

https://github.com/kubernetes/kubernetes/issues/33844
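The reuse behavior described in comment 1 can be sketched as follows. The Deployment controller identifies a matching ReplicaSet by hashing the pod template (Kubernetes uses an FNV hash of the template struct; the SHA-1 below and the simplified spec dicts, including the `/data` mount path, are illustrative assumptions, not the actual objects). Because removing the volume returns the template to its revision-#1 shape, the controller adopts the existing #1 ReplicaSet for revision #3 instead of creating a third one:

```python
import hashlib
import json

def template_hash(pod_template):
    # Sketch of the controller's template matching: identical pod
    # templates produce identical hashes, so an existing ReplicaSet
    # with the same template is reused rather than a new one created.
    serialized = json.dumps(pod_template, sort_keys=True).encode()
    return hashlib.sha1(serialized).hexdigest()[:10]

# Revision #1: original template, no volumes
rev1 = {"containers": [{"name": "hello-openshift",
                        "image": "openshift/hello-openshift"}]}

# Revision #2: pvc/nfsc attached as volume-wz6yg
# (mount path /data is a hypothetical value for illustration)
rev2 = {"containers": [{"name": "hello-openshift",
                        "image": "openshift/hello-openshift",
                        "volumeMounts": [{"name": "volume-wz6yg",
                                          "mountPath": "/data"}]}],
        "volumes": [{"name": "volume-wz6yg",
                     "persistentVolumeClaim": {"claimName": "nfsc"}}]}

# Revision #3: volume removed again; template is identical to #1
rev3 = {"containers": [{"name": "hello-openshift",
                        "image": "openshift/hello-openshift"}]}

# #3 matches #1, so the #1 ReplicaSet is reused (not deleted);
# only two distinct ReplicaSets ever exist, matching 'oc get rs'.
assert template_hash(rev1) == template_hash(rev3)
assert template_hash(rev1) != template_hash(rev2)
```

This is why `oc get rs` in the "Additional info" section lists only two ReplicaSets: nothing was deleted, the older one simply carries two revisions.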