Bug 1686838 - clarify behaviour of --force and --cascade in oc replace
Summary: clarify behaviour of --force and --cascade in oc replace
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 3.11.0
Hardware: Unspecified
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 3.11.z
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Duplicates: 1687902
Depends On:
Blocks:
 
Reported: 2019-03-08 13:16 UTC by daniel
Modified: 2023-10-06 18:10 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Deployment Config controller had broken adoption mechanism responsible for identifying owned replication controllers. Consequence: oc replace without --force was seeing misbehavior. Fix: Fix the adoption mechanism. Result: oc replace should properly remove dependent objects.
Clone Of:
Environment:
Last Closed: 2020-06-17 20:21:25 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2477 0 None None None 2020-06-17 20:21:43 UTC

Description daniel 2019-03-08 13:16:06 UTC
Description of problem:

looking at the man page of oc replace we can find the following:

~~~   
       --cascade=true
           If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController).  
           Default true.

       --force=false
           Only used when grace-period=0. If true, immediately remove resources from API and bypass graceful deletion. 
           Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
~~~

So the customer's understanding is the following:

--cascade=true should delete all deployments (and since this is the default according to the man page, it should not need to be added to the command). In other words, a plain `oc replace -f <>` without --force should already delete all existing dc's.


--force=true would only be needed to force the deletion if there were issues deleting without it.


However, it seems that

oc replace -f <> does not cascade:


~~~
[quicklab@master-0 dc-test]$ oc get all
NAME                              READY     STATUS    RESTARTS   AGE
pod/deployment-example-10-gxnwb   1/1       Running   0          27m

NAME                                          DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1    0         0         0         38m
replicationcontroller/deployment-example-10   1         1         1         27m
replicationcontroller/deployment-example-2    0         0         0         37m
replicationcontroller/deployment-example-3    0         0         0         36m
replicationcontroller/deployment-example-4    0         0         0         35m
replicationcontroller/deployment-example-5    0         0         0         33m
replicationcontroller/deployment-example-6    0         0         0         31m
replicationcontroller/deployment-example-7    0         0         0         31m
replicationcontroller/deployment-example-8    0         0         0         29m
replicationcontroller/deployment-example-9    0         0         0         28m

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.27.23   <none>        8080/TCP   38m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   9          1         0         config

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    38 minutes ago
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc replace -f deployment-example.yaml
deploymentconfig.apps.openshift.io/deployment-example replaced
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc get all
NAME                              READY     STATUS    RESTARTS   AGE
pod/deployment-example-10-gxnwb   1/1       Running   0          28m

NAME                                          DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1    0         0         0         39m
replicationcontroller/deployment-example-10   1         1         1         28m
replicationcontroller/deployment-example-2    0         0         0         38m
replicationcontroller/deployment-example-3    0         0         0         36m
replicationcontroller/deployment-example-4    0         0         0         35m
replicationcontroller/deployment-example-5    0         0         0         34m
replicationcontroller/deployment-example-6    0         0         0         32m
replicationcontroller/deployment-example-7    0         0         0         31m
replicationcontroller/deployment-example-8    0         0         0         29m
replicationcontroller/deployment-example-9    0         0         0         29m

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.27.23   <none>        8080/TCP   39m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   9          1         0         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    39 minutes ago
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc rollout history dc/deployment-example 
deploymentconfigs "deployment-example"
REVISION	STATUS		CAUSE
1		Complete	config change
2		Complete	config change
3		Complete	config change
4		Complete	config change
5		Complete	config change
6		Complete	config change
7		Complete	config change
8		Complete	config change
9		Complete	config change
10		Complete	config change

[quicklab@master-0 dc-test]$ 
~~~

So everything is still there, which seems to contradict the man page.
Running # oc replace --cascade=true -f deployment-example.yaml leads to the same result.

However, when running

$ oc replace --cascade=true --force=true -f deployment-example.yaml

we see the result that would be expected even without --force=true:

~~~
[quicklab@master-0 dc-test]$ oc get all 
NAME                              READY     STATUS    RESTARTS   AGE
pod/deployment-example-10-gxnwb   1/1       Running   0          34m

NAME                                          DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1    0         0         0         45m
replicationcontroller/deployment-example-10   1         1         1         34m
replicationcontroller/deployment-example-2    0         0         0         44m
replicationcontroller/deployment-example-3    0         0         0         43m
replicationcontroller/deployment-example-4    0         0         0         42m
replicationcontroller/deployment-example-5    0         0         0         40m
replicationcontroller/deployment-example-6    0         0         0         38m
replicationcontroller/deployment-example-7    0         0         0         37m
replicationcontroller/deployment-example-8    0         0         0         36m
replicationcontroller/deployment-example-9    0         0         0         35m

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.27.23   <none>        8080/TCP   45m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   9          1         0         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    45 minutes ago
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc replace --cascade=true --force=true -f deployment-example.yaml
deploymentconfig.apps.openshift.io "deployment-example" deleted
deploymentconfig.apps.openshift.io/deployment-example replaced
[quicklab@master-0 dc-test]$ oc get all 
NAME                              READY     STATUS              RESTARTS   AGE
pod/deployment-example-1-deploy   0/1       ContainerCreating   0          3s

NAME                                         DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1   0         0         0         3s

NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.27.23   <none>        8080/TCP   45m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   1          1         0         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    45 minutes ago
[quicklab@master-0 dc-test]$ oc rollout history dc/deployment-example 
deploymentconfigs "deployment-example"
REVISION	STATUS	CAUSE
1		Running	config change

[quicklab@master-0 dc-test]$ 

~~~




Version-Release number of selected component (if applicable):
oc v3.11.82
$ oc version 
oc v3.11.82
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://openshift.internal.311test2.lab.rdu2.cee.redhat.com:443
openshift v3.11.82
kubernetes v1.11.0+d4cacc0



How reproducible:

- see above

Steps to Reproduce:

- see above 

Actual results:

Without --force=true, dependent resources (at least for a DC) are not deleted, which seems to contradict the man page: it states that --cascade=true is the default, which makes one believe that all dependent resources should already be deleted.


Expected results:
Please clarify whether the man page or the oc behaviour is wrong, and correct one or the other.

Additional info:

Comment 1 Maciej Szulik 2019-03-08 14:19:55 UTC
In 3.11 the deletion of dependents happens on the server; it is the garbage collector's responsibility.
The client marks the object for removal, including the decision on how to deal with its dependents
(pods and replication controllers for deployments). The GC controller then periodically looks for
those objects and removes them. If the dependents tree is small, the time between oc delete and the
resources being gone should be relatively short, but with bigger trees it might take some time. Can
you verify whether the objects are actually removed eventually? Also, since you're invoking oc replace,
you might not see the moment when old objects are removed and new ones created in their place. Finally,
there's the question of whether GC is working as it should, which can easily be verified through oc
delete with the exact same set of flags as you use for oc replace, since they share the code
responsible for removing objects.
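
The division of labor described above (client states the deletion intent, server-side GC removes dependents) can be sketched roughly as follows. This is an illustrative Python sketch, not oc's actual code; the Background/Orphan mapping is an assumption for illustration:

```python
def delete_options(cascade=True, grace_period=None):
    """Illustrative sketch: the DeleteOptions body a client could send.

    The server-side GC controller, not the client, removes dependents,
    guided by propagationPolicy. The exact mapping below is an assumption,
    not oc's verbatim behavior.
    """
    opts = {"kind": "DeleteOptions", "apiVersion": "v1"}
    # --cascade=true asks the GC to collect dependents in the background;
    # --cascade=false orphans them instead.
    opts["propagationPolicy"] = "Background" if cascade else "Orphan"
    if grace_period is not None:
        # Per the man page, --grace-period=0 (together with --force)
        # bypasses graceful deletion.
        opts["gracePeriodSeconds"] = grace_period
    return opts
```

This also explains the observed delay: the client returns as soon as the object is marked; the dependents disappear only once the GC controller processes them.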

Comment 2 daniel 2019-03-11 12:13:32 UTC
(In reply to Maciej Szulik from comment #1)
> In 3.11 the deletion of dependents is happening on the server, it's garbage
> collection responsibility. Client marks object for removal including the
> decision how to deal with its dependents (pods and replication controllers
> for deployments). Then the GC controller periodically looks for those
> objects and removes them. If the dependents tree is small the time between
> oc delete and resources being gone should be relatively small, but with
> bigger trees it might take some time. Can you verify if the objects
> will actually be removed?

Well, I ran an # oc replace -f deployment-example.yaml last Friday afternoon, and checking a few minutes ago it still shows everything, i.e.
all DC and RC revisions, while the expectation was that --cascade=true (per the man page the default anyway) would remove them. And waiting roughly >50h
should actually be sufficient, or am I missing something?

> Also since you're invoking oc replace you might not see the moment when old
> objects are removed and new created in place of the old ones.

Well, but again, per the man page --cascade=true is set by default, which should delete all dependent (dc/rc/pod) resources. Pods are newly started,
that's fine, but at least my understanding is that DC and RC revisions should be deleted as well. This works if I run `$ oc replace --force=true -f deployment-example.yaml`:
all DC and RC revisions are gone. But as I understand the man page, --force should not be necessary to get those removed. Or perhaps I misunderstood the man page?

~~~
[quicklab@master-0 dc-test]$ oc replace -f deployment-example.yaml
deploymentconfig.apps.openshift.io/deployment-example replaced
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc get all
NAME                              READY     STATUS    RESTARTS   AGE
pod/deployment-example-11-mgwtc   1/1       Running   0          5m

NAME                                          DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1    0         0         0         11m
replicationcontroller/deployment-example-10   0         0         0         6m
replicationcontroller/deployment-example-11   1         1         1         5m
replicationcontroller/deployment-example-2    0         0         0         11m
replicationcontroller/deployment-example-3    0         0         0         10m
replicationcontroller/deployment-example-4    0         0         0         10m
replicationcontroller/deployment-example-5    0         0         0         9m
replicationcontroller/deployment-example-6    0         0         0         8m
replicationcontroller/deployment-example-7    0         0         0         7m
replicationcontroller/deployment-example-8    0         0         0         7m
replicationcontroller/deployment-example-9    0         0         0         7m

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.139.165   <none>        8080/TCP   11m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   11         1         1         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    11 minutes ago

-----

[quicklab@master-0 dc-test]$ oc replace --force=true -f deployment-example.yaml
deploymentconfig.apps.openshift.io "deployment-example" deleted
deploymentconfig.apps.openshift.io/deployment-example replaced
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc get all
NAME                              READY     STATUS              RESTARTS   AGE
pod/deployment-example-1-deploy   0/1       ContainerCreating   0          <invalid>
pod/deployment-example-11-mgwtc   1/1       Terminating         0          6m

NAME                                         DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1   0         0         0         <invalid>

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.139.165   <none>        8080/TCP   13m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   1          1         0         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    13 minutes ago
[quicklab@master-0 dc-test]$ 


~~~

So the expectation for the above would be the same result, possibly a bit faster with --force, but still with all DC and RC revisions cleaned up.

 
> Finally, there's a question if GC is working as it should, which can be
> easily verified through oc delete with the exact same set of flags as you
> use for oc replace, since they share the code responsible for removing
> objects.

Well, it seems to behave as intended:

~~~
[quicklab@master-0 dc-test]$ oc get all
NAME                              READY     STATUS    RESTARTS   AGE
pod/deployment-example-10-p9jch   1/1       Running   0          1m

NAME                                          DESIRED   CURRENT   READY     AGE
replicationcontroller/deployment-example-1    0         0         0         5m
replicationcontroller/deployment-example-10   1         1         1         1m
replicationcontroller/deployment-example-2    0         0         0         4m
replicationcontroller/deployment-example-3    0         0         0         4m
replicationcontroller/deployment-example-4    0         0         0         3m
replicationcontroller/deployment-example-5    0         0         0         3m
replicationcontroller/deployment-example-6    0         0         0         3m
replicationcontroller/deployment-example-7    0         0         0         2m
replicationcontroller/deployment-example-8    0         0         0         2m
replicationcontroller/deployment-example-9    0         0         0         1m

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.165.227   <none>        8080/TCP   5m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   10         1         1         config,image(deployment-example:latest)

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    5 minutes ago
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc get -o yaml dc/deployment-example --export > deployment-example.yaml    
[quicklab@master-0 dc-test]$ 
[quicklab@master-0 dc-test]$ oc delete -f deployment-example.yaml
deploymentconfig.apps.openshift.io "deployment-example" deleted
[quicklab@master-0 dc-test]$ oc get all
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.165.227   <none>        8080/TCP   6m

NAME                                                DOCKER REPO                                                TAGS      UPDATED
imagestream.image.openshift.io/deployment-example   docker-registry.default.svc:5000/test/deployment-example   latest    6 minutes ago
[quicklab@master-0 dc-test]$ 

~~~

so here all DC and RC revisions are removed.

Comment 3 Maciej Szulik 2019-03-14 15:23:21 UTC
*** Bug 1687902 has been marked as a duplicate of this bug. ***

Comment 4 Maciej Szulik 2019-03-29 15:47:32 UTC
So it looks like there was a bug in the DC adoption mechanism which might have caused this issue.
See https://bugzilla.redhat.com/show_bug.cgi?id=1620608 and https://github.com/openshift/origin/pull/22324 for the fix.
Moving to QA since that has already merged.
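
For context, DC-to-RC ownership is tracked via controller ownerReferences; broken adoption meant some RCs never got an owner, so cascading deletion could not find them. A rough sketch of the idea (field names follow the Kubernetes ownerReferences schema; this is an illustration, not the actual origin code from the PR above):

```python
def has_controller_ref(rc):
    """True if some controller already owns this replication controller."""
    return any(ref.get("controller")
               for ref in rc["metadata"].get("ownerReferences", []))

def adopt_orphans(dc, rcs):
    """Attach a controller ownerReference to RCs that have no owner yet.

    Only owned RCs are visited by the garbage collector when the DC is
    deleted with cascading enabled, which is why broken adoption showed
    up as "replace/delete leaves old RCs behind".
    """
    adopted = []
    for rc in rcs:
        if has_controller_ref(rc):
            continue
        rc["metadata"].setdefault("ownerReferences", []).append({
            "apiVersion": "apps.openshift.io/v1",
            "kind": "DeploymentConfig",
            "name": dc["metadata"]["name"],
            "uid": dc["metadata"]["uid"],
            "controller": True,
            "blockOwnerDeletion": True,
        })
        adopted.append(rc)
    return adopted
```

You can check adoption on a live cluster by inspecting `metadata.ownerReferences` on the RCs, e.g. with `oc get rc -o yaml`.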

Comment 7 Xingxing Xia 2019-04-21 10:42:52 UTC
(In reply to Xingxing Xia from comment #6)
> 2. I read carefully this bug and bug 1687902, the reported behavior is
> expected IMO (see next comment).
When `--force=true` is used, the command will delete the DC:
    if --cascade=true is also set (the default), the command will also delete the DC's managed resources, and a new DC is created;
    if --cascade=false is set, the deleted DC's managed resources remain, and a new DC is created.


When `--force=false` is used, the command behaves the same whether --cascade is true or false. Whether "we get an additional" RC (as mentioned in the reference below) depends on whether deployment-example.yaml modified the pod template (i.e. the part under .spec.template in the DC yaml). If yes, it satisfies the DC's ConfigChange trigger and an additional RC is created. If not, the DC is only updated, without a new RC.
(In reply to daniel from comment https://bugzilla.redhat.com/show_bug.cgi?id=1687902#c0)
> B) for 1-3,5-6): when replacing w/o --force we get an additional dc (#11)
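
The force/cascade matrix described above can be condensed into a small illustrative sketch (a summary of the behavior reported in this bug, not oc's implementation):

```python
def replace_behavior(force, cascade):
    """Summarize the oc replace outcomes observed in this bug report."""
    if not force:
        # Plain replace: the existing DC is updated in place; nothing is
        # deleted, so --cascade makes no difference in this case.
        return "DC updated in place; old RCs kept"
    if cascade:
        # Delete the DC and its managed RCs, then re-create the DC.
        return "DC deleted with its RCs, then re-created"
    # Delete only the DC; the RCs are orphaned, then the DC is re-created.
    return "DC deleted (RCs orphaned), then re-created"
```

This matches the transcripts above: only the `--force=true --cascade=true` combination left a single fresh revision behind.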

Comment 8 Maciej Szulik 2019-04-24 11:31:10 UTC
Force only applies when you also specify --grace-period=0. Did you get "warning: --force is ignored because --grace-period is not 0" when applying --force without --grace-period?

Comment 9 Xingxing Xia 2019-04-25 04:49:22 UTC
No, I didn't see the warning.
$ oc new-project xxia1-proj1
$ oc new-app openshift/deployment-example # Then wait for app pod Running
$ oc set env dc deployment-example --env=REVISION=2 # Get another deployment. Then wait for new app pod Running
$ oc label --dry-run -o yaml dc deployment-example added-modification=anyvalue-1 > deployment-example.yaml # Modify the DC (non-pod-template part)
$ oc replace -f deployment-example.yaml --force=true                                                                                                
deploymentconfig.apps.openshift.io "deployment-example" deleted
deploymentconfig.apps.openshift.io/deployment-example replaced

Comment 12 Michal Fojtik 2020-05-19 13:12:56 UTC
This bug hasn't had any engineering activity in the last ~30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet.

As such, we're marking this bug as "LifecycleStale".

If you have further information on the current state of the bug, please update it and remove the "LifecycleStale" keyword, otherwise this bug will be automatically closed in 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.

Comment 14 Maciej Szulik 2020-05-20 09:22:00 UTC
I think we diverged with this bug onto a different topic. Can we state that the customer's original problem is solved now?

Comment 15 zhou ying 2020-05-21 07:29:51 UTC
Confirmed with 4.5.0-0.nightly-2020-05-19-041951: without --force the DC is not deleted.

1)oc new-project zhouy
2)oc new-app openshift/deployment-example
3)oc set env dc deployment-example --env=REVISION=2
4)oc label --dry-run -o yaml dc deployment-example added-modification=anyvalue-1 >/tmp/deployment-example.yaml
5) oc get all :

[root@dhcp-140-138 ~]# oc get all 
NAME                              READY   STATUS      RESTARTS   AGE
pod/deployment-example-1-deploy   0/1     Completed   0          2m11s
pod/deployment-example-2-deploy   0/1     Completed   0          68s
pod/deployment-example-2-fbbpr    1/1     Running     0          64s

NAME                                         DESIRED   CURRENT   READY   AGE
replicationcontroller/deployment-example-1   0         0         0       2m12s
replicationcontroller/deployment-example-2   1         1         1       69s

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.128.99   <none>        8080/TCP   2m13s

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   2          1         1         config,image(deployment-example:latest)

NAME                                                IMAGE REPOSITORY                                                            TAGS     UPDATED
imagestream.image.openshift.io/deployment-example   image-registry.openshift-image-registry.svc:5000/zhouy/deployment-example   latest   2 minutes ago



6) [root@dhcp-140-138 ~]# oc replace -f /tmp/deployment-example.yaml 
deploymentconfig.apps.openshift.io/deployment-example replaced
[root@dhcp-140-138 ~]# oc get all 
NAME                              READY   STATUS      RESTARTS   AGE
pod/deployment-example-1-deploy   0/1     Completed   0          3m58s
pod/deployment-example-2-deploy   0/1     Completed   0          2m55s
pod/deployment-example-2-fbbpr    1/1     Running     0          2m51s

NAME                                         DESIRED   CURRENT   READY   AGE
replicationcontroller/deployment-example-1   0         0         0       3m58s
replicationcontroller/deployment-example-2   1         1         1       2m55s

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/deployment-example   ClusterIP   172.30.128.99   <none>        8080/TCP   4m

NAME                                                    REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/deployment-example   2          1         1         config,image(deployment-example:latest)

NAME                                                IMAGE REPOSITORY                                                            TAGS     UPDATED
imagestream.image.openshift.io/deployment-example   image-registry.openshift-image-registry.svc:5000/zhouy/deployment-example   latest   4 minutes ago

The default '--cascade=true' alone does not delete the old objects.

Comment 16 zhou ying 2020-05-21 07:32:50 UTC
Only using --force will delete the DC, but no warning is shown:

[root@dhcp-140-138 ~]# oc get all 
NAME                           READY   STATUS      RESTARTS   AGE
pod/hello-openshift-1-deploy   0/1     Completed   0          5m27s
pod/hello-openshift-2-deploy   0/1     Completed   0          2m5s
pod/hello-openshift-2-g5wj8    1/1     Running     0          2m1s

NAME                                      DESIRED   CURRENT   READY   AGE
replicationcontroller/hello-openshift-1   0         0         0       5m28s
replicationcontroller/hello-openshift-2   1         1         1       2m6s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
service/hello-openshift   ClusterIP   172.30.93.76   <none>        8080/TCP,8888/TCP   5m29s

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hello-openshift   2          1         1         config,image(hello-openshift:latest)

NAME                                             IMAGE REPOSITORY                                                          TAGS     UPDATED
imagestream.image.openshift.io/hello-openshift   image-registry.openshift-image-registry.svc:5000/dftest/hello-openshift   latest   5 minutes ago
[root@dhcp-140-138 ~]# oc label --dry-run=client -o yaml dc/hello-openshift added-modification=anyvalue-1 >/tmp/hello-openshift.yaml
[root@dhcp-140-138 ~]# oc replace -f /tmp/hello-openshift.yaml --force=true
deploymentconfig.apps.openshift.io "hello-openshift" deleted
deploymentconfig.apps.openshift.io/hello-openshift replaced
[root@dhcp-140-138 ~]# oc get all 
NAME                           READY   STATUS      RESTARTS   AGE
pod/hello-openshift-1-deploy   0/1     Completed   0          14s
pod/hello-openshift-1-qgwj4    1/1     Running     0          12s

NAME                                      DESIRED   CURRENT   READY   AGE
replicationcontroller/hello-openshift-1   1         1         1       14s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
service/hello-openshift   ClusterIP   172.30.93.76   <none>        8080/TCP,8888/TCP   7m11s

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hello-openshift   1          1         1         config,image(hello-openshift:latest)

NAME                                             IMAGE REPOSITORY                                                          TAGS     UPDATED
imagestream.image.openshift.io/hello-openshift   image-registry.openshift-image-registry.svc:5000/dftest/hello-openshift   latest   7 minutes ago

Comment 17 Maciej Szulik 2020-05-21 10:01:18 UTC
(In reply to zhou ying from comment #16)
> Only using --force will delete the DC, but no warning is shown:

That is fine; the warning is only to let you know. --force is like a hammer, it will always pass.

Comment 21 zhou ying 2020-06-10 07:03:19 UTC
Without `--force=true` it is just a replace; the DC is not deleted, so there is no difference whether --cascade is true or false:
[root@dhcp-140-138 ~]# oc get all 
NAME                          READY     STATUS    RESTARTS   AGE
pod/hello-openshift-2-8sjbk   1/1       Running   0          1m

NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/hello-openshift-1   0         0         0         3m
replicationcontroller/hello-openshift-2   1         1         1         1m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/hello-openshift   ClusterIP   172.30.216.222   <none>        8080/TCP,8888/TCP   3m

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hello-openshift   2          1         1         config,image(hello-openshift:latest)

NAME                                             DOCKER REPO                                              TAGS      UPDATED
imagestream.image.openshift.io/hello-openshift   docker-registry.default.svc:5000/zhouy/hello-openshift   latest    3 minutes ago
[root@dhcp-140-138 ~]# oc get po 
NAME                      READY     STATUS    RESTARTS   AGE
hello-openshift-2-8sjbk   1/1       Running   0          1m
[root@dhcp-140-138 ~]# oc replace -f /tmp/hello-openshift.yaml 
deploymentconfig.apps.openshift.io/hello-openshift replaced

When `--force=true` is used, the DC will be deleted and a new DC will be created:
[root@dhcp-140-138 ~]# oc get all 
NAME                          READY     STATUS    RESTARTS   AGE
pod/hello-openshift-2-8sjbk   1/1       Running   0          4m

NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/hello-openshift-1   0         0         0         6m
replicationcontroller/hello-openshift-2   1         1         1         4m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/hello-openshift   ClusterIP   172.30.216.222   <none>        8080/TCP,8888/TCP   6m

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hello-openshift   2          1         1         config,image(hello-openshift:latest)

NAME                                             DOCKER REPO                                              TAGS      UPDATED
imagestream.image.openshift.io/hello-openshift   docker-registry.default.svc:5000/zhouy/hello-openshift   latest    6 minutes ago

[root@dhcp-140-138 ~]# oc label --dry-run -o yaml dc hello-openshift added-modification2=anyvalue-2 >/tmp/hello-openshift2.yaml
[root@dhcp-140-138 ~]# oc replace -f /tmp/hello-openshift2.yaml  --force=true
deploymentconfig.apps.openshift.io "hello-openshift" deleted
deploymentconfig.apps.openshift.io/hello-openshift replaced
[root@dhcp-140-138 ~]# oc get all 
NAME                          READY     STATUS    RESTARTS   AGE
pod/hello-openshift-1-jgtmr   1/1       Running   0          15s

NAME                                      DESIRED   CURRENT   READY     AGE
replicationcontroller/hello-openshift-1   1         1         1         18s

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/hello-openshift   ClusterIP   172.30.216.222   <none>        8080/TCP,8888/TCP   8m

NAME                                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/hello-openshift   1          1         1         config,image(hello-openshift:latest)

NAME                                             DOCKER REPO                                              TAGS      UPDATED
imagestream.image.openshift.io/hello-openshift   docker-registry.default.svc:5000/zhouy/hello-openshift   latest    8 minutes ago


Confirmed with oc version:
[root@dhcp-140-138 ~]#  oc version 
oc v3.11.232
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Comment 23 errata-xmlrpc 2020-06-17 20:21:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2477

