Bug 1459008 - 'oc rollout pause' operation will erase 'Running' pods
Product: OpenShift Container Platform
Classification: Red Hat
Component: Deployments
Hardware: x86_64
OS: Linux
Severity: medium
Version: 3.4.z
Assigned To: Michail Kargakis
QA Contact: zhou ying
Depends On: 1441984
Reported: 2017-06-06 02:19 EDT by ge liu
Modified: 2017-10-25 09:02 EDT
CC: 4 users

Clone Of: 1441984
Last Closed: 2017-10-25 09:02:19 EDT
Type: Bug

Comment 1 ge liu 2017-06-06 02:20:46 EDT
Reproduce in OCP version:
openshift v3.4.1.18
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
Comment 2 Tomáš Nožička 2017-06-06 06:29:51 EDT
geliu@redhat.com, I am unable to reproduce this.

All pods keep running just fine for me after `oc rollout pause dc/<dc-name>`.

Please provide detailed instructions on how to reproduce it if you want to keep this bug open.
Comment 4 ge liu 2017-06-06 22:31:59 EDT
@tnozicka, I reproduced it again on the newest 3.4 OCP:

openshift v3.4.1.32
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0


0. Deployment config file: hello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift
        ports:
        - containerPort: 80
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
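For reference, the strategy values above bound how many pods a rolling update may add or take away at once. A minimal sketch of that arithmetic (the helper name is hypothetical, not part of OpenShift):

```python
# Hypothetical helper: pod-count bounds implied by a RollingUpdate strategy.
def rolling_bounds(replicas, max_surge, max_unavailable):
    min_available = replicas - max_unavailable  # pods that must stay Ready
    max_total = replicas + max_surge            # old + new pods at any moment
    return min_available, max_total

# With replicas=2, maxSurge=1, maxUnavailable=1:
print(rolling_bounds(2, 1, 1))  # (1, 3)
```

So at any point in the rollout at least 1 pod should be available and at most 3 should exist.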

1. # oc create -f hello.yaml 
deployment "hello-openshift" created
# oc get pods
NAME                               READY     STATUS    RESTARTS   AGE
hello-openshift-2049296760-czvlh   1/1       Running   0          5s
hello-openshift-2049296760-wqeue   1/1       Running   0          5s

2. # oc edit deployment
deployment "hello-openshift" edited

change: "- image: openshift/hello-openshift" to "- image: openshift/nonexist"

3. # oc get pods
NAME                               READY     STATUS             RESTARTS   AGE
hello-openshift-2049296760-wqeue   1/1       Running            0          48s
hello-openshift-2758264543-m7i3l   0/1       ImagePullBackOff   0          19s
hello-openshift-2758264543-x728d   0/1       ImagePullBackOff   0          19s

4. # oc rollout pause deployment/hello-openshift
deployment "hello-openshift" paused
# oc get pods
NAME                               READY     STATUS             RESTARTS   AGE
hello-openshift-2758264543-m7i3l   0/1       ImagePullBackOff   0          32s
hello-openshift-2758264543-x728d   0/1       ImagePullBackOff   0          32s

5. # oc rollout resume deployment/hello-openshift
deployment "hello-openshift" resumed
# oc get pods
NAME                               READY     STATUS             RESTARTS   AGE
hello-openshift-2758264543-m7i3l   0/1       ImagePullBackOff   0          2m
hello-openshift-2758264543-x728d   0/1       ImagePullBackOff   0          2m
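The output above is the bug: after pause/resume the old Running pod is gone and zero pods are Ready, even though maxUnavailable: 1 with replicas: 2 should keep at least one pod available. A small illustrative check (the function is hypothetical, for demonstration only):

```python
# Hypothetical check: does the observed state break the minimum-availability
# guarantee of the RollingUpdate strategy?
def violates_min_available(replicas, max_unavailable, ready_pods):
    return ready_pods < replicas - max_unavailable

# Observed after step 4/5: both new pods stuck in ImagePullBackOff, zero Ready.
print(violates_min_available(2, 1, 0))  # True
```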
Comment 6 Michail Kargakis 2017-06-07 03:52:09 EDT
The fix is included in v3.4.1.33
Comment 7 ge liu 2017-06-07 04:28:16 EDT
@mkargaki, v3.4.1.33 is not available in the repos yet: http://download-node-02.eng.bos.redhat.com/rcm-guest/puddles/RHAOS/AtomicOpenShift-signed/3.4/, so I will verify it once it is ready. Thanks.
Comment 8 ge liu 2017-06-08 04:09:46 EDT
Verified on OCP env:

openshift v3.3.1.35
kubernetes v1.3.0+52492b4
etcd 2.3.0+git
Comment 9 ge liu 2017-06-08 04:15:56 EDT
Sorry for the typo in comment 8. Verified it on an OCP 3.4 env:

# openshift version
openshift v3.4.1.33
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
Comment 12 errata-xmlrpc 2017-10-25 09:02:19 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

