Bug 1247049 - [origin_platformman_444] updatePercent doesn't work in deploymentConfig
Summary: [origin_platformman_444] updatePercent doesn't work in deploymentConfig
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Deployments
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Target Milestone: ---
Assignee: Dan Mace
QA Contact: Yan Du
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-27 08:39 UTC by Yan Du
Modified: 2015-09-08 20:13 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-08 20:13:51 UTC
Target Upstream Version:
Embargoed:



Description Yan Du 2015-07-27 08:39:35 UTC
Description of problem:
After creating a deploymentConfig, setting updatePercent in the dc, and triggering a new deployment, the rolling update does not honor the updatePercent value while the pods scale up and down.

Version-Release number of selected component (if applicable):
# oc version
oc v1.0.3-143-ge44836a
kubernetes v1.0.0


How reproducible:
always



Steps to Reproduce:
1. Create a dc
[root@ip-10-109-177-108 sample-app]# oc new-app openshift/deployment-example
imagestreams/deployment-example
2. Scale up to 3
[root@ip-10-109-177-108 examples]# oc scale dc recreate-example --replicas=3
scaled
3. Modify updatePercent in the dc, e.g. set it to -100 (one way to apply this change is sketched after step 4)
  strategy:
    resources: {}
    rollingParams:
      intervalSeconds: 1
      timeoutSeconds: 600
      updatePercent: -100
      updatePeriodSeconds: 1
    type: Rolling


4. Trigger a new deployment
[root@ip-10-109-177-108 sample-app]# oc deploy deployment-example --latest
Started deployment #2
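
For reference, a minimal sketch of one way to apply the updatePercent change from step 3, assuming oc patch is available in this oc build (editing the dc interactively with oc edit dc deployment-example works just as well); the spec.strategy.rollingParams path is assumed from the strategy block shown in step 3:

# set updatePercent to -100 on the dc (path assumed from the YAML in step 3)
oc patch dc deployment-example -p '{"spec":{"strategy":{"rollingParams":{"updatePercent":-100}}}}'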




Actual results:
When I monitored the pod status, I found that no old pod was taken down first; only a new pod was scaled up, and one old pod was taken down only after that new pod became ready. The remaining pods were replaced in the same one-by-one manner.
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-1-2zikc    1/1       Running   0          1m
deployment-example-1-82u9s    1/1       Running   0          2m
deployment-example-1-mqvsc    1/1       Running   0          1m
deployment-example-2-deploy   0/1       Running   0          4s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-1-2zikc    1/1       Running   0          2m
deployment-example-1-82u9s    1/1       Running   0          2m
deployment-example-1-mqvsc    1/1       Running   0          2m
deployment-example-2-4kkg3    0/1       Running   0          11s
deployment-example-2-deploy   1/1       Running   0          15s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-1-2zikc    1/1       Running   0          2m
deployment-example-1-82u9s    1/1       Running   0          2m
deployment-example-1-mqvsc    1/1       Running   0          2m
deployment-example-2-4kkg3    1/1       Running   0          16s
deployment-example-2-deploy   1/1       Running   0          20s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-1-2zikc    1/1       Running   0          2m
deployment-example-1-82u9s    1/1       Running   0          2m
deployment-example-2-4kkg3    1/1       Running   0          18s
deployment-example-2-deploy   1/1       Running   0          22s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-1-2zikc    1/1       Running   0          2m
deployment-example-1-82u9s    1/1       Running   0          2m
deployment-example-2-4kkg3    1/1       Running   0          22s
deployment-example-2-deploy   1/1       Running   0          26s
deployment-example-2-jqxfc    0/1       Running   0          4s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-1-2zikc    1/1       Running   0          2m
deployment-example-2-4kkg3    1/1       Running   0          28s
deployment-example-2-deploy   1/1       Running   0          32s
deployment-example-2-el04n    0/1       Pending   0          4s
deployment-example-2-jqxfc    1/1       Running   0          10s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                          READY     STATUS    RESTARTS   AGE
deployment-example-2-4kkg3    1/1       Running   0          36s
deployment-example-2-deploy   1/1       Running   0          40s
deployment-example-2-el04n    1/1       Running   0          12s
deployment-example-2-jqxfc    1/1       Running   0          18s
[root@ip-10-109-177-108 sample-app]# oc get pod
NAME                         READY     STATUS    RESTARTS   AGE
deployment-example-2-4kkg3   1/1       Running   0          44s
deployment-example-2-el04n   1/1       Running   0          20s
deployment-example-2-jqxfc   1/1       Running   0          26s


Expected results:
All three old pods should be scaled down first, since a negative updatePercent (-100) is defined in the dc; only after that should new pods start up until the desired replica count is reached.
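
A quick way to confirm whether the deployer honors the negative updatePercent is to watch the replication controllers rather than the individual pods; a sketch, assuming the standard oc/kubectl watch flag is available in this build:

# with updatePercent=-100, deployment-example-1 should drop 3 -> 0 first,
# and only then should deployment-example-2 scale 0 -> 3
oc get rc --watch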


Additional info:

Comment 1 Dan Mace 2015-07-27 17:06:40 UTC
Yan Du,

What does your deployment log look like in this case? Here's what I see in a similar test case:

>>>>>>>
I0727 16:56:27.940919       1 deployer.go:197] Deploying from test/deploytester-1 to test/deploytester-2 (replicas: 3)
I0727 16:56:27.963429       1 rolling.go:224] Starting rolling update from test/deploytester-1 to test/deploytester-2 (desired replicas: 3, updatePeriodSeconds=1s, intervalSeconds=1s, timeoutSeconds=60s, updatePercent=-100%)
I0727 16:56:27.966254       1 rolling.go:279] RollingUpdater: Continuing update with existing controller deploytester-2.
I0727 16:56:27.966272       1 rolling.go:279] RollingUpdater: Scaling up deploytester-2 from 0 to 3, scaling down deploytester-1 from 3 to 0 (scale down first by 3 each interval)
I0727 16:56:27.966280       1 rolling.go:279] RollingUpdater: Scaling deploytester-1 down to 0
I0727 16:56:31.018168       1 rolling.go:279] RollingUpdater: Scaling deploytester-2 up to 3
I0727 16:56:33.096428       1 lifecycle.go:277] Waiting 60 seconds for pods owned by deployment "test/deploytester-2" to become ready (checking every 1 seconds; 0 pods previously accepted)
I0727 16:57:00.096682       1 lifecycle.go:298] All pods ready for test/deploytester-2
<<<<<<<


If the pods are deleted and created individually during a replication controller scale up, you might be able to watch them come and go even though the scale operation appears to be atomic. What I'd like to establish is whether the deployer pod did the right thing (scale old to 0, scale new to 3). If it still worked in increments of 1, I'd also like to rule out the possibility that you're not using the latest deployer docker image.
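
For reference, the deployer log can be pulled straight from the deployer pod for deployment #2, assuming that pod is still around (the name below is taken from the oc get pod output above); the RollingUpdater "Scaling ... down/up" lines show whether the scale happened in one step or in increments of 1:

oc logs deployment-example-2-deploy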

Comment 2 Yan Du 2015-07-28 09:08:45 UTC
Hi, Dan

Retested on fedora_2061 with the latest deployer docker image; the issue could not be reproduced, and the rolling update works well.
oc v1.0.3-149-g0d62650
kubernetes v1.0.0

Could you please move it to ON_QA so we can verify it? Thanks a lot.

