Bug 1369910

Summary: Build/Deployment fails when a new S2I build attempts to deploy a service whose previous deployment has 0 replicas
Product: OpenShift Container Platform
Reporter: Miheer Salunke <misalunk>
Component: openshift-controller-manager
Assignee: Michail Kargakis <mkargaki>
Status: CLOSED NOTABUG
QA Contact: zhou ying <yinzhou>
Severity: medium
Priority: medium
Version: 3.1.0
CC: aos-bugs, misalunk, pweil
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2016-10-10 14:21:58 UTC
Type: Bug

Description Miheer Salunke 2016-08-24 17:02:19 UTC
Description of problem:
Build/Deployment fails when a new S2I build attempts to deploy a service whose previous deployment has 0 replicas



Version-Release number of selected component (if applicable):
3.1

How reproducible:
Always

Steps to Reproduce:
1. Build and deploy a service using S2I
2. Scale the deployment to 0
3. Build and deploy again
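
For example, the steps above correspond roughly to the following oc commands (a sketch only; the builder image, source repository, and the name "myapp" are placeholders, not taken from the original report):

  oc new-app <builder-image>~<source-repo-url> --name=myapp   # S2I build and first deployment
  oc scale dc/myapp --replicas=0                              # scale the resulting deployment to 0
  oc start-build myapp                                        # trigger a new S2I build, which redeploys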


Actual results:
- The build shows as failed in the console (even though it is reported as passed in the Builds section)
- The following error is seen: "one of maxSurge or maxUnavailable must be specified"
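
maxSurge and maxUnavailable are parameters of the Rolling deployment strategy. As a sketch (not from the original report; the name "myapp" and the percentage values are placeholders), they can be set explicitly on the deployment config:

  oc patch dc/myapp -p '{"spec":{"strategy":{"rollingParams":{"maxSurge":"25%","maxUnavailable":"25%"}}}}'

The error can appear when both values resolve to 0, for example when percentage defaults are applied against a deployment that has been scaled to 0 replicas, which matches the scenario above.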


Expected results:
- The build passes and the new deployment is shown as successful (regardless of whether it deploys 0 or 1+ replicas)


Additional info:
The workaround is to scale the two deployments (old and new) up to 1 (using the oc command) and then redeploy.
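
In oc terms the workaround looks roughly like this (a sketch; "myapp" is a placeholder, and "myapp-1"/"myapp-2" stand in for the old and new deployments):

  oc scale rc/myapp-1 --replicas=1
  oc scale rc/myapp-2 --replicas=1
  oc deploy myapp --latest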

Is this a known issue? Is there an ETA on a fix?

Related issues -

https://github.com/openshift/origin/pull/6937
https://bugzilla.redhat.com/show_bug.cgi?id=1293859

Comment 1 Michail Kargakis 2016-08-24 17:19:59 UTC
Why are you scaling to zero? Can you post the config you are using? Do you want to run some sort of migration before scaling the new pods back up? Have you considered using the Recreate strategy? It scales the old pods down to zero (there is a mid hook you can use at that point), then scales the new pods up to the desired count.
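
As a sketch of what the Recreate strategy with a mid hook could look like (not taken from the reporter's config; resource, container, and command names are placeholders, and whether this patch applies cleanly depends on the existing strategy stanza):

  oc patch dc/myapp -p '{"spec":{"strategy":{"type":"Recreate","recreateParams":{"mid":{"failurePolicy":"Abort","execNewPod":{"containerName":"myapp","command":["/bin/sh","-c","run-migrations"]}}}}}}'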

Have you also tried test deployments (if that's what you need)? A test deployment runs, and once it succeeds it is automatically scaled back down to zero.
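
Test deployments are enabled on the deployment config itself; as a sketch (the name "myapp" is a placeholder):

  oc patch dc/myapp -p '{"spec":{"test":true}}'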

Comment 2 Michail Kargakis 2016-08-31 10:40:08 UTC
This is fixed in 3.2. Is there something we need to do for 3.1?