Bug 1427992

Summary: replicationcontrollers - not yet ready to handle request; Current resource version
Product: OpenShift Container Platform Reporter: Ruben Romero Montes <rromerom>
Component: openshift-controller-manager    Assignee: Michal Fojtik <mfojtik>
Status: CLOSED ERRATA QA Contact: zhou ying <yinzhou>
Severity: high Docs Contact:
Priority: unspecified    
Version: 3.4.1    CC: aos-bugs, fmarchio, haowang, mfojtik, pdwyer, rromerom
Target Milestone: ---   
Target Release: 3.7.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Cause: Shortly after OpenShift starts, the caches might not yet be synchronised. Consequence: Scaling the replication controllers might fail. Fix: Retry the scaling when we get a cache miss. Result: The replication controllers are scaled properly.
Story Points: ---
Clone Of:
: 1454316 (view as bug list) Environment:
Last Closed: 2017-11-28 21:53:01 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1454316    
Attachments:
Description Flags
master_logs none

Description Ruben Romero Montes 2017-03-01 16:33:15 UTC
Description of problem:
After a new installation of OCP 3.4, the deployment pods behave strangely.
The deployer pod randomly fails to run with the following error:

oc logs busybox-1-deploy
--> Scaling busybox-1 to 1
error: couldn't scale busybox-1 to 1: Scaling the resource failed with: replicationcontrollers "busybox-1" is forbidden: not yet ready to handle request; Current resource version 1868422
[root@ocpmastprd1 ~]# oc get po -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP             NODE
busybox-1-deploy   0/1       Error     0          1m        10.240.6.76    ocpnodinfprd1.spb.lan

We have found that it is not related to the node itself, as some deployments have worked with the same image on the same node.
When the deployer pod succeeds, all the application pods run without problems.

Maybe it is related to the ose-deployer image or to the docker configuration. I don't know how to gather more information.

I have been pointed to this Kubernetes issue, but I don't know whether it is related: https://github.com/kubernetes/kubernetes/issues/35068
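
For context, the "not yet ready to handle request; Current resource version N" message is the kind of error the apiserver returns when its read cache has not yet caught up to the resource version a request needs, which typically happens only for a short window after the master starts. Below is a minimal, hypothetical Go sketch of that kind of cache-freshness guard; the names and structure are illustrative only, not the actual OpenShift/Kubernetes implementation.

// Hypothetical sketch of a cache-freshness guard that produces a
// "not yet ready to handle request" style error. Illustrative only.
package main

import "fmt"

// cache models an informer-backed read cache that can lag behind etcd
// shortly after the master starts.
type cache struct {
	lastSyncedRV uint64 // newest resource version the cache has observed
}

// get refuses the request when the cache has not yet caught up to the
// resource version the caller requires, so callers never act on stale data.
func (c *cache) get(name string, requiredRV uint64) error {
	if c.lastSyncedRV < requiredRV {
		return fmt.Errorf(
			"replicationcontrollers %q is forbidden: not yet ready to handle request; Current resource version %d",
			name, c.lastSyncedRV)
	}
	return nil // cache is fresh enough; serve the read
}

func main() {
	c := &cache{lastSyncedRV: 1868422}
	// Shortly after startup the required version is ahead of the cache,
	// so the scale request is rejected and the deployer pod fails.
	if err := c.get("busybox-1", 1868500); err != nil {
		fmt.Println(err)
	}
}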

Version-Release number of selected component (if applicable):
oc v3.4.1.7
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible: Unknown


Steps to Reproduce:
1. 
2.
3.

Actual results:
The deployment fails.

Expected results:


Additional info:

Comment 1 Ruben Romero Montes 2017-03-01 16:36:08 UTC
Sorry for the premature submission.

I have asked the customer for the sosreport and I will attach it as soon as possible.

Thanks,
Ruben

Comment 3 Ruben Romero Montes 2017-03-02 14:53:01 UTC
Created attachment 1259177 [details]
master_logs

Comment 6 Michal Fojtik 2017-03-07 11:50:16 UTC
The PR: https://github.com/openshift/origin/pull/13279

Comment 7 Michal Fojtik 2017-03-30 13:35:04 UTC
The PR was merged to master.
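
For reference, the Doc Text above describes the fix as retrying the scale when the cache is not yet synchronised. A minimal sketch of that retry pattern is shown below; the scaleWithRetry helper, its parameters, and the backoff values are assumptions for illustration, not the code from the linked PR.

// Minimal sketch of a retry-on-cache-miss approach: keep retrying the
// scale while the apiserver reports it is not yet ready, instead of
// failing the deployment on the first attempt. Illustrative only.
package main

import (
	"fmt"
	"strings"
	"time"
)

// scaleWithRetry calls scaleRC up to attempts times, sleeping between
// tries, but only retries the transient "not yet ready" error.
func scaleWithRetry(scaleRC func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = scaleRC(); err == nil {
			return nil
		}
		// Any other error is treated as permanent and returned immediately.
		if !strings.Contains(err.Error(), "not yet ready to handle request") {
			return err
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("giving up after %d attempts: %v", attempts, err)
}

func main() {
	calls := 0
	// Simulate an apiserver whose cache catches up on the third attempt.
	err := scaleWithRetry(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf(`replicationcontrollers "busybox-1" is forbidden: not yet ready to handle request`)
		}
		return nil
	}, 5, 100*time.Millisecond)
	fmt.Printf("result after %d attempts: %v\n", calls, err)
}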

Comment 20 errata-xmlrpc 2017-11-28 21:53:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188