Bug 1427992 - replicationcontrollers - not yet ready to handle request; Current resource version
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: openshift-controller-manager
Version: 3.4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.7.0
Assignee: Michal Fojtik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks: 1454316
 
Reported: 2017-03-01 16:33 UTC by Ruben Romero Montes
Modified: 2020-07-16 09:15 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Shortly after OpenShift starts, the caches might not yet be synchronized. Consequence: Scaling the replication controllers might fail. Fix: Retry the scaling when we get a cache miss. Result: The replication controllers are scaled properly.
Clone Of:
Clones: 1454316
Environment:
Last Closed: 2017-11-28 21:53:01 UTC
Target Upstream Version:
Embargoed:


Attachments
master_logs (15.29 MB, application/zip)
2017-03-02 14:53 UTC, Ruben Romero Montes


Links
Red Hat Product Errata RHSA-2017:3188 (normal, SHIPPED_LIVE): Moderate: Red Hat OpenShift Container Platform 3.7 security, bug, and enhancement update. Last updated: 2017-11-29 02:34:54 UTC

Description Ruben Romero Montes 2017-03-01 16:33:15 UTC
Description of problem:
After a new installation of OCP 3.4, the deployment pods behave strangely: the deployer pod randomly fails to run with the following error:

oc logs busybox-1-deploy
--> Scaling busybox-1 to 1
error: couldn't scale busybox-1 to 1: Scaling the resource failed with: replicationcontrollers "busybox-1" is forbidden: not yet ready to handle request; Current resource version 1868422
[root@ocpmastprd1 ~]# oc get po -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP             NODE
busybox-1-deploy   0/1       Error     0          1m        10.240.6.76    ocpnodinfprd1.spb.lan

We have found that it is not related to the node itself, as some deployments have worked for the same image and on the same node.
When the deployer pod succeeds, all the application pods run without problems. A possible manual workaround is sketched below.
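
As a possible manual workaround (a sketch assuming the oc 3.4 CLI and the "busybox" deployment config from this report), the failed deployment can simply be retried; since the failure is a transient cache race shortly after the master starts, a later attempt usually succeeds:

# Retry the latest failed deployment of the "busybox" deployment config
oc deploy busybox --retry

# Alternatively, trigger a fresh deployment
oc deploy busybox --latest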

Maybe it is related to the ose-deployer image or to the Docker configuration; I don't know how to gather more information.

I have been pointed to this Kubernetes issue, but I don't know whether it is related: https://github.com/kubernetes/kubernetes/issues/35068

Version-Release number of selected component (if applicable):
oc v3.4.1.7
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible: Unknown


Steps to Reproduce:
1. 
2.
3.

Actual results:
The deployment randomly fails.

Expected results:
The deployment succeeds.

Additional info:

Comment 1 Ruben Romero Montes 2017-03-01 16:36:08 UTC
Sorry for the premature submission.

I have asked the customer for the sosreport and I will attach it as soon as possible.

Thanks,
Ruben

Comment 3 Ruben Romero Montes 2017-03-02 14:53:01 UTC
Created attachment 1259177
master_logs

Comment 6 Michal Fojtik 2017-03-07 11:50:16 UTC
The PR: https://github.com/openshift/origin/pull/13279
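
The fix retries the scaling operation when the master caches have not yet synchronized (see the Doc Text above). A minimal client-side sketch of the same idea, assuming the "busybox-1" replication controller from this report (this is not the actual PR code):

# Retry the scale while the master caches warm up, up to 5 attempts
for attempt in 1 2 3 4 5; do
  if oc scale rc busybox-1 --replicas=1; then
    break      # scale succeeded
  fi
  sleep 2      # give the caches time to sync before retrying
done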

Comment 7 Michal Fojtik 2017-03-30 13:35:04 UTC
The PR was merged to master.

Comment 20 errata-xmlrpc 2017-11-28 21:53:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188

