Bug 1452225
| Summary: | OCP 3.6 - docker-registry keeps restarting and CrashLoopBackOff when deploying 100+ number of pause-pods | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Walid A. <wabouham> |
| Component: | Networking | Assignee: | Ben Bennett <bbennett> |
| Status: | CLOSED DUPLICATE | QA Contact: | Meng Bo <bmeng> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.6.0 | CC: | aos-bugs, jeder, mfojtik, mifiedle, wabouham |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | aos-scalability-36 | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-06-20 18:42:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Walid A., 2017-05-18 15:45:41 UTC
Does scaling up the registry fix this problem? Scaling docker-registry to 3 replicas does not seem to help. I am hitting the same issues with docker-registry going into CrashLoopBackOff and restarting during the test. This is on the latest OCP version, 3.6.79-1.

```
# oc get pods --all-namespaces
NAMESPACE         NAME                       READY     STATUS             RESTARTS   AGE
clusterproject0   pausepods0                 1/1       Running            0          2m
clusterproject0   pausepods1                 1/1       Running            0          2m
clusterproject0   pausepods10                1/1       Running            0          2m
clusterproject0   pausepods11                1/1       Running            0          2m
clusterproject0   pausepods12                1/1       Running            0          2m
clusterproject0   pausepods13                1/1       Running            0          2m
clusterproject0   pausepods14                1/1       Running            0          2m
clusterproject0   pausepods15                1/1       Running            0          1m
clusterproject0   pausepods16                1/1       Running            0          1m
clusterproject0   pausepods17                1/1       Running            0          1m
clusterproject0   pausepods18                1/1       Running            0          1m
clusterproject0   pausepods19                1/1       Running            0          1m
clusterproject0   pausepods2                 1/1       Running            0          2m
clusterproject0   pausepods20                1/1       Running            0          1m
clusterproject0   pausepods21                1/1       Running            0          1m
clusterproject0   pausepods22                1/1       Running            0          1m
clusterproject0   pausepods23                1/1       Running            0          1m
clusterproject0   pausepods24                1/1       Running            0          1m
clusterproject0   pausepods25                1/1       Running            0          1m
clusterproject0   pausepods26                1/1       Running            0          1m
clusterproject0   pausepods27                1/1       Running            0          1m
clusterproject0   pausepods28                1/1       Running            0          1m
clusterproject0   pausepods29                1/1       Running            0          1m
clusterproject0   pausepods3                 1/1       Running            0          2m
clusterproject0   pausepods30                1/1       Running            0          1m
clusterproject0   pausepods31                1/1       Running            0          1m
clusterproject0   pausepods32                1/1       Running            0          1m
clusterproject0   pausepods33                1/1       Running            0          1m
clusterproject0   pausepods34                1/1       Running            0          1m
clusterproject0   pausepods35                1/1       Running            0          1m
clusterproject0   pausepods36                1/1       Running            0          1m
clusterproject0   pausepods37                1/1       Running            0          1m
clusterproject0   pausepods38                1/1       Running            0          1m
clusterproject0   pausepods39                1/1       Running            0          1m
clusterproject0   pausepods4                 1/1       Running            0          2m
clusterproject0   pausepods5                 1/1       Running            0          2m
clusterproject0   pausepods6                 1/1       Running            0          2m
clusterproject0   pausepods7                 1/1       Running            0          2m
clusterproject0   pausepods8                 1/1       Running            0          2m
clusterproject0   pausepods9                 1/1       Running            0          2m
default           docker-registry-1-6wm34    0/1       CrashLoopBackOff   6          10h
default           docker-registry-1-6xsc8    0/1       CrashLoopBackOff   6          17m
default           docker-registry-1-w2kj9    0/1       CrashLoopBackOff   6          17m
default           registry-console-3-7j9q7   0/1       Running            4          26m
default           router-1-g1zz7             1/1       Running            0          10h
```

# attaching latest logs

My hunch is that this is a dupe of https://bugzilla.redhat.com/show_bug.cgi?id=1454948

*** This bug has been marked as a duplicate of bug 1454948 ***
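When reproducing this, the unhealthy pods can be picked out of the long `oc get pods --all-namespaces` listing with a small filter. Below is a minimal sketch; the sample input is excerpted from the listing above, and the awk expression (keep rows whose STATUS is not Running or whose RESTARTS is nonzero) is an illustration, not an official tool. Against a live cluster you would pipe `oc get pods --all-namespaces` in directly instead of using the sample variable.

```shell
# Sample excerpt of `oc get pods --all-namespaces` output (from this report).
oc_output='NAMESPACE         NAME                       READY  STATUS            RESTARTS  AGE
clusterproject0   pausepods0                 1/1    Running           0         2m
default           docker-registry-1-6wm34    0/1    CrashLoopBackOff  6         10h
default           docker-registry-1-6xsc8    0/1    CrashLoopBackOff  6         17m
default           docker-registry-1-w2kj9    0/1    CrashLoopBackOff  6         17m
default           registry-console-3-7j9q7   0/1    Running           4         26m
default           router-1-g1zz7             1/1    Running           0         10h'

# Skip the header (NR > 1); print NAME, STATUS, RESTARTS for any pod that is
# not in the Running state or has restarted at least once.
printf '%s\n' "$oc_output" \
  | awk 'NR > 1 && ($4 != "Running" || $5 > 0) { print $2, $4, $5 }'
```

This surfaces the three crash-looping docker-registry replicas and the restarting registry-console pod while hiding the healthy pause pods and router.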