Bug 1508378
Summary: | Can't deploy Node.js + MongoDB app | ||
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Agustin <atamagno.test> |
Component: | Storage | Assignee: | Hemant Kumar <hekumar> |
Status: | CLOSED ERRATA | QA Contact: | Chao Yang <chaoyang> |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | ||
Version: | unspecified | CC: | aos-bugs, aos-storage-staff, trankin |
Target Milestone: | --- | ||
Target Release: | 3.9.z | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2019-07-05 06:58:57 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Agustin 2017-11-01 10:33:07 UTC
We have implemented a generic recovery mechanism in OpenShift 3.9 that detects volumes stuck on another instance (when no pod on that instance is actively using the volume) and detaches them if necessary.

One easy way to reproduce this problem (before 3.9):

1. Create a standalone pod (no deployments, RCs, etc.) with volumes.
2. Shut down the node.
3. Wait for the pod on the node to be deleted.
4. Once the pod is deleted (spam kubectl get pods), but before the controller-manager can detach the volume (there is a minimum delay of 6 minutes), restart the controller-manager.
5. This wipes the volume attachment information from the controller-manager.
6. Now try to attach the same PVC in another pod (which may be scheduled on a different node). The pod gets stuck in the "ContainerCreating" state in 3.7 but not in 3.9.

There are a few other ways to reproduce this error, but this is perhaps the easiest.

Verified as passing on:

oc v3.9.84
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://ip-172-18-15-202.ec2.internal:8443
openshift v3.9.84
kubernetes v1.9.1+a0ce1bc657

1. Create the PVC/pod as standalone:

[root@ip-172-18-15-202 ~]# oc get pods
NAME      READY     STATUS    RESTARTS   AGE
mypod     1/1       Running   1          5m

2. Shut down the node server.
3. The pod is deleted:

[root@ip-172-18-15-202 ~]# oc get pods
No resources found.

4. Restart the controller service:

[root@ip-172-18-15-202 ~]# systemctl restart atomic-openshift-master-controllers.service

5. Recreate a new pod with the above PVC.
6. The pod is running.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1642
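The recovery mechanism described above can be thought of as a reconciliation pass: after the controller-manager loses its in-memory attachment state, it compares volumes reported as attached to each node against the pods actually using them, and detaches any volume with no active user. The sketch below illustrates that idea only; all names (`Node`, `reconcile`, the field names) are hypothetical and are not the real Kubernetes attach/detach controller API.

```python
# Illustrative sketch (assumed names, not the real controller code):
# find volumes attached to a node that no pod on that node is using,
# so they can be detached and reattached elsewhere.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    # Volumes the cloud provider reports as attached to this node.
    attached_volumes: set = field(default_factory=set)
    # Mapping of volume name -> set of pod names actively using it.
    pods_using: dict = field(default_factory=dict)


def reconcile(nodes):
    """Return (volume, node) pairs that should be detached.

    A volume is considered stuck if it is attached to a node but no
    pod on that node is actively using it -- e.g. after the
    controller-manager was restarted and its state was wiped.
    """
    to_detach = []
    for node in nodes:
        for volume in sorted(node.attached_volumes):
            if not node.pods_using.get(volume):
                to_detach.append((volume, node.name))
    return to_detach
```

In the reproduction above, the shut-down node still holds the attachment but its pod is gone, so the volume qualifies for forced detach and the new pod can attach it instead of hanging in "ContainerCreating".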