Bug 1406338
| Summary: | Sometime pod ip is disappeared after assign | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | DeShuai Ma <dma> |
| Component: | Networking | Assignee: | Dan Winship <danw> |
| Status: | CLOSED DUPLICATE | QA Contact: | Meng Bo <bmeng> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 3.4.0 | CC: | aos-bugs, atragler, bbennett, dma, mleitner, sukulkar, wmeng |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-06-23 12:40:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
DeShuai Ma
2016-12-20 10:06:34 UTC
I don't see anything wrong here. The `<none>`s are for ContainerCreating or Error states. If you have more information indicating that there was a networking problem, please let me know and provide the logs. Thanks!

The issue is that when getting pods with `-o wide`, most pods whose status is Completed or Failed display their IP info, but in my case the IP info is missing. It is easy to reproduce using the template I pasted above.

Case #1: I tried again in the same environment with the same template and the same steps; sometimes the completed pod has the IP and sometimes it doesn't.

```
# In namespace dma1, 'hello-pod-1' has IP info
[root@gpei-public-master-etcd-zone1-1 ~]# oc get po -n dma1 -o wide
NAME          READY     STATUS      RESTARTS   AGE       IP          NODE
hello-pod-1   0/1       Completed   0          10m       10.2.6.15   gpei-public-node-zone1-primary-1
hello-pod-2   0/1       Error       0          10m       <none>      gpei-public-node-zone2-primary-1

# In namespace dma2, 'hello-pod-1' has no IP info
[root@gpei-public-master-etcd-zone1-1 ~]# oc get po -n dma2 -o wide
NAME          READY     STATUS      RESTARTS   AGE       IP        NODE
hello-pod-1   0/1       Completed   0          4m        <none>    gpei-public-node-zone1-primary-1
hello-pod-2   0/1       Error       0          4m        <none>    gpei-public-node-zone2-primary-1
```

Case #2: If I add a sleep of a few seconds before the container exits, then wait for the pod to become Completed/Error, the pods do display their IP info:

```
[root@gpei-public-master-etcd-zone1-1 ~]# oc get po -n dma6 -o wide
NAME          READY     STATUS      RESTARTS   AGE       IP           NODE
hello-pod-1   0/1       Completed   0          2m        10.2.6.20    gpei-public-node-zone1-primary-1
hello-pod-2   0/1       Error       0          2m        10.2.10.83   gpei-public-node-zone2-primary-1
```

Summary: there may be a synchronization/consistency issue when the container exits quickly.

Tested on upstream; same issue. Created upstream issue: https://github.com/kubernetes/kubernetes/issues/39138

Same as upstream issue: https://github.com/kubernetes/kubernetes/issues/39113

The problem appears to be that the CNI plugin takes longer to respond, and if the pod terminates quickly enough, the IP address may never be written into the pod status.
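The quick-exit reproducer described in the two cases above can be sketched as a minimal pod manifest. This is an assumption: the original template is not included in this report, and the image and command here are illustrative.

```yaml
# Hypothetical reproducer (the report's actual template is not shown).
# A container that exits immediately can lose the race with the CNI plugin,
# leaving the pod's IP field empty (<none>) in `oc get po -o wide`.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod-1
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: busybox                        # illustrative image
    command: ["sh", "-c", "exit 0"]       # Case #1: exits immediately
    # Case #2 variant: sleeping a few seconds before exit gave the CNI
    # result time to be recorded, and the IP then appeared:
    # command: ["sh", "-c", "sleep 10"]
```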
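The race described in the last paragraph can be modeled with a toy sketch. This is plain Python threading, not OpenShift or kubelet code; the delays and the IP address are made up purely to show the timing behavior.

```python
import threading
import time

def run_pod(cni_delay, container_runtime):
    """Toy model: the CNI plugin and the container run concurrently.

    If the container exits before the CNI plugin reports its result,
    the result is discarded and the pod status never gets an IP.
    """
    status = {"phase": "Running", "ip": None}
    done = threading.Event()

    def cni_plugin():
        time.sleep(cni_delay)        # plugin still configuring the network
        if not done.is_set():        # pod already terminal: result dropped
            status["ip"] = "10.2.6.15"

    t = threading.Thread(target=cni_plugin)
    t.start()
    time.sleep(container_runtime)    # container runs, then exits
    status["phase"] = "Completed"
    done.set()
    t.join()
    return status

# A fast-exiting container loses the race; one that sleeps does not.
fast = run_pod(cni_delay=0.2, container_runtime=0.05)
slow = run_pod(cni_delay=0.2, container_runtime=0.5)
print(fast["ip"])  # None
print(slow["ip"])  # 10.2.6.15
```

This mirrors the observed behavior: adding a `sleep` before the container exits (Case #2) gives the network setup time to finish, so the IP is recorded.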
*** This bug has been marked as a duplicate of bug 1449373 ***