Bug 1389213
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Cannot merge network / make project global via the oadm pod-network tool | | |
| Product: | OpenShift Container Platform | Reporter: | Meng Bo <bmeng> |
| Component: | Networking | Assignee: | Ravi Sankar <rpenta> |
| Status: | CLOSED ERRATA | QA Contact: | zhaozhanqi <zzhao> |
| Severity: | high | Priority: | high |
| Version: | 3.4.0 | CC: | aos-bugs, bbennett, bmeng, dcbw, tdawson, xtian |
| Target Milestone: | --- | Keywords: | Regression, Reopened |
| Target Release: | --- | Hardware: | Unspecified |
| OS: | Unspecified | | |
| Doc Type: | Bug Fix | Type: | Bug |
| Last Closed: | 2017-01-18 12:47:02 UTC | Story Points: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | Category: | --- |
| oVirt Team: | --- | Cloudforms Team: | --- |

Doc Text:

Cause: The wrong field was passed to UpdatePod().
Consequence: The network namespaces were not correctly merged, because the string passed was an invalid container ID.
Fix: Pass the correct field.
Result: The network namespaces are merged correctly.
Description
Meng Bo 2016-10-27 08:13:53 UTC
Created attachment 1214504 [details]
Full dump of openflow
Can't move to Modified until it has merged.

Commit pushed to master at https://github.com/openshift/origin
https://github.com/openshift/origin/commit/0f6ac87eab1cc4d8c004c54cb8d581820ea122c8

Bug 1389213 - Fix join/isolate project network

Pass kubeletTypes.ContainerID.ID instead of kubeletTypes.ContainerID.String() to UpdatePod(); otherwise the docker client fails with the error: no such container '://<id>'
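To illustrate the failure mode, a minimal sketch (the container ID below is hypothetical; only the "://" prefix matters): ContainerID.String() renders "<type>://<id>", so when the type is empty the docker client is asked for a container literally named "://<id>".

```sh
CID=4a8f0c1d2e3b          # hypothetical raw container ID (ContainerID.ID)
docker inspect "$CID"     # raw ID: the docker client can resolve this
docker inspect "://$CID"  # ContainerID.String() with an empty type prefix:
                          # fails with "no such container '://...'"
```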
Tested this issue on:

# oc get netnamespaces
NAME NETID
default 0
kube-system 6030722
network-diag-global-ns-546o4 0
network-diag-global-ns-uqusd 0
network-diag-ns-3c96g 2817809
network-diag-ns-fybiu 15657557
openshift 14746615
openshift-infra 12161229
z2 6894009
zzhao 6894009
[root@minion1 subdomain]# oc get pod -n zzhao -o json | grep -i ip
"hostIP": "10.66.140.17",
"podIP": "10.128.0.30",
[root@minion1 subdomain]# oc get pod -n z2 -o json | grep -i ip
"hostIP": "10.66.140.17",
"podIP": "10.128.0.29",
[root@minion1 subdomain]# oc rsh caddy-docker
/srv # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if672: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
link/ether 3e:7b:e7:d6:b5:8f brd ff:ff:ff:ff:ff:ff
inet 10.128.0.30/23 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::3c7b:e7ff:fed6:b58f/64 scope link
valid_lft forever preferred_lft forever
/srv # ping 10.128.0.29
PING 10.128.0.29 (10.128.0.29): 56 data bytes
^C
--- 10.128.0.29 ping statistics ---
11 packets transmitted, 0 packets received, 100% packet loss
/srv #
Checking the openflow: the flows for 10.128.0.30 carry reg0=0x6931b9, which is the merged NETID 6894009, but the flow admitting traffic to 10.128.0.29 still matches reg0=0xd48d17 (13929751), a value that corresponds to no NETID in the listing above, which is why the ping fails:
cookie=0x0, duration=783.261s, table=7, n_packets=0, n_bytes=0, priority=100,ip,reg0=0,nw_dst=10.128.0.29 actions=output:30
cookie=0x0, duration=783.258s, table=7, n_packets=0, n_bytes=0, priority=100,ip,reg0=0xd48d17,nw_dst=10.128.0.29 actions=output:30
cookie=0x0, duration=770.072s, table=7, n_packets=0, n_bytes=0, priority=100,ip,reg0=0,nw_dst=10.128.0.30 actions=output:31
cookie=0x0, duration=770.069s, table=7, n_packets=0, n_bytes=0, priority=100,ip,reg0=0x6931b9,nw_dst=10.128.0.30 actions=output:31
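The reg0 values can be cross-checked against the decimal NETIDs above with plain shell arithmetic (nothing OpenShift-specific):

```sh
printf '%x\n' 6894009    # -> 6931b9: the merged zzhao/z2 NETID, seen on the 10.128.0.30 flows
printf '%d\n' 0xd48d17   # -> 13929751: matches no NETID in the netnamespaces listing
```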
sorry, forgot to paste the openshift version:

# openshift version
openshift v3.4.0.21+ca4702d
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

@zhaozhanqi I did the same experiment but was unable to reproduce the issue in my local environment: I created 2 projects with 1 caddy-docker pod in each, then tested the join/isolate network functionality, and it worked as expected. I am not sure what triggered this issue. Do you have reproduction steps?

Sorry, I did not see the needinfo. The manage-network feature works well on the latest build, v3.4.0.23. Changing the bug status to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066
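In answer to the reproduction-steps question above, a minimal sketch of the join/isolate flow this bug exercises (assuming the multitenant SDN plugin and the two projects from the transcript, zzhao and z2, each running a caddy-docker pod; the pod IP is the one shown earlier):

```sh
# Merge z2 into zzhao's network; both projects should then share one NETID.
oadm pod-network join-projects --to=zzhao z2
oc get netnamespaces | grep -E 'zzhao|z2'

# Pods in the two projects should now reach each other.
oc rsh -n zzhao caddy-docker ping -c 3 10.128.0.29

# Undo the merge, or make a project's network global (NETID 0).
oadm pod-network isolate-projects z2
oadm pod-network make-projects-global z2
```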