Bug 1468207 - The OpenFlow flows for the obsolete node should be cleared after the node comes back
Summary: The OpenFlow flows for the obsolete node should be cleared after the node comes back
Keywords:
Status: CLOSED DUPLICATE of bug 1311849
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Dan Winship
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-06 10:29 UTC by Yan Du
Modified: 2017-07-06 12:38 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-07-06 12:38:59 UTC
Target Upstream Version:
Embargoed:



Description Yan Du 2017-07-06 10:29:06 UTC
Description of problem:
The OpenFlow flows for the obsolete node should be cleared after the node comes back.


Version-Release number of selected component (if applicable):
openshift v3.6.135
kubernetes v1.6.1+5115d708d7
etcd 3.2.1
ovs-ofctl (Open vSwitch) 2.6.1
OpenFlow versions 0x1:0x4

How reproducible:
Always

Steps to Reproduce:
1. Set up an OCP environment with multiple nodes
[root@host-8-174-52 ~]# oc get hostsubnet
NAME                                               HOST                                               HOST IP         SUBNET
host-8-174-34.host.centralci.eng.rdu2.redhat.com   host-8-174-34.host.centralci.eng.rdu2.redhat.com   172.16.120.16   10.129.0.0/23
host-8-174-52.host.centralci.eng.rdu2.redhat.com   host-8-174-52.host.centralci.eng.rdu2.redhat.com   172.16.120.15   10.128.0.0/23

2. Delete one node
[root@host-8-174-52 ~]# oc delete node host-8-174-34.host.centralci.eng.rdu2.redhat.com
node "host-8-174-34.host.centralci.eng.rdu2.redhat.com" deleted

3. SSH into the deleted node and check the OpenFlow flows
[root@host-8-174-34 ~]# ovs-ofctl dump-flows br0 -O OpenFlow13 | grep 10.129.0.0
 cookie=0x0, duration=52.582s, table=0, n_packets=0, n_bytes=0, priority=200,arp,in_port=1,arp_spa=10.128.0.0/14,arp_tpa=10.129.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=52.578s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_src=10.128.0.0/14,nw_dst=10.129.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=52.530s, table=30, n_packets=18, n_bytes=756, priority=200,arp,arp_tpa=10.129.0.0/23 actions=goto_table:40
 cookie=0x0, duration=52.517s, table=30, n_packets=31868, n_bytes=182318798, priority=200,ip,nw_dst=10.129.0.0/23 actions=goto_table:70

4. Add the deleted node back by restarting the node service
[root@host-8-174-34 ~]# systemctl restart atomic-openshift-node
5. Wait a few minutes and check the OpenFlow flows again
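For step 5, the check is the same ovs-ofctl dump as in step 3; a minimal sketch, assuming the same br0 bridge and subnet as above (the wait interval is arbitrary):

[root@host-8-174-34 ~]# sleep 120    # give the SDN a couple of minutes to re-sync after the node service restart
[root@host-8-174-34 ~]# ovs-ofctl dump-flows br0 -O OpenFlow13 | grep 10.129.0.0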


Actual results:
The OpenFlow flows for the obsolete node still exist after the node comes back.
Step 5:
[root@host-8-174-34 ~]# ovs-ofctl dump-flows br0 -O OpenFlow13 | grep 10.129.0.0
 cookie=0x0, duration=94.790s, table=0, n_packets=0, n_bytes=0, priority=200,arp,in_port=1,arp_spa=10.128.0.0/14,arp_tpa=10.129.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=94.786s, table=0, n_packets=0, n_bytes=0, priority=200,ip,in_port=1,nw_src=10.128.0.0/14,nw_dst=10.129.0.0/23 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_REG0[],goto_table:10
 cookie=0x0, duration=94.738s, table=30, n_packets=18, n_bytes=756, priority=200,arp,arp_tpa=10.129.0.0/23 actions=goto_table:40
 cookie=0x0, duration=94.725s, table=30, n_packets=31868, n_bytes=182318798, priority=200,ip,nw_dst=10.129.0.0/23 actions=goto_table:70



Expected results:
The OpenFlow flows for the obsolete node should be cleared after the node comes back.

Additional info:

Comment 1 Dan Winship 2017-07-06 12:38:59 UTC
This has never been supported. If you delete a node and then want to re-add it, you have to reboot the machine (or at least restart all of the services relevant to OpenShift). See also https://trello.com/c/lCKMyDfs
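(Illustration only, not part of the original comment: a minimal sketch of the workaround described above. The node service name is the one used in this report; a full reboot is the simpler option, and restarting openvswitch together with the node service is assumed here to cover "all services relevant to OpenShift".)

[root@host-8-174-34 ~]# systemctl reboot
# or, assuming restarting the OVS and node services is enough to rebuild br0 and drop the stale flows:
[root@host-8-174-34 ~]# systemctl restart openvswitch atomic-openshift-node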

*** This bug has been marked as a duplicate of bug 1311849 ***

