Description of problem:
After deleting a node, the openflow rules for that node are still present on the remaining nodes.

Version-Release number of selected component (if applicable):
oc v1.0.6-997-gff3b522
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4

How reproducible:
Always

Steps to Reproduce:
1. Set up a multi-node env with the multi-tenant plugin
[root@master ~]# oc get node
NAME                LABELS                                     STATUS    AGE
node1.bmeng.local   kubernetes.io/hostname=node1.bmeng.local   Ready     13m
node2.bmeng.local   kubernetes.io/hostname=node2.bmeng.local   Ready     13m
node3.bmeng.local   kubernetes.io/hostname=node3.bmeng.local   Ready     13m

Host/IP map is as below:
master.bmeng.local   10.66.128.62
node1.bmeng.local    10.66.128.60
node2.bmeng.local    10.66.128.61
node3.bmeng.local    10.66.128.57

2. Check the openflow list on node1/node2
# ovs-ofctl dump-flows br0 -O OpenFlow13
cookie=0xa428039, duration=2713.675s, table=7, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.1.2.0/24 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:10.66.128.57->tun_dst,output:1
cookie=0xa42803d, duration=2713.695s, table=7, n_packets=0, n_bytes=0, priority=100,ip,nw_dst=10.1.0.0/24 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:10.66.128.61->tun_dst,output:1
cookie=0xa428039, duration=2713.670s, table=8, n_packets=0, n_bytes=0, priority=100,arp,arp_tpa=10.1.2.0/24 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:10.66.128.57->tun_dst,output:1
cookie=0xa42803d, duration=2713.683s, table=8, n_packets=0, n_bytes=0, priority=100,arp,arp_tpa=10.1.0.0/24 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:10.66.128.61->tun_dst,output:1

3. Delete one node
[root@master ~]# oc delete node node3.bmeng.local
node "node3.bmeng.local" deleted
[root@master ~]# oc get node
NAME                LABELS                                     STATUS    AGE
node1.bmeng.local   kubernetes.io/hostname=node1.bmeng.local   Ready     39m
node2.bmeng.local   kubernetes.io/hostname=node2.bmeng.local   Ready     39m

4. Check the openflow list again

Actual results:
4. Same as step 2: the openflow rules for node3 (10.66.128.57) still exist.

Expected results:
The openflow rules for node3 should be cleaned up, since node3 has been deleted and is no longer in service.
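A quick way to confirm the stale entries is to filter the dump-flows output for the deleted node's IP, which appears in the leftover rules as tun_src or tun_dst. The helper below is an illustrative sketch only (the function name is made up for this example, and br0 is the SDN bridge from the steps above):

```shell
# find_stale_flows: print any flow lines (read on stdin) that still
# reference the given node IP, e.g. in tun_src or tun_dst.
# Illustrative helper, not part of openshift-sdn.
find_stale_flows() {
    grep -F -- "$1"
}

# Usage on a surviving node, after deleting node3 (10.66.128.57):
#   ovs-ofctl dump-flows br0 -O OpenFlow13 | find_stale_flows 10.66.128.57
# Any output means the flows for the deleted node were not cleaned up.
```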
Can't reproduce. Can you attach the journal output from node1 or node2?
Hi Dan,
Actually I have tried many times, and the behavior is a little weird. The first time I delete a node, its openflow rules remain even after the node has been deleted. If I then restart the node so it registers with the master again, delete it a second time, and check the openflow rules, the rules disappear. Here is the log: http://fpaste.org/284758/17727144/
so, a bunch more changes have been made since then... is this still reproducible?
Yes, tested with the latest origin code. It can still be reproduced the first time a node is deleted.

10.66.128.62   master.bmeng.local
10.66.128.57   node1.bmeng.local
10.66.128.1    node2.bmeng.local

# oc delete node node2.bmeng.local --config=/tmp/admin.kubeconfig
node "node2.bmeng.local" deleted
# oc get node --config=/tmp/admin.kubeconfig
NAME                LABELS                                     STATUS    AGE
node1.bmeng.local   kubernetes.io/hostname=node1.bmeng.local   Ready     50s
# ovs-ofctl dump-flows br0 -O OpenFlow13 | grep 10.66
cookie=0xa428001, duration=195.442s, table=0, n_packets=11187, n_bytes=790366, tun_src=10.66.128.1 actions=goto_table:1
cookie=0xa428001, duration=195.440s, table=8, n_packets=10007, n_bytes=740518, priority=100,ip,nw_dst=10.1.1.0/24 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:10.66.128.1->tun_dst,output:1
cookie=0xa428001, duration=195.438s, table=9, n_packets=2, n_bytes=84, priority=100,arp,arp_tpa=10.1.1.0/24 actions=move:NXM_NX_REG0[]->NXM_NX_TUN_ID[0..31],set_field:10.66.128.1->tun_dst,output:1
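Until the fix lands, the stale flows can in principle be removed by hand: the dump above shows that all of the leftover flows for the deleted node share a single cookie (0xa428001 here), so ovs-ofctl del-flows with a fully-masked cookie match should drop them together. A workaround sketch only, not the upstream fix; the cookie and bridge values are assumptions taken from this reproduction, and the command is echoed rather than executed:

```shell
# Manual workaround sketch: remove every flow that shares the deleted
# node's cookie. "/-1" masks the full 64-bit cookie so only exact-cookie
# matches are deleted. Cookie/bridge values are taken from the dump above.
COOKIE=0xa428001
CMD="ovs-ofctl del-flows br0 cookie=${COOKIE}/-1 -O OpenFlow13"
echo "would run: $CMD"   # drop the echo to actually delete the flows
```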
Fixed in https://github.com/openshift/openshift-sdn/pull/241
Merged in origin: https://github.com/openshift/origin/pull/8468
Verified on OSE build v3.2.0.17; the issue has been fixed.