Bug 1544903

Summary: Push image still failed, no route to host
Product: OpenShift Container Platform
Reporter: Max Whittingham <mwhittin>
Component: Networking
Assignee: Dan Winship <danw>
Status: CLOSED ERRATA
QA Contact: Meng Bo <bmeng>
Severity: unspecified
Priority: unspecified
Version: 3.7.1
CC: aos-bugs, bbennett, eparis, gucore, jfiala, ssadeghi, wmeng
Target Milestone: ---
Keywords: OpsBlocker
Target Release: 3.9.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Doc Text:
Cause: In some (as-yet-undetermined) circumstances, nodes were apparently receiving a duplicate, out-of-order HostSubnet "deleted" event from the master.
Consequence: When processing the duplicate event, the node could end up deleting OVS flows corresponding to an active node, causing pods on the two nodes to be unable to communicate with each other. (This was most noticeable when it happened to a node hosting the registry.)
Fix: The HostSubnet event-processing code will now notice that the event is a duplicate and ignore it.
Result: OVS flows are not deleted, and pods can communicate.
Story Points: ---
Clones: 1546169
Last Closed: 2018-03-28 14:28:16 UTC
Type: Bug
Bug Blocks: 1546169, 1546170, 1547599    
Attachments:
check-sdn.sh
flush-infra.sh

Description Max Whittingham 2018-02-13 17:38:46 UTC
Description of problem:
We've been seeing periodic but pretty consistent problems both pushing to and pulling from the registry, with the error 'No route to host'.

Version-Release number of selected component (if applicable):
3.7.23-1

Comment 2 Eric Paris 2018-02-13 18:20:18 UTC
us-west-2 should now be happy and working fine. We are still working on the root cause and final solution, and will update with how we got it working.

Comment 3 Eric Paris 2018-02-14 02:29:51 UTC
Created attachment 1395746 [details]
check-sdn.sh

I run this script with:
ansible 'starter*infra*' -u root -m script -a check-sdn.sh
If the script exits with 'FAIL', that means the OVS rules are messed up. The problem can affect other communication paths as well, but since the most common path involves the infra nodes, making sure those stay clean is the most important thing. On us-west-2 we saw compute nodes unable to pull from the registry because the infra nodes' rule sets were messed up.
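
The attachment itself isn't inlined in this report, but a check along these lines would catch the broken state described here. This is only a sketch of the idea, not the actual check-sdn.sh, and it assumes the 3.7-era openshift-sdn layout (bridge br0, OpenFlow 1.3, VXLAN flows referencing each peer node's host IP):

  #!/bin/bash
  # Sketch only (not the attached check-sdn.sh): given the host IPs of the
  # other nodes, verify that br0 still carries flows referencing each of them.
  # A healthy node has an ingress flow matching tun_src=<peer IP> and an egress
  # flow setting tun_dst to <peer IP>; here we just check that each peer IP
  # shows up in the flow dump at all.
  set -euo pipefail

  if [ $# -eq 0 ]; then
      echo "usage: $0 <peer-node-ip>..." >&2
      exit 2
  fi

  flows=$(ovs-ofctl -O OpenFlow13 dump-flows br0)
  rc=0
  for ip in "$@"; do
      if ! grep -q "$ip" <<<"$flows"; then
          echo "FAIL: no OVS flows referencing peer node $ip"
          rc=1
      fi
  done
  [ "$rc" -eq 0 ] && echo "PASS"
  exit "$rc"

Something like this could be pushed out the same way as the ansible invocation above, with the peer node IPs passed as arguments.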

Comment 4 Eric Paris 2018-02-14 02:33:00 UTC
Created attachment 1395747 [details]
flush-infra.sh

Running from a master with affected infra nodes, this script will drain the infra node, delete all of the containers and cruft left behind, and then start the infra node again. This results in a new, clean OVS ruleset.
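
Again, the attachment isn't shown here, but the procedure it describes looks roughly like the following. The service and cleanup commands (atomic-openshift-node, docker) are assumptions for a 3.7-era cluster, and this is a sketch rather than the actual flush-infra.sh:

  #!/bin/bash
  # Sketch only (not the attached flush-infra.sh): run from a master, drain
  # the affected infra node, clean it up, and bring it back so openshift-sdn
  # reprograms br0 with a fresh ruleset.
  set -euo pipefail

  NODE="${1:?usage: $0 <infra-node-name>}"

  # Evacuate the node (3.7-era drain flags)
  oc adm drain "$NODE" --force --delete-local-data --ignore-daemonsets

  # On the node: stop the node service, remove leftover containers,
  # then start the node service again so the SDN rules are rebuilt.
  ssh "root@$NODE" '
      systemctl stop atomic-openshift-node
      docker ps -aq | xargs -r docker rm -f
      systemctl start atomic-openshift-node
  '

  # Let workloads schedule onto it again
  oc adm uncordon "$NODE"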

Comment 6 Dan Winship 2018-02-16 13:10:57 UTC
Fixed by https://github.com/openshift/origin/pull/18617
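
For anyone following along, the Doc Text above and this PR concern the mapping between HostSubnet objects and the per-node OVS flows. The commands below are illustrative only (not taken from the PR or from this bug) and show the two sides of that mapping on a 3.x cluster:

  # One HostSubnet per node: node name, host IP, and the pod subnet it owns
  # (needs cluster-admin)
  oc get hostsubnets

  # On any node, the VXLAN flows openshift-sdn programs on br0 toward the
  # other nodes; these are the flows that were being wrongly deleted when the
  # duplicate "deleted" event was processed.
  ovs-ofctl -O OpenFlow13 dump-flows br0 | grep tun_dst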

Comment 10 Meng Bo 2018-03-07 03:06:36 UTC
Tested on v3.9.3.
There is no replay of the DeleteHostSubnetRules event when deleting a node.
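
Not the exact QE steps, but a quick way to spot-check the same behavior (the names and IPs below are placeholders): delete one node object, then confirm a surviving node still has flows for the nodes that remain.

  DELETED_NODE=node-x.example.com   # placeholder: node object being removed
  PEER_IP=10.0.1.5                  # placeholder: host IP of a node that stays up

  # On a master:
  oc delete node "$DELETED_NODE"

  # On a surviving node: flows for the remaining peers should still be present
  ovs-ofctl -O OpenFlow13 dump-flows br0 | grep "$PEER_IP"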

Comment 13 errata-xmlrpc 2018-03-28 14:28:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489