Bug 1425205 - [dev-preview] "Error syncing pod, skipping: failed to "TeardownNetwork"" warning when deploying templates
Keywords:
Status: CLOSED DUPLICATE of bug 1417234
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Networking
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Dan Williams
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-20 20:13 UTC by Will Gordon
Modified: 2021-03-11 14:58 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-01 19:15:11 UTC
Target Upstream Version:
Embargoed:


Attachments:

Description Will Gordon 2017-02-20 20:13:15 UTC
Description of problem:
Deploying default templates in OpenShift Dev Preview, e.g., nodejs-mongo-persistent or laravel-mysql-example, causes confusing error messages in Event Monitoring.

The error message is: 

Error syncing pod, skipping: failed to "SetupNetwork" for "mysql-1-deploy_wgordon-test" with SetupNetworkError: "Failed to setup network for pod \"mysql-1-deploy_wgordon-test(808a01b7-f7a7-11e6-9235-0e63b9c1c48f)\" using network plugins \"cni\": CNI request failed with status 400: 'Failed to ensure that nat chain POSTROUTING jumps to MASQUERADE: error checking rule: exit status 4: iptables: Resource temporarily unavailable.\n\n'; Skipping pod"

This does not appear to cause any actual degradation of service, but it leads users to assume that our default templates are somehow broken.
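
For context on where the message comes from: before setting up the pod network, the CNI plugin checks that the nat table's POSTROUTING chain jumps to MASQUERADE. The following Go sketch (a hypothetical helper, not the actual openshift-sdn code) illustrates that kind of check; the important detail is that "iptables -C" exits with status 1 when the rule is simply missing, but with status 4 on a resource problem such as failing to take the xtables lock, which is what appears above as "Resource temporarily unavailable".

package main

import (
	"fmt"
	"os/exec"
)

// ensureMasqueradeRule is a hypothetical sketch of the kind of check the
// plugin performs. "iptables -C" asks whether the rule already exists:
// exit status 1 means "rule not found", while exit status 4 signals a
// resource problem (typically another process holding the xtables lock),
// which surfaces as "Resource temporarily unavailable".
func ensureMasqueradeRule() error {
	check := exec.Command("iptables", "-t", "nat", "-C", "POSTROUTING", "-j", "MASQUERADE")
	err := check.Run()
	if err == nil {
		return nil // rule already present
	}
	if exitErr, ok := err.(*exec.ExitError); !ok || exitErr.ExitCode() != 1 {
		// Anything other than "rule not found" is propagated, producing the
		// "error checking rule: exit status 4" text seen in the event above.
		return fmt.Errorf("error checking rule: %v", err)
	}
	// Rule is missing: append it.
	return exec.Command("iptables", "-t", "nat", "-A", "POSTROUTING", "-j", "MASQUERADE").Run()
}

func main() {
	if err := ensureMasqueradeRule(); err != nil {
		fmt.Println("Failed to ensure that nat chain POSTROUTING jumps to MASQUERADE:", err)
	}
}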

Version-Release number of selected component (if applicable):
v3.4.1.2 (online version 3.4.0.13)

How reproducible:
always


Steps to Reproduce:
1. Start a new project
2. Select "nodejs-mongo-persistent" to add to your project
3. Watch the Monitoring > Events page

Actual results:
Successful deployment, but with an error message that apparently has no bearing on the outcome

Expected results:
Deployment always succeeds without the spurious warning, or it fails with a "real" error message that the template owner needs to fix

Additional info:
Occurs in both -build and -deploy pods

Comment 1 Dan Williams 2017-03-01 19:09:59 UTC
I have gotten a patch accepted upstream in iptables that will prevent iptables from reporting this error when checking rules.

But that patch doesn't solve the problem of contention in the kernel, which is the actual cause here. We still need to figure out which two iptables processes are running at the same time, and which one isn't properly waiting for the other to finish.
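
As a point of comparison (this is not necessarily what the upstream patch does), callers can sidestep most of the lock contention by passing iptables its -w/--wait flag so it blocks on the xtables lock instead of failing, and by retrying when a call still fails with the resource exit status. A rough Go sketch with a hypothetical helper name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runIptablesWithWait is a hypothetical sketch: "-w" makes iptables wait
// for the xtables lock rather than exiting with "Resource temporarily
// unavailable" when another iptables process holds it. If the call still
// fails with the resource exit status (4), retry briefly.
func runIptablesWithWait(args ...string) error {
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		cmd := exec.Command("iptables", append([]string{"-w"}, args...)...)
		if err = cmd.Run(); err == nil {
			return nil
		}
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() != 4 {
			return err // a real failure (e.g. rule not found), not lock contention
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("iptables %v failed after retries: %v", args, err)
}

func main() {
	// The same check that fails in the report above.
	if err := runIptablesWithWait("-t", "nat", "-C", "POSTROUTING", "-j", "MASQUERADE"); err != nil {
		fmt.Println(err)
	}
}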

Are you able to reliably reproduce this issue?

