Bug 1852977
| Summary: | [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [Top Level] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Corey Daley <cdaley> |
| Component: | Networking | Assignee: | Victor Pickard <vpickard> |
| Networking sub component: | ovn-kubernetes | QA Contact: | zhaozhanqi <zzhao> |
| Status: | CLOSED DUPLICATE | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | anbhat, aojeagar, bbennett, danw, deads, trozet, vpickard |
| Version: | 4.4 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [Top Level] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] |
| Last Closed: | 2020-09-09 13:24:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Comment 2
David Eads
2020-08-25 14:41:31 UTC
*** Bug 1852982 has been marked as a duplicate of this bug. ***

*** Bug 1852980 has been marked as a duplicate of this bug. ***

*** Bug 1852979 has been marked as a duplicate of this bug. ***

[sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [Top Level] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
[sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [Top Level] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
[sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client [Top Level] [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
[sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]

From the build logs of the above run, I see one instance of the "transport is closing" log. I also see more of the "failed to sync secret cache: timed out waiting for the condition" logs. I'm not sure what the root cause of that is yet; I'll ask Tim if he thinks this is related.

Sep 8 16:50:28.593: FAIL: unable to cleanup policy allow-ns-b-via-namespace-selector-or-client-b-via-pod-selector: rpc error: code = Unavailable desc = transport is closing

Sep 08 16:48:54.643 W ns/e2e-network-policy-b-5929 pod/client-a-56l7x node/ip-10-0-150-198.us-east-2.compute.internal reason/FailedMount MountVolume.SetUp failed for volume "default-token-jwm6w" : failed to sync secret cache: timed out waiting for the condition
Sep 08 16:46:54.502 W ns/e2e-network-policy-1947 pod/server-jvmg8 node/ip-10-0-186-134.us-east-2.compute.internal reason/FailedMount MountVolume.SetUp failed for volume "default-token-fzzmr" : failed to sync secret cache: timed out waiting for the condition
Sep 08 16:46:54.646 W ns/e2e-network-policy-223 pod/server-52858 node/ip-10-0-150-198.us-east-2.compute.internal reason/FailedMount MountVolume.SetUp failed for volume "default-token-f5rmp" : failed to sync secret cache: timed out waiting for the condition
Sep 08 16:46:55.648 W ns/e2e-nettest-8788 pod/netserver-0 node/ip-10-0-150-198.us-east-2.compute.internal reason/FailedMount MountVolume.SetUp failed for volume "default-token-gpmgz" : failed to sync secret cache: timed out waiting for the condition
...
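For reference, the "allow-all policy taking precedence" case listed in the summary boils down to applying two ingress policies to the same server pod: NetworkPolicies are additive, so an empty allow-all ingress rule opens all traffic regardless of a narrower policy applied alongside it. Below is a minimal sketch of such a policy pair using the k8s.io/api/networking/v1 types; the names, labels, and port are placeholders and not the exact objects the upstream e2e framework creates.

```go
// Package sketch illustrates the kind of NetworkPolicy objects the failing
// e2e cases create. It is not the e2e framework's code; names, labels, and
// ports are placeholders.
package sketch

import (
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// restrictedIngress allows ingress to the server pod only on one port.
func restrictedIngress(ns string) *netv1.NetworkPolicy {
	port := intstr.FromInt(80) // placeholder port
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-port-80-only", Namespace: ns},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
			Ingress: []netv1.NetworkPolicyIngressRule{{
				Ports: []netv1.NetworkPolicyPort{{Port: &port}},
			}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}

// allowAllIngress selects the same pod with a single empty ingress rule,
// which matches all sources and all ports. Because NetworkPolicies are
// additive, applying this alongside restrictedIngress means all ingress
// traffic must be allowed; that is the behavior the "allow-all policy
// taking precedence" test asserts.
func allowAllIngress(ns string) *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-all", Namespace: ns},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"pod-name": "server"}},
			Ingress:     []netv1.NetworkPolicyIngressRule{{}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}
```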
And more of the same:

Sep 08 17:12:01.160 W ns/e2e-network-policy-7443 pod/client-can-connect-81-dcvdj node/ip-10-0-150-198.us-east-2.compute.internal reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_client-can-connect-81-dcvdj_e2e-network-policy-7443_cf15b0ab-4a1c-499e-996a-e59c5afeb919_0(aba5d6170b2e7fea2fab321a83fb9071135ed212ecc6b325e07e9e6c2880d141): [e2e-network-policy-7443/client-can-connect-81-dcvdj:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[e2e-network-policy-7443/client-can-connect-81-dcvdj] failed to get pod annotation: timed out waiting for the condition\n'
Sep 08 17:12:01.831 W ns/e2e-network-policy-437 pod/client-can-connect-81-fs65n node/ip-10-0-150-198.us-east-2.compute.internal reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_client-can-connect-81-fs65n_e2e-network-policy-437_5859668f-062d-4eb1-8217-201ed0c2817e_0(93cabdece60ba1bbd2d49cc4fe5e452d5c7fe5ebd951f768a3104a0988094e2a): [e2e-network-policy-437/client-can-connect-81-fs65n:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[e2e-network-policy-437/client-can-connect-81-fs65n] failed to get pod annotation: timed out waiting for the condition\n'
Sep 08 17:12:02.647 W ns/e2e-network-policy-9153 pod/client-can-connect-81-mzqh8 node/ip-10-0-248-176.us-east-2.compute.internal reason/FailedCreatePodSandBox Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_client-can-connect-81-mzqh8_e2e-network-policy-9153_5569a5b8-4c29-47a0-9ca5-0df005717e00_0(c1b9ed2e8cd0d9e2044319092f7dc21e0be8391554bf4e970c26e1521b315e51): [e2e-network-policy-9153/client-can-connect-81-mzqh8:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[e2e-network-policy-9153/client-can-connect-81-mzqh8] failed to get pod annotation: timed out waiting for the condition\n'

And more of these logs, from etcd:
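As background on the "failed to get pod annotation: timed out waiting for the condition" errors above: the CNI ADD path polls for the pod-network annotation that the OVN master writes, and gives up after a deadline. The snippet below is a simplified, hedged sketch of that wait loop; the annotation key, interval, timeout, and function name are assumptions for illustration, not the actual ovn-kubernetes code.

```go
// Package sketch shows, in simplified form, how a CNI-side wait for a pod
// annotation can end in "timed out waiting for the condition".
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodAnnotation(client kubernetes.Interface, ns, name string) (string, error) {
	const annotationKey = "k8s.ovn.org/pod-networks" // assumed key
	var value string
	err := wait.PollImmediate(200*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		if v, ok := pod.Annotations[annotationKey]; ok {
			value = v
			return true, nil
		}
		return false, nil
	})
	if err != nil {
		// wait.ErrWaitTimeout stringifies to "timed out waiting for the
		// condition", which is what appears in the sandbox-creation events.
		return "", fmt.Errorf("failed to get pod annotation: %v", err)
	}
	return value, nil
}
```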
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:05:39 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:05:47 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:08:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:09:34 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:11:16 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:12:29 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:15:07 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
openshift-etcd_etcd-ip-10-0-136-4.us-east-2.compute.internal_etcd.log:WARNING: 2020/09/08 17:15:39 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"

So it does appear likely that the root cause of these network policy failures is the "transport is closing" issue.

*** This bug has been marked as a duplicate of bug 1872470 ***
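One connecting detail: the server-side "transport is closing" warnings in the etcd logs surface to gRPC clients as a status with code Unavailable, which matches the client-side failure above ("rpc error: code = Unavailable desc = transport is closing"). The sketch below shows the generic pattern for recognizing and retrying that error class; it is an illustrative helper, not code from etcd or the e2e suite.

```go
// Package sketch shows how a gRPC client can recognize transport-level
// failures (codes.Unavailable) and retry with a simple backoff.
package sketch

import (
	"context"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// retryOnUnavailable retries a call a few times when it fails with
// codes.Unavailable (e.g. "transport is closing"), backing off between tries.
func retryOnUnavailable(ctx context.Context, attempts int, call func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = call(ctx); err == nil {
			return nil
		}
		if status.Code(err) != codes.Unavailable {
			return err // not a transport-level failure; don't retry
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Duration(i+1) * 500 * time.Millisecond):
			// simple linear backoff before the next attempt
		}
	}
	return err
}
```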