Created attachment 1787168 [details]
Only namespace selector shown when certain pods in a specific namespace are allowed

Description of problem:
Some confusing selector text in the From column of the NetworkPolicy Ingress rules table

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-05-26-021757

How reproducible:
Always

Steps to Reproduce:
1. Create a NetworkPolicy that only allows ingress from certain pods in a specific namespace

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-specificnspod
  namespace: yapei
spec:
  podSelector:
    matchLabels:
      type: blue
  ingress:
    - ports:
        - protocol: TCP
          port: 443
        - protocol: UDP
          port: 80
        - protocol: SCTP
          port: 453
      from:
        - podSelector:
            matchLabels:
              type: redhattest
          namespaceSelector:
            matchLabels:
              project: testing
  policyTypes:
    - Ingress

2. Check the Ingress rules table on the NetworkPolicy details page
3. Create another NetworkPolicy which allows both specific pods across the whole cluster and a specific namespace

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978

4. Check the Ingress rules table on the NetworkPolicy details page

Actual results:
2. The Ingress rules table only shows `NS selector: project=testing`
4. The Ingress rules table shows an extra `No selector` text, which is a little confusing

Expected results:
2. Since only specific pods in a specific namespace are allowed, is there a better way to improve the user experience here?
4. The `No selector` text seems like it needs to be removed

Additional info:
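For cross-checking what the console renders against what the API reports, the same rules can be inspected from the CLI (resource names and namespaces are the ones used in the steps above; the `describe` output is only a reference point, not what the console is required to show):

$ oc describe networkpolicy ingress-specificnspod -n yapei
$ oc describe networkpolicy test-network-policy -n default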
Created attachment 1787173 [details]
`No selector` text shown when podSelector and namespaceSelector coexist
+1, I agree this is confusing. It would also be worth reusing some of the texts defined via https://issues.redhat.com/browse/NETOBSERV-4 (the creation form), such as "All pods in the same namespace", "Certain pods in some namespaces", etc., next to showing all the relevant configured selectors.
Also, there's a hierarchy of information to take into account: a policy can contain several rules, and each rule can contain several "peers". Today, the configured peer selectors are "flattened" into a rule, hiding how they are organised. For instance, it's not the same thing to have:

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            project: myproject
      - podSelector:
          matchLabels:
            role: frontend

( = every namespace that matches "project=myproject" is allowed, and every pod that matches "role=frontend" in the policy's namespace is allowed)

versus:

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            project: myproject
        podSelector:
          matchLabels:
            role: frontend

( = only pods that match "role=frontend" in namespaces that match "project=myproject" are allowed)

So this page needs some reworking.
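As a rough CLI illustration of the two peer shapes above (labels come from the second example in this bug; the policy's own namespace is assumed to be "default" as in that example, and these queries only approximate the peer semantics, they are not what the console runs):

# Variant 1, two independent peers:
#   (a) all pods in every namespace labelled project=myproject
oc get namespaces -l project=myproject
#   (b) plus pods labelled role=frontend in the policy's own namespace
oc get pods -n default -l role=frontend

# Variant 2, a single combined peer:
#   only pods labelled role=frontend inside namespaces labelled project=myproject
oc get namespaces -l project=myproject -o name | while read ns; do
  oc get pods -n "${ns#namespace/}" -l role=frontend
done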
When a networkpolicy is created as:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default
  namespace: yapei26325
spec:
  podSelector: {}
  ingress:
    - {}
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress

the Ingress rules and Egress rules tables show:
All incoming traffic is denied to Pods in yapei26325
All outgoing traffic is denied from Pods in yapei26325

Actually, all incoming and outgoing traffic is allowed ~
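For reference, the standard Kubernetes semantics that make the displayed text wrong (policy names below are purely illustrative): an empty rule list denies all traffic of that type, while a single empty rule allows all traffic of that type.

# Denies all ingress to the selected pods: policyTypes includes Ingress,
# but no ingress rules are defined.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-ingress   # illustrative name
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allows all ingress to the selected pods: the single empty rule matches every peer.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-ingress  # illustrative name
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress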
Spotted another bug in that page: rules with pod selectors are showing a selector link that is supposed to search for the corresponding pods. However, it is always searching in the policy's namespace, which makes sense only when the namespace selector is unset. When it's set, it should search in the whole cluster.
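To illustrate with the labels from the first example in this bug (namespace and labels as used there): when the peer also has a namespaceSelector, the equivalent search is cluster-wide rather than scoped to the policy's namespace.

# what the link currently does (scoped to the policy's namespace):
oc get pods -n yapei -l type=redhattest

# roughly what it should do when a namespaceSelector is set: search the whole
# cluster, restricted to namespaces matching the namespaceSelector labels
oc get namespaces -l project=testing
oc get pods --all-namespaces -l type=redhattest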
Create a networkpolicy with multiple ipBlocks

$ cat testdata/networking/networkpolicy/nw-ipblock-multi-cidrs.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-ipblock
spec:
  podSelector: {}
  ingress:
    - from:
        - ipBlock:
            cidr: 10.128.6.27/32
        - ipBlock:
            cidr: 10.128.10.13/32
        - ipBlock:
            cidr: 10.128.6.28/32

$ oc create -f nw-ipblock-multi-cidrs.yaml

1. For the 'Target pods' column, shall we show 'All pods' or 'Any pod' when the Pod selector is empty? Currently it shows: Pod selector, No selector
2. We render three table rows instead of just one

Assigning back for further debugging and confirmation; this was checked against 4.8.0-0.nightly-2021-06-03-221810
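For reference, the API object contains a single ingress rule with three peers, which can be confirmed with a jsonpath query (expected output given the spec above is one space-separated line of CIDRs):

$ oc get networkpolicy test-ipblock -o jsonpath='{.spec.ingress[0].from[*].ipBlock.cidr}'
10.128.6.27/32 10.128.10.13/32 10.128.6.28/32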
Created attachment 1788907 [details]
networkpolicy-multi-ipblocks
Created attachment 1788940 [details]
ipBlock exceptions not shown

When a networkpolicy has the following spec:

spec:
  podSelector: {}
  egress:
    - to:
        - ipBlock:
            cidr: 10.128.10.14/24
            except:
              - 10.128.10.23/32
              - 10.128.10.21/32
  policyTypes:
    - Egress

the exceptions are not shown in the Egress rules table. The Ingress rules table has the same issue.
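The exceptions are present on the API object and can be read back directly (the policy name `test-ipblock-except` is hypothetical, since only the spec is shown above):

$ oc get networkpolicy test-ipblock-except -o jsonpath='{.spec.egress[0].to[0].ipBlock.except[*]}'
10.128.10.23/32 10.128.10.21/32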
Hi Yadan,

- About showing "All pods" for the main selector: +1, I'll do that.
- About merging all IPBlocks in a rule: +1, I agree it makes sense, although it was on purpose that I changed the display of rows (now 1 row is 1 peer, not 1 rule)... but ipblocks can be handled as a special case to save space and make it easier to reason about.
- About showing the exceptions: you're right, they should be there, but my fear is that it screws up the look of the table when there are many exceptions defined; perhaps they can be displayed in a tooltip. I'm going to try that.
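To illustrate the layout concern (CIDRs below are purely illustrative), a peer like this would add six extra lines to a single table cell if the exceptions were listed inline, which is why a tooltip or similarly collapsed display is being considered:

- ipBlock:
    cidr: 10.128.0.0/14
    except:
      - 10.128.2.0/24
      - 10.128.4.0/24
      - 10.128.6.0/24
      - 10.129.0.0/24
      - 10.130.0.0/24
      - 10.131.0.0/24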
Confirmed that the changes are merged in the 4.9 nightly, not 4.8. Changing target release to 4.9.0; correct me if I'm wrong.
# oc adm release info registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2021-07-21-150743 --pullspecs | grep console
console             quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af7e87589149e0f9aa3d0613e454561f62feabc4681e93ba101a153102b984d7
console-operator    quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbd31df46b731810a544769049791bfcf87c04cd29f4fc82eeafb2d55c5dc969

# oc image info quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af7e87589149e0f9aa3d0613e454561f62feabc4681e93ba101a153102b984d7 | grep commit
io.openshift.build.commit.id=188a49057d477147554804e6b0e9ee8493ac7587
io.openshift.build.commit.url=https://github.com/openshift/console/commit/188a49057d477147554804e6b0e9ee8493ac7587

# cd /root/odev/src/github.com/openshift/console
# git fetch origin
# git rebase origin/master
First, rewinding head to replay your work on top of it...
Fast-forwarded master to origin/master.

# git log 188a49057d477147554804e6b0e9ee8493ac7587 | grep '#9157'
# git log 188a49057d477147554804e6b0e9ee8493ac7587 | grep '#9102'
    Merge pull request #9102 from jotak/netpol-details

From the above results we can see that the latest fix PR `#9157` did not make it into the 4.8 nightly.

# oc adm release info registry.ci.openshift.org/ocp/release:4.9.0-0.nightly-2021-07-21-081948 --pullspecs | grep console
console             quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951768a92384d8be87ce7e867de738bc200f9c9f34a0af7936ee939d28b86292
console-operator    quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0de8e69661ce732ba76c00b6795aebb938edfd44464de7ac33664906a52285ee

# oc image info quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951768a92384d8be87ce7e867de738bc200f9c9f34a0af7936ee939d28b86292 | grep commit
io.openshift.build.commit.id=465c7baf49783b7ac3a88e346470857d1c285b99
io.openshift.build.commit.url=https://github.com/openshift/console/commit/465c7baf49783b7ac3a88e346470857d1c285b99

# git log 465c7baf49783b7ac3a88e346470857d1c285b99 | grep '#9157'
    Merge pull request #9157 from jotak/improve-netpol-display
# git log 465c7baf49783b7ac3a88e346470857d1c285b99 | grep '#9102'
    Merge pull request #9102 from jotak/netpol-details

The fix `#9157` is only available in the 4.9 nightly.
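A condensed version of the check above (a sketch; the clone path of github.com/openshift/console is whatever you use locally, and --image-for is used instead of grepping the --pullspecs output):

RELEASE=registry.ci.openshift.org/ocp/release:4.9.0-0.nightly-2021-07-21-081948
CONSOLE_IMAGE=$(oc adm release info --image-for=console "$RELEASE")
COMMIT=$(oc image info "$CONSOLE_IMAGE" | sed -n 's/.*io\.openshift\.build\.commit\.id=//p')
git -C /path/to/openshift/console log --oneline "$COMMIT" | grep -E '#(9157|9102)'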
1. Create four different networkpolicies

cat > networkpolicy-1.yaml << EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-specificnspod
spec:
  podSelector:
    matchLabels:
      type: blue
  ingress:
    - ports:
        - protocol: TCP
          port: 443
        - protocol: UDP
          port: 80
        - protocol: SCTP
          port: 453
      from:
        - podSelector:
            matchLabels:
              type: redhattest
          namespaceSelector:
            matchLabels:
              project: testing
  policyTypes:
    - Ingress
EOF

cat > networkpolicy-2.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
EOF

cat > networkpolicy-3.yaml << EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default
spec:
  podSelector: {}
  ingress:
    - {}
  egress:
    - {}
  policyTypes:
    - Ingress
    - Egress
EOF

cat > networkpolicy-4.yaml << EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: test-ipblock
spec:
  podSelector: {}
  ingress:
    - from:
        - ipBlock:
            cidr: 10.128.6.27/32
        - ipBlock:
            cidr: 10.128.10.13/32
        - ipBlock:
            cidr: 10.128.6.28/32
EOF

$ oc create -f networkpolicy-1.yaml -f networkpolicy-2.yaml -f networkpolicy-3.yaml -f networkpolicy-4.yaml -n test

2. Check the Ingress and Egress rules tables on the NetworkPolicy details page
The rules are now shown correctly, and are more readable and clear.

Verified on 4.9.0-0.nightly-2021-07-22-015245
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759