Bug 1983878 - "Ensure that application Namespaces have Network Policies defined" check fails each time.
Summary: "Ensure that application Namespaces have Network Policies defined" check fails each time.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.7
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.9.0
Assignee: Jakub Hrozek
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On: 1990836
Blocks:
 
Reported: 2021-07-20 04:52 UTC by Johnray Fuller
Modified: 2024-10-01 19:01 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-07 06:05:14 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:3214 0 None None None 2021-09-07 06:05:28 UTC

Description Johnray Fuller 2021-07-20 04:52:48 UTC
Description of problem:

I have a 4.7.11 cluster with a freshly installed Compliance Operator version 0.1.35.

I created three namespaces, each with 4 network policies, but the "Ensure that application Namespaces have Network Policies defined" check fails every time.


The network policies are:
oc get networkpolicies.networking.k8s.io -n test1

NAME                              POD-SELECTOR   AGE
allow-from-openshift-ingress      <none>         117m
allow-from-openshift-monitoring   <none>         117m
allow-same-namespace              <none>         117m
deny-by-default                   <none>         118m

These are applied to each of the three namespaces.

# oc get projects | egrep -v "openshift|kube|default"
NAME                                               
test1                                                           
test2                                                             
test3   

I ran the first jq command from the results XML and got this:

$  oc get networkpolicies -o json --all-namespaces | jq '[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default") | .metadata.namespace] | unique'
[
  "test1",
  "test2",
  "test3"
]
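For readers without jq handy, the filtering that the command above performs can be sketched in Python. The sample items here are hypothetical stand-ins for the JSON that `oc get networkpolicies -o json --all-namespaces` returns:

```python
# Sketch of the jq filter used above: keep NetworkPolicies whose *name*
# does not start with "openshift" or "kube-" and is not "default",
# then collect the sorted, de-duplicated namespaces they live in.
# (jq's `unique` also sorts, hence sorted() here.)

def policy_namespaces(items):
    keep = [
        item["metadata"]["namespace"]
        for item in items
        if not item["metadata"]["name"].startswith("openshift")
        and not item["metadata"]["name"].startswith("kube-")
        and item["metadata"]["name"] != "default"
    ]
    return sorted(set(keep))

# Hypothetical sample data, not taken from a real cluster:
sample = [
    {"metadata": {"name": "allow-same-namespace", "namespace": "test1"}},
    {"metadata": {"name": "deny-by-default", "namespace": "test1"}},
    {"metadata": {"name": "allow-same-namespace", "namespace": "test2"}},
    {"metadata": {"name": "openshift-internal", "namespace": "openshift-etcd"}},
]
print(policy_namespaces(sample))  # ['test1', 'test2']
```

Note that the filter excludes policies by the policy's *name*, not by its namespace, mirroring the jq expression exactly.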

The following are the network policies:

#####
allow-from-openshift-ingress.yaml
#####
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
#####
allow-from-openshift-monitoring.yaml
#####
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-monitoring
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress
#####
allow-same-namespace.yaml
#####
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}

#####
deny-by-default.yaml
#####
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default
spec:
  podSelector:
  ingress: []

Rule ID	xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces

Version-Release number of selected component (if applicable):

OCP 4.7.11
Compliance Operator 0.1.35

How reproducible:

Every time

Steps to Reproduce:
1. Install the Compliance Operator
2. Create a namespace
3. Add network policies to it
4. Run the compliance scan

Actual results:

The "Ensure that application Namespaces have Network Policies defined" check reports FAIL even though every application namespace has network policies defined.

Expected results:

The check reports PASS.
Additional info:

Comment 1 Jakub Hrozek 2021-07-20 09:40:01 UTC
As discussed on Slack, the issue is caused by combining the latest content with an operator that is one version behind.

We need to make sure to release the latest operator version (0.1.36+) to address this issue.
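In other words, the fix is simply to run a Compliance Operator release at or above 0.1.36. A small illustrative helper (not part of the operator or oc-compliance) for checking a CSV name like the ones shown later in this bug against that minimum:

```python
# Illustrative helper, not a real Compliance Operator API: parse the
# version out of a CSV name like "compliance-operator.v0.1.35" and check
# whether it meets the minimum release carrying the fix (0.1.36).

def csv_version(csv_name):
    # "compliance-operator.v0.1.35" -> (0, 1, 35)
    ver = csv_name.rsplit(".v", 1)[1]
    return tuple(int(part) for part in ver.split("."))

def has_fix(csv_name, minimum=(0, 1, 36)):
    return csv_version(csv_name) >= minimum

print(has_fix("compliance-operator.v0.1.35"))  # False
print(has_fix("compliance-operator.v0.1.39"))  # True
```

Comparing version components as integer tuples avoids the usual string-comparison trap (e.g. "0.1.9" > "0.1.36" lexicographically).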

Comment 5 xiyuan 2021-08-25 14:08:53 UTC
Verification passed with 4.9.0-0.nightly-2021-08-24-203710 and compliance-operator.v0.1.39:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-08-24-203710   True        False         10h     Cluster version is 4.9.0-0.nightly-2021-08-24-203710
$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-8l8hs   compliance-operator.v0.1.39   Automatic   true
$ oc get csv
NAME                              DISPLAY                            VERSION    REPLACES                          PHASE
compliance-operator.v0.1.39       Compliance Operator                0.1.39                                       Succeeded
elasticsearch-operator.5.2.0-45   OpenShift Elasticsearch Operator   5.2.0-45   elasticsearch-operator.5.2.0-44   Succeeded

1. Create 3 projects test1, test2, test3, and create several network policies in namespace test1 only:
$ oc get projects | egrep -v "openshift|kube|default"
NAME                                               DISPLAY NAME   STATUS
test1                                                             Active
test2                                                             Active
test3                                                             Active

$ oc get networkpolicies -n test1
NAME                              POD-SELECTOR   AGE
allow-from-openshift-ingress      <none>         22m
allow-from-openshift-monitoring   <none>         22m
allow-same-namespace              <none>         22m
deny-by-default                   <none>         22m
$ oc get networkpolicies -n test2
No resources found in test2 namespace.
$ oc get networkpolicies -n test3
No resources found in test3 namespace.

2. Create a ScanSettingBinding:
$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-r
profiles:
  - name: ocp4-moderate
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF

3. Check the scan result:
$  oc get checkresults ocp4-moderate-configure-network-policies-namespaces
NAME                                                  STATUS   SEVERITY
ocp4-moderate-configure-network-policies-namespaces   FAIL     high
$ oc get networkpolicies -o json --all-namespaces | jq '[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default") | .metadata.namespace] | unique'
[
  "test1"
]

$  oc get  namespaces -o json | jq '[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default")]'
[
  {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
      "annotations": {
        "openshift.io/description": "",
        "openshift.io/display-name": "",
        "openshift.io/requester": "kube:admin",
        "openshift.io/sa.scc.mcs": "s0:c26,c10",
        "openshift.io/sa.scc.supplemental-groups": "1000670000/10000",
        "openshift.io/sa.scc.uid-range": "1000670000/10000"
      },
      "creationTimestamp": "2021-08-25T13:17:03Z",
      "labels": {
        "kubernetes.io/metadata.name": "test1"
      },
      "name": "test1",
      "resourceVersion": "281061",
      "uid": "8d11ef09-5b01-4011-8d6e-3b3f77351a20"
    },
    "spec": {
      "finalizers": [
        "kubernetes"
      ]
    },
    "status": {
      "phase": "Active"
    }
  },
  {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
      "annotations": {
        "openshift.io/description": "",
        "openshift.io/display-name": "",
        "openshift.io/requester": "kube:admin",
        "openshift.io/sa.scc.mcs": "s0:c26,c15",
        "openshift.io/sa.scc.supplemental-groups": "1000680000/10000",
        "openshift.io/sa.scc.uid-range": "1000680000/10000"
      },
      "creationTimestamp": "2021-08-25T13:46:07Z",
      "labels": {
        "kubernetes.io/metadata.name": "test2"
      },
      "name": "test2",
      "resourceVersion": "293824",
      "uid": "f67035fc-89e5-410d-936b-e557e497eab6"
    },
    "spec": {
      "finalizers": [
        "kubernetes"
      ]
    },
    "status": {
      "phase": "Active"
    }
  },
  {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
      "annotations": {
        "openshift.io/description": "",
        "openshift.io/display-name": "",
        "openshift.io/requester": "kube:admin",
        "openshift.io/sa.scc.mcs": "s0:c26,c20",
        "openshift.io/sa.scc.supplemental-groups": "1000690000/10000",
        "openshift.io/sa.scc.uid-range": "1000690000/10000"
      },
      "creationTimestamp": "2021-08-25T13:46:18Z",
      "labels": {
        "kubernetes.io/metadata.name": "test3"
      },
      "name": "test3",
      "resourceVersion": "293928",
      "uid": "44923054-7013-4723-8cbe-59f465929554"
    },
    "spec": {
      "finalizers": [
        "kubernetes"
      ]
    },
    "status": {
      "phase": "Active"
    }
  }
]

4. Delete projects test2, test3, and rescan:
$ oc delete project test2 test3
project.project.openshift.io "test2" deleted
project.project.openshift.io "test3" deleted
$ ./oc-compliance rerun-now compliancesuite my-ssb-r
Rerunning scans from 'my-ssb-r': ocp4-moderate, rhcos4-moderate-worker, rhcos4-moderate-master
Re-running scan 'openshift-compliance/ocp4-moderate'
$ oc get suite 
NAME       PHASE   RESULT
my-ssb-r   DONE    NON-COMPLIANT
$  oc get checkresults ocp4-moderate-configure-network-policies-namespaces
NAME                                                  STATUS   SEVERITY
ocp4-moderate-configure-network-policies-namespaces   PASS     high
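The behavior above (FAIL while test2/test3 exist without policies, PASS once only test1 remains) is consistent with the rule comparing two sets: the application namespaces in the cluster versus the namespaces that actually contain a network policy, failing when any application namespace lacks one. A rough, hypothetical sketch of that comparison — the real check is implemented in SCAP content, not Python:

```python
# Hypothetical sketch of the comparison the rule appears to make: the
# check fails when some application namespace has no NetworkPolicy.
# The real rule lives in ComplianceAsCode SCAP content; this only
# mirrors the observed logic.

def check_passes(app_namespaces, namespaces_with_policies):
    # PASS iff every application namespace has at least one policy.
    return set(app_namespaces) <= set(namespaces_with_policies)

# Before deleting test2/test3: only test1 has policies -> FAIL
print(check_passes({"test1", "test2", "test3"}, {"test1"}))  # False
# After deleting test2/test3: the sets match -> PASS
print(check_passes({"test1"}, {"test1"}))  # True
```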

Comment 7 errata-xmlrpc 2021-09-07 06:05:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3214

