Bug 1969845 - Policy overview shows no violations
Summary: Policy overview shows no violations
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Advanced Cluster Management for Kubernetes
Classification: Red Hat
Component: GRC & Policy
Version: rhacm-2.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: rhacm-2.2.6
Assignee: Gus Parvin
QA Contact: Derek Ho
Docs Contact: Mikela Dockery
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-09 10:26 UTC by Mihir Lele
Modified: 2022-09-18 16:44 UTC (History)
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-10 18:33:12 UTC
Target Upstream Version:
Embargoed:
dho: qe_test_coverage-
ming: rhacm-2.2.z+




Links
System ID Private Priority Status Summary Last Updated
Github open-cluster-management backlog issues 13182 0 None None None 2021-06-09 17:05:10 UTC
Red Hat Product Errata RHBA-2021:3126 0 None None None 2021-08-10 18:33:22 UTC

Description Mihir Lele 2021-06-09 10:26:01 UTC
Description of the problem:

Even though there are violations, the GRC dashboard shows no policy violations.

Release version: 2.2.3

# Steps to reproduce:

1. Create a new project:
~~~
oc new-project scott-limit-test    # this project will have limits on it
~~~
2. Define the memory limit ranges in it, following the example at https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/
~~~
oc apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=scott-limit-test
~~~
3. Create another new project:
~~~
oc new-project scott-nolimit-test    # this project will have NO limits on it
~~~
4. Create your memory limit range policy against the **local-cluster**.
**Notice** that my policy targets all scott-* namespaces, and the object's metadata name, mem-limit-range, matches the name used when defining the namespace memory limit (see the example in step 2 above):
~~~
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-limitrange-container-mem-limit-range
        spec:
          namespaceSelector:
            exclude:
              - kube-*
            include:
              - scott-*
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: LimitRange
                metadata:
                  name: mem-limit-range
                spec:
                  limits:
                    - default:
                        memory: 512Mi
                      defaultRequest:
                        memory: 256Mi
                      type: Container
~~~
5. At this point, refresh the policy page a few times; the policy will show as green Compliant.
6. Click the memory limit policy you created, then click Status, then View Details.
7. You should see green Compliant for scott-limit-test (the project with limits) and red Non Compliant for scott-nolimit-test (the project with NO limits). Both of these per-namespace results are correct.
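For reference, the ConfigurationPolicy fragment in step 4 above would normally be embedded in a full Policy object. The following is a minimal sketch only; the policy name `policy-limitrange` and the `default` namespace are illustrative assumptions, not taken from this report:
~~~
# Sketch of a complete Policy wrapping the ConfigurationPolicy from step 4.
# The metadata name and namespace below are illustrative assumptions.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-limitrange
  namespace: default
spec:
  remediationAction: inform   # report violations without enforcing changes
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-limitrange-container-mem-limit-range
        spec:
          remediationAction: inform
          severity: medium
          namespaceSelector:
            exclude:
              - kube-*
            include:
              - scott-*
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: LimitRange
                metadata:
                  name: mem-limit-range
                spec:
                  limits:
                    - default:
                        memory: 512Mi
                      defaultRequest:
                        memory: 256Mi
                      type: Container
~~~
A PlacementRule and PlacementBinding targeting local-cluster would also be needed to deliver the policy to the cluster; those are omitted here.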

# Expected results:
The overall status of the memory limit policy should be red Non Compliant, because the non-compliant scott-nolimit-test namespace was found.

Comment 1 Mike Ng 2021-06-15 14:59:33 UTC
G2Bsync 860930112 comment 
 gparvin Mon, 14 Jun 2021 19:17:35 UTC 
 G2Bsync We have identified that this issue stems from behavior that does not meet our expectations for how policies should be applied across multiple namespaces.  The engineer who began this investigation indicated the results seemed to intentionally return `Compliant` in this case, so we are carefully determining whether there was some scenario where this behavior was desired.

Thank you SO much for bringing this to our attention!

Comment 2 Martin Ouimet 2021-06-18 13:31:06 UTC
Hello, 

We are facing the exact same issue; I was about to open a Bugzilla and finally found this one. The problem occurs when we create a policy to check for the presence of a resource.

As soon as one namespace does not comply with the policy, the policy should be in violation status.

Idea: There should be a flag in the policy to specify whether we want to report on all namespaces or on only one of the namespaces matched by the namespaceSelector. Similar to musthave and mustonlyhave, there could be a switch to specify something like mustallnamespace.
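If such a switch existed, the object-templates entry might be extended along these lines. This is purely hypothetical: `mustallnamespace` is the commenter's proposal, not a valid complianceType in any released RHACM version:
~~~
# Hypothetical only: "mustallnamespace" is a proposed value, NOT a real
# complianceType supported by the ConfigurationPolicy controller.
object-templates:
  - complianceType: mustallnamespace   # would require the object in every matched namespace
    objectDefinition:
      apiVersion: v1
      kind: LimitRange
      metadata:
        name: mem-limit-range
~~~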

Thanks !

Comment 3 Ginny Ghezzo 2021-06-18 14:23:23 UTC
Thanks @mouimet for the idea and capturing it in this Bugzilla. We will follow up after our review next week.

Comment 11 errata-xmlrpc 2021-08-10 18:33:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Advanced Cluster Management 2.2.6 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3126

