Description of the problem: The GRC dashboard shows no policy violations even though violations exist.

Release version: 2.2.3

# Steps to reproduce:

1. Create a new project (this one will have limits on it):

~~~
oc new-project scott-limit-test
~~~

2. Define the memory limit ranges in it, following the example at https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/:

~~~
oc apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=scott-limit-test
~~~

3. Create another new project (this one will have NO limits on it):

~~~
oc new-project scott-nolimit-test
~~~

4. Create your memory limit range policy against the **local-cluster**. **Notice** that I created mine to look at all `scott-*` namespaces, and that the metadata name `mem-limit-range` matches the name used when defining the namespace memory limit in step 2 above:

~~~
policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-limitrange-container-mem-limit-range
      spec:
        namespaceSelector:
          exclude:
            - kube-*
          include:
            - scott-*
        object-templates:
          - complianceType: musthave
            objectDefinition:
              apiVersion: v1
              kind: LimitRange
              metadata:
                name: mem-limit-range
              spec:
                limits:
                  - default:
                      memory: 512Mi
                    defaultRequest:
                      memory: 256Mi
                    type: Container
~~~

5. At this point, refresh the policy page a few times; it will show as green Compliant.

6. Click on the memory limit policy you created, then click on Status, then click on View Details.

7. You should see green Compliant for scott-limit-test (the one with limits) and red Non Compliant for scott-nolimit-test (the one with NO limits). These are both valid and correct.

# Expected results:

The overall status of the memory limit policy should be red Non Compliant because of the non-compliant scott-nolimit-test namespace that was found.
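As a quick sanity check (a minimal verification using only the namespace and object names from the steps above), you can confirm which namespace actually contains the LimitRange the policy looks for:

~~~
# Should succeed: memory-defaults.yaml created this LimitRange, so this namespace is Compliant.
oc get limitrange mem-limit-range -n scott-limit-test

# Should fail with "not found": nothing created a LimitRange here, so this namespace is NonCompliant.
oc get limitrange mem-limit-range -n scott-nolimit-test
~~~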
Comment from gparvin (Mon, 14 Jun 2021 19:17:35 UTC): We have confirmed that this issue is working in a way that does not meet our expectations for how policies should be applied across multiple namespaces. The engineer who began this investigation indicated that the results seemed to intentionally return `Compliant` in this case, so we are trying to carefully determine whether there was some scenario where this behavior was desired. Thank you SO much for bringing this to our attention!
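For anyone following along, one way to see the per-namespace results the controller records is to inspect the ConfigurationPolicy on the managed cluster (a sketch assuming the policy name from the report and the status layout of the config-policy-controller in this release, where `status.compliancyDetails` lists each matched namespace separately):

~~~
# Run against the managed cluster (local-cluster in this report). The status should show
# per-namespace entries such as scott-limit-test (Compliant) and scott-nolimit-test (NonCompliant),
# even when the aggregated policy status in the dashboard shows Compliant.
oc get configurationpolicy policy-limitrange-container-mem-limit-range -o yaml
~~~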
Hello, we are facing the exact same issue; I was about to open a Bugzilla and finally found this one. When we create a policy to check for the presence of a resource, the policy should go into violation status as soon as one namespace does not comply with it. Idea: there should be a flag in the policy to specify whether we want to report on all namespaces or only on one of the namespaces specified in the namespaceSelector. Similar to musthave and mustonlyhave, there could be a switch to specify something like mustallnamespace (sketched below). Thanks!
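A rough sketch of the idea (the `mustallnamespaces` compliance type below is hypothetical and does not exist in the product; it only illustrates the proposed switch):

~~~
object-templates:
  - complianceType: mustallnamespaces  # hypothetical: report NonCompliant unless EVERY selected namespace has the object
    objectDefinition:
      apiVersion: v1
      kind: LimitRange
      metadata:
        name: mem-limit-range
~~~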
Thanks @mouimet for the idea and for capturing it in this Bugzilla. We will follow up after our review next week.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Advanced Cluster Management 2.2.6 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3126