Bug 1920577 - Provide better visibility into 'SKIP' scan result status as well as into OpenSCAP 'not applicable'
Summary: Provide better visibility into 'SKIP' scan result status as well as into OpenSCAP 'not applicable'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.8.0
Assignee: Jakub Hrozek
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On:
Blocks: 1940779 1940783
 
Reported: 2021-01-26 16:21 UTC by Andreas Karis
Modified: 2024-06-14 00:02 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1940779
Environment:
Last Closed: 2021-07-07 11:29:56 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift compliance-operator pull 579 0 None open Bug 1920577: Rename the SKIP state of ComplianceCheckResult to NOT-APPLICABLE 2021-02-24 12:21:00 UTC
Red Hat Product Errata RHBA-2021:2652 0 None None None 2021-07-07 11:31:09 UTC

Description Andreas Karis 2021-01-26 16:21:12 UTC
Description of problem:

Provide better visibility into 'SKIP' scan result status as well as into OpenSCAP 'not applicable'

As discussed on https://bugzilla.redhat.com/show_bug.cgi?id=1919367

>> a) How can admins know why some tests show as SKIP?

> I'm afraid this is currently not very user-friendly. What you can do is run the scan with debug: true and then look at the oc logs of the scanner container of the pod running the actual scan. If the rule is not applicable for the current node, you would see something like:

I: oscap: Evaluating definition 'oval:ssg-node_is_ocp4_master_node:def:1': Node is Red Hat OpenShift Container Platform 4 Master Node.
I: oscap:   Evaluating file test 'oval:ssg-test_kube_api_pod_exists:tst:1': Testing if /etc/kubernetes/static-pod-resources/kube-apiserver-certs exists.
I: oscap:     Querying file object 'oval:ssg-object_kube_api_pod_exists:obj:1', flags: 0.
I: oscap:     Creating new syschar for file_object 'oval:ssg-object_kube_api_pod_exists:obj:1'.
I: oscap:     Switching probe to PROBE_OFFLINE_OWN mode.
I: oscap:     I will run file_probe_main:
I: oscap:     Opening file '/host/etc/kubernetes/static-pod-resources/kube-apiserver-certs'.
I: oscap:     Test 'oval:ssg-test_kube_api_pod_exists:tst:1' requires that every object defined by 'oval:ssg-object_kube_api_pod_exists:obj:1' exists on the system.
I: oscap:     0 objects defined by 'oval:ssg-object_kube_api_pod_exists:obj:1' exist on the system.
I: oscap:     Test 'oval:ssg-test_kube_api_pod_exists:tst:1' does not contain any state to compare object with.
I: oscap:     No item matching object 'oval:ssg-object_kube_api_pod_exists:obj:1' was found on the system. (flag=does not exist)
I: oscap:   Test 'oval:ssg-test_kube_api_pod_exists:tst:1' evaluated as false.
I: oscap: Definition 'oval:ssg-node_is_ocp4_master_node:def:1' evaluated as false.
I: oscap: Rule 'xccdf_org.ssgproject.content_rule_etcd_unique_ca' is not applicable.
Result  notapplicable

> The above says that openscap tried to match ssg-node_is_ocp4_master_node, which didn't match (Definition 'oval:ssg-node_is_ocp4_master_node:def:1' evaluated as false.), and therefore the rule was skipped.
> As you can see from the raw openscap output, openscap returns 'not applicable', but we map this result to 'SKIP'. This is apparently confusing; would it help at least a bit to expose 'not applicable' in the user-visible status as well? If yes, would you mind filing a separate BZ so that we don't conflate the two issues?
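The debug workflow quoted above can be sketched as a small shell pipeline. This is only a sketch, not the operator's own tooling: the scanner.log capture step and the sed expression are assumptions based on the log excerpt above.

```shell
# In practice, capture the scanner container log first (pod name is hypothetical):
#   oc logs <openscap-pod> -c scanner > scanner.log
# Here a sample line copied from the excerpt above keeps the sketch self-contained:
cat > scanner.log <<'EOF'
I: oscap: Rule 'xccdf_org.ssgproject.content_rule_etcd_unique_ca' is not applicable.
EOF

# Print just the rule IDs that OpenSCAP deemed not applicable:
sed -n "s/^I: oscap: Rule '\(.*\)' is not applicable\.\$/\1/p" scanner.log
# → xccdf_org.ssgproject.content_rule_etcd_unique_ca
```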


Comment 4 Prashant Dhamdhere 2021-03-04 12:23:21 UTC
[Bug Verification]

Looks good. For non-applicable rules, the user-visible status now shows NOT-APPLICABLE instead of SKIP.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-03-04-003632   True        False         6h27m   Cluster version is 4.8.0-0.nightly-2021-03-04-003632

$ gh pr checkout 579
From https://github.com/openshift/compliance-operator
 * [new ref]           refs/pull/579/head -> no_skip
Switched to branch 'no_skip'


$ git branch 
  annotations
  fresh-rems
  handle-products
  master
* no_skip
  platform-tailor
  release-4.6
  release-4.7

$ git log -n1
commit 9f78a48492310714a2f17287ecb9255845e9af14 (HEAD -> no_skip)
Author: Jakub Hrozek <jhrozek>
Date:   Wed Feb 24 13:17:36 2021 +0100

    Rename the SKIP state of ComplianceCheckResult to NOT-APPLICABLE
    
    The SKIP status was not really easy to understand. The hope is that
    NOT-APPLICABLE will convey the meaning better.
    
    Jira: OCPBUGSM-23776

$ make deploy-local
Creating 'openshift-compliance' namespace/project
namespace/openshift-compliance created
podman build -t quay.io/compliance-operator/compliance-operator:latest -f build/Dockerfile .
STEP 1: FROM golang:1.15 AS builder
Completed short name "golang" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob d86c3e98df64 done  
Copying blob 1517911a35d7 done  
Copying blob feab2c490a3c done  
Copying blob f15a0f46f8c3 done  
Copying blob 0ecb575e629c done  
Copying blob 7467d1831b69 done  
Copying blob 8361100bc5c4 done  
Copying config f61d7038bf done  
Writing manifest to image destination
Storing signatures
STEP 2: WORKDIR /go/src/github.com/openshift/compliance-operator
--> 5bd153d1423
STEP 3: ENV GOFLAGS=-mod=vendor
--> 04865d61ce4
STEP 4: COPY . . 
--> a4fb83ec29f
STEP 5: RUN make manager
GOFLAGS=-mod=vendor GO111MODULE=auto go build -o /go/src/github.com/openshift/compliance-operator/build/_output/bin/compliance-operator github.com/openshift/compliance-operator/cmd/manager
--> 239e9913637
STEP 6: FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
Getting image source signatures
Copying blob a591faa84ab0 done  
Copying blob 76b9354adec6 done  
Copying config dc080723f5 done  
Writing manifest to image destination
Storing signatures
STEP 7: ENV OPERATOR=/usr/local/bin/compliance-operator     USER_UID=1001     USER_NAME=compliance-operator
--> e8911d91ff9
STEP 8: COPY --from=builder /go/src/github.com/openshift/compliance-operator/build/_output/bin/compliance-operator ${OPERATOR}
--> aa0010527e4
STEP 9: COPY build/bin /usr/local/bin
--> 43670a3397d
STEP 10: RUN  /usr/local/bin/user_setup
+ mkdir -p /root
+ chown 1001:0 /root
+ chmod ug+rwx /root
+ chmod g+rw /etc/passwd
+ rm /usr/local/bin/user_setup
--> 6aad9ffa66e
STEP 11: ENTRYPOINT ["/usr/local/bin/entrypoint"]
--> 5e59f5bbbf1
STEP 12: USER ${USER_UID}
STEP 13: COMMIT quay.io/compliance-operator/compliance-operator:latest
--> fb0469ce166
fb0469ce1663e30fa80b6fa14501f72a063ae6c54efdc02a19e8410d5ac7c22f
podman build -t quay.io/compliance-operator/compliance-operator-bundle:latest -f bundle.Dockerfile .
STEP 1: FROM scratch
STEP 2: LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
--> Using cache e393dad66eefba955728e97f5aacf894da635760825c2bff1afb793097dc7eb0
--> e393dad66ee
STEP 3: LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
--> Using cache be35e49cad67a5691bcf759662a36040984d030c49f96a4c7ccc53283cc2f2ff
--> be35e49cad6
STEP 4: LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
--> Using cache 838050be09f50fc9462fc38442c7ecd5eb99c6d09f820987cfa9e5430a13943e
--> 838050be09f
STEP 5: LABEL operators.operatorframework.io.bundle.package.v1=compliance-operator
--> a7b016e85d1
STEP 6: LABEL operators.operatorframework.io.bundle.channels.v1=alpha
--> 99f6522b3ea
STEP 7: LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha
--> 8ffe4aa02ab
STEP 8: COPY deploy/olm-catalog/compliance-operator/manifests /manifests/
--> 7900bb3e796
STEP 9: COPY deploy/olm-catalog/compliance-operator/metadata /metadata/
STEP 10: COMMIT quay.io/compliance-operator/compliance-operator-bundle:latest
--> 5a91ed4c2c9
5a91ed4c2c9e67bccb099582745ae40a55eb1fc605c05be3634fa4e7d2800a9b
Temporarily exposing the default route to the image registry
config.imageregistry.operator.openshift.io/cluster patched
Pushing image quay.io/compliance-operator/compliance-operator:latest to the image registry
IMAGE_REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'); \
	podman login "--tls-verify=false" -u kubeadmin -p sha256~DbC3s5_Fc-CO8cbXmV3151xhXFwX3cuzYr6flvPTaWc ${IMAGE_REGISTRY_HOST}; \
	podman push "--tls-verify=false" quay.io/compliance-operator/compliance-operator:latest ${IMAGE_REGISTRY_HOST}/openshift/compliance-operator:latest
Login Succeeded!
Getting image source signatures
Copying blob 561032cd4aa5 done  
Copying blob 17b23eccd948 done  
Copying blob d293ff93f761 done  
Copying blob 04a05557bbad done  
Copying blob 821b0c400fe6 done  
Copying config fb0469ce16 done  
Writing manifest to image destination
Storing signatures
Removing the route from the image registry
config.imageregistry.operator.openshift.io/cluster patched
IMAGE_FORMAT variable missing. We're in local enviornment.
customresourcedefinition.apiextensions.k8s.io/compliancecheckresults.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/complianceremediations.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/compliancescans.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/compliancesuites.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/profilebundles.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/profiles.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/rules.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/scansettingbindings.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/scansettings.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/tailoredprofiles.compliance.openshift.io created
customresourcedefinition.apiextensions.k8s.io/variables.compliance.openshift.io created
sed -i 's%quay.io/compliance-operator/compliance-operator:latest%image-registry.openshift-image-registry.svc:5000/openshift/compliance-operator:latest%' deploy/operator.yaml
namespace/openshift-compliance unchanged
deployment.apps/compliance-operator created
role.rbac.authorization.k8s.io/compliance-operator created
clusterrole.rbac.authorization.k8s.io/compliance-operator created
role.rbac.authorization.k8s.io/resultscollector created
role.rbac.authorization.k8s.io/api-resource-collector created
role.rbac.authorization.k8s.io/resultserver created
role.rbac.authorization.k8s.io/remediation-aggregator created
role.rbac.authorization.k8s.io/rerunner created
role.rbac.authorization.k8s.io/profileparser created
clusterrole.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/compliance-operator created
clusterrolebinding.rbac.authorization.k8s.io/compliance-operator created
rolebinding.rbac.authorization.k8s.io/resultscollector created
rolebinding.rbac.authorization.k8s.io/remediation-aggregator created
clusterrolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/rerunner created
rolebinding.rbac.authorization.k8s.io/profileparser created
rolebinding.rbac.authorization.k8s.io/resultserver created
serviceaccount/compliance-operator created
serviceaccount/resultscollector created
serviceaccount/remediation-aggregator created
serviceaccount/rerunner created
serviceaccount/api-resource-collector created
serviceaccount/profileparser created
serviceaccount/resultserver created
deployment.apps/compliance-operator triggers updated

$ oc project openshift-compliance
Now using project "openshift-compliance" on server "https://api.pdhamdhe-aws04.qe.devcluster.openshift.com:6443".

$ oc get pods
NAME                                             READY   STATUS    RESTARTS   AGE
compliance-operator-65b87cb55-cvtpv              1/1     Running   0          7m18s
ocp4-openshift-compliance-pp-7cd9f6b64f-v7njt    1/1     Running   0          6m29s
rhcos4-openshift-compliance-pp-999fd896f-mt75c   1/1     Running   0          6m29s

$ oc get -oyaml scansetting default |grep debug
debug: true
      f:debug: {}

$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-compliance-requirements
profiles:
  # Node checks
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  # Cluster checks
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
scansettingbinding.compliance.openshift.io/my-companys-compliance-requirements created
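Rather than checking `oc get compliancesuite` by hand, the wait for the scan to finish can be scripted. A sketch under the assumption (consistent with the transcript below) that the suite's `.status.phase` field reads DONE once the scan completes; the helper name and the 10-second interval are arbitrary:

```shell
# Hypothetical helper: block until the named ComplianceSuite reports DONE.
wait_for_done() {
  until [ "$(oc get compliancesuite "$1" -o jsonpath='{.status.phase}')" = "DONE" ]; do
    sleep 10
  done
}
# Usage (suite name as created above):
#   wait_for_done my-companys-compliance-requirements
```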


$ oc get compliancesuite
NAME                                  PHASE   RESULT
my-companys-compliance-requirements   DONE    NON-COMPLIANT

$ oc get pods
NAME                                                    READY   STATUS      RESTARTS   AGE
aggregator-pod-ocp4-cis                                 0/1     Completed   0          2m57s
aggregator-pod-ocp4-cis-node-master                     0/1     Completed   0          3m2s
aggregator-pod-ocp4-cis-node-worker                     0/1     Completed   0          2m52s
compliance-operator-65b87cb55-cvtpv                     1/1     Running     0          21m
ocp4-cis-api-checks-pod                                 0/2     Completed   0          3m22s
ocp4-openshift-compliance-pp-7cd9f6b64f-v7njt           1/1     Running     0          20m
openscap-pod-030df7380e702e9a9490fc7d62d136ddffa3d07d   0/2     Completed   0          3m22s
openscap-pod-21e0ab10889874d31e0a18ba39fb66d4ad9b72bc   0/2     Completed   0          3m22s
openscap-pod-267732e15c100cd2475e8b11cfe015e6007eeec9   0/2     Completed   0          3m22s
openscap-pod-30f2c6a9e4dc0920b301882982e38b74618f5525   0/2     Completed   0          3m22s
openscap-pod-906a63cb134e8397b79861e9cad962b1002342b6   0/2     Completed   0          3m22s
openscap-pod-b6983383cd4cc8232e58e396f4bf7c8cdebe46b5   0/2     Completed   0          3m22s
rhcos4-openshift-compliance-pp-999fd896f-mt75c          1/1     Running     0          20m

$ oc logs openscap-pod-b6983383cd4cc8232e58e396f4bf7c8cdebe46b5 -c scanner |grep -A 10 "Unique"
Title   Configure A Unique CA Certificate for etcd
Rule    xccdf_org.ssgproject.content_rule_etcd_unique_ca
I: oscap: Evaluating XCCDF rule 'xccdf_org.ssgproject.content_rule_etcd_unique_ca'.
I: oscap: Evaluating definition 'oval:ssg-installed_app_is_ocp4:def:1': Red Hat OpenShift Container Platform.
I: oscap: Definition 'oval:ssg-installed_app_is_ocp4:def:1' evaluated as false.
I: oscap: Evaluating definition 'oval:ssg-installed_app_is_ocp4_node:def:1': Red Hat OpenShift Container Platform Node.
I: oscap: Definition 'oval:ssg-installed_app_is_ocp4_node:def:1' evaluated as true.
I: oscap: Evaluating definition 'oval:ssg-node_is_ocp4_master_node:def:1': Node is Red Hat OpenShift Container Platform 4 Master Node.
I: oscap: Definition 'oval:ssg-node_is_ocp4_master_node:def:1' evaluated as false.
I: oscap: Rule 'xccdf_org.ssgproject.content_rule_etcd_unique_ca' is not applicable.
Result  notapplicable

$ oc get compliancecheckresult |grep "NAME\|ocp4-cis-node-worker-etcd-unique-ca"
NAME                                                                           STATUS           SEVERITY
ocp4-cis-node-worker-etcd-unique-ca                                            NOT-APPLICABLE   medium

$ oc get compliancecheckresult -l compliance.openshift.io/check-status=SKIP
No resources found in openshift-compliance namespace.

$ oc get compliancecheckresult -l compliance.openshift.io/check-status=NOT-APPLICABLE |head -5
NAME                                                                  STATUS           SEVERITY
ocp4-cis-node-worker-etcd-unique-ca                                   NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-controller-manager-kubeconfig    NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-etcd-data-dir                    NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-etcd-data-files                  NOT-APPLICABLE   medium
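The per-status label selectors shown above can also be rolled into a single summary. A sketch over saved output (the results.txt capture and the awk tally are assumptions, not operator features; in practice feed it `oc get compliancecheckresult --no-headers`):

```shell
# Normally: oc get compliancecheckresult --no-headers > results.txt
# Sample rows copied from the listing above keep the sketch self-contained:
cat > results.txt <<'EOF'
ocp4-cis-node-worker-etcd-unique-ca                   NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-etcd-data-dir    NOT-APPLICABLE   medium
EOF

# Tally check results by STATUS (second column):
awk '{count[$2]++} END {for (s in count) print s, count[s]}' results.txt
# → NOT-APPLICABLE 2
```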

Comment 7 Prashant Dhamdhere 2021-03-08 12:59:09 UTC
As per comment #4, changing the bug status to VERIFIED.


$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-03-06-055252   True        False         8h      Cluster version is 4.8.0-0.nightly-2021-03-06-055252

$ oc get csv -nopenshift-compliance
NAME                          DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v0.1.28   Compliance Operator   0.1.28               Succeeded

$ oc get pods -nopenshift-compliance
NAME                                              READY   STATUS    RESTARTS   AGE
compliance-operator-85fb4c8fc6-dcbh4              1/1     Running   2          118m
ocp4-openshift-compliance-pp-584d9677bb-htrqz     1/1     Running   0          118m
rhcos4-openshift-compliance-pp-67665f48fd-xwtzp   1/1     Running   0          118m

$ oc get -oyaml scansetting default |grep debug
debug: true
      f:debug: {}

$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding 
metadata:
  name: my-companys-compliance-requirements
profiles:
  # Node checks
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  # Cluster checks
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
scansettingbinding.compliance.openshift.io/my-companys-compliance-requirements created

$ oc get suite
NAME                                  PHASE   RESULT
my-companys-compliance-requirements   DONE    NON-COMPLIANT

$ oc get pods
NAME                                                    READY   STATUS      RESTARTS   AGE
aggregator-pod-ocp4-cis                                 0/1     Completed   0          78s
aggregator-pod-ocp4-cis-node-master                     0/1     Completed   0          61s
aggregator-pod-ocp4-cis-node-worker                     0/1     Completed   0          51s
compliance-operator-85fb4c8fc6-dcbh4                    1/1     Running     2          128m
ocp4-cis-api-checks-pod                                 0/2     Completed   0          98s
ocp4-openshift-compliance-pp-584d9677bb-htrqz           1/1     Running     0          127m
openscap-pod-08e19cdafcf394556dc5d6f3720c07f4e137a4ea   0/2     Completed   0          98s
openscap-pod-1b58fca342a5ea1e7922a7c49fa1840b0787a20f   0/2     Completed   0          99s
openscap-pod-6c6cbb868de8282f14d97c01a606b41843011de8   0/2     Completed   0          98s
openscap-pod-98a960c3f41649d2a7482327a228dba968db503d   0/2     Completed   0          99s
openscap-pod-c2cc693a66caad7885c2fe78ca725be2d0c5ea10   0/2     Completed   0          99s
openscap-pod-f165f55e5ba7840ac9f51b42a3c5941df7cc755d   0/2     Completed   0          99s
rhcos4-openshift-compliance-pp-67665f48fd-xwtzp         1/1     Running     0          127m

$  oc logs openscap-pod-f165f55e5ba7840ac9f51b42a3c5941df7cc755d -c scanner |grep -A 10 "Unique"
Title   Configure A Unique CA Certificate for etcd
Rule    xccdf_org.ssgproject.content_rule_etcd_unique_ca
I: oscap: Evaluating XCCDF rule 'xccdf_org.ssgproject.content_rule_etcd_unique_ca'.
I: oscap: Evaluating definition 'oval:ssg-installed_app_is_ocp4:def:1': Red Hat OpenShift Container Platform.
I: oscap: Definition 'oval:ssg-installed_app_is_ocp4:def:1' evaluated as false.
I: oscap: Evaluating definition 'oval:ssg-installed_app_is_ocp4_node:def:1': Red Hat OpenShift Container Platform Node.
I: oscap: Definition 'oval:ssg-installed_app_is_ocp4_node:def:1' evaluated as true.
I: oscap: Evaluating definition 'oval:ssg-node_is_ocp4_master_node:def:1': Node is Red Hat OpenShift Container Platform 4 Master Node.
I: oscap: Definition 'oval:ssg-node_is_ocp4_master_node:def:1' evaluated as false.
I: oscap: Rule 'xccdf_org.ssgproject.content_rule_etcd_unique_ca' is not applicable.
Result  notapplicable

$ oc get compliancecheckresult |grep "NAME\|ocp4-cis-node-worker-etcd-unique-ca"
NAME                                                                           STATUS           SEVERITY
ocp4-cis-node-worker-etcd-unique-ca                                            NOT-APPLICABLE   medium

$ oc get compliancecheckresult -l compliance.openshift.io/check-status=SKIP
No resources found in openshift-compliance namespace.

$ oc get compliancecheckresult -l compliance.openshift.io/check-status=NOT-APPLICABLE |head -5
NAME                                                                  STATUS           SEVERITY
ocp4-cis-node-worker-etcd-unique-ca                                   NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-controller-manager-kubeconfig    NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-etcd-data-dir                    NOT-APPLICABLE   medium
ocp4-cis-node-worker-file-groupowner-etcd-data-files                  NOT-APPLICABLE   medium

Comment 11 errata-xmlrpc 2021-07-07 11:29:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Compliance Operator version 0.1.35 for OpenShift Container Platform 4.6-4.8), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2652

