Bug 1994609 - The sdn related rules should show status “NOT-APPLICABLE” instead of “PASS” or “FAIL” on OVN cluster [NEEDINFO]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vincent Shen
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-17 14:16 UTC by xiyuan
Modified: 2022-07-20 15:11 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The openshift-compliance content didn't include platform-specific checks for network types.
Consequence: OVN/SDN-specific checks would show as FAIL instead of NOT-APPLICABLE based on the networking configuration.
Fix: Use the new content and rules, which contain platform checks for the networking rules.
Result: More accurate assessment of network-specific checks.
Clone Of:
Environment:
Last Closed: 2022-04-18 07:54:00 UTC
Target Upstream Version:
Flags: agawand: needinfo? (wenshen)




Links
System ID Private Priority Status Summary Last Updated
Github ComplianceAsCode content pull 8134 0 None Merged OCP4 adds OVN,SDN networkType CPE 2022-02-11 12:29:30 UTC
Github ComplianceAsCode content pull 8141 0 None open Bug 1994609: OCP: Update SDN rules to the correct platform 2022-02-11 12:28:35 UTC
Github openshift compliance-operator pull 785 0 None Merged Add network to api-resources we always fetch 2022-02-11 12:28:51 UTC
Red Hat Knowledge Base (Solution) 6821291 0 None None None 2022-03-24 09:38:41 UTC
Red Hat Product Errata RHBA-2022:1148 0 None None None 2022-04-18 07:54:10 UTC

Description xiyuan 2021-08-17 14:16:14 UTC
*Description of problem:*
The SDN-related rules should show status “NOT-APPLICABLE” instead of “PASS” or “FAIL” on an OVN cluster.
$ oc get compliancecheckresults
ocp4-cis-file-permissions-proxy-kubeconfig                                     FAIL             medium
ocp4-cis-node-master-file-groupowner-ip-allocations                            FAIL             medium
ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config            FAIL             medium
ocp4-cis-node-master-file-owner-ip-allocations                                 FAIL             medium
ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config                 FAIL             medium

*Version-Release number of selected components (if applicable):*
4.9.0-0.nightly-2021-08-16-154237 + compliance-operator-v0.1.38

*How reproducible:*
 Always

*Steps to Reproduce:*
1. Install the Compliance Operator.
2. Create a ScanSettingBinding:
$ oc create -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-r
profiles:
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
scansettingbinding.compliance.openshift.io/my-ssb-r created
3. Wait for the ComplianceSuite to reach the “DONE” phase, then check the scan results for the SDN-related rules, as in the sketch below.
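
For example, a minimal way to wait for the suite (the short name "suite" works for "compliancesuite", as the verification in comment 15 also shows; "my-ssb-r" is the ScanSettingBinding name from the step above):

$ oc get suite my-ssb-r -w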

*Actual results:*
The ComplianceCheckResults show status “FAIL” or “PASS” for the rules below on an OVN cluster.
$ cat rules.txt
ocp4-cis-file-permissions-proxy-kubeconfig
ocp4-cis-node-master-file-groupowner-ip-allocations
ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config
ocp4-cis-node-master-file-owner-ip-allocations
ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config
ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config
ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config
ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config
$ for line in `cat rules.txt`; do oc get compliancecheckresults.compliance.openshift.io | grep -i $line; done
ocp4-cis-file-permissions-proxy-kubeconfig                                     FAIL             medium
ocp4-cis-node-master-file-groupowner-ip-allocations                            FAIL             medium
ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config            FAIL             medium
ocp4-cis-node-master-file-owner-ip-allocations                                 FAIL             medium
ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config                 FAIL             medium
ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config                 PASS             medium
ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config                 FAIL             medium
ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config                 PASS             medium

*Expected results:*
For the SDN-related rules below, the ComplianceCheckResults should show status “NOT-APPLICABLE”. A way to confirm the cluster's network type is sketched below.
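
A quick check of the cluster network type, using the standard network.config API; on an OVN cluster this should print OVNKubernetes:

$ oc get network.config/cluster -o jsonpath='{.spec.networkType}'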

*Additional info:*
$ cat rules.txt
ocp4-cis-file-permissions-proxy-kubeconfig
ocp4-cis-node-master-file-groupowner-ip-allocations
ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config
ocp4-cis-node-master-file-owner-ip-allocations
ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config
ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config
ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config
ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config
$ for line in `cat rules.txt`; do echo "*********rule $line**************"; oc get compliancecheckresults.compliance.openshift.io | grep -i $line; oc get compliancecheckresults $line -o=jsonpath={.instructions}; echo "*********The End rule $line******"; done
*********rule ocp4-cis-file-permissions-proxy-kubeconfig**************
ocp4-cis-file-permissions-proxy-kubeconfig                                     FAIL             medium
Run the following command:
$ oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.volumes[] | select(.configMap.name == "sdn-config") | .configMap.defaultMode'
The output should return a value of 420.*********The End rule ocp4-cis-file-permissions-proxy-kubeconfig******
*********rule ocp4-cis-node-master-file-groupowner-ip-allocations**************
ocp4-cis-node-master-file-groupowner-ip-allocations                            FAIL             medium
To check the group ownership of /var/lib/cni/networks/openshift-sdn/.*,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /var/lib/cni/networks/openshift-sdn/.*
If properly configured, the output should indicate the following group-owner:
root*********The End rule ocp4-cis-node-master-file-groupowner-ip-allocations******
*********rule ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config**************
ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config            FAIL             medium
To check the group ownership of /var/run/openshift-sdn/cniserver/config.json,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /var/run/openshift-sdn/cniserver/config.json
If properly configured, the output should indicate the following group-owner:
root*********The End rule ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config******
*********rule ocp4-cis-node-master-file-owner-ip-allocations**************
ocp4-cis-node-master-file-owner-ip-allocations                                 FAIL             medium
To check the ownership of /var/lib/cni/networks/openshift-sdn/.*,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /var/lib/cni/networks/openshift-sdn/.*
If properly configured, the output should indicate the following owner:
root*********The End rule ocp4-cis-node-master-file-owner-ip-allocations******
*********rule ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config**************
ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config                 FAIL             medium
To check the ownership of /var/run/openshift-sdn/cniserver/config.json,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /var/run/openshift-sdn/cniserver/config.json
If properly configured, the output should indicate the following owner:
root*********The End rule ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config******
*********rule ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config**************
ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config                 PASS             medium
To check the permissions of /var/run/openshift-sdn/cniserver/config.json,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -l /var/run/openshift-sdn/cniserver/config.json
If properly configured, the output should indicate the following permissions:
-r--r--r--*********The End rule ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config******
*********rule ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config**************
ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config                 FAIL             medium
To check the ownership of /var/run/openshift-sdn/cniserver/config.json,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /var/run/openshift-sdn/cniserver/config.json
If properly configured, the output should indicate the following owner:
root*********The End rule ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config******
*********rule ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config**************
ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config                 PASS             medium
To check the permissions of /var/run/openshift-sdn/cniserver/config.json,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -l /var/run/openshift-sdn/cniserver/config.json
If properly configured, the output should indicate the following permissions:
-r--r--r--*********The End rule ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config******

Comment 3 Jakub Hrozek 2021-09-24 13:06:51 UTC
This bug is planned for the next sprint (36)

Comment 5 Jakub Hrozek 2021-10-28 12:11:08 UTC
Replanned for the following sprint due to capacity

Comment 6 Jakub Hrozek 2021-11-26 09:14:24 UTC
This keeps getting replanned due to capacity.

Comment 9 Vincent Shen 2022-01-26 23:51:17 UTC
We are going to introduce a feature that creates two CPEs, OVN and SDN, so those rules will check whether the cluster uses OVN or SDN before being evaluated.

https://issues.redhat.com/browse/CMP-1187

This bug will be fixed once this new feature is completed.
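
A rough sketch of how a rule can be gated in ComplianceAsCode content (the platform: key in a rule.yml ties evaluation to a CPE; the platform name below is illustrative — the real names come from the PRs in comment 10):

# rule.yml (sketch, assuming a hypothetical ocp4-on-sdn platform)
title: Verify Group Ownership of the OpenShift SDN CNI Server Config
severity: medium
# Evaluated only when the cluster networkType is OpenShiftSDN;
# on OVN clusters the result becomes NOT-APPLICABLE instead of PASS/FAIL.
platform: ocp4-on-sdn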

Comment 10 Vincent Shen 2022-02-14 07:01:49 UTC
Related PRs:
Fetch network api resource: https://github.com/openshift/compliance-operator/pull/785
Add OVN/SDN CPEs on CaC repo: https://github.com/ComplianceAsCode/content/pull/8134
Update SDN Rules: https://github.com/ComplianceAsCode/content/pull/8141
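
PR 785 makes the operator always fetch the cluster network resource so the new CPEs can evaluate it; a manual equivalent of that lookup (the status field reflects the deployed plugin, either OpenShiftSDN or OVNKubernetes):

$ oc get network.config.openshift.io cluster -o jsonpath='{.status.networkType}'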

Comment 11 Jakub Hrozek 2022-02-15 15:27:48 UTC
All PRs seem to have been merged.

Comment 15 Prashant Dhamdhere 2022-03-30 13:27:54 UTC
[Bug_Verification]


Looks good. The SDN-specific rules report scan result PASS on an SDN cluster,
and on an OVN cluster the ComplianceSuite shows scan status “NOT-APPLICABLE” for these rules.


Verified on:

4.10.0-0.nightly-2022-03-29-163038 + compliance-operator.v0.1.49

Cluster Profiles:

1] IPI_ON_AWS (SDN)


$ oc get csv
NAME                               DISPLAY                            VERSION     REPLACES   PHASE
compliance-operator.v0.1.49        Compliance Operator                0.1.49                 Succeeded
elasticsearch-operator.5.4.0-126   OpenShift Elasticsearch Operator   5.4.0-126              Succeeded


$ oc get pods
NAME                                              READY   STATUS    RESTARTS        AGE
compliance-operator-9bf58698f-7cwnx               1/1     Running   1 (3m20s ago)   4m
ocp4-openshift-compliance-pp-59cd7665d6-fd4nd     1/1     Running   0               2m42s
rhcos4-openshift-compliance-pp-5c85d4d5c8-fndqm   1/1     Running   0               2m42s


$ oc create -f - << EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: my-ssb-r
> profiles:
>   - name: ocp4-cis
>     kind: Profile
>     apiGroup: compliance.openshift.io/v1alpha1
>   - name: ocp4-cis-node
>     kind: Profile
>     apiGroup: compliance.openshift.io/v1alpha1
> settingsRef:
>   name: default
>   kind: ScanSetting
>   apiGroup: compliance.openshift.io/v1alpha1
> EOF
scansettingbinding.compliance.openshift.io/my-ssb-r created

$ oc get suite -w
NAME       PHASE       RESULT
my-ssb-r   LAUNCHING   NOT-AVAILABLE
my-ssb-r   PENDING     NOT-AVAILABLE
my-ssb-r   LAUNCHING   NOT-AVAILABLE
my-ssb-r   LAUNCHING   NOT-AVAILABLE
my-ssb-r   LAUNCHING   NOT-AVAILABLE
my-ssb-r   RUNNING     NOT-AVAILABLE
my-ssb-r   RUNNING     NOT-AVAILABLE
my-ssb-r   RUNNING     NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   DONE          NON-COMPLIANT
my-ssb-r   DONE          NON-COMPLIANT

$ oc get scan
NAME                   PHASE   RESULT
ocp4-cis               DONE    NON-COMPLIANT
ocp4-cis-node-master   DONE    NON-COMPLIANT
ocp4-cis-node-worker   DONE    NON-COMPLIANT


$ oc get pods
NAME                                                    READY   STATUS      RESTARTS      AGE
aggregator-pod-ocp4-cis                                 0/1     Completed   0             25m
aggregator-pod-ocp4-cis-node-master                     0/1     Completed   0             25m
aggregator-pod-ocp4-cis-node-worker                     0/1     Completed   0             25m
compliance-operator-9bf58698f-7cwnx                     1/1     Running     1 (42m ago)   43m
ocp4-cis-api-checks-pod                                 0/2     Completed   0             26m
ocp4-openshift-compliance-pp-59cd7665d6-fd4nd           1/1     Running     0             42m
openscap-pod-3323be0168848bed6d8edfcf7252e658f16fc140   0/2     Completed   0             26m
openscap-pod-41f48381b9ae6f72e1510f8aeaeb7c16515a9258   0/2     Completed   0             26m
openscap-pod-6cc4bba281eb3a8dd07c05d86f0121b70ed74ec6   0/2     Completed   0             26m
openscap-pod-7a963f2aba2c30d3db22497841f89bab0258af51   0/2     Completed   0             26m
openscap-pod-ef4954a84372265aebe1a0f19f857788df352681   0/2     Completed   0             26m
openscap-pod-f8b1391f29e729e117f1ff1edc223f3f974fb3b3   0/2     Completed   0             26m
rhcos4-openshift-compliance-pp-5c85d4d5c8-fndqm         1/1     Running     0             42m


$ oc get compliancecheckresults |grep "sdn\|permissions-proxy\|ip-allocations"
ocp4-cis-file-permissions-proxy-kubeconfig                                     PASS     medium
ocp4-cis-node-master-file-groupowner-ip-allocations                            PASS     medium
ocp4-cis-node-master-file-groupowner-openshift-sdn-cniserver-config            PASS     medium
ocp4-cis-node-master-file-owner-ip-allocations                                 PASS     medium
ocp4-cis-node-master-file-owner-openshift-sdn-cniserver-config                 PASS     medium
ocp4-cis-node-master-file-permissions-ip-allocations                           PASS     medium
ocp4-cis-node-master-file-perms-openshift-sdn-cniserver-config                 PASS     medium
ocp4-cis-node-worker-file-groupowner-ip-allocations                            PASS     medium
ocp4-cis-node-worker-file-groupowner-openshift-sdn-cniserver-config            PASS     medium
ocp4-cis-node-worker-file-owner-ip-allocations                                 PASS     medium
ocp4-cis-node-worker-file-owner-openshift-sdn-cniserver-config                 PASS     medium
ocp4-cis-node-worker-file-permissions-ip-allocations                           PASS     medium
ocp4-cis-node-worker-file-perms-openshift-sdn-cniserver-config                 PASS     medium


$ oc get nodes
NAME                                         STATUS   ROLES    AGE    VERSION
ip-10-0-139-135.us-east-2.compute.internal   Ready    worker   110m   v1.23.5+1f952b3
ip-10-0-139-224.us-east-2.compute.internal   Ready    master   118m   v1.23.5+1f952b3
ip-10-0-180-193.us-east-2.compute.internal   Ready    worker   110m   v1.23.5+1f952b3
ip-10-0-181-235.us-east-2.compute.internal   Ready    master   118m   v1.23.5+1f952b3
ip-10-0-214-110.us-east-2.compute.internal   Ready    worker   110m   v1.23.5+1f952b3
ip-10-0-218-19.us-east-2.compute.internal    Ready    master   116m   v1.23.5+1f952b3


$ oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.volumes[] | select(.configMap.name == "sdn-config") | .configMap.defaultMode'
420


$ oc debug node/ip-10-0-139-224.us-east-2.compute.internal
Starting pod/ip-10-0-139-224us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.139.224
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# ls -lL /var/lib/cni/networks/openshift-sdn/.*
/var/lib/cni/networks/openshift-sdn/.:
total 80
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.10
-rw-r--r--. 1 root root 64 Mar 30 10:41 10.129.0.11
-rw-r--r--. 1 root root 64 Mar 30 10:41 10.129.0.12
-rw-r--r--. 1 root root 64 Mar 30 10:41 10.129.0.13
-rw-r--r--. 1 root root 64 Mar 30 10:39 10.129.0.2
-rw-r--r--. 1 root root 64 Mar 30 10:44 10.129.0.24
-rw-r--r--. 1 root root 64 Mar 30 10:44 10.129.0.26
-rw-r--r--. 1 root root 64 Mar 30 10:44 10.129.0.27
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.3
-rw-r--r--. 1 root root 64 Mar 30 10:46 10.129.0.33
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.4
-rw-r--r--. 1 root root 64 Mar 30 10:49 10.129.0.41
-rw-r--r--. 1 root root 64 Mar 30 10:50 10.129.0.43
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.5
-rw-r--r--. 1 root root 64 Mar 30 10:58 10.129.0.52
-rw-r--r--. 1 root root 64 Mar 30 12:09 10.129.0.53
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.6
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.7
-rw-r--r--. 1 root root 64 Mar 30 10:40 10.129.0.8
-rw-r--r--. 1 root root 11 Mar 30 12:27 last_reserved_ip.0

/var/lib/cni/networks/openshift-sdn/..:
total 4
drwxr-xr-x. 2 root root 4096 Mar 30 12:27 openshift-sdn


sh-4.4# ls -lL /var/run/openshift-sdn/cniserver/config.json
-r--r--r--. 1 root root 33 Mar 30 10:39 /var/run/openshift-sdn/cniserver/config.json

sh-4.4# exit
sh-4.4# exit

Removing debug pod ...


$ oc debug node/ip-10-0-139-135.us-east-2.compute.internal
Starting pod/ip-10-0-139-135us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.139.135
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# ls -lL /var/lib/cni/networks/openshift-sdn/.*
/var/lib/cni/networks/openshift-sdn/.:
total 48
-rw-r--r--. 1 root root 64 Mar 30 11:00 10.129.2.10
-rw-r--r--. 1 root root 64 Mar 30 11:01 10.129.2.13
-rw-r--r--. 1 root root 64 Mar 30 12:09 10.129.2.19
-rw-r--r--. 1 root root 64 Mar 30 10:47 10.129.2.2
-rw-r--r--. 1 root root 64 Mar 30 10:47 10.129.2.3
-rw-r--r--. 1 root root 64 Mar 30 10:48 10.129.2.4
-rw-r--r--. 1 root root 64 Mar 30 10:48 10.129.2.5
-rw-r--r--. 1 root root 64 Mar 30 10:48 10.129.2.6
-rw-r--r--. 1 root root 64 Mar 30 10:48 10.129.2.7
-rw-r--r--. 1 root root 64 Mar 30 10:52 10.129.2.8
-rw-r--r--. 1 root root 64 Mar 30 10:52 10.129.2.9
-rw-r--r--. 1 root root 11 Mar 30 12:39 last_reserved_ip.0

/var/lib/cni/networks/openshift-sdn/..:
total 0
drwxr-xr-x. 2 root root 233 Mar 30 12:39 openshift-sdn
sh-4.4# ls -lL /var/run/openshift-sdn/cniserver/config.json
-r--r--r--. 1 root root 33 Mar 30 10:47 /var/run/openshift-sdn/cniserver/config.json
sh-4.4#  ls -l /var/run/openshift-sdn/cniserver/config.json
-r--r--r--. 1 root root 33 Mar 30 10:47 /var/run/openshift-sdn/cniserver/config.json
sh-4.4# exit
sh-4.4# exit

Removing debug pod ...



2] On an OVN cluster, for the above set of SDN-related rules, the ComplianceSuite shows scan status “NOT-APPLICABLE”

# oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: cis-test
>   namespace: openshift-compliance
> spec:
>   description: CIS
>   title: My modified CIS profile
>   enableRules:
>     - name: ocp4-file-groupowner-cni-conf
>       rationale: set cis-sdn profile
>     - name: ocp4-file-owner-etcd-data-dir
>       rationale: set cis-sdn profile
>     - name: ocp4-file-groupowner-openshift-sdn-cniserver-config
>       rationale: set cis-sdn profile
>     - name: ocp4-file-owner-openshift-sdn-cniserver-config        
>       rationale: set cis-sdn profile
>     - name: ocp4-file-perms-openshift-sdn-cniserver-config
>       rationale: set cis-sdn profile
>     - name: ocp4-file-groupowner-ip-allocations
>       rationale: set cis-sdn profile
>     - name: ocp4-file-owner-ip-allocations
>       rationale: set cis-sdn profile
>     - name: ocp4-file-permissions-ip-allocations
>       rationale: set cis-sdn profile
> EOF
tailoredprofile.compliance.openshift.io/cis-test created
 
# oc get tp
NAME       STATE
cis-test   READY
 
 
# oc create -f - << EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: my-companys-compliance-requirements
> profiles:
>   # Node checks
>   - name: cis-test
>     kind: TailoredProfile
>     apiGroup: compliance.openshift.io/v1alpha1
> settingsRef:
>   name: default
>   kind: ScanSetting
>   apiGroup: compliance.openshift.io/v1alpha1
> EOF
scansettingbinding.compliance.openshift.io/my-companys-compliance-requirements created
 
# oc get suite -w
NAME                                  PHASE       RESULT
my-companys-compliance-requirements   LAUNCHING   NOT-AVAILABLE
my-companys-compliance-requirements   RUNNING     NOT-AVAILABLE
my-companys-compliance-requirements   AGGREGATING   NOT-AVAILABLE
my-companys-compliance-requirements   DONE          NOT-APPLICABLE
my-companys-compliance-requirements   DONE          NOT-APPLICABLE
 
# oc get ccr
No resources found in openshift-compliance namespace.
 
# oc get compliancecheckresults.compliance.openshift.io
No resources found in openshift-compliance namespace.

Comment 17 errata-xmlrpc 2022-04-18 07:54:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1148

