Description of problem:
compliance operator doesn't work on RHEL-7 hosts for disconnected cluster

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-09-09-184544

How reproducible:
Always

Steps to Reproduce:
1. Install compliance-operator
1.1 Clone the compliance-operator git repo
$ git clone https://github.com/openshift/compliance-operator.git
1.2 Create the 'openshift-compliance' namespace
$ oc create -f compliance-operator/deploy/ns.yaml
1.3 Switch to the 'openshift-compliance' namespace
$ oc project openshift-compliance
1.4 Deploy the CustomResourceDefinitions.
$ for f in $(ls -1 compliance-operator/deploy/crds/*crd.yaml); do oc create -f $f; done
1.5 Deploy the compliance-operator.
$ oc create -f compliance-operator/deploy/

2. Label the RHEL nodes:
$ oc label node xiyuan09101-09100350-rhel-1 node-role.kubernetes.io/rhel=
$ oc label node xiyuan09101-09100350-rhel-0 node-role.kubernetes.io/rhel=

3. Create a ComplianceSuite for the RHEL nodes:
$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite2
spec:
  autoApplyRemediations: false
  schedule: "0 1 * * *"
  scans:
    - name: workers-scan
      profile: xccdf_org.ssgproject.content_profile_ncp
      content: ssg-rhel7-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      nodeSelector:
        node-role.kubernetes.io/rhel: ""
      debug: true
EOF
compliancesuite.compliance.openshift.io/example-compliancesuite2 created

Actual results:
The ComplianceSuite finished with "ERROR".
$ oc get compliancesuite -w
NAME                       PHASE         RESULT
example-compliancesuite2   RUNNING       NOT-AVAILABLE
example-compliancesuite2   AGGREGATING   NOT-AVAILABLE
example-compliancesuite2   DONE          ERROR

$ oc describe compliancesuite example-compliancesuite2 | grep -v scanStatuses | grep "Status" -A15
Status:
  Phase:   DONE
  Result:  ERROR
  Scan Statuses:
    Errormsg:  Downloading: https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL7.xml ...
error
OpenSCAP Error: Download failed: Couldn't connect to server [/builddir/build/BUILD/openscap-1.3.3/src/common/oscap_acquire.c:311]
Could not extract scap_org.open-scap_cref_ssg-rhel7-xccdf-1.2.xml with all dependencies from datastream. [/builddir/build/BUILD/openscap-1.3.3/src/DS/ds_sds_session.c:210]
    Name:    workers-scan
    Phase:   DONE
    Result:  ERROR
    Results Storage:
      Name:       workers-scan
      Namespace:  openshift-compliance
Events:
  Type    Reason           Age   From       Message
  ----    ------           ----  ----       -------
  Normal  ResultAvailable  20m   suitectrl  ComplianceSuite's result is: ERROR

Expected results:
The ComplianceSuite finishes with "COMPLIANT" or "NON-COMPLIANT".

Additional info:
Oscap won't download anything if --fetch-remote-resources isn't on the command line. Currently, --fetch-remote-resources is passed by default.
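The flag behavior described above can be sketched as follows. This is illustrative only, not the operator's actual code; the variable and file names are made up for the example:

```shell
# Illustrative sketch (not the operator's actual code): oscap only attempts
# network downloads when --fetch-remote-resources is on the command line,
# so a disconnected scan must leave the flag off.
NO_EXTERNAL_RESOURCES=true   # would correspond to a noExternalResources: true scan setting
ARGS="xccdf eval --profile xccdf_org.ssgproject.content_profile_ncp"
if [ "$NO_EXTERNAL_RESOURCES" != "true" ]; then
  # Connected clusters may fetch remote OVAL content (e.g. the RHSA feed).
  ARGS="$ARGS --fetch-remote-resources"
fi
ARGS="$ARGS --results /tmp/results.xml /content/ssg-rhel7-ds.xml"
echo "oscap $ARGS"
```

With the guard above, a disconnected scan never emits the flag, so oscap evaluates only the content already present in the datastream.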
This looks good now. The RHEL node scans performed successfully in both scenarios.

Verified on:
4.6.0-0.nightly-2020-09-22-073212
Compliance Operator v0.1.17

$ oc get nodes
NAME                                        STATUS   ROLES    AGE    VERSION
ip-10-0-48-116.us-east-2.compute.internal   Ready    master   128m   v1.19.0+7e8389f
ip-10-0-52-90.us-east-2.compute.internal    Ready    master   128m   v1.19.0+7e8389f
ip-10-0-58-176.us-east-2.compute.internal   Ready    worker   119m   v1.19.0+7e8389f
ip-10-0-61-106.us-east-2.compute.internal   Ready    worker   58m    v1.19.0+f5121a6
ip-10-0-63-221.us-east-2.compute.internal   Ready    worker   58m    v1.19.0+f5121a6
ip-10-0-67-68.us-east-2.compute.internal    Ready    master   128m   v1.19.0+7e8389f
ip-10-0-70-30.us-east-2.compute.internal    Ready    worker   118m   v1.19.0+7e8389f

$ oc get nodes --selector=node.openshift.io/os_id=rhel
NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-61-106.us-east-2.compute.internal   Ready    worker   58m   v1.19.0+f5121a6
ip-10-0-63-221.us-east-2.compute.internal   Ready    worker   58m   v1.19.0+f5121a6

$ oc label node ip-10-0-61-106.us-east-2.compute.internal node-role.kubernetes.io/rhel=
node/ip-10-0-61-106.us-east-2.compute.internal labeled
$ oc label node ip-10-0-63-221.us-east-2.compute.internal node-role.kubernetes.io/rhel=
node/ip-10-0-63-221.us-east-2.compute.internal labeled

$ oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
compliance-operator-869646dd4f-llrzk   1/1     Running   0          3m10s
ocp4-pp-6786c5f5b-bqcpt                1/1     Running   0          2m22s
rhcos4-pp-78c8cc9d44-skttw             1/1     Running   0          2m22s

1] httpsProxy:
$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: false
>   schedule: "0 1 * * *"
>   scans:
>     - name: rhel-scan
>       profile: xccdf_org.ssgproject.content_profile_ncp
>       content: ssg-rhel7-ds.xml
>       contentImage: quay.io/complianceascode/ocp4:latest
>       rule: "xccdf_org.ssgproject.content_rule_no_netrc_files"
>       httpsProxy: "http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-137-165-84.us-east-2.compute.amazonaws.com:3128"
 <<-----
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/rhel: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite created

$ oc get pods
NAME                                                      READY   STATUS      RESTARTS   AGE
aggregator-pod-rhel-scan                                  0/1     Completed   0          102s
compliance-operator-869646dd4f-llrzk                      1/1     Running     0          40m
ocp4-pp-6786c5f5b-bqcpt                                   1/1     Running     0          39m
rhcos4-pp-78c8cc9d44-skttw                                1/1     Running     0          39m
rhel-scan-ip-10-0-61-106.us-east-2.compute.internal-pod   0/2     Completed   0          4m32s
rhel-scan-ip-10-0-63-221.us-east-2.compute.internal-pod   0/2     Completed   0          4m32s

$ oc get compliancesuite
NAME                      PHASE   RESULT
example-compliancesuite   DONE    COMPLIANT

$ oc describe compliancesuite example-compliancesuite | grep "Status:" -A15
Status:
  Phase:   DONE
  Result:  COMPLIANT
  Scan Statuses:
    Name:    rhel-scan
    Phase:   DONE
    Result:  COMPLIANT
    Results Storage:
      Name:       rhel-scan
      Namespace:  openshift-compliance
Events:
  Type    Reason           Age   From       Message
  ----    ------           ----  ----       -------
  Normal  ResultAvailable  32m   suitectrl  The result is: COMPLIANT

2] noExternalResources
$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite1
> spec:
>   autoApplyRemediations: false
>   schedule: "0 1 * * *"
>   scans:
>     - name: rhel-scan-noext
>       profile: xccdf_org.ssgproject.content_profile_ncp
>       content: ssg-rhel7-ds.xml
>       contentImage: quay.io/complianceascode/ocp4:latest
>       noExternalResources: true  <<-----
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/rhel: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite1 created

$ oc get pods
NAME                                                      READY   STATUS      RESTARTS   AGE
aggregator-pod-rhel-scan                                  0/1     Completed   0          28m
aggregator-pod-rhel-scan-noext                            0/1     Completed   0          14m
compliance-operator-869646dd4f-llrzk                      1/1     Running     0          67m
ocp4-pp-6786c5f5b-bqcpt                                   1/1     Running     0          66m
rhcos4-pp-78c8cc9d44-skttw                                1/1     Running     0          66m
rhel-scan-ip-10-0-61-106.us-east-2.compute.internal-pod   0/2     Completed   0          31m
rhel-scan-ip-10-0-63-221.us-east-2.compute.internal-pod   0/2     Completed   0          31m
rhel-scan-noext-ip-10-0-61-106.us-east-2.compute.internal-pod   0/2   Completed   0   24m
rhel-scan-noext-ip-10-0-63-221.us-east-2.compute.internal-pod   0/2   Completed   0   24m

$ oc get compliancesuite
NAME                       PHASE   RESULT
example-compliancesuite    DONE    COMPLIANT
example-compliancesuite1   DONE    NON-COMPLIANT

$ oc describe compliancesuite example-compliancesuite1 | grep "Status:" -A15
Status:
  Phase:   DONE
  Result:  NON-COMPLIANT
  Scan Statuses:
    Name:    rhel-scan-noext
    Phase:   DONE
    Result:  NON-COMPLIANT
    Results Storage:
      Name:       rhel-scan-noext
      Namespace:  openshift-compliance
Events:
  Type    Reason           Age   From       Message
  ----    ------           ----  ----       -------
  Normal  ResultAvailable  17m   suitectrl  The result is: NON-COMPLIANT
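For the httpsProxy scenario, one plausible mechanism (an assumption, not confirmed from the operator source) is that the scan's httpsProxy value reaches oscap as the conventional https_proxy environment variable, which libcurl — used by OpenSCAP for remote fetches — honors. A minimal sketch with a placeholder proxy URL:

```shell
# Assumption: the scanner pod would export the scan's httpsProxy value as
# the standard lowercase https_proxy variable before invoking oscap.
# The URL below is a placeholder, not the proxy from this report.
export https_proxy="http://proxy.example.com:3128"
echo "proxy in effect: $https_proxy"
```

Any process in that environment which fetches over HTTPS via libcurl would then route its downloads (such as the RHSA OVAL feed) through the configured proxy.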
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196