Description of problem:
When a ComplianceSuite CR is created, all pods for the scans in the ComplianceSuite are removed once the scan completes, even when the "debug" parameter is enabled. (It would be good to add both issues below to the description as well.)

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-04-09-222236

How reproducible:
Always

Steps to Reproduce:
1. Deploy the operator into the openshift-compliance namespace (the operator is available at https://github.com/openshift/compliance-operator):
$ oc create -f compliance-operator/deploy/ns.yaml
$ oc project openshift-compliance
$ for f in $(ls -1 compliance-operator/deploy/crds/*crd.yaml); do oc create -f $f; done
$ oc create -f compliance-operator/deploy/

2. Apply a ComplianceSuite without the "debug" parameter enabled:
$ oc create -f compliance-operator/deploy/crds/compliance.openshift.io_v1alpha1_compliancesuite_cr.yaml

3. Delete the ComplianceSuite from step 2, add "debug: true" to crds/compliance.openshift.io_v1alpha1_compliancesuite_cr.yaml, and apply it again:
$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite
spec:
  autoApplyRemediations: false
  scans:
    - name: workers-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-ocp4-ds.xml
      contentImage: quay.io/jhrozek/ocp4-openscap-content:latest
      nodeSelector:
        node-role.kubernetes.io/worker: ""
    - name: masters-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-ocp4-ds.xml
      contentImage: quay.io/jhrozek/ocp4-openscap-content:latest
      nodeSelector:
        node-role.kubernetes.io/master: ""
      debug: true
EOF

Expected results:
1. For step 2, when the "debug" parameter is not enabled, all pods for the scans in the ComplianceSuite are removed once the scan completes. Please document this behavior somewhere, e.g. in README.md.
2. For step 3, when the "debug" parameter is enabled, the pods for the scans in the ComplianceSuite should NOT be removed when the scan completes.

Actual results:
1. For step 2, all pods for the scans in the ComplianceSuite are removed once the scan completes, and this behavior is not documented anywhere.
2. For step 3, all pods for the scans in the ComplianceSuite are still removed once the scan completes.

Additional info:
Logs and results are available here: http://virt-openshift-05.lab.eng.nay.redhat.com/xiyuan/compliance/Pod_removed.log
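When re-testing, it can be hard to eyeball whether the scan pods are still present before cleanup kicks in, so a small filter over the pod listing helps. This is a minimal sketch: `count_scan_pods` is a hypothetical helper, and the sample listing below stands in for real `oc get pods` output.

```shell
# Hypothetical helper: count scan pods in an `oc get pods` listing.
# On a live cluster you would pipe:
#   oc get pods -n openshift-compliance | count_scan_pods
count_scan_pods() {
  grep -c -E '^[a-z-]+-scan-.*-pod' || true
}

# Illustrative listing (stand-in for real cluster output):
listing='aggregator-pod-workers-scan                                  0/1  Completed  0  30m
workers-scan-ip-10-0-155-171.us-east-2.compute.internal-pod  0/2  Completed  0  33m
masters-scan-ip-10-0-187-194.us-east-2.compute.internal-pod  0/2  Completed  0  33m'

printf '%s\n' "$listing" | count_scan_pods   # prints 2 (the two scan pods in the sample)
```

With "debug: true" honored, the count should stay nonzero after the scan finishes; with the buggy behavior it drops to 0 once the scan completes.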
Fixed upstream in https://github.com/openshift/compliance-operator/commit/a7bbfb06fb02859c25b6ebf062054722946c19f3
Verification passed with 4.6.0-0.nightly-2020-07-22-031913 and compliance-operator v0.1.11. The pods for the scans in the ComplianceSuite are no longer removed once the scan completes, regardless of whether debug is true or false.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-07-22-031913   True        False         3h50m   Cluster version is 4.6.0-0.nightly-2020-07-22-031913

$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite
spec:
  autoApplyRemediations: false
  schedule: "0 1 * * *"
  scans:
    - name: workers-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      nodeSelector:
        node-role.kubernetes.io/worker: ""
EOF

$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example-compliancesuite2
spec:
  autoApplyRemediations: false
  schedule: "0 1 * * *"
  scans:
    - name: master-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      debug: true
EOF

$ oc get pod
NAME                                                          READY   STATUS      RESTARTS   AGE
aggregator-pod-master-scan                                    0/1     Completed   0          15m
aggregator-pod-workers-scan                                   0/1     Completed   0          30m
compliance-operator-6bcbf66d5b-c89h8                          1/1     Running     0          44m
compliance-operator-6bcbf66d5b-d9wtn                          1/1     Running     0          44m
compliance-operator-6bcbf66d5b-rfptv                          1/1     Running     0          44m
master-scan-ip-10-0-155-171.us-east-2.compute.internal-pod    0/2     Completed   0          17m
master-scan-ip-10-0-187-194.us-east-2.compute.internal-pod    0/2     Completed   0          17m
master-scan-ip-10-0-209-18.us-east-2.compute.internal-pod     0/2     Completed   0          17m
ocp4-pp-59466846fd-t5q9h                                      1/1     Running     0          43m
rhcos4-pp-6845f5dcd-hh52x                                     1/1     Running     0          43m
workers-scan-ip-10-0-155-171.us-east-2.compute.internal-pod   0/2     Completed   0          33m
workers-scan-ip-10-0-187-194.us-east-2.compute.internal-pod   0/2     Completed   0          33m
workers-scan-ip-10-0-209-18.us-east-2.compute.internal-pod    0/2     Completed   0          33m
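The verification above boils down to checking that every scan pod is still present and in Completed state after the scans finish. A minimal sketch of that check, using stand-in sample rows (on a cluster one would pipe `oc get pods -n openshift-compliance` filtered to the scan pods instead):

```shell
# Stand-in sample rows, shaped like the pod listing above.
pods='master-scan-ip-10-0-155-171.us-east-2.compute.internal-pod   0/2  Completed  0  17m
workers-scan-ip-10-0-209-18.us-east-2.compute.internal-pod   0/2  Completed  0  33m'

# The third column is STATUS; exit nonzero if any scan pod is not Completed.
printf '%s\n' "$pods" | awk '$3 != "Completed" { bad=1 } END { exit bad }' \
  && echo "all scan pods Completed"
```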
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196