Bug 1867030 - [OCP v46] The Compliance-Operator api-checks pod goes in CrashLoopBackOff during the Platform scan
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Juan Antonio Osorio
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-07 08:14 UTC by Prashant Dhamdhere
Modified: 2020-10-27 16:26 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:25:54 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github openshift compliance-operator pull 397 0 None closed Don't crash the api-resource-collector if no checks are found 2020-12-01 15:18:21 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:26:16 UTC

Description Prashant Dhamdhere 2020-08-07 08:14:17 UTC
Description of problem: 

The Compliance-Operator 'api-checks' pod goes into CrashLoopBackOff when a 'Platform' scan is performed with the ocp4-cis profile. 

$ oc get pods/platform-scan-api-checks-pod 
NAME                           READY   STATUS                  RESTARTS   AGE 
platform-scan-api-checks-pod   0/2     Init:CrashLoopBackOff   5          4m1s 

$ oc get profile.compliance/ocp4-cis 
NAME       AGE 
ocp4-cis   3h19m 

$ oc get profile.compliance ocp4-cis -o yaml |grep "product-type" 
    compliance.openshift.io/product-type: Platform 
          f:compliance.openshift.io/product-type: {} 


$ oc describe pod platform-scan-api-checks-pod|grep -A15 "api-resource-collector" 
      /var/run/secrets/kubernetes.io/serviceaccount from api-resource-collector-token-2rvrd (ro) 
  api-resource-collector: 
    Container ID:  cri-o://8a25719b418693c5003e515200efc8a2497c4a66a79fedfdcc7f267a19230dc3 
    Image:         quay.io/compliance-operator/compliance-operator:latest 
    Image ID:      quay.io/compliance-operator/compliance-operator@sha256:1d03532fcc762d64d1dc3a05d54023105c44298a4337ce1f385050d978019c92 
    Port:          <none> 
    Host Port:     <none> 
    Command: 
      compliance-operator 
      api-resource-collector 
      --content=/content/ssg-ocp4-ds.xml 
      --resultdir=/kubernetes-api-resources 
      --profile=xccdf_org.ssgproject.content_profile_cis 
      --debug 
    State:          Waiting 
      Reason:       CrashLoopBackOff  <<-------- 
    Last State:     Terminated 
      Reason:       Error        <<-------- 
      Exit Code:    1 
      Started:      Fri, 07 Aug 2020 13:17:28 +0530 
      Finished:     Fri, 07 Aug 2020 13:17:29 +0530 
    Ready:          False 
    Restart Count:  7 
    Environment:    <none> 
    Mounts: 

$ oc logs pod/platform-scan-api-checks-pod -c api-resource-collector|tail 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_ipc_namespace 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_net_raw_capability 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_network_namespace 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_root_containers 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scheduler_profiling_argument 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_secrets_no_environment_variables 
FATAL:Error finding resources: no checks found in datastream 

$ oc get compliancesuite 
NAME                      PHASE     RESULT 
example-compliancesuite   RUNNING   NOT-AVAILABLE 
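For context, the linked upstream PR title ("Don't crash the api-resource-collector if no checks are found") indicates the fix is to stop treating the "no checks found in datastream" condition as fatal, so the collector exits cleanly instead of with exit code 1 (which is what drives the CrashLoopBackOff above). A minimal Go sketch of that pattern follows; all names and signatures here are illustrative, not the operator's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// errNoChecks mirrors the "no valid checks found in datastream"
// condition seen in the collector logs above. Hypothetical name.
var errNoChecks = errors.New("no valid checks found in datastream")

// findChecks stands in for parsing the SCAP datastream; it returns
// errNoChecks when the profile yields no API-resource checks.
func findChecks(checks []string) ([]string, error) {
	if len(checks) == 0 {
		return nil, errNoChecks
	}
	return checks, nil
}

// collect returns the resource paths to fetch. In the buggy behavior,
// errNoChecks reached a fatal exit (exit code 1, hence CrashLoopBackOff);
// downgrading it to a warning lets the pod finish and exit 0.
func collect(checks []string) []string {
	found, err := findChecks(checks)
	if err != nil {
		if errors.Is(err, errNoChecks) {
			log.Printf("warning: %v, continuing with no resources", err)
			return nil
		}
		log.Fatalf("Error finding resources: %v", err) // still fatal for real errors
	}
	return found
}

func main() {
	fmt.Println(len(collect(nil)))                // 0, no crash
	fmt.Println(len(collect([]string{"a", "b"}))) // 2
}
```

This matches the verified behavior in comment 4, where the same log line appears as a non-fatal message and the container terminates with exit code 0.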


Version-Release number of selected component (if applicable): 
4.6.0-0.nightly-2020-08-06-131904 

How reproducible: 
Always 

Steps to Reproduce: 

1. Clone the compliance-operator git repo 

$ git clone https://github.com/openshift/compliance-operator.git  

2. Create the 'openshift-compliance' namespace 

$ oc create -f compliance-operator/deploy/ns.yaml    

3. Switch to the 'openshift-compliance' namespace 

$ oc project openshift-compliance  

4. Deploy the CustomResourceDefinitions. 

$ for f in $(ls -1 compliance-operator/deploy/crds/*crd.yaml); do oc create -f $f; done  

5. Deploy the compliance-operator. 

$ oc create -f compliance-operator/deploy/  

6. Deploy a ComplianceSuite CR with the ocp4-cis profile and a Platform scan 

$ oc create -f - <<EOF 
apiVersion: compliance.openshift.io/v1alpha1 
kind: ComplianceSuite 
metadata: 
  name: example-compliancesuite 
spec: 
  autoApplyRemediations: false 
  schedule: "0 1 * * *" 
  scans: 
    - name: platform-scan 
      scanType: Platform 
      profile: xccdf_org.ssgproject.content_profile_cis 
      content: ssg-ocp4-ds.xml 
      contentImage: quay.io/complianceascode/ocp4:latest 
      debug: true 
EOF 

7. Monitor the api-checks pod status and the ComplianceSuite result 

$ oc get pods 

$ oc get compliancesuite 


Actual results: 

The Compliance-Operator 'api-checks' pod goes into CrashLoopBackOff when a 'Platform' scan is performed with the 'ocp4-cis' profile. 

Expected results: 

The Compliance-Operator 'api-checks' pod should reach 'Completed' status, and the ComplianceSuite should report the relevant result in its status. 

Additional info:

Comment 4 Prashant Dhamdhere 2020-08-27 04:58:11 UTC
The Compliance-Operator api-checks pod status looks good now; the collector logs the "no valid checks" condition as a non-fatal message and completes.

Verified on: 
OCP 4.6.0-0.nightly-2020-08-27-005538
compliance-operator.v0.1.13

$ oc get pods/platform-scan-api-checks-pod
NAME                           READY   STATUS      RESTARTS   AGE
platform-scan-api-checks-pod   0/2     Completed   0          55s

$ oc get profile.compliance/ocp4-cis 
NAME       AGE
ocp4-cis   7m6s

$ oc describe pod platform-scan-api-checks-pod|grep -A15 "api-resource-collector" 
      /var/run/secrets/kubernetes.io/serviceaccount from api-resource-collector-token-qjjrz (ro)
  api-resource-collector:
    Container ID:  cri-o://fb595ce7f02b7b0a7aeb674894ae1f434d5ccadc4425c425aa2bb2ced94cfc1c
    Image:         quay.io/compliance-operator/compliance-operator:latest
    Image ID:      quay.io/compliance-operator/compliance-operator@sha256:268cb1032080e63e462fe2c216140c9b5b3ae9ba46de1d67da0b695cbe4e0782
    Port:          <none>
    Host Port:     <none>
    Command:
      compliance-operator
      api-resource-collector
      --content=/content/ssg-ocp4-ds.xml
      --resultdir=/kubernetes-api-resources
      --profile=xccdf_org.ssgproject.content_profile_cis
      --debug
    State:          Terminated   <<-----
      Reason:       Completed     <<-----
      Exit Code:    0
      Started:      Thu, 27 Aug 2020 10:14:24 +0530
      Finished:     Thu, 27 Aug 2020 10:14:25 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>


$ oc logs pod/platform-scan-api-checks-pod -c api-resource-collector|tail 
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_network_namespace
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scc_limit_root_containers
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_scheduler_profiling_argument
debug: Couldn't find 'warning' child of check xccdf_org.ssgproject.content_rule_secrets_no_environment_variables
no valid checks found in datastream      <<-----
Fetching URI: '/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver'
Saving fetched resource to: '/kubernetes-api-resources/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver'


$ oc get pods
NAME                                   READY   STATUS      RESTARTS   AGE
aggregator-pod-platform-scan           0/1     Completed   0          4m46s  <<-----
compliance-operator-869646dd4f-5vq7z   1/1     Running     0          12m
ocp4-pp-7f89f556cc-zzmkj               1/1     Running     0          11m
platform-scan-api-checks-pod           0/2     Completed   0          5m16s   <<-----
rhcos4-pp-7c44999587-bckrn             1/1     Running     0          11m


$  oc get compliancesuite
NAME                      PHASE   RESULT
example-compliancesuite   DONE    NON-COMPLIANT

Comment 6 errata-xmlrpc 2020-10-27 16:25:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

