Bug 2102025 - MachineConfigPool 'master' is paused and can not sync until it is unpaused
Summary: MachineConfigPool 'master' is paused and can not sync until it is unpaused
Keywords:
Status: CLOSED DUPLICATE of bug 2082151
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.11
Hardware: ppc64le
OS: Linux
Importance: medium / medium
Target Milestone: ---
Target Release: 4.12.0
Assignee: Jakub Hrozek
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-29 07:23 UTC by Aditi Jadhav
Modified: 2023-09-15 01:56 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-01 13:00:23 UTC
Target Upstream Version:
Embargoed:


Attachments
Logs for Compliance operator (2.50 MB, text/plain)
2022-06-29 12:09 UTC, Aditi Jadhav

Description Aditi Jadhav 2022-06-29 07:23:18 UTC
Description of problem:
We are installing the Compliance Operator on PowerVS for profile compliance, but for one of the pods we are getting the error below:

# oc logs pod/ocp4-pci-dss-api-checks-pod
Defaulted container "log-collector" out of: log-collector, scanner, content-container (init), api-resource-collector (init)
I0628 06:13:51.112826       1 request.go:645] Throttling request took 1.046564562s, request: GET:https://172.30.0.1:443/apis/security.openshift.io/v1?

It seems to be related to the MachineConfigPool, because when we investigated we found the following error:

Failed to resync 4.11.0-0.nightly-ppc64le-2022-06-23-152107 because: Required MachineConfigPool 'master' is paused and can not sync until it is unpaused


Version-Release number of selected component (if applicable):
4.11.0-0.nightly-ppc64le-2022-06-23-152107

How reproducible:
We installed the Compliance Operator on PowerVS and hit this issue with the ocp4-pci-dss-api-checks-pod pod when trying to enable the ocp4-pci-dss profile.

Actual results:
The ocp4-pci-dss-api-checks-pod pod goes into the CrashLoopBackOff state.

Expected results:
The pod should be in the Running state and produce the expected results.

Additional info:
ClusterID: 4905409b-c1de-4108-a41b-2954e64015d8
ClusterVersion: Stable at "4.11.0-fc.3"
ClusterOperators:
        clusteroperator/ingress is degraded because The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
        clusteroperator/machine-config is degraded because Failed to resync 4.11.0-fc.3 because: Required MachineConfigPool 'master' is paused and can not sync until it is unpaused
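
For reference, whether the pool is actually paused can be checked, and the pause lifted, with commands along these lines (an illustrative sketch, not steps taken from this report; unpause only once whatever maintenance paused the pool is complete):

# oc get machineconfigpool master -o jsonpath='{.spec.paused}'
# oc patch machineconfigpool master --type merge -p '{"spec":{"paused":false}}'

The first command prints "true" while the pool is paused; the patch clears spec.paused so the machine-config controller can resume syncing rendered configs to the master nodes.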

Comment 1 Jakub Hrozek 2022-06-29 11:38:10 UTC
Please gather the --all-containers logs of both the operator and the pod. Also, please attach the scans you were running.
But in general I think this can be the same issue as https://bugzilla.redhat.com/show_bug.cgi?id=2091546 and I'm setting needinfo on Vincent to confirm.
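
The requested data can be gathered with commands along these lines (a sketch; it assumes the default openshift-compliance namespace and the standard compliance-operator deployment name):

# oc logs deployment/compliance-operator -n openshift-compliance --all-containers
# oc logs ocp4-pci-dss-api-checks-pod -n openshift-compliance --all-containers
# oc get scansettingbindings,compliancescans -n openshift-compliance -o yaml

The last command dumps the ScanSettingBinding and ComplianceScan objects, which describe the scans that were being run.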

Comment 2 Aditi Jadhav 2022-06-29 12:09:50 UTC
Created attachment 1893387 [details]
Logs for Compliance operator

Comment 3 Aditi Jadhav 2022-06-29 12:31:33 UTC
@jhrozek Providing the logs of the Compliance Operator and the failing pod below:
[root@luks-test-8c97-tor01-bastion-0 ~]# oc get pods
NAME                                              READY   STATUS             RESTARTS         AGE
compliance-operator-5c7fb94f49-m4l4c              1/1     Running            0                5h19m
ocp4-openshift-compliance-pp-799595b497-ngpqr     1/1     Running            0                5h17m
ocp4-pci-dss-api-checks-pod                       1/2     CrashLoopBackOff   69 (3m56s ago)   5h9m
ocp4-pci-dss-rs-598759984-pjj4w                   1/1     Running            0                5h9m
rhcos4-openshift-compliance-pp-84675d66d7-2lrmn   1/1     Running            0                5h16m


[root@luks-test-8c97-tor01-bastion-0 ~]# oc get mc
NAME                                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                                          37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
00-worker                                                          37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
01-master-container-runtime                                        37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
01-master-kubelet                                                  37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
01-worker-container-runtime                                        37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
01-worker-kubelet                                                  37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
75-ocp4-pci-dss-node-master-kubelet-enable-protect-kernel-sysctl                                              3.1.0             5h6m
75-ocp4-pci-dss-node-worker-kubelet-enable-protect-kernel-sysctl                                              3.1.0             5h7m
99-master-chrony-configuration                                                                                2.2.0             19h
99-master-generated-kubelet                                        37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h6m
99-master-generated-registries                                     37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
99-master-ssh                                                                                                 3.2.0             19h
99-worker-chrony-configuration                                                                                2.2.0             19h
99-worker-generated-kubelet                                        37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h7m
99-worker-generated-registries                                     37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
99-worker-ssh                                                                                                 3.2.0             19h
master-storage                                                                                                3.2.0             19h
rendered-master-1b70db9000fe403aa7a8d46fa4765994                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h6m
rendered-master-68154de611271142eb2e2bf4425123b7                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
rendered-master-7af03bce1fd1bfeee085f204ea9526d4                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h6m
rendered-master-a420d41c742c2a09f51797951480b36c                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h6m
rendered-worker-38dd9528b94e626b07ace54ec7729983                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             19h
rendered-worker-3ad028edaac948c7fa776a32d368b42e                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h6m
rendered-worker-b10e249d47622f6b96375e324dbcb408                   37b741601f9b7ff9b2e1870102cc2970b24e1835   3.2.0             5h6m
worker-storage                                                                                                3.2.0             19h

[root@luks-test-8c97-tor01-bastion-0 ~]# oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-68154de611271142eb2e2bf4425123b7   False     False      False      3              0                   0                     0                      19h
worker   rendered-worker-38dd9528b94e626b07ace54ec7729983   False     False      False      2              0                   0                     0                      19h

[root@luks-test-8c97-tor01-bastion-0 ~]# oc logs pod/ocp4-pci-dss-api-checks-pod
Defaulted container "log-collector" out of: log-collector, scanner, content-container (init), api-resource-collector (init)
I0629 11:50:53.673738       1 request.go:645] Throttling request took 1.003169794s, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v2?timeout=32s

Please let me know if any more logs are required.

Comment 4 Jakub Hrozek 2022-06-29 14:20:24 UTC
restoring the needinfo for Vincent

Comment 5 Jakub Hrozek 2022-06-29 14:23:42 UTC
Ah, only now did I notice that this is PCI-DSS. Did you look into why the pod is crashing (oc describe it)? Is it OOMKilled?
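
An OOM kill shows up as the last termination reason in the container status, which can be checked along these lines (illustrative; the openshift-compliance namespace is assumed):

# oc describe pod ocp4-pci-dss-api-checks-pod -n openshift-compliance | grep -A 3 'Last State'
# oc get pod ocp4-pci-dss-api-checks-pod -n openshift-compliance -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'

A reason of OOMKilled would point at the memory limits on the scanner container rather than at the content or the operator.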

Comment 6 Gaurav Bankar 2022-06-29 20:22:25 UTC
@jhrozek Here is the requested output of oc describe for the pod that is crashing:

# oc describe pod ocp4-pci-dss-api-checks-pod
Name:         ocp4-pci-dss-api-checks-pod
Namespace:    openshift-compliance
Priority:     0
Node:         tor01-master-2.luks-test-8c97.169.48.22.42.nip.io/192.168.0.201
Start Time:   Wed, 29 Jun 2022 06:49:50 +0000
Labels:       compliance.openshift.io/scan-name=ocp4-pci-dss
              workload=scanner
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.130.1.65"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.130.1.65"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted-v2
              seccomp.security.alpha.kubernetes.io/pod: runtime/default
              workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
Status:       Running
IP:           10.130.1.65
IPs:
  IP:  10.130.1.65
Init Containers:
  content-container:
    Container ID:  cri-o://14946900932f144bc06b701acd4490638cbba27efe698f5350e754b548e87080
    Image:         quay.io/aditijadhav/ocp4:latest
    Image ID:      quay.io/aditijadhav/ocp4@sha256:f42b3c6f7f367b07d6912386feb5b8cd8d36f4bc993c309669f2c737511793dd
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      cp /ssg-ocp4-ds.xml /content | /bin/true
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 29 Jun 2022 06:49:56 +0000
      Finished:     Wed, 29 Jun 2022 06:49:56 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     50m
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /content from content-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwqcd (ro)
  api-resource-collector:
    Container ID:  cri-o://ad71dbbfa83c34ee62ff68a8b4f21b94f1bf756cb3114463ea25794d0e94b75c
    Image:         quay.io/aditijadhav/compliance-operator:latest
    Image ID:      quay.io/aditijadhav/compliance-operator@sha256:718783bbfe4dfd4f91b014c7d00f6a60a6b78f6b1e412d75b8baa4ae0194d8e8
    Port:          <none>
    Host Port:     <none>
    Command:
      compliance-operator
      api-resource-collector
      --content=/content/ssg-ocp4-ds.xml
      --resultdir=/kubernetes-api-resources
      --profile=xccdf_org.ssgproject.content_profile_pci-dss
      --warnings-output-file=/reports/warning_output
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 29 Jun 2022 06:50:01 +0000
      Finished:     Wed, 29 Jun 2022 06:50:11 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /content from content-dir (ro)
      /kubernetes-api-resources from fetch-results (rw)
      /reports from report-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwqcd (ro)
Containers:
  log-collector:
    Container ID:  cri-o://9b8d4ebcb8793c3edba6fc12424bf86f8a63688c6c23a3802874c3245ee9f278
    Image:         quay.io/aditijadhav/compliance-operator:latest
    Image ID:      quay.io/aditijadhav/compliance-operator@sha256:718783bbfe4dfd4f91b014c7d00f6a60a6b78f6b1e412d75b8baa4ae0194d8e8
    Port:          <none>
    Host Port:     <none>
    Command:
      compliance-operator
      resultscollector
      --arf-file=/reports/report-arf.xml
      --results-file=/reports/report.xml
      --exit-code-file=/reports/exit_code
      --oscap-output-file=/reports/cmd_output
      --warnings-output-file=/reports/warning_output
      --config-map-name=ocp4-pci-dss-api-checks-pod
      --owner=ocp4-pci-dss
      --namespace=openshift-compliance
      --resultserveruri=https://ocp4-pci-dss-rs:8443/
      --tls-client-cert=/etc/pki/tls/tls.crt
      --tls-client-key=/etc/pki/tls/tls.key
      --tls-ca=/etc/pki/tls/ca.crt
    State:          Running
      Started:      Wed, 29 Jun 2022 19:51:51 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 29 Jun 2022 18:51:44 +0000
      Finished:     Wed, 29 Jun 2022 19:51:47 +0000
    Ready:          True
    Restart Count:  13
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /etc/pki/tls from tls (ro)
      /reports from report-dir (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwqcd (ro)
  scanner:
    Container ID:  cri-o://b3d92f109a594660aab45053e28dc50fb33de7ef0ec31f51f949f04deca77015
    Image:         quay.io/aditijadhav/openscap-ocp:1.3.5
    Image ID:      quay.io/aditijadhav/openscap-ocp@sha256:7c60a8aa7c9fdbc44950369fffb144511633c3f23d1c08837e097c52c4d5168a
    Port:          <none>
    Host Port:     <none>
    Command:
      /scripts/openscap-container-entrypoint
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 29 Jun 2022 20:09:28 +0000
      Finished:     Wed, 29 Jun 2022 20:09:28 +0000
    Ready:          False
    Restart Count:  160
    Limits:
      cpu:     100m
      memory:  500Mi
    Requests:
      cpu:     10m
      memory:  50Mi
    Environment Variables from:
      ocp4-pci-dss-openscap-env-map-platform  ConfigMap  Optional: false
    Environment:                              <none>
    Mounts:
      /content from content-dir (ro)
      /kubernetes-api-resources from fetch-results (rw)
      /reports from report-dir (rw)
      /scripts from ocp4-pci-dss-openscap-container-entrypoint (ro)
      /tmp from tmp-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwqcd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  report-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  content-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  fetch-results:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  ocp4-pci-dss-openscap-container-entrypoint:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ocp4-pci-dss-openscap-container-entrypoint
    Optional:  false
  tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  result-client-cert-ocp4-pci-dss
    Optional:    false
  kube-api-access-fwqcd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              node-role.kubernetes.io/master=
Tolerations:                 node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   Pulled   164m (x129 over 13h)    kubelet  Container image "quay.io/aditijadhav/openscap-ocp:1.3.5" already present on machine
  Warning  BackOff  4m27s (x3681 over 13h)  kubelet  Back-off restarting failed container

Comment 7 Jakub Hrozek 2022-06-29 20:28:10 UTC
(Please stop resetting all needinfo when adding comments, I need to add it back manually every time)

OK, so it's not the OOMKilled issue, but something else. Can you also attach:

oc logs ocp4-pci-dss-api-checks-pod --all-containers

Comment 8 Gaurav Bankar 2022-06-29 20:30:57 UTC
[root@luks-test-8c97-tor01-bastion-0 ~]# oc logs ocp4-pci-dss-api-checks-pod --all-containers
File '/content/ssg-ocp4-ds.xml' found, using.
Fetching URI: '/version'
Fetching URI: '/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver'
Fetching URI: '/apis/config.openshift.io/v1/infrastructures/cluster'
Fetching URI: '/apis/config.openshift.io/v1/networks/cluster'
Fetching URI: '/api/v1/nodes'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/apis/rbac.authorization.k8s.io/v1/clusterrolebindings'
Fetching URI: '/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/catch-all'
W0629 06:50:08.408207       1 warnings.go:67] flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 FlowSchema
Fetching URI: '/apis/operator.openshift.io/v1/kubeapiservers/cluster'
Fetching URI: '/apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas/catch-all'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/apis/config.openshift.io/v1/apiservers/cluster'
Fetching URI: '/apis/config.openshift.io/v1/apiservers/cluster'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger'
Fetching URI: '/api/v1/namespaces/openshift-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance'
Fetching URI: '/apis/config.openshift.io/v1/apiservers/cluster'
Fetching URI: '/apis/compliance.openshift.io/v1alpha1/compliancesuites?limit=5'
Fetching URI: '/apis/operator.openshift.io/v1/networks/cluster'
Fetching URI: '/apis/networking.k8s.io/v1/networkpolicies'
Fetching URI: '/api/v1/namespaces'
Fetching URI: '/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Fetching URI: '/apis/fileintegrity.openshift.io/v1alpha1/fileintegrities?limit=5'
Fetching URI: '/apis/monitoring.coreos.com/v1/prometheusrules?limit=500'
Fetching URI: '/apis/apps/v1/namespaces/openshift-sdn/daemonsets/sdn'
Fetching URI: '/apis/config.openshift.io/v1/oauths/cluster'
Fetching URI: '/api/v1/namespaces/kube-system/secrets/kubeadmin'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-kube-apiserver/configmaps/config'
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
Fetching URI: '/apis/machine.openshift.io/v1beta1/machinesets?limit=500'
Fetching URI: '/apis/machine.openshift.io/v1beta1/machinesets?limit=500'
Fetching URI: '/apis/machine.openshift.io/v1beta1/machinesets?limit=500'
Fetching URI: '/api/v1/namespaces/openshift-apiserver/configmaps/config'
Fetching URI: '/api/v1/namespaces/openshift-apiserver/configmaps/config'
Fetching URI: '/apis/config.openshift.io/v1/oauths/cluster'
Fetching URI: '/api/v1/namespaces/openshift-apiserver/configmaps/config'
Fetching URI: '/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=1000'
Fetching URI: '/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger'
Fetching URI: '/apis/rbac.authorization.k8s.io/v1/roles?limit=1000'
Fetching URI: '/apis/route.openshift.io/v1/routes?limit=500'
Fetching URI: '/apis/security.openshift.io/v1/securitycontextconstraints'
Fetching URI: '/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod'
Fetching URI: '/apis/storage.k8s.io/v1/storageclasses'
Fetching URI: '/api/v1/namespaces/openshift-apiserver/configmaps/config'
Saving fetched resource to: '/kubernetes-api-resources/apis/config.openshift.io/v1/networks/cluster'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#ffe65d9fac11909686e59349c6a0111aaf57caa26bd2db3e7dcb1a0a22899145'
Saving fetched resource to: '/kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterrolebindings'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#8c02c853df9307960712da853d79f916a091fe8bce6312720d7c17de03c2017b'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#e27218fb5fb7cd68a9911eb2db6bf715ca959f639e56cb60f90be782ddd7fcf8'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#407a17f0f401ae8c92955bc382bc80ee34a9afd51ab787e405bf524d03ebf3c8'
Saving fetched resource to: '/kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=1000'
Saving fetched resource to: '/kubernetes-api-resources/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver'
Saving fetched resource to: '/kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/roles?limit=1000'
Saving fetched resource to: '/kubernetes-api-resources/apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas/catch-all'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#4cbbbf49b93400715e43dc698f6484799805c502ad3aeb8285de579753b54d31'
Saving fetched resource to: '/kubernetes-api-resources/apis/config.openshift.io/v1/oauths/cluster'
Saving fetched resource to: '/kubernetes-api-resources/apis/machineconfiguration.openshift.io/v1/machineconfigs#191c7889a801949fcc07c8f067ca719c614388ea53f4b96b7148c57799e423b3'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430'
Saving fetched resource to: '/kubernetes-api-resources/apis/operator.openshift.io/v1/kubeapiservers/cluster'
Saving fetched resource to: '/kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces#34d4beecc95c65d815d9d48fd4fdcb0c521631852ad088ef74e36d012b0e1e0d'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#9f09cca56dc1e9f9605eb5a94aed74de554fd209513a9222e4fe9c0ed669aeee'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#be4ff4c2d3e706eb3b2f17921e5163bca81082bd313ff067ef625af9e6cb61ff'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/kube-system/secrets/kubeadmin'
Saving fetched resource to: '/kubernetes-api-resources/version'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-apiserver/configmaps/config'
Saving fetched resource to: '/kubernetes-api-resources/apis/compliance.openshift.io/v1alpha1/compliancesuites?limit=5'
Saving fetched resource to: '/kubernetes-api-resources/apis/fileintegrity.openshift.io/v1alpha1/fileintegrities?limit=5'
Saving fetched resource to: '/kubernetes-api-resources/apis/apps/v1/namespaces/openshift-sdn/daemonsets/sdn'
Saving fetched resource to: '/kubernetes-api-resources/apis/route.openshift.io/v1/routes?limit=500'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/nodes'
Saving fetched resource to: '/kubernetes-api-resources/apis/monitoring.coreos.com/v1/prometheusrules?limit=500#1af9e378f0bc0282076028afdb43f9d17f4cfb2f631c4d73ce65d9d0f3b10a08'
Saving fetched resource to: '/kubernetes-api-resources/apis/security.openshift.io/v1/securitycontextconstraints#3b8b4f5ca7174ce2d40bef71b6dd3d03c213c3c8a53c2386b79a6e1a2e23c317'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#8241ce1009dc5dd166436d0311b60b96aa3a2f591ba43a26e2b9d0bfc9071414'
Saving fetched resource to: '/kubernetes-api-resources/apis/operator.openshift.io/v1/networks/cluster#35e33d6dc1252a03495b35bd1751cac70041a511fa4d282c300a8b83b83e3498'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#95b5b27bb6ea2b122e810c99c17c2430c4845596942804847dd677557cfed88e'
Saving fetched resource to: '/kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod'
Saving fetched resource to: '/kubernetes-api-resources/apis/machineconfiguration.openshift.io/v1/machineconfigs#136fe907b51dc9ea5011707799731b533561dab4b043f086f36c0b5c9c288414'
Saving fetched resource to: '/kubernetes-api-resources/apis/machine.openshift.io/v1beta1/machinesets?limit=500#06ea2adfb5429a7351e7bd78b7ec378225e0d3256c4c9e4e3b2ce59900959267'
Saving fetched resource to: '/kubernetes-api-resources/apis/machine.openshift.io/v1beta1/machinesets?limit=500#b9dfb8d8585cff7f72cd7403be3b5790ff7716fbe23facf6e251712ade7d60c1'
Saving fetched resource to: '/kubernetes-api-resources/apis/machine.openshift.io/v1beta1/machinesets?limit=500#4de267a890d70235b0f43110ee972bee760ecce356b1e9cb910f99cc33a02cc2'
Saving fetched resource to: '/kubernetes-api-resources/apis/storage.k8s.io/v1/storageclasses#b29cc2d371d1860f106d9cc419c79290e5aad8b18c9a39c83f867c7838d5e132'
Saving fetched resource to: '/kubernetes-api-resources/apis/config.openshift.io/v1/infrastructures/cluster'
Saving fetched resource to: '/kubernetes-api-resources/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance'
Saving fetched resource to: '/kubernetes-api-resources/apis/networking.k8s.io/v1/networkpolicies#51742b3e87275db9eb7fc6c0286a9e536178a2a83e3670b615ceaf545e7fd300'
Saving fetched resource to: '/kubernetes-api-resources/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler-pod#569895645b4f9b87d4e21ab3c6fe4cc03627259826715e5043d5d8889c6c12d3'
Saving fetched resource to: '/kubernetes-api-resources/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/catch-all'
I0629 19:51:53.475178       1 request.go:645] Throttling request took 1.005764034s, request: GET:https://172.30.0.1:443/apis/metal3.io/v1alpha1?timeout=32s
exec /scripts/openscap-container-entrypoint: permission denied

Comment 9 Jakub Hrozek 2022-06-29 20:35:29 UTC
OK, so it's probably this:
exec /scripts/openscap-container-entrypoint: permission denied

which sounds like https://bugzilla.redhat.com/show_bug.cgi?id=2082151

What operator version is this? I see you're running OCP 4.11, but I don't see the operator version in the BZ.

Also:
Image:         quay.io/aditijadhav/compliance-operator:latest
sounds like you built the operator yourself, so RHBZ might not even be the proper venue (this does not sound like an OCP build, but something custom?)
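
The installed operator version can usually be read from the ClusterServiceVersion in the operator's namespace (a sketch, assuming the default openshift-compliance namespace and deployment name; an operator deployed manually from a self-built image may not have a CSV at all):

# oc get csv -n openshift-compliance
# oc get deployment compliance-operator -n openshift-compliance -o jsonpath='{.spec.template.spec.containers[0].image}'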

Comment 10 Gaurav Bankar 2022-06-29 20:48:32 UTC
Actually, we are working on enabling the PCI profile for the ppc64le architecture; to enable that profile we need to build the image from source code, which works on 4.10.

"Compliance Operator Version: 0.1.49"

Comment 11 Manoj Kumar 2022-06-29 21:27:34 UTC
The custom build was just to enable the PCI compliance profile, which is not enabled by default on ppc64le.

Apparently the same operator works well on OpenShift 4.10 and this issue is only seen on 4.11.

Comment 12 Jakub Hrozek 2022-06-30 07:56:08 UTC
(In reply to Gaurav Bankar from comment #10)
> Actually we working on Enabling PCI profile for ppc64le architect for
> enabling those profile we need to build image from source code, which
> working on 4.10
> 
> "Compliance Operator Version: 0.1.49"

Then most likely you are hitting bug 2082151. Please upgrade and let me know if you are still hitting the issue.
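
When the operator is installed through OLM, the upgrade is normally driven through the Subscription and its InstallPlan (a sketch; the namespace is the usual default, <install-plan-name> is a placeholder, and a self-built deployment would instead need its image rebuilt from the newer release):

# oc get subscription -n openshift-compliance -o yaml
# oc get installplan -n openshift-compliance
# oc patch installplan <install-plan-name> -n openshift-compliance --type merge -p '{"spec":{"approved":true}}'

The patch is only needed when the subscription's installPlanApproval is set to Manual; with Automatic approval the newer version rolls out on its own.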

Comment 13 Aditi Jadhav 2022-07-01 11:23:58 UTC
@jhrozek Following your suggestion, we have upgraded the Compliance Operator to 0.1.52 on the OCP 4.11 cluster and we are getting the expected results, i.e. the pod issue is resolved now. Thanks for your support.

Comment 14 Jakub Hrozek 2022-07-01 13:00:23 UTC
Great, I'm happy that CO works for you now.

*** This bug has been marked as a duplicate of bug 2082151 ***

Comment 15 Red Hat Bugzilla 2023-09-15 01:56:27 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days

