Bug 2102511 - [OSD] MCP pause status stuck at true because the Compliance Operator failed to check if kubeletconfig custom-kubelet is a subset of rendered MC 99-worker-generated-kubelet
Summary: [OSD] MCP pause status stuck at true because the Compliance Operator failed to c...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.12.0
Assignee: Vincent Shen
QA Contact: Jeana Routh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-30 05:30 UTC by xiyuan
Modified: 2022-12-22 21:57 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
* Previously, the Compliance Operator held machine configurations in a stuck state because it could not determine the relationship between machine configurations and kubelet configurations due to incorrect assumptions about machine configuration names. With this release, the Compliance Operator is able to determine if a kubelet configuration is a subset of a machine configuration. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2102511[*BZ#2102511*])
Clone Of:
Environment:
Last Closed: 2022-11-02 16:00:53 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ComplianceAsCode compliance-operator pull 58 0 None Merged BUG 2102511: Fix hardcoded logic by filtering to expected file name. 2022-07-15 21:06:28 UTC
Red Hat Product Errata RHBA-2022:6657 0 None None None 2022-11-02 16:01:09 UTC

Description xiyuan 2022-06-30 05:30:10 UTC
Description of problem:
On an OSD cluster, when trying to apply auto remediation through a ScanSettingBinding for the ocp4-cis and ocp4-cis-node profiles, the MCP pause status is stuck at true because the Compliance Operator fails to check whether kubeletconfig custom-kubelet is a subset of rendered MC 99-worker-generated-kubelet.
oc logs pod/compliance-operator-6cb7d86447-4jlpz --all-containers
...
{"level":"info","ts":1656481406.2472951,"logger":"suitectrl","msg":"All scans are in Done phase. Post-processing remediations","Request.Namespace":"openshift-compliance","Request.Name":"my-ssb-r"}
{"level":"error","ts":1656481406.2477512,"logger":"suitectrl","msg":"Retriable error","Request.Namespace":"openshift-compliance","Request.Name":"my-ssb-r","error":"failed to check if kubeletconfig custom-kubelet is subset of rendered MC 99-worker-generated-kubelet: invalid character 'N' looking for beginning of value","stacktrace":"github.com/openshift/compliance-operator/pkg/controller/compliancesuite.(*ReconcileComplianceSuite).Reconcile\n\t/remote-source/app/pkg/controller/compliancesuite/compliancesuite_controller.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1656481406.2478116,"logger":"controller","msg":"Reconciler error","controller":"compliancesuite-controller","name":"my-ssb-r","namespace":"openshift-compliance","error":"failed to check if kubeletconfig custom-kubelet is subset of rendered MC 99-worker-generated-kubelet: invalid character 'N' looking for beginning of value","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/deps/gomod/pkg/mod/k8s.io/apimachinery.11/pkg/util/wait/wait.go:90"}

$ oc get kubeletconfig
NAME             AGE
custom-kubelet   53m
$ oc get kubeletconfig custom-kubelet -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"labels":{"hive.openshift.io/managed":"true"},"name":"custom-kubelet"},"spec":{"autoSizingReserved":true,"machineConfigPoolSelector":{"matchExpressions":[{"key":"machineconfiguration.openshift.io/mco-built-in","operator":"Exists"}]}}}
  creationTimestamp: "2022-06-29T05:00:50Z"
  finalizers:
  - 99-worker-generated-kubelet
  - 99-master-generated-kubelet
  generation: 20
  labels:
    hive.openshift.io/managed: "true"
  name: custom-kubelet
  resourceVersion: "91136"
  uid: f63645c0-54c1-48c8-a0c8-bd8d8b97fa9e
spec:
  autoSizingReserved: true
  kubeletConfig:
    eventRecordQPS: 10
    evictionHard:
      imagefs.available: 10%
      imagefs.inodesFree: 5%
      memory.available: 200Mi
      nodefs.available: 5%
      nodefs.inodesFree: 4%
    evictionPressureTransitionPeriod: 0s
    evictionSoft:
      imagefs.available: 15%
      imagefs.inodesFree: 10%
      memory.available: 500Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionSoftGracePeriod:
      imagefs.available: 1m30s
      imagefs.inodesFree: 1m30s
      memory.available: 1m30s
      nodefs.available: 1m30s
      nodefs.inodesFree: 1m30s
    makeIPTablesUtilChains: true
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  machineConfigPoolSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/mco-built-in
      operator: Exists
status:
  conditions:
  - lastTransitionTime: "2022-06-29T05:42:15Z"
    message: Success
    status: "True"
    type: Success
$ oc get mc 99-master-generated-kubelet -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  annotations:
    machineconfiguration.openshift.io/generated-by-controller-version: 7152adb176e7ff4bf6c1d3a2e7b0aae4fd2794b6
  creationTimestamp: "2022-06-29T05:00:50Z"
  generation: 5
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-generated-kubelet
  ownerReferences:
  - apiVersion: machineconfiguration.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: KubeletConfig
    name: custom-kubelet
    uid: f63645c0-54c1-48c8-a0c8-bd8d8b97fa9e
  resourceVersion: "91122"
  uid: b3a43d72-1ab0-45c7-aa35-36bbd1a5b8a5
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain,NODE_SIZING_ENABLED%3Dtrue%0ASYSTEM_RESERVED_MEMORY%3D1Gi%0ASYSTEM_RESERVED_CPU%3D500m%0A
        mode: 420
        overwrite: true
        path: /etc/node-sizing-enabled.env
      - contents:
          source: data:text/plain,%7B%0A%20%20%22kind%22%3A%20%22KubeletConfiguration%22%2C%0A%20%20%22apiVersion%22%3A%20%22kubelet.config.k8s.io%2Fv1beta1%22%2C%0A%20%20%22staticPodPath%22%3A%20%22%2Fetc%2Fkubernetes%2Fmanifests%22%2C%0A%20%20%22syncFrequency%22%3A%20%220s%22%2C%0A%20%20%22fileCheckFrequency%22%3A%20%220s%22%2C%0A%20%20%22httpCheckFrequency%22%3A%20%220s%22%2C%0A%20%20%22tlsCipherSuites%22%3A%20%5B%0A%20%20%20%20%22TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384%22%2C%0A%20%20%20%20%22TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384%22%2C%0A%20%20%20%20%22TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256%22%2C%0A%20%20%20%20%22TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256%22%0A%20%20%5D%2C%0A%20%20%22tlsMinVersion%22%3A%20%22VersionTLS12%22%2C%0A%20%20%22rotateCertificates%22%3A%20true%2C%0A%20%20%22serverTLSBootstrap%22%3A%20true%2C%0A%20%20%22authentication%22%3A%20%7B%0A%20%20%20%20%22x509%22%3A%20%7B%0A%20%20%20%20%20%20%22clientCAFile%22%3A%20%22%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%22%0A%20%20%20%20%7D%2C%0A%20%20%20%20%22webhook%22%3A%20%7B%0A%20%20%20%20%20%20%22cacheTTL%22%3A%20%220s%22%0A%20%20%20%20%7D%2C%0A%20%20%20%20%22anonymous%22%3A%20%7B%0A%20%20%20%20%20%20%22enabled%22%3A%20false%0A%20%20%20%20%7D%0A%20%20%7D%2C%0A%20%20%22authorization%22%3A%20%7B%0A%20%20%20%20%22webhook%22%3A%20%7B%0A%20%20%20%20%20%20%22cacheAuthorizedTTL%22%3A%20%220s%22%2C%0A%20%20%20%20%20%20%22cacheUnauthorizedTTL%22%3A%20%220s%22%0A%20%20%20%20%7D%0A%20%20%7D%2C%0A%20%20%22eventRecordQPS%22%3A%2010%2C%0A%20%20%22clusterDomain%22%3A%20%22cluster.local%22%2C%0A%20%20%22clusterDNS%22%3A%20%5B%0A%20%20%20%20%22172.30.0.10%22%0A%20%20%5D%2C%0A%20%20%22streamingConnectionIdleTimeout%22%3A%20%220s%22%2C%0A%20%20%22nodeStatusUpdateFrequency%22%3A%20%220s%22%2C%0A%20%20%22nodeStatusReportFrequency%22%3A%20%220s%22%2C%0A%20%20%22imageMinimumGCAge%22%3A%20%220s%22%2C%0A%20%20%22volumeStatsAggPeriod%22%3A%20%220s%22%2C%0A%20%20%22systemCgroups%22%3A%20%22%2Fsystem.slice%22%2C%0A%20%20%22cgroupRoot%22%3A%20%22%2F%22%2C%0A%20%20%22cgroupDriver%22%3A%20%22systemd%22%2C%0A%20%20%22cpuManagerReconcilePeriod%22%3A%20%220s%22%2C%0A%20%20%22runtimeRequestTimeout%22%3A%20%220s%22%2C%0A%20%20%22maxPods%22%3A%20250%2C%0A%20%20%22kubeAPIQPS%22%3A%2050%2C%0A%20%20%22kubeAPIBurst%22%3A%20100%2C%0A%20%20%22serializeImagePulls%22%3A%20false%2C%0A%20%20%22evictionHard%22%3A%20%7B%0A%20%20%20%20%22imagefs.available%22%3A%20%2210%25%22%2C%0A%20%20%20%20%22imagefs.inodesFree%22%3A%20%225%25%22%2C%0A%20%20%20%20%22memory.available%22%3A%20%22200Mi%22%2C%0A%20%20%20%20%22nodefs.available%22%3A%20%225%25%22%2C%0A%20%20%20%20%22nodefs.inodesFree%22%3A%20%224%25%22%0A%20%20%7D%2C%0A%20%20%22evictionSoft%22%3A%20%7B%0A%20%20%20%20%22imagefs.available%22%3A%20%2215%25%22%2C%0A%20%20%20%20%22imagefs.inodesFree%22%3A%20%2210%25%22%2C%0A%20%20%20%20%22memory.available%22%3A%20%22500Mi%22%2C%0A%20%20%20%20%22nodefs.available%22%3A%20%2210%25%22%2C%0A%20%20%20%20%22nodefs.inodesFree%22%3A%20%225%25%22%0A%20%20%7D%2C%0A%20%20%22evictionSoftGracePeriod%22%3A%20%7B%0A%20%20%20%20%22imagefs.available%22%3A%20%221m30s%22%2C%0A%20%20%20%20%22imagefs.inodesFree%22%3A%20%221m30s%22%2C%0A%20%20%20%20%22memory.available%22%3A%20%221m30s%22%2C%0A%20%20%20%20%22nodefs.available%22%3A%20%221m30s%22%2C%0A%20%20%20%20%22nodefs.inodesFree%22%3A%20%221m30s%22%0A%20%20%7D%2C%0A%20%20%22evictionPressureTransitionPeriod%22%3A%20%220s%22%2C%0A%20%20%22makeIPTablesUtilChains%22%3A%20true%2C%0A%20%20%22featureGates%22%3A%20%7B%0A%20%20%20%20%22APIPriorityAndFairness%22%3A%20true%2
C%0A%20%20%20%20%22CSIMigrationAWS%22%3A%20false%2C%0A%20%20%20%20%22CSIMigrationAzureDisk%22%3A%20false%2C%0A%20%20%20%20%22CSIMigrationAzureFile%22%3A%20false%2C%0A%20%20%20%20%22CSIMigrationGCE%22%3A%20false%2C%0A%20%20%20%20%22CSIMigrationOpenStack%22%3A%20false%2C%0A%20%20%20%20%22CSIMigrationvSphere%22%3A%20false%2C%0A%20%20%20%20%22DownwardAPIHugePages%22%3A%20true%2C%0A%20%20%20%20%22LegacyNodeRoleBehavior%22%3A%20false%2C%0A%20%20%20%20%22NodeDisruptionExclusion%22%3A%20true%2C%0A%20%20%20%20%22PodSecurity%22%3A%20true%2C%0A%20%20%20%20%22RotateKubeletServerCertificate%22%3A%20true%2C%0A%20%20%20%20%22ServiceNodeExclusion%22%3A%20true%2C%0A%20%20%20%20%22SupportPodPidsLimit%22%3A%20true%0A%20%20%7D%2C%0A%20%20%22memorySwap%22%3A%20%7B%7D%2C%0A%20%20%22containerLogMaxSize%22%3A%20%2250Mi%22%2C%0A%20%20%22systemReserved%22%3A%20%7B%0A%20%20%20%20%22ephemeral-storage%22%3A%20%221Gi%22%0A%20%20%7D%2C%0A%20%20%22logging%22%3A%20%7B%0A%20%20%20%20%22flushFrequency%22%3A%200%2C%0A%20%20%20%20%22verbosity%22%3A%200%2C%0A%20%20%20%20%22options%22%3A%20%7B%0A%20%20%20%20%20%20%22json%22%3A%20%7B%0A%20%20%20%20%20%20%20%20%22infoBufferSize%22%3A%20%220%22%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%2C%0A%20%20%22shutdownGracePeriod%22%3A%20%220s%22%2C%0A%20%20%22shutdownGracePeriodCriticalPods%22%3A%20%220s%22%0A%7D%0A
        mode: 420
        overwrite: true
        path: /etc/kubernetes/kubelet.conf
  extensions: null
  fips: false
  kernelArguments: null
  kernelType: ""
  osImageURL: ""
Version-Release number of selected component (if applicable):
4.10 + Compliance-operator-v0.1.52
How reproducible:
Always

Steps to Reproduce:
1. Install the Compliance Operator on an OSD cluster.
2. Create a ScanSettingBinding in the openshift-compliance namespace (NOTE: it will apply auto remediations via MachineConfigs):
$ oc project openshift-compliance
$ oc apply -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-r
profiles:
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default-auto-apply
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
Actual results:
On an OSD cluster, when trying to apply auto remediation through a ScanSettingBinding for the ocp4-cis and ocp4-cis-node profiles, the MCP pause status is stuck at true because the Compliance Operator fails to check whether kubeletconfig custom-kubelet is a subset of rendered MC 99-worker-generated-kubelet.



Expected results:
When applying auto remediation through a ScanSettingBinding for the ocp4-cis and ocp4-cis-node profiles, the MCPs should be unpaused shortly afterwards and the compliance remediations applied successfully.

Additional info:
It may be related to the below config for /etc/node-sizing-enabled.env:

NODE_SIZING_ENABLED=true
SYSTEM_RESERVED_MEMORY=1Gi
SYSTEM_RESERVED_CPU=500m
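
The linked PR ("BUG 2102511: Fix hardcoded logic by filtering to expected file name") suggests the fix is to select the kubelet payload by its path rather than assuming it is the only file in the rendered MC. A hypothetical sketch of that filtering step (the types, helper names, and inline sample data are illustrative, not the operator's actual API):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// ignFile is an illustrative stand-in for an Ignition storage file entry.
type ignFile struct {
	Path   string
	Source string // data:text/plain,<url-encoded payload>
}

// kubeletPayload returns the decoded contents of /etc/kubernetes/kubelet.conf,
// skipping unrelated files such as /etc/node-sizing-enabled.env.
func kubeletPayload(files []ignFile) (string, error) {
	for _, f := range files {
		if f.Path != "/etc/kubernetes/kubelet.conf" {
			continue
		}
		raw := strings.TrimPrefix(f.Source, "data:text/plain,")
		return url.PathUnescape(raw)
	}
	return "", fmt.Errorf("rendered MC has no /etc/kubernetes/kubelet.conf entry")
}

func main() {
	files := []ignFile{
		{Path: "/etc/node-sizing-enabled.env", Source: "data:text/plain,NODE_SIZING_ENABLED%3Dtrue%0A"},
		{Path: "/etc/kubernetes/kubelet.conf", Source: "data:text/plain,%7B%22eventRecordQPS%22%3A%2010%7D"},
	}
	payload, err := kubeletPayload(files)
	fmt.Println(payload, err) // {"eventRecordQPS": 10} <nil>
}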

Comment 5 xiyuan 2022-09-26 03:16:18 UTC
Hi Jakub, 
Tried to verify with 4.12.0-0.nightly-2022-09-25-071630 and compliance-operator.v0.1.55; the alert still exists. Could you please help double-check? Thanks.
$ token=`oc  create token prometheus-k8s -n openshift-monitoring`
$  oc -n openshift-compliance exec compliance-operator-7489d57b55-6c2j5  -- curl -k -H "Authorization: Bearer $token" 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?' --data-urlencode 'query=ALERTS{alertname="APIRemovedInNextEUSReleaseInUse",resource="cronjobs"}' | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   403    0   308  100    95   1974    608 --:--:-- --:--:-- --:--:--  2583
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "ALERTS",
          "alertname": "APIRemovedInNextEUSReleaseInUse",
          "alertstate": "pending",
          "group": "batch",
          "namespace": "openshift-kube-apiserver",
          "resource": "cronjobs",
          "severity": "info",
          "version": "v1beta1"
        },
        "value": [
          1664161876.437,
          "1"
        ]
      }
    ]
  }
}
$ oc get apirequestcounts cronjobs.v1beta1.batch -o yaml
apiVersion: apiserver.openshift.io/v1
kind: APIRequestCount
metadata:
  creationTimestamp: "2022-09-26T02:35:22Z"
  generation: 1
  name: cronjobs.v1beta1.batch
  resourceVersion: "66324"
  uid: 4ff4faaf-1b43-4ed6-8523-54bbd1d83e66
spec:
  numberOfUsersToReport: 10
status:
  currentHour:
    byNode:
    - byUser:
      - byVerb:
        - requestCount: 1
          verb: watch
        requestCount: 1
        userAgent: compliance-operator/v0.0.0
        username: system:serviceaccount:openshift-compliance:compliance-operator
      nodeName: 10.0.0.6
      requestCount: 1
    requestCount: 1
  last24h:
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - byUser:
      - byVerb:
        - requestCount: 2
          verb: create
        - requestCount: 1
          verb: delete
        - requestCount: 1
          verb: list
        - requestCount: 4
          verb: watch
        requestCount: 8
        userAgent: compliance-operator/v0.0.0
        username: system:serviceaccount:openshift-compliance:compliance-operator
      nodeName: 10.0.0.6
      requestCount: 8
    requestCount: 8
  - byNode:
    - byUser:
      - byVerb:
        - requestCount: 1
          verb: watch
        requestCount: 1
        userAgent: compliance-operator/v0.0.0
        username: system:serviceaccount:openshift-compliance:compliance-operator
      nodeName: 10.0.0.6
      requestCount: 1
    requestCount: 1
  - requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  - byNode:
    - nodeName: 10.0.0.6
      requestCount: 0
    requestCount: 0
  removedInRelease: "1.25"
  requestCount: 9
$ oc explain cronjobs
KIND:     CronJob
VERSION:  batch/v1

DESCRIPTION:
     CronJob represents the configuration of a single cron job.

...

Comment 6 xiyuan 2022-09-26 03:18:09 UTC
Sorry, please ignore https://bugzilla.redhat.com/show_bug.cgi?id=2102511#c5. It is for another bug https://bugzilla.redhat.com/show_bug.cgi?id=2098581

Comment 7 xiyuan 2022-09-29 04:19:14 UTC
Hi Vincent,
The remediations could be applied successfully with 4.11.5 + compliance-operator.v0.1.56 (the latest OSD version is based on 4.11.5).
Generally it looks good.
The only question is why it still uses the existing kubeletconfig; no new kubeletconfig compliance-operator-kubelet-xxx was created.
Is it working as expected? Thanks.
 
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.5    True        False         130m    Cluster version is 4.11.5

$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-rhncj   compliance-operator.v0.1.56   Automatic   true
$ oc get csv
NAME                                      DISPLAY                  VERSION           REPLACES                                  PHASE
compliance-operator.v0.1.56               Compliance Operator      0.1.56                                                      Succeeded
route-monitor-operator.v0.1.422-151be96   Route Monitor Operator   0.1.422-151be96   route-monitor-operator.v0.1.408-c2256a2   Succeeded
$ oc apply -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: test
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-cis
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-cis-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default-auto-apply
EOF
scansettingbinding.compliance.openshift.io/test created
$ oc get suite -w
NAME   PHASE       RESULT
test   LAUNCHING   NOT-AVAILABLE
test   LAUNCHING   NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          NON-COMPLIANT
^C
$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-f176aaeeca17140975a4208e9be91bde   False     True       False      3              0                   0                     0                      129m
worker   rendered-worker-0cbdbabc44618577397e4c3c703fec8f   False     True       False      4              0                   0                     0                      129m
^C             
$ oc get kubeletconfigs.machineconfiguration.openshift.io 
NAME             AGE
custom-kubelet   108m
$ oc get kubeletconfigs.machineconfiguration.openshift.io custom-kubelet -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    machineconfiguration.openshift.io/mc-name-suffix: ""
  creationTimestamp: "2022-09-29T02:01:10Z"
  finalizers:
  - 99-worker-generated-kubelet
  - 99-master-generated-kubelet
  generation: 19
  labels:
    hive.openshift.io/managed: "true"
  name: custom-kubelet
  resourceVersion: "120217"
  uid: f46db966-eee4-4f28-a7d9-97c845224346
spec:
  autoSizingReserved: true
  kubeletConfig:
    evictionHard:
      imagefs.available: 10%
      imagefs.inodesFree: 5%
      memory.available: 200Mi
      nodefs.available: 5%
      nodefs.inodesFree: 4%
    evictionPressureTransitionPeriod: 0s
    evictionSoft:
      imagefs.available: 15%
      imagefs.inodesFree: 10%
      memory.available: 500Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionSoftGracePeriod:
      imagefs.available: 1m30s
      imagefs.inodesFree: 1m30s
      memory.available: 1m30s
      nodefs.available: 1m30s
      nodefs.inodesFree: 1m30s
    streamingConnectionIdleTimeout: 5m0s
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  machineConfigPoolSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/mco-built-in
      operator: Exists
status:
  conditions:
  - lastTransitionTime: "2022-09-29T03:49:26Z"
    message: Success
    status: "True"
    type: Success
$ oc get mc -l compliance.openshift.io/suite=test
NAME                                                           GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-ocp4-cis-node-master-kubelet-enable-protect-kernel-sysctl                           3.1.0             9m7s
75-ocp4-cis-node-worker-kubelet-enable-protect-kernel-sysctl                           3.1.0             9m7s
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-98f138b8996404f337889556551f1f89   True      False      False      3              3                   3                     0                      145m
worker   rendered-worker-ea6756515bf26eca29412dae0f3d65e4   True      False      False      4              4                   4                     0                      145m


$ oc get cr
NAME                                                                 STATE
ocp4-cis-api-server-encryption-provider-cipher                       Applied
ocp4-cis-api-server-encryption-provider-config                       Applied
ocp4-cis-audit-profile-set                                           Applied
ocp4-cis-kubelet-configure-tls-cipher-suites                         Applied
ocp4-cis-kubelet-configure-tls-cipher-suites-1                       Applied
ocp4-cis-kubelet-enable-streaming-connections                        Applied
ocp4-cis-kubelet-enable-streaming-connections-1                      Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available      Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available-1    Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available-2    Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available-3    Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree-1   Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree-2   Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree-3   Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available       Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available-1     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available-2     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available-3     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available       Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available-1     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available-2     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available-3     Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree      Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree-1    Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree-2    Applied
ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree-3    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-available      Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-available-1    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-available-2    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-available-3    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-available-4    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-available-5    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-1   Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-2   Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-3   Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-4   Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-5   Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-memory-available       Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-memory-available-1     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-memory-available-2     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-memory-available-3     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-memory-available-4     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-memory-available-5     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-available       Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-available-1     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-available-2     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-available-3     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-available-4     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-available-5     Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree      Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-1    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-2    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-3    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-4    Applied
ocp4-cis-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-5    Applied
ocp4-cis-node-master-kubelet-enable-protect-kernel-defaults          MissingDependencies
ocp4-cis-node-master-kubelet-enable-protect-kernel-sysctl            Applied
ocp4-cis-node-worker-kubelet-enable-protect-kernel-defaults          MissingDependencies
ocp4-cis-node-worker-kubelet-enable-protect-kernel-sysctl            Applied
$ oc compliance rerun-now scansettingbinding test
Rerunning scans from 'test': ocp4-cis, ocp4-cis-node-master, ocp4-cis-node-worker
Re-running scan 'openshift-compliance/ocp4-cis'
Re-running scan 'openshift-compliance/ocp4-cis-node-master'
Re-running scan 'openshift-compliance/ocp4-cis-node-worker'
$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-98f138b8996404f337889556551f1f89   False     True       False      3              0                   0                     0                      148m
worker   rendered-worker-ea6756515bf26eca29412dae0f3d65e4   False     True       False      4              0                   0                     0                      148m
...

Comment 8 xiyuan 2022-09-29 04:20:24 UTC
As the remaining question in https://bugzilla.redhat.com/show_bug.cgi?id=2102511#c7 is not related to this bug, moving it to VERIFIED. If there is an issue with the existing/new kubeletconfig, a new bug will be raised.

Comment 9 xiyuan 2022-09-29 04:39:30 UTC
Adding the final result for https://bugzilla.redhat.com/show_bug.cgi?id=2102511#c7. After two rounds of remediation and another rescan, all auto-remediations are applied:
$ oc compliance rerun-now scansettingbinding test
Rerunning scans from 'test': ocp4-cis, ocp4-cis-node-master, ocp4-cis-node-worker
Re-running scan 'openshift-compliance/ocp4-cis'
Re-running scan 'openshift-compliance/ocp4-cis-node-master'
Re-running scan 'openshift-compliance/ocp4-cis-node-worker'
$ oc get suite
NAME   PHASE   RESULT
test   DONE    NON-COMPLIANT
$ oc get ccr -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL
No resources found in openshift-compliance namespace.

Comment 10 Vincent Shen 2022-09-29 19:13:35 UTC
(In reply to xiyuan from comment #7)
> Hi Vincent,
> The remediations could be applied successfully with 4.11.5 +
> compliance-operator.v0.1.56 (the latest OSD version is based on 4.11.5).
> Generally it looks good.
> The only question is why it still uses the existing kubeletconfig; no new
> kubeletconfig compliance-operator-kubelet-xxx was created.
> Is it working as expected? Thanks.
> ...

Yes, this is expected. If there is a preexisting KubeletConfig object, we will keep using that one.
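
A minimal sketch of that reuse-vs-create decision (an assumption-level simplification, not the operator's actual remediation code):

package main

import "fmt"

// kubeletConfigName decides whether to reuse an existing KubeletConfig or create a
// new compliance-operator-kubelet-<pool> object (hypothetical, simplified).
func kubeletConfigName(existing []string, pool string) (string, bool) {
	if len(existing) > 0 {
		return existing[0], false // reuse the preexisting object, e.g. custom-kubelet
	}
	return "compliance-operator-kubelet-" + pool, true
}

func main() {
	fmt.Println(kubeletConfigName([]string{"custom-kubelet"}, "worker")) // custom-kubelet false
	fmt.Println(kubeletConfigName(nil, "worker"))                        // compliance-operator-kubelet-worker true
}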

Comment 12 errata-xmlrpc 2022-11-02 16:00:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:6657

