Bug 2094382 - Auto remediation does not work for rules rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.12.0
Assignee: Vincent Shen
QA Contact: xiyuan
Docs Contact: Jeana Routh
URL:
Whiteboard:
Duplicates: 2101353
Depends On:
Blocks:
 
Reported: 2022-06-07 13:55 UTC by xiyuan
Modified: 2022-12-22 19:33 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
* Previously, applying automatic remediation for the `rhcos4-high-master-sysctl-kernel-yama-ptrace-scope` and `rhcos4-sysctl-kernel-core-pattern` rules resulted in subsequent failures of those rules in scan results, even though they were remediated. The issue is fixed in this release. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2094382[*BZ#2094382*])
Clone Of:
Environment:
Last Closed: 2022-07-14 12:40:58 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub: ComplianceAsCode/content pull 8944 (open): [WIP]: Fix rule sysctl_kernel_yama_ptrace_scope and sysctl_kernel_core_pattern (last updated 2022-06-14 07:07:35 UTC)
- Red Hat Product Errata: RHBA-2022:5537 (last updated 2022-07-14 12:41:05 UTC)

Description xiyuan 2022-06-07 13:55:15 UTC
Description of problem:
Auto remediation is available for the rules rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern. However, after the auto remediation was applied, the scans for sysctl-kernel-yama-ptrace-scope and sysctl-kernel-core-pattern still returned FAIL on both master and worker nodes, as seen below:
$ oc get ccr -l compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation= 
NAME                                                 STATUS   SEVERITY
rhcos4-high-master-sysctl-kernel-yama-ptrace-scope   FAIL     medium
rhcos4-high-worker-sysctl-kernel-yama-ptrace-scope   FAIL     medium
rhcos4-high-master-sysctl-kernel-core-pattern        FAIL     medium
rhcos4-high-worker-sysctl-kernel-core-pattern        FAIL     medium


$ oc get ccr rhcos4-high-master-sysctl-kernel-yama-ptrace-scope -o=jsonpath={.instructions}
The runtime status of the kernel.yama.ptrace_scope kernel parameter can be queried
by running the following command:
$ sysctl kernel.yama.ptrace_scope
The output of the command should indicate a value of 1.
The preferable way how to assure the runtime compliance is to have
correct persistent configuration, and rebooting the system.

The persistent kernel parameter configuration is performed by specifying the appropriate
assignment in any file located in the /etc/sysctl.d directory.
Verify that there is not any existing incorrect configuration by executing the following command:
$ grep -r '^\s*kernel.yama.ptrace_scope\s*=' /etc/sysctl.conf /etc/sysctl.d
If any assignments other than
kernel.yama.ptrace_scope = 1
are found, or the correct assignment is duplicated, remove those offending lines from respective files,
and make sure that exactly one file in
/etc/sysctl.d contains kernel.yama.ptrace_scope = 1, and that one assignment
is returned when
$ grep -r kernel.yama.ptrace_scope /etc/sysctl.conf /etc/sysctl.d
is executed
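
As a quick local sanity check (a sketch using a throwaway temp directory, not the scanner's actual logic), the verification grep above tolerates both spellings of the assignment, with and without spaces around `=`, since `\s*` matches zero or more whitespace characters:

```shell
# Sketch: GNU grep's \s* allows zero or more spaces around '=', so both
# spellings of the sysctl assignment match the check's verification grep.
# A temp dir stands in for /etc/sysctl.d; illustrative only.
tmp=$(mktemp -d)
printf 'kernel.yama.ptrace_scope = 1\n' > "$tmp/a.conf"
printf 'kernel.yama.ptrace_scope=1\n'  > "$tmp/b.conf"
grep -r '^\s*kernel.yama.ptrace_scope\s*=' "$tmp"
rm -rf "$tmp"
```

Both files are reported, which is consistent with the remediation-written file (`kernel.yama.ptrace_scope=1`, no spaces) passing the manual verification steps even while the scan itself still failed.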



$ oc debug node/xiyuan07-v-f8jcq-master-0 -- chroot /host sysctl kernel.yama.ptrace_scope
Starting pod/xiyuan07-v-f8jcq-master-0-debug ...
To use host binaries, run `chroot /host`
kernel.yama.ptrace_scope = 1

Removing debug pod ...


$ oc debug node/xiyuan07-v-f8jcq-master-0 -- chroot /host grep -r '^\s*kernel.yama.ptrace_scope\s*=' /etc/sysctl.conf /etc/sysctl.d
Starting pod/xiyuan07-v-f8jcq-master-0-debug ...
To use host binaries, run `chroot /host`
/etc/sysctl.d/75-sysctl_kernel_yama_ptrace_scope.conf:kernel.yama.ptrace_scope=1

Removing debug pod ...

$ oc get mc 75-rhcos4-high-master-sysctl-kernel-yama-ptrace-scope -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  annotations:
    compliance.openshift.io/remediation: ""
  creationTimestamp: "2022-06-07T05:46:46Z"
  generation: 1
  labels:
    compliance.openshift.io/scan-name: rhcos4-high-master
    compliance.openshift.io/suite: high-compliance
    machineconfiguration.openshift.io/role: master
  name: 75-rhcos4-high-master-sysctl-kernel-yama-ptrace-scope
  resourceVersion: "202871"
  uid: 5f1ab06c-8323-40f2-919e-edd0d4b22074
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,kernel.yama.ptrace_scope%3D1%0A
        mode: 420
        overwrite: true
        path: /etc/sysctl.d/75-sysctl_kernel_yama_ptrace_scope.conf

$ oc get ccr rhcos4-high-master-sysctl-kernel-core-pattern -o=jsonpath={.instructions}
The runtime status of the kernel.core_pattern kernel parameter can be queried
by running the following command:
$ sysctl kernel.core_pattern
The output of the command should indicate a value of |/bin/false.
The preferable way how to assure the runtime compliance is to have
correct persistent configuration, and rebooting the system.

The persistent kernel parameter configuration is performed by specifying the appropriate
assignment in any file located in the /etc/sysctl.d directory.
Verify that there is not any existing incorrect configuration by executing the following command:
$ grep -r '^\s*kernel.core_pattern\s*=' /etc/sysctl.conf /etc/sysctl.d
If any assignments other than
kernel.core_pattern = |/bin/false
are found, or the correct assignment is duplicated, remove those offending lines from respective files,
and make sure that exactly one file in
/etc/sysctl.d contains kernel.core_pattern = |/bin/false, and that one assignment
is returned when
$ grep -r kernel.core_pattern /etc/sysctl.conf /etc/sysctl.d
is executed.

$ oc debug node/xiyuan07-v-f8jcq-master-0 -- chroot /host sysctl kernel.core_pattern
Starting pod/xiyuan07-v-f8jcq-master-0-debug ...
To use host binaries, run `chroot /host`
kernel.core_pattern = |/bin/false

Removing debug pod ...


$ oc debug node/xiyuan07-v-f8jcq-master-0 -- chroot /host  grep -r '^\s*kernel.core_pattern\s*=' /etc/sysctl.conf /etc/sysctl.d
Starting pod/xiyuan07-v-f8jcq-master-0-debug ...
To use host binaries, run `chroot /host`
/etc/sysctl.d/75-sysctl_kernel_core_pattern.conf:kernel.core_pattern = |/bin/false

Removing debug pod ...

$ oc get mc 75-rhcos4-high-master-sysctl-kernel-core-pattern -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  annotations:
    compliance.openshift.io/remediation: ""
  creationTimestamp: "2022-06-07T05:47:02Z"
  generation: 1
  labels:
    compliance.openshift.io/scan-name: rhcos4-high-master
    compliance.openshift.io/suite: high-compliance
    machineconfiguration.openshift.io/role: master
  name: 75-rhcos4-high-master-sysctl-kernel-core-pattern
  resourceVersion: "203190"
  uid: a4b6fd2d-47a6-46da-a151-cd6b8e2caa01
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,kernel.core_pattern%20%3D%20%7C/bin/false%0A
        mode: 420
        overwrite: true
        path: /etc/sysctl.d/75-sysctl_kernel_core_pattern.conf
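
The `data:` URLs in the two remediation MachineConfigs are percent-encoded. Decoding them shows exactly what lands on disk (a bash sketch; `decode` is a throwaway helper written here, not part of any tooling, and it relies on bash's `printf %b` understanding `\xHH` escapes):

```shell
# Throwaway helper: strip the "data:," prefix, then turn %HH percent-escapes
# into \xHH sequences that bash's printf %b expands to raw bytes.
decode() {
  local s=${1#data:,}
  s=${s//%/\\x}
  printf '%b' "$s"
}
decode 'data:,kernel.yama.ptrace_scope%3D1%0A'               # from the ptrace MC
decode 'data:,kernel.core_pattern%20%3D%20%7C/bin/false%0A'  # from the core_pattern MC
```

The first payload decodes to `kernel.yama.ptrace_scope=1` (no spaces around `=`) and the second to `kernel.core_pattern = |/bin/false`, matching the file contents observed in the debug-pod greps.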

Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-06-06-025509 + compliance-operator v0.1.52

How reproducible:
Always

Steps to Reproduce:
1. Install the Compliance Operator.
2. Create a ScanSettingBinding that uses the default-auto-apply ScanSetting:
oc apply -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: high-compliance
profiles:
  - name: rhcos4-high
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default-auto-apply
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
3. Wait until the scan is done and the cluster reboot has finished, then check whether any remediations have unmet dependencies:
$ oc get cr -l compliance.openshift.io/has-unmet-dependencies=
If any are listed, trigger another round of remediation with the command below:
$ oc compliance rerun-now scansettingbinding high-compliance
Rerunning scans from 'high-compliance': rhcos4-high-master, rhcos4-high-worker
Re-running scan 'openshift-compliance/rhcos4-high-master'
Re-running scan 'openshift-compliance/rhcos4-high-worker'
4. When the cluster reboot is done, trigger a rescan:
$ oc compliance rerun-now scansettingbinding high-compliance

Actual results:
Auto remediation is available for the rules rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern. However, after the auto remediation was applied, the scans for sysctl-kernel-yama-ptrace-scope and sysctl-kernel-core-pattern still returned FAIL on both master and worker nodes, as seen below:
$ oc get ccr -l compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation= 
NAME                                                 STATUS   SEVERITY
rhcos4-high-master-sysctl-kernel-yama-ptrace-scope   FAIL     medium
rhcos4-high-worker-sysctl-kernel-yama-ptrace-scope   FAIL     medium
rhcos4-high-master-sysctl-kernel-core-pattern        FAIL     medium
rhcos4-high-worker-sysctl-kernel-core-pattern        FAIL     medium
See the description above for more details.

Expected results:
After auto remediation is applied, the scans for rules sysctl-kernel-yama-ptrace-scope and sysctl-kernel-core-pattern should return PASS on both master and worker nodes.

Comment 2 Vincent Shen 2022-06-28 14:45:32 UTC
*** Bug 2101353 has been marked as a duplicate of this bug. ***

Comment 5 xiyuan 2022-07-08 05:39:37 UTC
Verification passed with Compliance Operator v0.1.53 and OCP 4.11.0-rc.1:
$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-hksfh   compliance-operator.v0.1.53   Automatic   true
$ oc get csv
NAME                            DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.53     Compliance Operator                0.1.53               Succeeded
elasticsearch-operator.v5.5.0   OpenShift Elasticsearch Operator   5.5.0                Succeeded
$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-rc.1   True        False         3h45m   Cluster version is 4.11.0-rc.1

$ oc apply -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: tp1
profiles:
  - name: mod-node
    kind: TailoredProfile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef: 
  name: default-auto-apply
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF

scansettingbinding.compliance.openshift.io/tp1 created
$ oc get suite
NAME   PHASE       RESULT
tp1    LAUNCHING   NOT-AVAILABLE
$ oc get suite -w
NAME   PHASE       RESULT
tp1    LAUNCHING   NOT-AVAILABLE
tp1    LAUNCHING   NOT-AVAILABLE
tp1    RUNNING     NOT-AVAILABLE
tp1    RUNNING     NOT-AVAILABLE
tp1    AGGREGATING   NOT-AVAILABLE
tp1    AGGREGATING   NOT-AVAILABLE
tp1    DONE          NON-COMPLIANT
tp1    DONE          NON-COMPLIANT
$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       False      3              0                   0                     0                      3h52m
worker   rendered-worker-6c2fffbb70d45659bce31d596159ca70   False     True       False      3              1                   1                     0                      3h52m
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       False      3              0                   1                     0                      3h53m
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       True       3              0                   0                     1                      3h53m
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       False      3              1                   1                     0                      3h53m
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       False      3              1                   1                     0                      3h53m
worker   rendered-worker-6c2fffbb70d45659bce31d596159ca70   False     True       False      3              2                   2                     0                      3h53m
worker   rendered-worker-adaa1a35a2f5a4e7817d65eee69da29e   True      False      False      3              3                   3                     0                      3h59m
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       False      3              1                   2                     0                      3h59m
master   rendered-master-80d4713e574127cdabcd36fc602e5325   False     True       False      3              2                   2                     0                      3h59m
...
master   rendered-master-6dceaacdc42f245350f1068c13fa68f0   True      False      False      3              3                   3                     0                      4h4m
worker   rendered-worker-adaa1a35a2f5a4e7817d65eee69da29e   True      False      False      3              3                   3                     0                      4h4m
$ oc compliance rerun-now scansettingbinding tp1
Rerunning scans from 'tp1': mod-node-master, mod-node-worker
Re-running scan 'openshift-compliance/mod-node-master'
Re-running scan 'openshift-compliance/mod-node-worker'
$ oc get suite -w
NAME   PHASE       RESULT
tp1    LAUNCHING   NOT-AVAILABLE
tp1    RUNNING     NOT-AVAILABLE
tp1    RUNNING     NOT-AVAILABLE
tp1    AGGREGATING   NOT-AVAILABLE
tp1    AGGREGATING   NOT-AVAILABLE
tp1    DONE          COMPLIANT
tp1    DONE          COMPLIANT
$ oc get ccr
NAME                                              STATUS   SEVERITY
mod-node-master-sysctl-kernel-core-pattern        PASS     medium
mod-node-master-sysctl-kernel-yama-ptrace-scope   PASS     medium
mod-node-worker-sysctl-kernel-core-pattern        PASS     medium
mod-node-worker-sysctl-kernel-yama-ptrace-scope   PASS     medium

Comment 7 errata-xmlrpc 2022-07-14 12:40:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5537

