Description of problem:

One of the Compliance Operator (CO) rules, "ocp4-cis-node-worker-file-groupowner-ovs-conf-db", reports that the group owner of the OVS conf.db file is not set as expected. The rule's remediation suggests:

Rule description: |-
  To properly set the group owner of /etc/openvswitch/conf.db, run the command:
  $ sudo chgrp hugetlbfs /etc/openvswitch/conf.db
id: xccdf_org.ssgproject.content_rule_file_groupowner_ovs_conf_db

This issue is not seen on System P.

Version-Release number of selected component (if applicable):
OCP 4.10 and Compliance Operator 0.1.49

How reproducible:
Consistently reproducible.

Steps to Reproduce:
1. On a 4.10 OCP cluster, log in to any node.
2. Check the user and group ownership of /etc/openvswitch/conf.db; it shows as below:
$ ll /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch openvswitch 24930 Apr 6 14:22 /etc/openvswitch/conf.db

Actual results:
$ ll /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch openvswitch 24930 Apr 6 14:22 /etc/openvswitch/conf.db

Expected results:
The group ownership should be "hugetlbfs":
$ ll /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch hugetlbfs 24930 Apr 6 14:22 /etc/openvswitch/conf.db

Additional info:
This issue was found while testing the Compliance Operator: the group ownership is incorrect. Once the group ownership is set to "hugetlbfs", the compliance scan passes the rule.
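For reference, a quick way to spot-check the ownership on every worker node without an interactive debug session is a loop like the one below (a minimal sketch; node names and the exact output are illustrative):

# List owner, group and path of the OVS database on each worker node
for node in $(oc get nodes -l node-role.kubernetes.io/worker= -o name); do
  oc debug "$node" -- chroot /host stat -c '%U %G %n' /etc/openvswitch/conf.db
done
# On the affected s390x nodes this prints "openvswitch openvswitch ...",
# while the rule expects the group to be hugetlbfs.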
Re-assigning to Dan Horak for some evaluation - Hi Dan, could you take a look at the question asked in this Slack thread and offer your thoughts? https://coreos.slack.com/archives/CFFJUNP6C/p1649339828311289
Non-essential users and groups are added by individual packages during their installation. I am not familiar with the openvswitch package, but it should be responsible for adding "openvswitch" as both group and user, and for adding the "hugetlbfs" group. Depending on the version of the package, the "hugetlbfs" group might be added only when openvswitch is built with DPDK support. Without knowing the details, my guess is that openvswitch is built without DPDK on s390x but with DPDK on ppc64le (and other platforms).
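One way to check this on a node (via oc debug + chroot /host) is to see whether the hugetlbfs group exists at all and which openvswitch build is installed; this is a sketch of the kind of commands to run, and the exact package names vary by release:

# Does the hugetlbfs group exist on this node, and is openvswitch defined?
getent group hugetlbfs openvswitch
# Which openvswitch package build is installed?
rpm -qa 'openvswitch*'
# The package scriptlets normally show which users/groups are created at install time
rpm -q --scripts $(rpm -qa 'openvswitch*')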
Re-assigning to the FDP team for evaluation to see if this is expected behavior. Thanks Dan!
Hi Timothy and team, do you think this is expected behavior based on your evaluation? Or is it perhaps a bug in the Compliance Operator?
I don't know why you would want the config files to have the hugetlbfs group: the primary group of the openvswitch user is openvswitch, so any file created by the openvswitch user gets openvswitch as its group (per POSIX).
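This can be checked directly on a node (again via oc debug + chroot /host); a minimal sketch, with the comments stating expectations rather than guaranteed output:

# The gid reported here is the openvswitch user's primary group;
# under POSIX, files it creates inherit that group by default.
id openvswitch
# Compare with the actual ownership of the database file
stat -c '%U:%G %n' /etc/openvswitch/conf.db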
I think we should route this to the compliance team. The initial problem here is that the compliance scan fails on Z because of the group difference discussed above. If the system implementation is correct, we should fix the compliance rule instead.
Thank you for your input, Timothy and Holger. Moving to Compliance Operator team per Comment 6. Please feel free to re-assign back to Multi-Arch if the component is incorrect.
Related conversation: https://coreos.slack.com/archives/CHCRR73PF/p1645639975411149
A fix has been proposed here: https://github.com/ComplianceAsCode/content/pull/8728
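Once a content build containing that fix ships in the operator's ProfileBundle, the rule variants it introduces can be listed from the cluster. This is a sketch assuming the rule names follow the pattern seen later in this bug (ocp4-file-groupowner-ovs-conf-db plus an -s390x counterpart):

# List the ovs-conf-db group-owner rule variants delivered by the content image
oc get rules.compliance.openshift.io -n openshift-compliance | grep ovs-conf-db
# Inspect a specific variant; its check instructions mention the expected group
oc get rules.compliance.openshift.io ocp4-file-groupowner-ovs-conf-db -n openshift-compliance -o yaml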
For compliance-operator.v0.1.53 + OCP 4.11.0-rc.1, the group owner is hugetlbfs for /etc/openvswitch/conf.db:

$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-hksfh   compliance-operator.v0.1.53   Automatic   true

$ oc get csv
NAME                            DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.53     Compliance Operator                0.1.53               Succeeded
elasticsearch-operator.v5.5.0   OpenShift Elasticsearch Operator   5.5.0                Succeeded

$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-rc.1   True        False         7h13m   Cluster version is 4.11.0-rc.1

$ oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:
>   description: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   title: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   enableRules:
>     - name: ocp4-file-groupowner-ovs-conf-db
>       rationale: platform
> EOF
tailoredprofile.compliance.openshift.io/test-node created

$ oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default
> EOF
scansettingbinding.compliance.openshift.io/test created

$ oc get suite -w
NAME   PHASE         RESULT
test   LAUNCHING     NOT-AVAILABLE
test   LAUNCHING     NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          COMPLIANT
test   DONE          COMPLIANT
^C
$ oc get ccr
NAME                                           STATUS   SEVERITY
test-node-master-file-groupowner-ovs-conf-db   PASS     medium
test-node-worker-file-groupowner-ovs-conf-db   PASS     medium

Check instructions:

$ oc get ccr test-node-master-file-groupowner-ovs-conf-db -o=jsonpath={.instructions}
To check the group ownership of /etc/openvswitch/conf.db,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host

Then,run the command:
$ ls -lL /etc/openvswitch/conf.db
If properly configured, the output should indicate the following group-owner: hugetlbfs

[xiyuan@MiWiFi-RA69-srv func]$ oc get ccr test-node-worker-file-groupowner-ovs-conf-db -o=jsonpath={.instructions}
To check the group ownership of /etc/openvswitch/conf.db,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host

Then,run the command:
$ ls -lL /etc/openvswitch/conf.db
If properly configured, the output should indicate the following group-owner:

$ for i in `oc get node -l node-role.kubernetes.io/master= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/ip-10-0-137-93us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 264274 Jul 8 09:00 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/ip-10-0-185-124us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 421801 Jul 8 09:00 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/ip-10-0-221-7us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 412798 Jul 8 09:00 /etc/openvswitch/conf.db
Removing debug pod ...

$ for i in `oc get node -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/ip-10-0-151-137us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 73068 Jul 8 09:00 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/ip-10-0-178-92us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 126478 Jul 8 09:00 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/ip-10-0-214-86us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 137592 Jul 8 09:03 /etc/openvswitch/conf.db
Removing debug pod ...
Posting my findings from the OCP Z 4.10 cluster. The rule is skipped and the scan is marked as NOT-APPLICABLE:

[root@m1319001 Compliance-Operator]# oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-pq59f   compliance-operator.v0.1.53   Automatic   true

[root@m1319001 Compliance-Operator]# oc get csv
NAME                          DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v0.1.53   Compliance Operator   0.1.53               Succeeded

[root@m1319001 Compliance-Operator]# oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:
>   description: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   title: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   enableRules:
>     - name: ocp4-file-groupowner-ovs-conf-db
>       rationale: platform
> EOF
tailoredprofile.compliance.openshift.io/test-node created

[root@m1319001 Compliance-Operator]# oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default
> EOF
scansettingbinding.compliance.openshift.io/test created

[root@m1319001 Compliance-Operator]# oc get suite -w
NAME   PHASE         RESULT
test   LAUNCHING     NOT-AVAILABLE
test   LAUNCHING     NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          NOT-APPLICABLE
test   DONE          NOT-APPLICABLE
^C
[root@m1319001 Compliance-Operator]# oc get ccr
No resources found in openshift-compliance namespace.

[root@m1319001 Compliance-Operator]# for i in `oc get node -l node-role.kubernetes.io/master= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/master-0ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 118911 Jul 10 17:15 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/master-1ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 203011 Jul 10 17:15 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/master-2ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 113934 Jul 10 17:15 /etc/openvswitch/conf.db
Removing debug pod ...

[root@m1319001 Compliance-Operator]# for i in `oc get node -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/bootstrap-0ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 613496 Jul 10 17:20 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/worker-0ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 24555 Jul 10 17:15 /etc/openvswitch/conf.db
Removing debug pod ...
Starting pod/worker-1ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 25653 Jul 10 17:15 /etc/openvswitch/conf.db
Removing debug pod ...
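If it is unclear why no ComplianceCheckResults were produced, the per-scan status can be inspected directly. A sketch, assuming the scans are named after the tailored profile and pool (test-node-master / test-node-worker), as in the results above:

# Overall result of each scan created by the binding
oc get compliancescan -n openshift-compliance
# A NOT-APPLICABLE result means the rule's platform/CPE check filtered it out,
# which is expected for the non-s390x rule variant on IBM Z
oc get compliancescan test-node-worker -n openshift-compliance -o jsonpath='{.status.result}{"\n"}'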
This is working as expected on IBM Z.
Adding test results for more rules with compliance-operator.v0.1.53:

$ oc apply -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: test-node
  namespace: openshift-compliance
spec:
  description: des
  title: title for
  enableRules:
    - name: ocp4-file-groupowner-ovs-conf-db
      rationale: node
    - name: ocp4-file-groupowner-ovs-conf-db-s390x
      rationale: node
    - name: ocp4-file-groupowner-ovs-conf-db-lock
      rationale: node
    - name: ocp4-file-groupowner-ovs-conf-db-lock-s390x
      rationale: node
    - name: ocp4-file-groupowner-ovs-sys-id-conf
      rationale: node
    - name: ocp4-file-groupowner-ovs-sys-id-conf-s390x
      rationale: node
EOF
tailoredprofile.compliance.openshift.io/test-node created

$ oc apply -f-<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: test
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    name: test-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
EOF
scansettingbinding.compliance.openshift.io/test created

$ oc get suite -w
NAME   PHASE         RESULT
test   LAUNCHING     NOT-AVAILABLE
test   LAUNCHING     NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          COMPLIANT
test   DONE          COMPLIANT
^C
$ oc get ccr
NAME                                                STATUS   SEVERITY
test-node-master-file-groupowner-ovs-conf-db        PASS     medium
test-node-master-file-groupowner-ovs-conf-db-lock   PASS     medium
test-node-master-file-groupowner-ovs-sys-id-conf    PASS     medium
test-node-worker-file-groupowner-ovs-conf-db        PASS     medium
test-node-worker-file-groupowner-ovs-conf-db-lock   PASS     medium
test-node-worker-file-groupowner-ovs-sys-id-conf    PASS     medium
Adding test results for all the rules mentioned above from the Z cluster (OCP 4.10):

oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:
>   description: des
>   title: title for
>   enableRules:
>     - name: ocp4-file-groupowner-ovs-conf-db
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-conf-db-s390x
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-conf-db-lock
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-conf-db-lock-s390x
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-sys-id-conf
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-sys-id-conf-s390x
>       rationale: node
> EOF
tailoredprofile.compliance.openshift.io/test-node created

[root@m1319001 ~]# oc apply -f-<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default
> EOF
scansettingbinding.compliance.openshift.io/test created

[root@m1319001 ~]# oc get suite -w
NAME   PHASE         RESULT
test   LAUNCHING     NOT-AVAILABLE
test   LAUNCHING     NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   RUNNING       NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          COMPLIANT
test   DONE          COMPLIANT
^C
[root@m1319001 ~]# oc get ccr
NAME                                                      STATUS   SEVERITY
test-node-master-file-groupowner-ovs-conf-db-lock-s390x   PASS     medium
test-node-master-file-groupowner-ovs-conf-db-s390x        PASS     medium
test-node-master-file-groupowner-ovs-sys-id-conf-s390x    PASS     medium
test-node-worker-file-groupowner-ovs-conf-db-lock-s390x   PASS     medium
test-node-worker-file-groupowner-ovs-conf-db-s390x        PASS     medium
test-node-worker-file-groupowner-ovs-sys-id-conf-s390x    PASS     medium
[root@m1319001 ~]#
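As a cross-check, the expected group owner for an -s390x variant can be read from its check instructions, mirroring the jsonpath query used in the earlier verification; the rule name below is taken from the results above, and the expectation that it names openvswitch (rather than hugetlbfs) on s390x is my assumption based on this bug:

$ oc get ccr test-node-worker-file-groupowner-ovs-conf-db-s390x -o=jsonpath={.instructions}
# The instructions should end with the group-owner the rule expects on s390x
# (openvswitch), matching the actual ownership shown in the node output above.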
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:5537