Bug 2072597 - Group ownership for ovs config file is not properly set on Z
Summary: Group ownership for ovs config file is not properly set on Z
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Vincent Shen
QA Contact: xiyuan
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-04-06 15:13 UTC by rishika.kedia
Modified: 2022-07-14 12:41 UTC
12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Group ownership for /etc/openvswitch/conf.db is incorrect on Z architecture.
Consequence: The ocp4-cis-node-worker-file-groupowner-ovs-conf-db check fails.
Fix: Consume the 0.1.53 version of the Compliance Operator and content.
Result: The check is marked as not-applicable on Z architecture systems.
Clone Of:
Environment:
Last Closed: 2022-07-14 12:40:58 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Github ComplianceAsCode content pull 8728 0 None open OCP: update rules for s390x 2022-05-10 16:07:54 UTC
Red Hat Issue Tracker FD-1882 0 None None None 2022-04-08 20:23:30 UTC
Red Hat Issue Tracker MULTIARCH-2463 0 None None None 2022-04-06 15:40:01 UTC
Red Hat Product Errata RHBA-2022:5537 0 None None None 2022-07-14 12:41:05 UTC

Description rishika.kedia 2022-04-06 15:13:57 UTC
Description of problem:
One of the Compliance Operator (CO) rules, "ocp4-cis-node-worker-file-groupowner-ovs-conf-db", reports that group ownership is not properly set for the OVS config file.
 
The remediation text for the rule suggests:
Rule description: |-
  To properly set the group owner of /etc/openvswitch/conf.db, run the command:


  $ sudo chgrp hugetlbfs /etc/openvswitch/conf.db
id: xccdf_org.ssgproject.content_rule_file_groupowner_ovs_conf_db

This issue is not seen on System P.
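
For reference, a minimal way to check the current owner and group directly on a node is sketched below; <node-name> is a placeholder for any master or worker node.

$ oc debug node/<node-name> -- chroot /host stat -c '%U:%G %n' /etc/openvswitch/conf.db
# expected on most architectures: openvswitch:hugetlbfs
# observed on s390x:              openvswitch:openvswitch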

Version-Release number of selected component (if applicable):
OCP 4.10 and Compliance Operator 0.1.49

How reproducible:
Consistently reproducible.

Steps to Reproduce:
1. On a 4.10 OCP cluster (on IBM Z),
2. log in to any node and check the user and group ownership of /etc/openvswitch/conf.db - it shows as below:

$  ll /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch openvswitch 24930 Apr  6 14:22 /etc/openvswitch/conf.db


Actual results:
$  ll /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch openvswitch 24930 Apr  6 14:22 /etc/openvswitch/conf.db

Expected results:
Group ownership should be "hugetlbfs":

$  ll /etc/openvswitch/conf.db
-rw-r-----. 1 openvswitch hugetlbfs 24930 Apr  6 14:22 /etc/openvswitch/conf.db

Additional info:
This issue was found during Compliance Operator testing: the group ownership is incorrect. Once the group ownership is set to "hugetlbfs", the compliance scan passes the rule.

Comment 1 Dan Li 2022-04-08 16:36:22 UTC
Re-assigning to Dan Horak for some evaluation - Hi Dan, could you take a look at the question asked in this Slack thread and offer your thoughts? https://coreos.slack.com/archives/CFFJUNP6C/p1649339828311289

Comment 2 Dan Horák 2022-04-08 17:22:13 UTC
Non-essential users and groups are added by individual packages during their installation. I am not familiar with the openvswitch package, but it should be responsible for adding "openvswitch" as both group and user and for adding the "hugetlbfs" group. Depending on the version of the package the "hugetlbfs" group might be added only when openvswitch is built with DPDK support. Without knowing/having the details I guess openvswitch is built without DPDK on s390x, but is built with DPDK on ppc64le (and other platforms).
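
A rough way to confirm this on a node (a sketch; run from a debug shell after `chroot /host`, and note that the exact openvswitch package name varies between OCP releases):

$ getent group hugetlbfs        # prints nothing if the hugetlbfs group was never created
$ getent passwd openvswitch     # shows the openvswitch user and its primary group ID
$ rpm -qa 'openvswitch*'        # identifies the installed openvswitch build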

Comment 3 Jeremy Poulin 2022-04-08 20:17:03 UTC
Re-assigning to the FDP team for evaluation to see if this is expected behavior. Thanks Dan!

Comment 4 Dan Li 2022-04-20 12:11:18 UTC
Hi Timothy and team, based on your evaluation, do you think this bug exhibits expected behavior, or is it perhaps a bug in the Compliance Operator?

Comment 5 Timothy Redaelli 2022-04-26 08:55:39 UTC
I don't know why you expect the config files to have the hugetlbfs group: the primary group of the openvswitch user is openvswitch, so any file created by the openvswitch user gets openvswitch as its group (per POSIX semantics).
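
In other words, a newly created file inherits the creating process's primary group unless the directory is setgid or something runs chgrp afterwards. A quick illustration from a node shell (added as a sketch, not part of the original comment):

$ id openvswitch
# the gid (primary group) is openvswitch; hugetlbfs, where it exists, typically shows up
# only as a supplementary group, so files written by ovsdb-server keep group openvswitch
# unless they are explicitly chgrp'ed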

Comment 6 Holger Wolf 2022-04-26 13:44:52 UTC
I think we should route this to the compliance team. The initial problem here is that the compliance scan fails on Z due to the group difference discussed here.
If the system implementation is correct, we should fix the compliance rule instead.

Comment 7 Dan Li 2022-04-26 13:51:26 UTC
Thank you for your input, Timothy and Holger.

Moving to Compliance Operator team per Comment 6. Please feel free to re-assign back to Multi-Arch if the component is incorrect.

Comment 8 Vincent Shen 2022-05-10 16:07:26 UTC
related conversation: https://coreos.slack.com/archives/CHCRR73PF/p1645639975411149

A fix patch has been proposed here: https://github.com/ComplianceAsCode/content/pull/8728
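
As a quick cross-check of which rule variant should apply, the node architecture can be read from the standard Kubernetes nodeInfo field (a sketch, not part of the fix itself):

$ oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'
# nodes reporting "s390x" are covered by the s390x-specific rule variants from the patch,
# while the original hugetlbfs-based check continues to apply on other architectures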

Comment 14 xiyuan 2022-07-08 09:10:01 UTC
For compliance-operator.v0.1.53  + OCP 4.11.0-rc.1, the group owner is hugetlbfs for /etc/openvswitch/conf.db:
$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-hksfh   compliance-operator.v0.1.53   Automatic   true
$ oc get csv
NAME                            DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.53     Compliance Operator                0.1.53               Succeeded
elasticsearch-operator.v5.5.0   OpenShift Elasticsearch Operator   5.5.0                Succeeded
$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-rc.1   True        False         7h13m   Cluster version is 4.11.0-rc.1

$ oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:                                         
>   description: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   title: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   enableRules:
>     - name: ocp4-file-groupowner-ovs-conf-db
>       rationale: platform
> EOF
tailoredprofile.compliance.openshift.io/test-node created

$ oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default
> EOF
scansettingbinding.compliance.openshift.io/test created
$ oc get suite -w
NAME   PHASE       RESULT
test   LAUNCHING   NOT-AVAILABLE
test   LAUNCHING   NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          COMPLIANT
test   DONE          COMPLIANT
^C$ oc get ccr
NAME                                           STATUS   SEVERITY
test-node-master-file-groupowner-ovs-conf-db   PASS     medium
test-node-worker-file-groupowner-ovs-conf-db   PASS     medium

Check instructions:
$ oc get ccr test-node-master-file-groupowner-ovs-conf-db -o=jsonpath={.instructions}
To check the group ownership of /etc/openvswitch/conf.db,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /etc/openvswitch/conf.db
If properly configured, the output should indicate the following group-owner:
hugetlbfs

[xiyuan@MiWiFi-RA69-srv func]$ oc get ccr test-node-worker-file-groupowner-ovs-conf-db -o=jsonpath={.instructions}
To check the group ownership of /etc/openvswitch/conf.db,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:

$ oc debug node/$NODE_NAME

At the sh-4.4# prompt, run:

# chroot /host


Then, run the command:
$ ls -lL /etc/openvswitch/conf.db
If properly configured, the output should indicate the following group-owner:
hugetlbfs

$ for i in `oc get node -l node-role.kubernetes.io/master= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/ip-10-0-137-93us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 264274 Jul  8 09:00 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/ip-10-0-185-124us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 421801 Jul  8 09:00 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/ip-10-0-221-7us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 412798 Jul  8 09:00 /etc/openvswitch/conf.db

Removing debug pod ...

$ for i in `oc get node -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/ip-10-0-151-137us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 73068 Jul  8 09:00 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/ip-10-0-178-92us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 126478 Jul  8 09:00 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/ip-10-0-214-86us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch hugetlbfs 137592 Jul  8 09:03 /etc/openvswitch/conf.db

Removing debug pod ...

Comment 15 Sanidhya 2022-07-10 17:22:40 UTC
Posting my findings from the OCP Z 4.10 cluster. The rule is skipped and the scan is marked as NOT-APPLICABLE.

[root@m1319001 Compliance-Operator]# oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-pq59f   compliance-operator.v0.1.53   Automatic   true
[root@m1319001 Compliance-Operator]# oc get csv
NAME                          DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v0.1.53   Compliance Operator   0.1.53               Succeeded
[root@m1319001 Compliance-Operator]# oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:
>   description: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   title: set value for ocp4-nerc-cip-oauth-or-oauthclient-inactivity-timeout
>   enableRules:
>      - name: ocp4-file-groupowner-ovs-conf-db
>        rationale: platform
> EOF

tailoredprofile.compliance.openshift.io/test-node created

[root@m1319001 Compliance-Operator]# oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default
> EOF
scansettingbinding.compliance.openshift.io/test created
[root@m1319001 Compliance-Operator]# oc get suite -w
NAME   PHASE       RESULT
test   LAUNCHING   NOT-AVAILABLE
test   LAUNCHING   NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          NOT-APPLICABLE
test   DONE          NOT-APPLICABLE
^C[root@m1319001 Compliance-Operator]# oc get ccr
No resources found in openshift-compliance namespace.


[root@m1319001 Compliance-Operator]# for i in `oc get node -l node-role.kubernetes.io/master= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/master-0ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 118911 Jul 10 17:15 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/master-1ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 203011 Jul 10 17:15 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/master-2ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 113934 Jul 10 17:15 /etc/openvswitch/conf.db

Removing debug pod ...

[root@m1319001 Compliance-Operator]# for i in `oc get node -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host ls -lL /etc/openvswitch/conf.db; done
Starting pod/bootstrap-0ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 613496 Jul 10 17:20 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/worker-0ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 24555 Jul 10 17:15 /etc/openvswitch/conf.db

Removing debug pod ...
Starting pod/worker-1ocp-m1319001lnxero1boe-debug ...
To use host binaries, run `chroot /host`
-rw-r-----. 1 openvswitch openvswitch 25653 Jul 10 17:15 /etc/openvswitch/conf.db

Removing debug pod ...

Comment 16 rishika.kedia 2022-07-11 10:48:15 UTC
The fix works as expected for IBM Z.

Comment 18 xiyuan 2022-07-11 13:56:45 UTC
Adding test results for more rules with compliance-operator.v0.1.53:
$ oc apply -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: test-node
  namespace: openshift-compliance
spec:                                         
  description: des
  title: title for 
  enableRules:
    - name: ocp4-file-groupowner-ovs-conf-db
      rationale: node
    - name: ocp4-file-groupowner-ovs-conf-db-s390x
      rationale: node
    - name: ocp4-file-groupowner-ovs-conf-db-lock
      rationale: node
    - name: ocp4-file-groupowner-ovs-conf-db-lock-s390x
      rationale: node
    - name: ocp4-file-groupowner-ovs-sys-id-conf
      rationale: node
    - name: ocp4-file-groupowner-ovs-sys-id-conf-s390x
      rationale: node
EOF
tailoredprofile.compliance.openshift.io/test-node created

$ oc apply -f-<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: test
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    name: test-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
EOF
scansettingbinding.compliance.openshift.io/test created
$ oc get suite -w
NAME   PHASE       RESULT
test   LAUNCHING   NOT-AVAILABLE
test   LAUNCHING   NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   RUNNING     NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   AGGREGATING   NOT-AVAILABLE
test   DONE          COMPLIANT
test   DONE          COMPLIANT
^C$ oc get ccr
NAME                                                STATUS   SEVERITY
test-node-master-file-groupowner-ovs-conf-db        PASS     medium
test-node-master-file-groupowner-ovs-conf-db-lock   PASS     medium
test-node-master-file-groupowner-ovs-sys-id-conf    PASS     medium
test-node-worker-file-groupowner-ovs-conf-db        PASS     medium
test-node-worker-file-groupowner-ovs-conf-db-lock   PASS     medium
test-node-worker-file-groupowner-ovs-sys-id-conf    PASS     medium
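
If needed, the rule variants shipped in the profile bundle can also be listed from the Rule objects the operator creates (a sketch; assumes the default openshift-compliance namespace):

$ oc get rules.compliance.openshift.io -n openshift-compliance | grep ovs-conf-db
# after updating to the 0.1.53 content this should list both the generic
# ocp4-file-groupowner-ovs-conf-db* rules and their *-s390x variants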

Comment 19 Sanidhya 2022-07-12 09:23:44 UTC
Adding test results for all the rules mentioned above from the Z cluster on 4.10:

oc apply -f -<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:                                         
>   description: des
>   title: title for 
>   enableRules:
>     - name: ocp4-file-groupowner-ovs-conf-db
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-conf-db-s390x
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-conf-db-lock
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-conf-db-lock-s390x
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-sys-id-conf
>       rationale: node
>     - name: ocp4-file-groupowner-ovs-sys-id-conf-s390x
>       rationale: node
> EOF
tailoredprofile.compliance.openshift.io/test-node created
[root@m1319001 ~]# oc apply -f-<<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default
> EOF
scansettingbinding.compliance.openshift.io/test created
[root@m1319001 ~]# oc get suite -w
NAME            PHASE       RESULT
test            LAUNCHING   NOT-AVAILABLE
test            LAUNCHING   NOT-AVAILABLE
test            RUNNING     NOT-AVAILABLE
test            RUNNING     NOT-AVAILABLE
test            AGGREGATING   NOT-AVAILABLE
test            AGGREGATING   NOT-AVAILABLE
test            DONE          COMPLIANT
test            DONE          COMPLIANT
^C[root@m1319001 ~]# oc get ccr
NAME                                                      STATUS   SEVERITY
test-node-master-file-groupowner-ovs-conf-db-lock-s390x   PASS     medium
test-node-master-file-groupowner-ovs-conf-db-s390x        PASS     medium
test-node-master-file-groupowner-ovs-sys-id-conf-s390x    PASS     medium
test-node-worker-file-groupowner-ovs-conf-db-lock-s390x   PASS     medium
test-node-worker-file-groupowner-ovs-conf-db-s390x        PASS     medium
test-node-worker-file-groupowner-ovs-sys-id-conf-s390x    PASS     medium
[root@m1319001 ~]#

Comment 20 errata-xmlrpc 2022-07-14 12:40:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5537

