Bug 2167304

Summary: [4.12 clone] [rook clone] Security and VA issues with ODF operator
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Nitin Goyal <nigoyal>
Component: rook
Assignee: Subham Rai <srai>
Status: CLOSED ERRATA
QA Contact: Daniel Osypenko <dosypenk>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.10
CC: akgunjal, ebenahar, kramdoss, mrajanna, muagarwa, nbecker, nberry, ocs-bugs, odf-bz-bot, security-response-team, shaali, srai, tnielsen, uchapaga
Target Milestone: ---
Keywords: Reopened, Security
Target Release: ODF 4.12.3
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: 2166417
Environment:
Last Closed: 2023-05-23 09:17:28 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2166417, 2167308, 2180725    
Bug Blocks:    

Comment 3 Nitin Goyal 2023-02-07 05:02:28 UTC
Travis, please close the bug if you believe we have already resolved the issues that we could.

Comment 4 Travis Nielsen 2023-02-07 15:16:41 UTC
Closing, since all issues are either addressed or by design.

Comment 5 Mudit Agarwal 2023-02-08 03:16:50 UTC
Travis, 

I guess https://github.com/rook/rook/pull/11219 would have gone in 4.12 as well. Please check.
Also, is it possible to backport it to 4.10/4.11?

Reopening the bug to assess the backport part.

Comment 6 Subham Rai 2023-02-08 05:28:15 UTC
(In reply to Mudit Agarwal from comment #5)
> Travis, 
> 
> I guess https://github.com/rook/rook/pull/11219 would have gone in 4.12 as
> well. Please check.
> Also, is it possible to backport it to 4.10/4.11?
> 
> Reopening the bug to assess the backport part.

It seems the fix is not in 4.12:
https://github.com/red-hat-storage/rook/blob/release-4.12/pkg/operator/ceph/cluster/crash/crash.go#L265

I'll create a manual backport to 4.12.
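
A rough way to confirm whether the upstream change has reached the downstream branch is to inspect the history of crash.go in a clone of red-hat-storage/rook; this is just a sketch and assumes the release-4.12 branch is fetched locally:

    # Sketch: assumes a local clone of red-hat-storage/rook with the
    # downstream remote configured as "origin".
    git fetch origin release-4.12
    # List commits touching crash.go on the downstream branch; the change
    # from rook/rook#11219 should appear here once it is backported.
    git log --oneline origin/release-4.12 -- pkg/operator/ceph/cluster/crash/crash.go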

Comment 14 Subham Rai 2023-04-21 07:43:55 UTC
No RDT (doc text) is required.

Comment 19 Subham Rai 2023-04-27 10:51:03 UTC
Check the securityContext of the `ceph-crash` container inside the `rook-ceph-crashcollector` pod. It should be set to 167, the ceph user ID.
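
For example, the container securityContext can be read with oc; this is a sketch that assumes ODF runs in the openshift-storage namespace and that the crash collector pods carry the app=rook-ceph-crashcollector label:

    # Sketch: the namespace and label selector are assumptions, adjust as needed.
    oc -n openshift-storage get pods -l app=rook-ceph-crashcollector \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[?(@.name=="ceph-crash")].securityContext}{"\n"}{end}'
    # runAsUser and runAsGroup should both report 167 (the ceph user ID).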

Comment 20 Daniel Osypenko 2023-05-08 18:29:09 UTC
(In reply to Subham Rai from comment #19)
> Check the securityContext of the `ceph-crash` container inside the
> `rook-ceph-crashcollector` pod. It should be set to 167, the ceph user ID.

Checked the securityContext of the rook-ceph-crashcollector pod:

    securityContext:
      privileged: true
      runAsGroup: 167
      runAsNonRoot: true
      runAsUser: 167


Version details: 

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-05-04-090524
Kubernetes Version: v1.26.3+b404935

OCS version:
ocs-operator.v4.12.3-rhodf              OpenShift Container Storage   4.12.3-rhodf   ocs-operator.v4.12.2-rhodf              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-05-04-090524   True        False         4h4m    Cluster version is 4.13.0-0.nightly-2023-05-04-090524

Rook version:
rook: v4.12.3-0.669491b168239d162daa0b7066531d06542e3778
go: go1.18.10

Ceph version:
ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)

Comment 26 errata-xmlrpc 2023-05-23 09:17:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.3 Security and Bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3265