Bug 2167304
| Summary: | [4.12 clone] [rook clone] Security and VA issues with ODF operator | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Nitin Goyal <nigoyal> |
| Component: | rook | Assignee: | Subham Rai <srai> |
| Status: | CLOSED ERRATA | QA Contact: | Daniel Osypenko <dosypenk> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.10 | CC: | akgunjal, ebenahar, kramdoss, mrajanna, muagarwa, nbecker, nberry, ocs-bugs, odf-bz-bot, security-response-team, shaali, srai, tnielsen, uchapaga |
| Target Milestone: | --- | Keywords: | Reopened, Security |
| Target Release: | ODF 4.12.3 | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 2166417 | Environment: | |
| Last Closed: | 2023-05-23 09:17:28 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2166417, 2167308, 2180725 | | |
| Bug Blocks: | | | |
Comment 3
Nitin Goyal
2023-02-07 05:02:28 UTC
Closing since all issues are addressed or by design.

Comment 5
Mudit Agarwal

Travis,

I guess https://github.com/rook/rook/pull/11219 would have gone in 4.12 as well. Please check.
Also, is it possible to backport it to 4.10/4.11?

Reopening the bug to assess the backport part.

(In reply to Mudit Agarwal from comment #5)
> Travis,
>
> I guess https://github.com/rook/rook/pull/11219 would have gone in 4.12 as
> well. Please check.
> Also, is it possible to backport it to 4.10/4.11?
>
> Reopening the bug to assess the backport part.

Seems like it is not in 4.12:
https://github.com/red-hat-storage/rook/blob/release-4.12/pkg/operator/ceph/cluster/crash/crash.go#L265

I'll create a manual backport to 4.12.

No RDT is required.

Comment 19
Subham Rai

Check the securityContext of the container `ceph-crash` inside the pod `rook-ceph-crashcollector`. It should be 167, which is the ceph user id.

(In reply to Subham Rai from comment #19)
> Check the securityContext of the container `ceph-crash` inside the pod
> `rook-ceph-crashcollector`. It should be 167, which is the ceph user id.

Checked the securityContext of rook-ceph-crashcollector:

```yaml
securityContext:
  privileged: true
  runAsGroup: 167
  runAsNonRoot: true
  runAsUser: 167
```

Version details:

OC version:

```
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-05-04-090524
Kubernetes Version: v1.26.3+b404935
```

OCS version:

```
ocs-operator.v4.12.3-rhodf   OpenShift Container Storage   4.12.3-rhodf   ocs-operator.v4.12.2-rhodf   Succeeded
```

Cluster version:

```
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-05-04-090524   True        False         4h4m    Cluster version is 4.13.0-0.nightly-2023-05-04-090524
```

Rook version:

```
rook: v4.12.3-0.669491b168239d162daa0b7066531d06542e3778
go: go1.18.10
```

Ceph version:

```
ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.3 Security and Bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3265
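For reference, a minimal sketch in Go (using the `k8s.io/api` types that Rook builds on) of how a container securityContext pins the ceph uid/gid 167 that the verification above confirms. This is an illustration only, not Rook's actual crash-collector code; the container name and image reference here are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Pointer helpers for the Kubernetes API's optional fields.
func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	// Hypothetical container spec pinned to the ceph user/group id 167,
	// mirroring the securityContext observed on the ceph-crash container
	// during verification. Not Rook's actual code.
	container := corev1.Container{
		Name:  "ceph-crash",
		Image: "quay.io/ceph/ceph:v16", // placeholder image reference
		SecurityContext: &corev1.SecurityContext{
			RunAsUser:    int64Ptr(167), // ceph user id
			RunAsGroup:   int64Ptr(167), // ceph group id
			RunAsNonRoot: boolPtr(true), // kubelet refuses to start it as root
		},
	}

	fmt.Printf("container %q runs as uid %d, gid %d\n",
		container.Name,
		*container.SecurityContext.RunAsUser,
		*container.SecurityContext.RunAsGroup)
}
```

With a spec like this, the kubelet starts the ceph-crash process as uid 167 rather than root, which is exactly what the `runAsUser: 167` / `runAsNonRoot: true` output pasted above shows on the verified cluster.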