Bug 2079975

Summary: RGW pod is failing to deploy
Summary: RGW pod is failing to deploy
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Pratik Surve <prsurve>
Component: rook
Assignee: Travis Nielsen <tnielsen>
Status: CLOSED DUPLICATE
QA Contact: Neha Berry <nberry>
Severity: urgent
Priority: unspecified
Version: 4.11
CC: madam, ocs-bugs, odf-bz-bot
Target Milestone: ---
Keywords: TestBlocker
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-04-28 17:47:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
Target Upstream Version:
Embargoed:

Description Pratik Surve 2022-04-28 16:19:17 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

RGW pod is failing to deploy

A version of all relevant components (if applicable):
OCP version:- 4.11.0-0.nightly-2022-04-26-181148
ODF version:- 4.11.0-56
CEPH version:- ceph version 16.2.7-107.el8cp (3106079e34bb001fa0999e9b975bd5e8a413f424) pacific (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy ODF 4.11 on an on-prem cluster
2. Check the rgw deployment status
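Step 2 above can be sketched as a couple of `oc` queries. This is a minimal sketch, assuming the default ODF namespace `openshift-storage` and Rook's `app=rook-ceph-rgw` label; when no `oc` client is on the PATH it falls back to echoing the commands, so `OC` and `NS` are just hypothetical override knobs:

```shell
#!/usr/bin/env bash
# Sketch of "check rgw deployment status". Assumptions: the ODF default
# namespace "openshift-storage" and Rook's app=rook-ceph-rgw label.
NS="${NS:-openshift-storage}"
OC="${OC:-oc}"
# Fall back to echoing the commands when no oc client is available.
command -v "$OC" >/dev/null 2>&1 || OC=echo

# The RGW runs as a Deployment; a failing deploy shows 0/1 READY here.
$OC -n "$NS" get deployment -l app=rook-ceph-rgw

# When a pod is rejected at admission (as in this bug), no pod object is
# ever created; the failure lands on the replica set as a FailedCreate event.
$OC -n "$NS" get events --field-selector reason=FailedCreate
```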


Actual results:

Error creating: pods "rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-56d8c575dd-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "noobaa": Forbidden: not usable by user or serviceaccount, provider "noobaa-endpoint": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "rook-ceph": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "rook-ceph-csi": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
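The admission error above packs every SCC the controller tried into one long line. A quick grep sketch (run here over a shortened copy of the message; in practice, pipe the full event text through the same filter) pulls out the rejected providers:

```shell
#!/usr/bin/env bash
# Shortened copy of the admission error above, for illustration only.
err='unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider "rook-ceph": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]'

# Each rejected SCC appears as `provider "<name>"`; list them uniquely.
printf '%s\n' "$err" | grep -o 'provider "[^"]*"' | sort -u
```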


Expected results:

The pod should be in the Running state.

Additional info:

Comment 3 Travis Nielsen 2022-04-28 17:47:20 UTC
This issue is being tracked with BZ 2075581

*** This bug has been marked as a duplicate of bug 2075581 ***