This bug was initially created as a copy of Bug #2167304

I am copying this bug because:

+++ This bug was initially created as a clone of Bug #2166417 +++

Description of problem (please be as detailed as possible and provide log snippets):
ODF shows a few VA issues during pen testing

Version of all relevant components (if applicable):
ocp-4.10 + odf-4.10

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
Pen testing of the ODF component to identify VA issues

Actual results:
Below are 6 VA issues with the ODF component that were identified as part of pen testing by the IBM Cloud VA team.

--------------Descriptions---------

We have a client environment running on IBM ROKS with CP4i and API Connect (Account 2541236). We recently had a penetration test performed on the environment which discovered several potential vulnerabilities in OpenShift. The client is going into production shortly with this environment and requires IBM to quickly provide an ETA for closure on each issue by either:
- mitigation (security or configuration changes to eliminate the vulnerability), or
- resolution (an official product patch with a fix, or confirmation that the issue is a false positive)

Below is a highlight of each issue, as well as attachments with a report from the penetration test team that shows how each issue was discovered and information about the risk of each vulnerability. We also have a prior ticket open with Red Hat which can be referenced for more context on these vulnerabilities: Red Hat 03356923. Red Hat requested that we open a ticket with IBM Cloud support for further discussion on resolution of these issues, and specifically requested that all issues be kept in a single ticket.

Issue 115897: It was observed that some containers were running with the root user and the CAP_SYS_ADMIN capability. Attackers can escape the containers and gain access to other parts of the machine or infrastructure by abusing these misconfigurations.

Issue 116117: It was observed that some pods have permission to create Secrets, ConfigMaps, Services, Deployments, etc. This can be handy for attackers if the service account is privileged and they have access to such a token.

Issue 115558: It was observed that some security contexts are missing from pod specifications and some are not configured properly as per best practices. A security context should be specified for all pods as per best practices.

Issue 115559: All pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections.
Issue 115910: When communicating with the in-scope pods from a pod created in the default namespace, it was observed that the communication between them was not encrypted / SSL-enabled.

Issue 115909: Sensitive information such as bearer tokens is exposed in pod logs, and keys are exposed in environment variables.

------------------end--------------

(Illustrative configuration sketches for issues 115897/115558, 116117, 115559, and 115910 follow after the copied comment thread below.)

Expected results:

Additional info:

--- Additional comment from RHEL Program Management on 2023-02-01 18:49:14 UTC ---

This bug, having had no release flag set previously, now has the release flag 'odf-4.13.0' set to '?', and so is proposed to be fixed in the ODF 4.13.0 release. Note that the 3 acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since acks are to be set against a release flag.

--- Additional comment from Shaikh I Ali on 2023-02-01 18:54:04 UTC ---

The zip file attached above, i.e. 'va issue details', has more detailed VA information.

--- Additional comment from akgunjal.com on 2023-02-02 09:41:06 UTC ---

The customer did the above pen testing in the openshift-storage namespace and found the above security issues. The reports are attached here and a brief description of all 6 issues is given. Please check and update the plan for fixing these security issues.
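For issues 115897 and 115558, a minimal sketch of the kind of restrictive securityContext the pen test report asks for — drop all capabilities (including CAP_SYS_ADMIN) and refuse to run as root. The pod and image names here are hypothetical and are not taken from this bug:

apiVersion: v1
kind: Pod
metadata:
  name: example-restricted-pod              # hypothetical name, for illustration only
  namespace: openshift-storage
spec:
  securityContext:
    runAsNonRoot: true                      # refuse to start containers running as UID 0
    seccompProfile:
      type: RuntimeDefault                  # apply the runtime's default seccomp profile
  containers:
  - name: app
    image: registry.example.com/app:latest  # hypothetical image
    securityContext:
      privileged: false                     # never run the container privileged
      allowPrivilegeEscalation: false       # block setuid-style privilege escalation
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL                               # drop every capability, incl. CAP_SYS_ADMIN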
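For issue 116117, a least-privilege RBAC sketch: instead of letting a service account create Secrets, ConfigMaps, Services, or Deployments, grant only the read access it actually needs. All names here are hypothetical:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-readonly                    # hypothetical name
  namespace: openshift-storage
rules:
- apiGroups: [""]
  resources: ["configmaps"]                 # no secrets, and no create/update/delete verbs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-readonly-binding            # hypothetical name
  namespace: openshift-storage
subjects:
- kind: ServiceAccount
  name: example-sa                          # hypothetical service account
  namespace: openshift-storage
roleRef:
  kind: Role
  name: example-readonly
  apiGroup: rbac.authorization.k8s.io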
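For issue 115559, a sketch of the NetworkPolicy isolation the issue text itself suggests: a default-deny policy for all pods in the namespace, plus a policy that re-admits only same-namespace traffic. Policy names are hypothetical; selectors should be tightened or relaxed to match the actual workload:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress                # hypothetical name
  namespace: openshift-storage
spec:
  podSelector: {}                           # empty selector = every pod in the namespace
  policyTypes:
  - Ingress                                 # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace                # hypothetical name
  namespace: openshift-storage
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}                       # allow traffic only from pods in this same namespace
  policyTypes:
  - Ingress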
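For issue 115910, ODF supports in-transit encryption of Ceph traffic. A sketch, assuming the StorageCluster CR in this release accepts the network.connections.encryption field (verify against the documentation for the deployed ODF version before applying):

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  network:
    connections:
      encryption:
        enabled: true                       # encrypt Ceph on-the-wire traffic between pods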
Subham, please open a backport PR to release-4.10 for https://github.com/red-hat-storage/rook/pull/447
The PR is already merged; no RDT required.
Verified!

Job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/24827/

[sdurgbun ~]$ oc version
Client Version: 4.9.0-202109101042.p0.git.96e95ce.assembly.stream-96e95ce
Server Version: 4.10.59
Kubernetes Version: v1.23.17+16bcd69

[sdurgbun ~]$ oc get clusterserviceversions.operators.coreos.com --namespace openshift-storage
NAME                               DISPLAY                       VERSION   REPLACES                           PHASE
mcg-operator.v4.10.13              NooBaa Operator               4.10.13   mcg-operator.v4.10.12              Succeeded
ocs-operator.v4.10.13              OpenShift Container Storage   4.10.13   ocs-operator.v4.10.12              Succeeded
odf-csi-addons-operator.v4.10.13   CSI Addons                    4.10.13   odf-csi-addons-operator.v4.10.12   Succeeded
odf-operator.v4.10.13              OpenShift Data Foundation     4.10.13   odf-operator.v4.10.12              Succeeded

[sdurgbun ~]$ oc get pod --namespace openshift-storage rook-ceph-crashcollector-956ba06552ad84e36aea2f95d200428e-rm5kb -oyaml | grep -A 5 "securityContext"
    securityContext:
      capabilities:
        add:
        - MKNOD
      privileged: true
      runAsGroup: 167
--
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.10.13 Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:3608