+++ This bug was initially created as a clone of Bug #2167308 +++
+++ This bug was initially created as a clone of Bug #2166417 +++

Description of problem (please be as detailed as possible and provide log snippets):
ODF shows a few VA issues during pen testing.

Version of all relevant components (if applicable):
ocp-4.10 + odf-4.10

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
Pen testing of the ODF component to identify VA issues.

Actual results:
Below are 6 VA issues with the ODF component that were identified as part of pen testing by the IBM Cloud VA Team.

--------------Descriptions---------
We have a client environment running on IBM ROKS with CP4i and API Connect (Account 2541236). We recently had a penetration test performed on the environment which discovered several potential vulnerabilities in OpenShift. The client is going into production shortly with this environment and requires IBM to quickly provide an ETA for closure on each issue by either:
mitigation (security or configuration changes to eliminate the vulnerability), or
resolution (official product patch with a fix, or confirmation the issue is a false positive).

Below is a highlight of each issue, as well as attachments with a report from the penetration test team that shows how each issue was discovered and information about the risk of each vulnerability. We also have a prior ticket open with Red Hat which can be referenced for more context on these vulnerabilities: Red Hat 03356923. Red Hat requested we open a ticket with IBM Cloud support for further discussion on resolution of these issues, and they specifically requested that all issues be in a single ticket.

Issue 115897: It was observed that some containers were running with the root user and the CAP_SYS_ADMIN capability. Attackers can escape the containers and gain access to other parts of the machine or infrastructure by abusing these misconfigurations.

Issue 116117: It was observed that some pods have permission to create secrets, configmaps, services, deployments, etc. This can become handy for attackers if the service account is privileged and they have access to such a token.

Issue 115558: It was observed that some security contexts are missing and some are not configured properly in the pod specifications as per best practices. A security context should be specified for all pods as per best practices.

Issue 115559: All pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections (see the sketch after this report).

Issue 115910: When communicating from a pod created in the default namespace with the in-scope pods, it was observed that the communication between them was not encrypted / SSL enabled.

Issue 115909: Sensitive information like bearer tokens is exposed in pod logs, and keys are exposed in environment variables.
------------------end--------------

Expected results:

Additional info:
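As an illustration of the NetworkPolicy mitigation described under issue 115559, a minimal sketch follows. All names, labels, and the port are hypothetical placeholders, not values taken from the tested cluster; the allowed sources and ports would have to be derived from the actual traffic each ODF pod requires.

  # Minimal NetworkPolicy sketch for issue 115559. The pod labels, policy
  # name, and port are illustrative placeholders. The policy allows ingress
  # to pods labelled app: example-db only from pods labelled
  # role: example-client in the same namespace, on TCP port 5432.
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: restrict-example-db-ingress
    namespace: openshift-storage
  spec:
    podSelector:
      matchLabels:
        app: example-db            # hypothetical target pods
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector:
              matchLabels:
                role: example-client   # hypothetical allowed clients
        ports:
          - protocol: TCP
            port: 5432

Because pods selected by any NetworkPolicy with the Ingress policy type become isolated for ingress, a policy like this implicitly denies every incoming connection that is not explicitly listed.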
--- Additional comment from RHEL Program Management on 2023-02-01 18:49:14 UTC ---

This bug, having no release flag set previously, is now set with release flag 'odf-4.13.0' to '?', and so is being proposed to be fixed at the ODF 4.13.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Shaikh I Ali on 2023-02-01 18:54:04 UTC ---

The zip file attached above, i.e. 'va issue details', has more detailed VA information.

--- Additional comment from akgunjal.com on 2023-02-02 09:41:06 UTC ---

The customer has done the above pen testing under the openshift-storage namespace and found the above security issues. The reports are attached here and a brief description of all 6 issues is given. Please check and update the plan for fixing these security issues.

--- Additional comment from RHEL Program Management on 2023-02-06 07:46:02 UTC ---

This bug, having no release flag set previously, is now set with release flag 'odf-4.13.0' to '?', and so is being proposed to be fixed at the ODF 4.13.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Nimrod Becker on 2023-02-06 08:40:46 UTC ---

What is the request here? Some of the permissions for the operator are needed; we can't remove them. It's not by mistake.

--- Additional comment from Nitin Goyal on 2023-02-06 08:59:08 UTC ---

(In reply to Nimrod Becker from comment #2)
> What is the request here? Some of the permissions for the operator are
> needed; we can't remove them. It's not by mistake.

You are correct, Nimrod, that the permissions are required and cannot be removed. There are six issues added to the bug; at the very least, we can look at them, see where they are complaining about noobaa, and provide an explanation and a fix if necessary. We can always close the bug as WONTFIX or NOTABUG if nothing is required.

--- Additional comment from Nimrod Becker on 2023-02-06 09:19:45 UTC ---

Got you. OK.

--- Additional comment from Nimrod Becker on 2023-02-06 09:28:02 UTC ---

Issues:

115897: Intended. In some flows we consume PVs and need the described permissions and access.

116117: Intended. We can create pods for PV pools, and we create secrets and configmaps for the various BackingStores we configure and use.

115559: Intended. Our operator provides a control plane, our endpoints provide an S3 service, and core exports metrics for both autoscaling and diagnostics. The only pod which might be a good candidate for further restriction (but this needs to be verified) is the DB.

115910: I don't see this as a concern, since the data sent by the customer/app can use HTTPS (if chosen) and will be encrypted. I don't see a need to re-encrypt already encrypted data, and the rest is simply control traffic, not sensitive data.

----------------

115558: Need to delve deeper and see what exactly is missing; AFAIK we do set SCCs on our pods (a sketch of the kind of pod-level settings such findings usually refer to follows this comment).

115909: Need to check what specifically is exposed.
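As a concrete reference for the 115558/115897 follow-up above: pen-test findings of this kind usually point at the pod-level and container-level securityContext fields shown below. This is only a sketch with hypothetical names, not an actual ODF/noobaa manifest; pods that legitimately need root or extra capabilities (as described for 115897) could not adopt it unchanged.

  # Hypothetical pod spec illustrating the securityContext settings that
  # scanners typically check for in findings like 115558/115897. The pod
  # name and image are placeholders, not an actual ODF/noobaa manifest.
  apiVersion: v1
  kind: Pod
  metadata:
    name: example-hardened-pod
  spec:
    securityContext:
      runAsNonRoot: true          # addresses "running with root user"
      seccompProfile:
        type: RuntimeDefault
    containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
              - ALL               # in particular, no CAP_SYS_ADMIN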
--- Additional comment from Nimrod Becker on 2023-02-06 09:55:27 UTC ---

Utkarsh tried to get the logs from the original BZ and got access denied. Any ideas on which group / perms we should ask for to get access to that BZ?

--- Additional comment from Avinash Hanwate on 2023-02-08 08:21:53 UTC ---

(In reply to Nimrod Becker from comment #6)
> Utkarsh tried to get the logs from the original BZ and got access denied.
> Any ideas on which group / perms we should ask for to get access to that BZ?

I added Utkarsh to the original BZ. Let me know if you want anyone else to be added.

--- Additional comment from Utkarsh Srivastava on 2023-02-14 10:52:07 UTC ---

Hi,

I don't see noobaa-related containers/pods in 115909. Is there something I am missing (extremely likely)?

Regards,
Utkarsh Srivastava

--- Additional comment from Nitin Goyal on 2023-02-14 15:03:28 UTC ---

(In reply to Utkarsh Srivastava from comment #8)
> Hi,
>
> I don't see noobaa-related containers/pods in 115909. Is there something I
> am missing (extremely likely)?
>
> Regards,
> Utkarsh Srivastava

MCG must not have this issue in that case.

--- Additional comment from RHEL Program Management on 2023-03-02 15:59:15 UTC ---

This BZ is being approved for the ODF 4.13.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.13.0'.

--- Additional comment from RHEL Program Management on 2023-03-02 15:59:15 UTC ---

Since this bug has been approved for the ODF 4.13.0 release through release flag 'odf-4.13.0+', the Target Release is being set to 'ODF 4.13.0'.