Bug 2058542

Summary: Cloud credentials in Logs
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: gowtham <gshanmug>
Component: odf-drAssignee: gowtham <gshanmug>
odf-dr sub component: multicluster-orchestrator QA Contact: Shrivaibavi Raghaventhiran <sraghave>
Status: CLOSED CURRENTRELEASE Docs Contact:
Severity: high    
Priority: unspecified CC: kramdoss, madam, mmuench, muagarwa, ocs-bugs, odf-bz-bot, rperiyas, uchapaga
Version: 4.10   
Target Milestone: ---   
Target Release: ODF 4.10.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: 4.10.0-175 Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-04-21 09:12:49 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description gowtham 2022-02-25 10:09:21 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The ODF-MCO operator logs security credentials in plain text.

2022-02-24T10:38:51.867Z	INFO	controller-runtime.manager.controller.secret	Creating a s3 secret	{"reconciler group": "", "reconciler kind": "Secret", "name": "756bd87b55371f0a9a791269d78efdaeb2617fc", "namespace": "spoke-cluster", "secret": {"metadata":{"name":"756bd87b55371f0a9a791269d78efdaeb2617fc","namespace":"openshift-dr-system","creationTimestamp":null,"labels":{"multicluster.odf.openshift.io/created-by":"mirrorpeersecret"}},"data":{"AWS_ACCESS_KEY_ID":"**************************","AWS_SECRET_ACCESS_KEY":"*******************"},"type":"Opaque"}}



Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Import 2 managed clusters in ACM.
2. Install ODFMCO on the Hub cluster.
3. Install ODF operator on each managed cluster.
4. Create OCS StorageSystem on each managed cluster.
5. Do not install the ODR HUB operator (Ramen hub operator) on the hub cluster.
6. Make sure the "openshift-dr-system" namespace is not present.
7. Create a MirrorPeer CR with the "manageS3" flag enabled.
8. Check the ODFMCO operator logs in the openshift-operators namespace. They will contain AWS credentials.
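The log check in the last step can be done from the CLI. The deployment name "odfmo-controller-manager" is taken from the QA comment below; the exact resource name may differ per install:

```shell
# Scan the ODF-MCO operator logs for leaked AWS credentials.
# Adjust the namespace/deployment name if your install differs.
oc logs -n openshift-operators deploy/odfmo-controller-manager \
  | grep -E 'AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY' \
  && echo "credentials found in logs" \
  || echo "no credentials logged"
```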
 


Actual results:
The cloud API credentials are logged when creating/updating the S3 secret.

Expected results:
Logs should not contain any credential information.
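A minimal sketch of the expected behavior, assuming a redaction helper (hypothetical, not the actual ODF-MCO patch) that masks every secret value before the object reaches the structured logger:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// redactSecretData returns a copy of a Secret's data map with every
// value replaced by a fixed placeholder, so the object can be logged
// without exposing credentials. Hypothetical helper for illustration.
func redactSecretData(data map[string][]byte) map[string]string {
	redacted := make(map[string]string, len(data))
	for k := range data {
		redacted[k] = "[REDACTED]"
	}
	return redacted
}

func main() {
	// Data shaped like the leaked secret from the log snippet above.
	data := map[string][]byte{
		"AWS_ACCESS_KEY_ID":     []byte("AKIA-example"),
		"AWS_SECRET_ACCESS_KEY": []byte("example-secret"),
	}
	out, _ := json.Marshal(redactSecretData(data))
	fmt.Println(string(out))
	// → {"AWS_ACCESS_KEY_ID":"[REDACTED]","AWS_SECRET_ACCESS_KEY":"[REDACTED]"}
}
```

The key point is that only the key names are logged; the values never reach the log sink.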

Additional info:

Comment 6 Shrivaibavi Raghaventhiran 2022-03-22 14:21:13 UTC
Tested version:
----------------
OCP - 4.10.0-0.nightly-2022-03-19-230512
ODF - quay.io/rhceph-dev/ocs-registry:4.10.0-201
ACM - 2.4.2

Steps followed:
---------------
Steps to reproduce as mentioned above

Observations:
--------------

Did not see any AWS cloud credentials in the odfmo-controller-manager pod logs. Hence moving the BZ to Verified state.

Attaching the logs of odfmo-controller-manager pod for reference.