Good catch. We do have a mechanism to protect these kinds of secrets; we just need to mark the field in our schema, and apparently we missed this occurrence.
Here are the steps I took to verify the bug:
1. Ran the command:
   $ oc logs noobaa-core-0 | sed -r "s/[[:cntrl:]]\[[0-9]{1,3}m//g" > noobaa-core-0.log
2. Searched the file 'noobaa-core-0.log' for all occurrences of "'AWS', access_key" and checked that the access key and secret key are not exposed.

Here is a snippet from the file noobaa-core-0.log:

Jun-25 8:57:15.826 [WebServer/35] [L0] core.server.system_services.redirector:: publish_to_cluster: server_inter_process load_system_store undefined [ 'fcall://fcall(39gbudw)', 'ws://[::ffff:127.0.0.1]:49530/(39prbpo)', 'ws://[::ffff:127.0.0.1]:49596/(4khxl3v)' ]
Jun-25 8:57:18.841 [WebServer/35] [L0] core.server.system_services.account_server:: check_external_connection: { name: 'noobaa-default-backing-store', endpoint_type: 'AWS', endpoint: 'https://s3.us-east-2.amazonaws.com', identity: SENSITIVE-654d4907542c645c, secret: SENSITIVE-b3e8f1c792549bd3 }
Jun-25 8:57:19.046 [WebServer/35] [L0] core.server.system_services.account_server:: add_external_connection: { name: 'noobaa-default-backing-store', endpoint_type: 'AWS', endpoint: 'https://s3.us-east-2.amazonaws.com', identity: SENSITIVE-654d4907542c645c, secret: SENSITIVE-b3e8f1c792549bd3 }
Jun-25 8:57:19.047 [WebServer/35] [L0] core.server.system_services.account_server:: check_external_connection: { name: 'noobaa-default-backing-store', endpoint_type: 'AWS', endpoint: 'https://s3.us-east-2.amazonaws.com', identity: SENSITIVE-654d4907542c645c, secret: SENSITIVE-b3e8f1c792549bd3 }
Jun-25 8:57:19.160 [WebServer/35] [L0] core.server.system_services.system_store:: SystemStore.make_changes: { update: { accounts: [ { _id: 5ef466d5220ec40023e62159, sync_credentials_cache: [ { name: 'noobaa-default-backing-store', endpoint: 'https://s3.us-east-2.amazonaws.com', endpoint_type: 'AWS', access_key: SENSITIVE-654d4907542c645c, secret_key: SENSITIVE-b3e8f1c792549bd3, auth_method: 'AWS_V4' } ] } ] } }
Jun-25 8:57:19.167 [WebServer/35] [L0] core.server.system_services.redirector:: publish_to_cluster: server_inter_process load_system_store undefined [ 'fcall://fcall(39gbudw)', 'ws://[::ffff:127.0.0.1]:49530/(39prbpo)', 'ws://[::ffff:127.0.0.1]:49596/(4khxl3v)', 'ws://[::ffff:10.131.0.10]:45740/(451t0lu)' ]
Jun-25 8:57:19.183 [WebServer/35] [L0] core.server.notifications.dispatcher:: Adding ActivityLog entry { event: 'account.connection_create', level: 'info', system: 5ef466d4220ec40023e6214d, actor: 5ef466d5220ec40023e62159, account: 5ef466d5220ec40023e62159, desc: SENSITIVE-da1aea2f1d67ae33 }
Jun-25 8:57:19.186 [WebServer/35] [L0] core.server.system_services.pool_server:: Creating new cloud_pool { _id: 5ef466ef220ec40023e62160, system: 5ef466d4220ec40023e6214d, name: 'noobaa-default-backing-store', resource_type: 'CLOUD', pool_node_type: 'BLOCK_STORE_S3', storage_stats: { blocks_size: 0, last_update: 1593075259186 } }
Jun-25 8:57:19.186 [WebServer/35] [L0] core.server.system_services.pool_server:: got connection for cloud pool: { name: 'noobaa-default-backing-store', endpoint: 'https://s3.us-east-2.amazonaws.com', endpoint_type: 'AWS', access_key: SENSITIVE-654d4907542c645c, secret_key: SENSITIVE-b3e8f1c792549bd3, auth_method: 'AWS_V4' }
Jun-25 8:57:19.186 [WebServer/35] [L0] core.server.system_services.system_store:: SystemStore.make_changes: { insert: { pools: [ { _id: 5ef466ef220ec40023e62160, system: 5ef466d4220ec40023e6214d, name: 'noobaa-default-backing-store', resource_type: 'CLOUD', pool_node_type: 'BLOCK_STORE_S3', storage_stats: { blocks_size: 0, last_update: 1593075259186 }, cloud_pool_info: { endpoint: 'https://s3.us-east-2.amazonaws.com', target_bucket: 'nb.1593075321425.apps.ikave-aws.qe.rh-ocs.com', auth_method: 'AWS_V4', access_keys: { access_key: SENSITIVE-654d4907542c645c, secret_key: SENSITIVE-b3e8f1c792549bd3, account_id: 5ef466d5220ec40023e62159 }, endpoint_type: 'AWS', backingstore: { name: 'noobaa-default-backing-store', namespace: 'openshift-storage' } } } ]
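For reference, a consolidated sketch of the check above. The sed expression is the one from step 1; the grep pattern is my own approximation of the field names seen in the log, not part of the original verification:

   # Strip ANSI color codes from the pod log (same sed expression as in step 1)
   $ oc logs noobaa-core-0 | sed -r "s/[[:cntrl:]]\[[0-9]{1,3}m//g" > noobaa-core-0.log

   # Extract every access_key / secret_key / secret value and flag any that is
   # not masked as a SENSITIVE-* token; no output means nothing leaked in clear text
   $ grep -oE "(access_key|secret_key|secret): [^,}]*" noobaa-core-0.log | grep -v "SENSITIVE-"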
Great! Moving to VERIFIED?
Additional information about the cluster I used:

OCP version:
Client Version: 4.3.8
Server Version: 4.5.0-0.nightly-2020-06-23-035950
Kubernetes Version: v1.18.3+c44581d

OCS version:
ocs-operator.v4.5.0-460.ci   OpenShift Container Storage   4.5.0-460.ci   Succeeded

Cluster version:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-06-23-035950   True        False         3h51m   Cluster version is 4.5.0-0.nightly-2020-06-23-035950

Rook version:
rook: 4.5-27.acf5b22b.release_4.5
go: go1.13.4

Ceph version:
ceph version 14.2.8-59.el8cp (53387608e81e6aa2487c952a604db06faa5b2cd0) nautilus (stable)
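For completeness, one way to collect the environment details above. The deployment name and the presence of the rook/ceph binaries in the operator pod are assumptions about a default OCS 4.5 install, not commands taken from this report:

   # OCP client/server/Kubernetes versions
   $ oc version

   # Cluster version
   $ oc get clusterversion

   # Installed OCS operator CSV
   $ oc get csv -n openshift-storage

   # Rook and Ceph versions, queried from the rook-ceph-operator pod
   $ oc -n openshift-storage rsh deploy/rook-ceph-operator rook version
   $ oc -n openshift-storage rsh deploy/rook-ceph-operator ceph --version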
Created attachment 1698805 [details] noobaa-core-0 logs
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3754