Bug 1882397

Summary: MCG decompression problem with snappy on s390x arch
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Venkat <vpiniset>
Component: Multi-Cloud Object Gateway
Assignee: Nimrod Becker <nbecker>
Status: CLOSED ERRATA
QA Contact: Parikshith <pbyregow>
Severity: high
Priority: unspecified
Version: 4.6
CC: ebenahar, etamir, jalbo, jthottan, kramdoss, muagarwa, nbecker, ocs-bugs, tdesala, uweigand
Target Milestone: ---
Keywords: AutomationBackLog
Target Release: OCS 4.6.0
Hardware: s390x
OS: Linux
Fixed In Version: v4.6.0-128.ci
Last Closed: 2020-12-17 06:24:38 UTC
Type: Bug
Attachments:
  Noobaa-Operator-Log
  Noobaa-DB-Log
  Noobaa-Core-Log
  Testcase-Log
  Testcase-Flow
  Noobaa-Endpoint-Log
  Investigation log

Description Venkat 2020-09-24 13:18:52 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Facing a Bad Gateway issue while verifying bucket policies on the IBM s390x arch.
This behaviour was observed while executing a couple of test cases under Tier-1.

Version of all relevant components (if applicable):
OCP : 4.5
OCS : 4.6
ocs-operator : v4.6.0-561.ci

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
A couple of Tier-1 test cases from OCS-CI failed due to this error.


Is there any workaround available to the best of your knowledge?
NO

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
NO

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Install OCS with 4.6 build
2. Run test_object_actions testcase from OCS-CI

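For reference, the failing flow reduces to the boto3 calls below (a minimal sketch, not the actual test: the endpoint and credentials are placeholders, the test uses separate admin and OBC clients, and the policy mirrors the one logged under "Actual results"):

import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.openshift-storage.svc:443",  # MCG S3 endpoint (placeholder)
    aws_access_key_id="<obc-access-key>",
    aws_secret_access_key="<obc-secret-key>",
    verify=False,
)

bucket = "oc-bucket-example"  # placeholder OBC bucket name
policy = {
    "Version": "2020-09-17",
    "Statement": [{
        "Sid": "statement",
        "Effect": "Allow",
        "Principal": "<obc-account>",            # placeholder principal
        "Action": ["s3:PutObject"],              # only PutObject is allowed
        "Resource": [f"arn:aws:s3:::{bucket}/*"],
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

s3.put_object(Bucket=bucket, Key="ObjKey-1", Body=b"sample data")  # allowed by policy

try:
    s3.get_object(Bucket=bucket, Key="ObjKey-1")  # expected to be denied
except ClientError as e:
    code = e.response["Error"]["Code"]
    assert code == "AccessDenied", f"expected AccessDenied, got {code}"

The failure below is that the GetObject call comes back as 502 Bad Gateway (after retries) instead of the expected AccessDenied.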

Actual results:

03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - bucket_policy: {'Version': '2020-09-17', 'Statement': [{'Action': ['s3:PutObject'], 'Principal': 'obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5', 'Resource': ['arn:aws:s3:::oc-bucket-736075606eb643b293f9e04b22ee106b/*'], 'Effect': 'Allow', 'Sid': 'statement'}]}
03:56:57 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Creating bucket policy on bucket: oc-bucket-736075606eb643b293f9e04b22ee106b with principal: obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Put bucket policy response from Admin: {'ResponseMetadata': {'RequestId': 'kf6p8v6b-fgh6h6-1fq', 'HostId': 'kf6p8v6b-fgh6h6-1fq', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'kf6p8v6b-fgh6h6-1fq', 'x-amz-id-2': 'kf6p8v6b-fgh6h6-1fq', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'date': 'Thu, 17 Sep 2020 10:56:58 GMT', 'content-length': '0', 'set-cookie': '1a4aa612fe797ac8466d7ee00e5520d5=3de20c6ab12da44ebf20a35ec104befc; path=/; HttpOnly; Secure'}, 'RetryAttempts': 0}}
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Getting Bucket policy on bucket: oc-bucket-736075606eb643b293f9e04b22ee106b
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Got bucket policy: {"version":"2020-09-17","statement":[{"action":["s3:putobject"],"principal":["obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5"],"resource":["arn:aws:s3:::oc-bucket-736075606eb643b293f9e04b22ee106b/*"],"effect":"allow","sid":"statement"}]}
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Adding object on bucket: oc-bucket-736075606eb643b293f9e04b22ee106b
03:56:59 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Verifying whether user: obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5 is denied to Get object
03:57:15 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - {'Error': {'Code': '502', 'Message': 'Bad Gateway'}, 'ResponseMetadata': {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - http response:
{'Error': {'Code': '502', 'Message': 'Bad Gateway'}, 'ResponseMetadata': {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - metadata: {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - headers: {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - status code: 502
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - Error: {'Code': '502', 'Message': 'Bad Gateway'}

Expected results:

There should not be any errors.

Additional info:

(.venv) [root@ocsvm2 ocs-ci]# vrun-ci tests/manage/mcg/test_bucket_policy.py::TestS3BucketPolicy::test_object_actions
==================================================================================== test session starts =====================================================================================
platform linux -- Python 3.6.8, pytest-5.3.5, py-1.9.0, pluggy-0.13.1
rootdir: /root/ocs-ci, inifile: pytest.ini
plugins: reportportal-1.10.0, logger-0.5.1, metadata-1.10.0, html-2.1.1, marker-bugzilla-0.9.4, ordering-0.6
collected 1 item                                                                                                                                                                             

tests/manage/mcg/test_bucket_policy.py::TestS3BucketPolicy::test_object_actions 
--------------------------------------------------------------------------------------- live log setup ---------------------------------------------------------------------------------------
03:55:47 - MainThread - tests.conftest - INFO - All logs located at /tmp/ocs-ci-logs-1600340143
03:55:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: /root/ocs-ci/bin/oc version --client
03:55:47 - MainThread - ocs_ci.utility.utils - INFO - OpenShift Client version: Client Version: 4.5.0-0.nightly-s390x-2020-07-24-085757

03:55:47 - MainThread - ocs_ci.ocs.version - INFO - collecting ocs version
03:55:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get clusterversion version -o yaml
03:55:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get namespace  -o yaml
03:55:49 - MainThread - ocs_ci.ocs.version - INFO - found storage namespaces ['openshift-cluster-storage-operator', 'openshift-kube-storage-version-migrator', 'openshift-kube-storage-version-migrator-operator', 'openshift-storage']
03:55:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-cluster-storage-operator --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get pod  -n openshift-cluster-storage-operator -o yaml
03:55:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-kube-storage-version-migrator --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get pod  -n openshift-kube-storage-version-migrator -o yaml
03:55:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-kube-storage-version-migrator-operator --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get pod  -n openshift-kube-storage-version-migrator-operator -o yaml
03:55:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get pod  -n openshift-storage -o yaml
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .spec.channel: stable-4.5
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .status.desired.version: 4.5.4
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .status.desired.image: quay.io/openshift-release-dev/ocp-release@sha256:5a498bddd301b561c4ca0797fcbead6d8f465ac3bdd04e04ca62ecd07b87ba70
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-cluster-storage-operator
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6d0daa432589e39729efab2c44d6c7a27047809c5d9fef5951587d1e6b04bf6 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a6d0daa432589e39729efab2c44d6c7a27047809c5d9fef5951587d1e6b04bf6'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9af0d2c5a3fb12367de477e0bf928977406020f054077c0e8c6d18785e39569 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9af0d2c5a3fb12367de477e0bf928977406020f054077c0e8c6d18785e39569'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8e1649e16bb7fe27c0b6d32ab0c4e7d523728c8aef8b51dc859fb5449826810 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8e1649e16bb7fe27c0b6d32ab0c4e7d523728c8aef8b51dc859fb5449826810'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-kube-storage-version-migrator
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3fdf75f77893438d0a7ac4307b6caa7334d629d8c804747f5b7c49c90f983b1 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3fdf75f77893438d0a7ac4307b6caa7334d629d8c804747f5b7c49c90f983b1'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-kube-storage-version-migrator-operator
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:278b11ead7db93716d752655dbcdc6f4f885d743a4fefaba99e991e9f7dabc8c {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:278b11ead7db93716d752655dbcdc6f4f885d743a4fefaba99e991e9f7dabc8c'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-storage
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/cephcsi@sha256:99a92c29dd4fe94db8d1a8af0c375ba2cc0994a1f0a72d7833de5cf1f3cf6152 {'quay.io/rhceph-dev/cephcsi@sha256:8b787bb5ae515aff9904d5ca6c3adac45047fbcfc63baec8b5b3fed38e6b7a88'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-csi-driver-registrar@sha256:7a814c6d75c1dd7eb67977cacca379184b54750151dcd9ff698d7852263fbc9f {'registry.redhat.io/openshift4/ose-csi-driver-registrar@sha256:6f2a45cbb31828535658d689a10fb75bbc64b09783a794f8ce13ea563d0016d3'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-csi-external-attacher@sha256:eb7596df3ae25878c69d0ebb187a22fe29ce493457402fa9560a4f32efd5fd09 {'registry.redhat.io/openshift4/ose-csi-external-attacher@sha256:599bad486aa309bf0ce29cc2a66d80ab96735a6747f60ad29778a613dfd435ab'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-csi-external-provisioner-rhel7@sha256:0f35049599d8cc80f3a611fd3d02965317284a0151e98e0177e182fe733ee47c {'registry.redhat.io/openshift4/ose-csi-external-provisioner-rhel7@sha256:0f35049599d8cc80f3a611fd3d02965317284a0151e98e0177e182fe733ee47c'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-csi-external-resizer-rhel7@sha256:f792370a94bca8620b24e3cf8d608d4b6f7ecb3e7aa9b4be230172b019b0b586 {'registry.redhat.io/openshift4/ose-csi-external-resizer-rhel7@sha256:774fa067b77c714ab11efef61d53ba9b3580889f2fc9a1bca56e8a3fe736f53d'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-csi-external-snapshotter-rhel7@sha256:bd81f802e9abc7869f6967828a304e84fa6a34f62ccbe96be3fdd8bf8eb143cb {'registry.redhat.io/openshift4/ose-csi-external-snapshotter-rhel7@sha256:7896c9c052149e98e4856c775f86d35269d9ea41ca8f223865f7afb02273885f'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/mcg-core@sha256:c8ff99f3cd9588aef369835ffee8ef7f039430f620ac2ee1445d08764dca0c4f {'quay.io/rhceph-dev/mcg-core@sha256:a7d370726680456d4574c6dc9b183d1ec716a8746d3434845b5bc5f1b42aa3c6'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:ba74027bb4b244df0b0823ee29aa927d729da33edaa20ebdf51a2430cc6b4e95 {'registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:ade39968dc5609fe68b2eb1554476e27287f41603adea30c8282529d5803a16d'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/mcg-operator@sha256:596e42cca7096fa8b85a7646269ffd0ba5ec8b35c8c578e866c1381a0dd485e7 {'quay.io/rhceph-dev/mcg-operator@sha256:364617d1057e53d71bc47504081810df92f3aff2e5829921ebd3d3ffe95eb58b'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/ocs-operator@sha256:da31060d63117dc037a8a51c615a96ad56250e3374c4f6ad05601475466114ec {'quay.io/rhceph-dev/ocs-operator@sha256:b96c41c504bd2533545dd934ffd792e41c660d3162c87e12c3a8894aac15c75b'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/rhceph@sha256:eafd1acb0ada5d7cf93699056118aca19ed7a22e4938411d307ef94048746cc8 {'quay.io/rhceph-dev/rhceph@sha256:0ca69f34a821a1cd9bee1a0b6e00cf1703e72d8202e2a44ef5166304902c4768'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/rook-ceph@sha256:6a545564b5e676e7936c901144baea0b56a0ac568ad003e388c059f02fefdc39 {'quay.io/rhceph-dev/rook-ceph@sha256:6a545564b5e676e7936c901144baea0b56a0ac568ad003e388c059f02fefdc39'}
03:55:59 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/venkat_pinisetti/custom_repo:latest {'quay.io/venkat_pinisetti/custom_repo@sha256:c0b76994d8190ddd461307eb7143bdc3db432037080c77ba48b44669fd0a9a5e'}
03:55:59 - MainThread - tests.conftest - INFO - human readable ocs version info written into /root/ocp4-workdir-tb4/ocs_version.2020-09-17T03:55:59.479309
03:55:59 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: OCS4-6-Downstream-OCP4-6-BAREMETAL-UPI-1AZ-RHCOS-3M-3W-
03:55:59 - Dummy-1 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod  -A -o yaml
03:55:59 - Dummy-2 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get StorageClass  -A -o yaml
03:55:59 - Dummy-3 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get CephFileSystem  -A -o yaml
03:55:59 - Dummy-4 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get CephBlockPool  -A -o yaml
03:55:59 - Dummy-5 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get PersistentVolume  -A -o yaml
03:55:59 - Dummy-6 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get PersistentVolumeClaim  -A -o yaml
03:55:59 - Dummy-7 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Namespace  -A -o yaml
03:56:26 - MainThread - tests.conftest - INFO - Checking for Ceph Health OK 
03:56:26 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=120s
03:56:27 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}'
03:56:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage exec rook-ceph-tools-75cbd849bd-m2lnn -- ceph health
03:56:30 - MainThread - ocs_ci.utility.utils - INFO - Ceph cluster health is HEALTH_OK.
03:56:30 - MainThread - tests.conftest - INFO - Ceph health check passed at setup
03:56:30 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod  -n openshift-storage --selector=noobaa-operator=deployment -o yaml
03:56:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod  -n openshift-storage --selector=noobaa-core=noobaa -o yaml
03:56:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod noobaa-operator-7f6c978865-dxbl8 -n openshift-storage -o yaml
03:56:33 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh noobaa-operator-7f6c978865-dxbl8 bash -c "md5sum /usr/local/bin/noobaa-operator"
03:56:35 - MainThread - ocs_ci.ocs.resources.pod - INFO - md5sum of file /usr/local/bin/noobaa-operator: 8aef77a6415a355f82cd92a95dab9c11
03:56:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-ingress-operator --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get secret router-ca -n openshift-ingress-operator -o yaml
03:56:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get noobaa  -n openshift-storage -o yaml
03:56:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get secret noobaa-admin -n openshift-storage -o yaml
03:56:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get StorageCluster  -n openshift-storage -o yaml
03:56:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-marketplace --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get CatalogSource ocs-catalogsource -n openshift-marketplace -o yaml
03:56:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-marketplace --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get packagemanifest  -n openshift-marketplace --selector=ocs-operator-internal=true -o yaml
03:56:40 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Check if OCS operator: ocs-operator.v4.6.0-561.ci is in Succeeded phase.
03:56:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get csv ocs-operator.v4.6.0-561.ci -n openshift-storage -o yaml
03:56:42 - MainThread - ocs_ci.ocs.ocp - INFO - Resource ocs-operator.v4.6.0-561.ci is in phase: Succeeded!
03:56:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get csv ocs-operator.v4.6.0-561.ci -n openshift-storage -o yaml
03:56:43 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Checking for RGW pod/s on baremetal platform
03:56:43 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name '' using selector app=rook-ceph-rgw at column name STATUS to reach desired condition Running
03:56:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod  -n openshift-storage --selector=app=rook-ceph-rgw -o yaml
03:56:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-86c786c7s88h -n openshift-storage
03:56:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod  -n openshift-storage -o yaml
03:56:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod rook-ceph-rgw-ocs-storagecluster-cephobjectstore-b-6d47d4c5dqsh -n openshift-storage
03:56:52 - MainThread - ocs_ci.ocs.ocp - INFO - 2 resources already reached condition!
--------------------------------------------------------------------------------------- live log call ----------------------------------------------------------------------------------------
03:56:52 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Creating bucket: oc-bucket-736075606eb643b293f9e04b22ee106b
03:56:52 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding ObjectBucketClaim with name oc-bucket-736075606eb643b293f9e04b22ee106b
03:56:52 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: oc-bucket-736075606eb643b293f9e04b22ee106b
  namespace: openshift-storage
spec:
  bucketName: oc-bucket-736075606eb643b293f9e04b22ee106b
  storageClassName: openshift-storage.noobaa.io

03:56:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig create -f /tmp/ObjectBucketClaim2hrjl5jh -o yaml
03:56:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get ObjectBucketClaim oc-bucket-736075606eb643b293f9e04b22ee106b -n openshift-storage -o yaml
03:56:53 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Waiting for oc-bucket-736075606eb643b293f9e04b22ee106b to be healthy
03:56:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get obc oc-bucket-736075606eb643b293f9e04b22ee106b -n openshift-storage -o yaml
03:56:54 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - oc-bucket-736075606eb643b293f9e04b22ee106b status is Bound
03:56:54 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - oc-bucket-736075606eb643b293f9e04b22ee106b is healthy
03:56:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get ObjectBucketClaim oc-bucket-736075606eb643b293f9e04b22ee106b -n openshift-storage -o yaml
03:56:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get ObjectBucket obc-openshift-storage-oc-bucket-736075606eb643b293f9e04b22ee106b -n openshift-storage -o yaml
03:56:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get secret oc-bucket-736075606eb643b293f9e04b22ee106b -n openshift-storage -o yaml
03:56:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get ConfigMap oc-bucket-736075606eb643b293f9e04b22ee106b -n openshift-storage -o yaml
03:56:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get noobaa  -n openshift-storage -o yaml
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - version: 2020-09-17
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - principal_list: obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - actions_list: ['PutObject']
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - resource: ['oc-bucket-736075606eb643b293f9e04b22ee106b/*']
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - effect: Allow
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - sid: statement
03:56:57 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - bucket_policy: {'Version': '2020-09-17', 'Statement': [{'Action': ['s3:PutObject'], 'Principal': 'obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5', 'Resource': ['arn:aws:s3:::oc-bucket-736075606eb643b293f9e04b22ee106b/*'], 'Effect': 'Allow', 'Sid': 'statement'}]}
03:56:57 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Creating bucket policy on bucket: oc-bucket-736075606eb643b293f9e04b22ee106b with principal: obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Put bucket policy response from Admin: {'ResponseMetadata': {'RequestId': 'kf6p8v6b-fgh6h6-1fq', 'HostId': 'kf6p8v6b-fgh6h6-1fq', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'kf6p8v6b-fgh6h6-1fq', 'x-amz-id-2': 'kf6p8v6b-fgh6h6-1fq', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'date': 'Thu, 17 Sep 2020 10:56:58 GMT', 'content-length': '0', 'set-cookie': '1a4aa612fe797ac8466d7ee00e5520d5=3de20c6ab12da44ebf20a35ec104befc; path=/; HttpOnly; Secure'}, 'RetryAttempts': 0}}
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Getting Bucket policy on bucket: oc-bucket-736075606eb643b293f9e04b22ee106b
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Got bucket policy: {"version":"2020-09-17","statement":[{"action":["s3:putobject"],"principal":["obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5"],"resource":["arn:aws:s3:::oc-bucket-736075606eb643b293f9e04b22ee106b/*"],"effect":"allow","sid":"statement"}]}
03:56:58 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Adding object on bucket: oc-bucket-736075606eb643b293f9e04b22ee106b
03:56:59 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - Verifying whether user: obc-account.oc-bucket-736075606eb643b293f9e04b22ee106b.5f6340f5 is denied to Get object
03:57:15 - MainThread - tests.manage.mcg.test_bucket_policy - INFO - {'Error': {'Code': '502', 'Message': 'Bad Gateway'}, 'ResponseMetadata': {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - http response:
{'Error': {'Code': '502', 'Message': 'Bad Gateway'}, 'ResponseMetadata': {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - metadata: {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - headers: {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - status code: 502
03:57:15 - MainThread - ocs_ci.ocs.resources.bucket_policy - INFO - Error: {'Code': '502', 'Message': 'Bad Gateway'}
FAILED                                                                                                                                                                                 [100%]
------------------------------------------------------------------------------------- live log teardown --------------------------------------------------------------------------------------
03:57:16 - MainThread - tests.conftest - INFO - Cleaning up bucket oc-bucket-736075606eb643b293f9e04b22ee106b
03:57:16 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Deleting bucket: oc-bucket-736075606eb643b293f9e04b22ee106b
03:57:16 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig delete obc oc-bucket-736075606eb643b293f9e04b22ee106b
03:57:16 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Verifying deletion of oc-bucket-736075606eb643b293f9e04b22ee106b
03:57:16 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get obc  -n openshift-storage -o yaml
03:57:17 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - oc-bucket-736075606eb643b293f9e04b22ee106b was deleted successfuly
03:57:17 - Dummy-8 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Pod  -A -o yaml
03:57:17 - Dummy-7 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get StorageClass  -A -o yaml
03:57:17 - Dummy-5 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get CephFileSystem  -A -o yaml
03:57:17 - Dummy-6 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get CephBlockPool  -A -o yaml
03:57:17 - Dummy-9 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get PersistentVolume  -A -o yaml
03:57:17 - Dummy-1 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get PersistentVolumeClaim  -A -o yaml
03:57:17 - Dummy-10 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig get Namespace  -A -o yaml

tests/manage/mcg/test_bucket_policy.py::TestS3BucketPolicy::test_object_actions ERROR                                                                                                  [100%]

=========================================================================================== ERRORS ===========================================================================================
________________________________________________________________ ERROR at teardown of TestS3BucketPolicy.test_object_actions _________________________________________________________________

exclude_labels = ['must-gather']

    def get_status_after_execution(exclude_labels=None):
        """
        Set the environment status and assign it into ENV_STATUS_PRE dictionary.
        In addition compare the dict before the execution and after using DeepDiff
    
        Args:
            exclude_labels (list): App labels to ignore leftovers
    
        Raises:
             ResourceLeftoversException: In case there are leftovers in the
                environment after the execution
        """
        get_environment_status(ENV_STATUS_POST, exclude_labels=exclude_labels)
    
        pod_diff = compare_dicts(
            ENV_STATUS_PRE['pod'], ENV_STATUS_POST['pod']
        )
        sc_diff = compare_dicts(
            ENV_STATUS_PRE['sc'], ENV_STATUS_POST['sc']
        )
        cephfs_diff = compare_dicts(
            ENV_STATUS_PRE['cephfs'], ENV_STATUS_POST['cephfs']
        )
        cephbp_diff = compare_dicts(
            ENV_STATUS_PRE['cephbp'], ENV_STATUS_POST['cephbp']
        )
        pv_diff = compare_dicts(
>           ENV_STATUS_PRE['pv'], ENV_STATUS_POST['pv']
        )

ocs_ci/utility/environment_check.py:171: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

before = None, after = None

    def compare_dicts(before, after):
        """
        Comparing 2 dicts and providing diff list of [added items, removed items]
    
        Args:
            before (dict): Dictionary before execution
            after (dict): Dictionary after execution
    
        Returns:
            list: List of 2 lists - ('added' and 'removed' are lists)
        """
        added = []
        removed = []
        uid_before = [
            uid.get('metadata').get(
                'generateName', uid.get('metadata').get('name')
>           ) for uid in before
        ]
E       TypeError: 'NoneType' object is not iterable

ocs_ci/utility/environment_check.py:53: TypeError
----------------------------------------------------------------------------------- Captured stderr setup ------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/.venv/lib64/python3.6/site-packages/gevent/threadpool.py", line 142, in __run_task
    thread_result.set(func(*args, **kwargs))
  File "/root/ocs-ci/ocs_ci/utility/environment_check.py", line 95, in assign_get_values
    ns = item.get('spec').get('claimRef').get('namespace')
AttributeError: 'NoneType' object has no attribute 'get'
2020-09-17T10:56:00Z (<ThreadPoolWorker at 0x3ff9f68cb90 thread_ident=0x3ff9ba7d910 threadpool-hub=<missing>>, <function assign_get_values at 0x3ffa2f9c840>) failed with AttributeError

---------------------------------------------------------------------------------- Captured stderr teardown ----------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/.venv/lib64/python3.6/site-packages/gevent/threadpool.py", line 142, in __run_task
    thread_result.set(func(*args, **kwargs))
  File "/root/ocs-ci/ocs_ci/utility/environment_check.py", line 95, in assign_get_values
    ns = item.get('spec').get('claimRef').get('namespace')
AttributeError: 'NoneType' object has no attribute 'get'
2020-09-17T10:57:19Z (<ThreadPoolWorker at 0x3ff7be4cb90 thread_ident=0x3ff72f3d910 threadpool-hub=<missing>>, <function assign_get_values at 0x3ffa2f9c840>) failed with AttributeError
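The teardown TypeError above is a secondary ocs-ci issue rather than the MCG bug: assign_get_values crashes on PersistentVolumes whose spec has no claimRef, so ENV_STATUS_PRE['pv'] is left as None and compare_dicts then tries to iterate it. A hedged sketch of a defensive fix, reusing names from the traceback:

def claim_namespace(item):
    # A PV that is not bound to a claim has no claimRef; return None
    # instead of raising AttributeError as the current chained .get() does.
    return ((item.get('spec') or {}).get('claimRef') or {}).get('namespace')

# ...and compare_dicts could fail soft when a snapshot was never captured:
#     if before is None or after is None:
#         return [[], []]  # nothing recorded; treat as no additions/removals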

========================================================================================== FAILURES ==========================================================================================
___________________________________________________________________________ TestS3BucketPolicy.test_object_actions ___________________________________________________________________________

self = <tests.manage.mcg.test_bucket_policy.TestS3BucketPolicy object at 0x3ff9f69aac8>, mcg_obj = <ocs_ci.ocs.resources.mcg.MCG object at 0x3ff9d9a3da0>
bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x3ff74ee67b8>

    @pytest.mark.polarion_id("OCS-2156")
    @tier1
    def test_object_actions(self, mcg_obj, bucket_factory):
        """
        Test to verify different object actions and cross account access to buckets
        """
        data = "Sample string content to write to a new S3 object"
        object_key = "ObjKey-" + str(uuid.uuid4().hex)
    
        # Creating multiple obc users (accounts)
        obc = bucket_factory(amount=1, interface='OC')
        obc_obj = OBC(obc[0].name)
    
        # Admin sets policy on obc bucket with obc account principal
        bucket_policy_generated = gen_bucket_policy(
            user_list=obc_obj.obc_account,
            actions_list=['PutObject'],
            resources_list=[f'{obc_obj.bucket_name}/{"*"}']
        )
        bucket_policy = json.dumps(bucket_policy_generated)
    
        logger.info(f'Creating bucket policy on bucket: {obc_obj.bucket_name} with principal: {obc_obj.obc_account}')
        put_policy = put_bucket_policy(mcg_obj, obc_obj.bucket_name, bucket_policy)
        logger.info(f'Put bucket policy response from Admin: {put_policy}')
    
        # Get Policy
        logger.info(f'Getting Bucket policy on bucket: {obc_obj.bucket_name}')
        get_policy = get_bucket_policy(mcg_obj, obc_obj.bucket_name)
        logger.info(f"Got bucket policy: {get_policy['Policy']}")
    
        # Verifying whether obc account can put object
        logger.info(f'Adding object on bucket: {obc_obj.bucket_name}')
        assert s3_put_object(obc_obj, obc_obj.bucket_name, object_key, data), "Failed: Put Object"
    
        # Verifying whether Get action is not allowed
        logger.info(f'Verifying whether user: {obc_obj.obc_account} is denied to Get object')
        try:
>           s3_get_object(obc_obj, obc_obj.bucket_name, object_key)

tests/manage/mcg/test_bucket_policy.py:220: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

s3_obj = <ocs_ci.ocs.resources.objectbucket.OBC object at 0x3ff9d97b160>, bucketname = 'oc-bucket-736075606eb643b293f9e04b22ee106b', object_key = 'ObjKey-2330c3ef835a473db5af4defaf738e2d'
versionid = ''

    def s3_get_object(s3_obj, bucketname, object_key, versionid=''):
        """
        Simple Boto3 client based Get object
    
        Args:
            s3_obj (obj): MCG or OBC object
            bucketname (str): Name of the bucket
            object_key (str): Unique object Identifier
            versionid (str): Unique version number of an object
    
        Returns:
            dict : Get object response
    
        """
>       return s3_obj.s3_client.get_object(Bucket=bucketname, Key=object_key, VersionId=versionid)

ocs_ci/ocs/bucket_utils.py:593: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x3ff9c800860>, args = ()
kwargs = {'Bucket': 'oc-bucket-736075606eb643b293f9e04b22ee106b', 'Key': 'ObjKey-2330c3ef835a473db5af4defaf738e2d', 'VersionId': ''}

    def _api_call(self, *args, **kwargs):
        # We're accepting *args so that we can give a more helpful
        # error message than TypeError: _api_call takes exactly
        # 1 argument.
        if args:
            raise TypeError(
                "%s() only accepts keyword arguments." % py_operation_name)
        # The "self" in this scope is referring to the BaseClient.
>       return self._make_api_call(operation_name, kwargs)

../.venv/lib64/python3.6/site-packages/botocore/client.py:316: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x3ff9c800860>, operation_name = 'GetObject'
api_params = {'Bucket': 'oc-bucket-736075606eb643b293f9e04b22ee106b', 'Key': 'ObjKey-2330c3ef835a473db5af4defaf738e2d', 'VersionId': ''}

    def _make_api_call(self, operation_name, api_params):
        operation_model = self._service_model.operation_model(operation_name)
        service_name = self._service_model.service_name
        history_recorder.record('API_CALL', {
            'service': service_name,
            'operation': operation_name,
            'params': api_params,
        })
        if operation_model.deprecated:
            logger.debug('Warning: %s.%s() is deprecated',
                         service_name, operation_name)
        request_context = {
            'client_region': self.meta.region_name,
            'client_config': self.meta.config,
            'has_streaming_input': operation_model.has_streaming_input,
            'auth_type': operation_model.auth_type,
        }
        request_dict = self._convert_to_request_dict(
            api_params, operation_model, context=request_context)
    
        service_id = self._service_model.service_id.hyphenize()
        handler, event_response = self.meta.events.emit_until_response(
            'before-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            model=operation_model, params=request_dict,
            request_signer=self._request_signer, context=request_context)
    
        if event_response is not None:
            http, parsed_response = event_response
        else:
            http, parsed_response = self._make_request(
                operation_model, request_dict, request_context)
    
        self.meta.events.emit(
            'after-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            http_response=http, parsed=parsed_response,
            model=operation_model, context=request_context
        )
    
        if http.status_code >= 300:
            error_code = parsed_response.get("Error", {}).get("Code")
            error_class = self.exceptions.from_code(error_code)
>           raise error_class(parsed_response, operation_name)
E           botocore.exceptions.ClientError: An error occurred (502) when calling the GetObject operation (reached max retries: 4): Bad Gateway

../.venv/lib64/python3.6/site-packages/botocore/client.py:635: ClientError

During handling of the above exception, another exception occurred:

self = <tests.manage.mcg.test_bucket_policy.TestS3BucketPolicy object at 0x3ff9f69aac8>, mcg_obj = <ocs_ci.ocs.resources.mcg.MCG object at 0x3ff9d9a3da0>
bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x3ff74ee67b8>

    @pytest.mark.polarion_id("OCS-2156")
    @tier1
    def test_object_actions(self, mcg_obj, bucket_factory):
        """
        Test to verify different object actions and cross account access to buckets
        """
        data = "Sample string content to write to a new S3 object"
        object_key = "ObjKey-" + str(uuid.uuid4().hex)
    
        # Creating multiple obc users (accounts)
        obc = bucket_factory(amount=1, interface='OC')
        obc_obj = OBC(obc[0].name)
    
        # Admin sets policy on obc bucket with obc account principal
        bucket_policy_generated = gen_bucket_policy(
            user_list=obc_obj.obc_account,
            actions_list=['PutObject'],
            resources_list=[f'{obc_obj.bucket_name}/{"*"}']
        )
        bucket_policy = json.dumps(bucket_policy_generated)
    
        logger.info(f'Creating bucket policy on bucket: {obc_obj.bucket_name} with principal: {obc_obj.obc_account}')
        put_policy = put_bucket_policy(mcg_obj, obc_obj.bucket_name, bucket_policy)
        logger.info(f'Put bucket policy response from Admin: {put_policy}')
    
        # Get Policy
        logger.info(f'Getting Bucket policy on bucket: {obc_obj.bucket_name}')
        get_policy = get_bucket_policy(mcg_obj, obc_obj.bucket_name)
        logger.info(f"Got bucket policy: {get_policy['Policy']}")
    
        # Verifying whether obc account can put object
        logger.info(f'Adding object on bucket: {obc_obj.bucket_name}')
        assert s3_put_object(obc_obj, obc_obj.bucket_name, object_key, data), "Failed: Put Object"
    
        # Verifying whether Get action is not allowed
        logger.info(f'Verifying whether user: {obc_obj.obc_account} is denied to Get object')
        try:
            s3_get_object(obc_obj, obc_obj.bucket_name, object_key)
        except boto3exception.ClientError as e:
            logger.info(e.response)
            response = HttpResponseParser(e.response)
            if response.error['Code'] == 'AccessDenied':
                logger.info('Get Object action has been denied access')
            else:
>               raise UnexpectedBehaviour(f"{e.response} received invalid error code {response.error['Code']}")
E               ocs_ci.ocs.exceptions.UnexpectedBehaviour: {'Error': {'Code': '502', 'Message': 'Bad Gateway'}, 'ResponseMetadata': {'HTTPStatusCode': 502, 'HTTPHeaders': {'content-length': '107', 'cache-control': 'no-cache', 'content-type': 'text/html', 'connection': 'close'}, 'MaxAttemptsReached': True, 'RetryAttempts': 4}} received invalid error code 502

tests/manage/mcg/test_bucket_policy.py:227: UnexpectedBehaviour
----------------------------------------------------------------------------------- Captured stderr setup ------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/.venv/lib64/python3.6/site-packages/gevent/threadpool.py", line 142, in __run_task
    thread_result.set(func(*args, **kwargs))
  File "/root/ocs-ci/ocs_ci/utility/environment_check.py", line 95, in assign_get_values
    ns = item.get('spec').get('claimRef').get('namespace')
AttributeError: 'NoneType' object has no attribute 'get'
2020-09-17T10:56:00Z (<ThreadPoolWorker at 0x3ff9f68cb90 thread_ident=0x3ff9ba7d910 threadpool-hub=<missing>>, <function assign_get_values at 0x3ffa2f9c840>) failed with AttributeError

====================================================================================== warnings summary ======================================================================================
/root/.venv/lib64/python3.6/site-packages/_pytest/config/__init__.py:891
/root/.venv/lib64/python3.6/site-packages/_pytest/config/__init__.py:891
  /root/.venv/lib64/python3.6/site-packages/_pytest/config/__init__.py:891: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: pytest_reportportal
    self._mark_plugins_for_rewrite(hook)

-- Docs: https://docs.pytest.org/en/latest/warnings.html
===================================================================== 1 failed, 2 warnings, 1 error in 116.86s (0:01:56) =====================================================================
sys:1: ResourceWarning: unclosed <ssl.SSLSocket fd=14, family=AddressFamily.AF_INET, type=2049, proto=6, laddr=('9.30.198.151', 56156), raddr=('9.20.220.63', 443)>
(.venv) [root@ocsvm2 ocs-ci]#

Comment 2 Nimrod Becker 2020-09-30 12:10:52 UTC
Please attach logs

Comment 4 Venkat 2020-10-06 06:58:06 UTC
Created attachment 1719291 [details]
Noobaa-Operator-Log

Comment 5 Venkat 2020-10-06 06:58:52 UTC
Created attachment 1719292 [details]
Noobaa-DB-Log

Comment 6 Venkat 2020-10-06 06:59:41 UTC
Created attachment 1719293 [details]
Noobaa-Core-Log

Comment 7 Venkat 2020-10-06 07:01:03 UTC
Created attachment 1719294 [details]
Testcase-Log

Comment 8 Venkat 2020-10-06 07:01:40 UTC
Created attachment 1719295 [details]
Testcase-Flow

Comment 9 Venkat 2020-10-06 07:17:38 UTC
Created attachment 1719296 [details]
Noobaa-Endpoint-Log

Comment 10 Venkat 2020-10-06 13:35:15 UTC
Created attachment 1719404 [details]
Investigation log

Comment 11 Venkat 2020-10-06 13:37:32 UTC
Followed the steps below as part of the investigation with Jacky Albo.

Steps:

1. Created a local file (sample1.txt)
2. Uploaded the file to the bucket (first.bucket) successfully
3. Tried to download the file from the bucket locally (sample2.txt). However, it failed with the error below.

Next Steps:

Jacky will look into the logs, which I'm attaching now.


Logs:

/aws # aws s3 cp --endpoint=https://s3.openshift-storage.svc:443 --no-verify-ssl sample1.txt s3://first.bucket/sample1.txt
/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py:979: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3.openshift-storage.svc'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(
...
...
/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py:979: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3.openshift-storage.svc'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(
upload: ./sample1.txt to s3://first.bucket/sample1.txt           
/aws # 


/aws # aws s3 cp --no-verify-ssl --endpoint=https://s3.openshift-storage.svc:443 s3://first.bucket/sample1.txt sample2.txt
/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py:979: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3.openshift-storage.svc'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(
...
...
/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py:979: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3.openshift-storage.svc'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(
download failed: s3://first.bucket/sample1.txt to ./sample2.txt Connection was closed before we received a valid response from endpoint URL: "https://s3.openshift-storage.svc:443/first.bucket/sample1.txt".

/aws # ls
sample1.txt
/aws #

Ref for noobaa endpoint logs : https://bugzilla.redhat.com/attachment.cgi?id=1719404
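The same round trip can be driven from boto3 instead of the aws CLI (a minimal sketch; the credentials are placeholders, and verify=False matches --no-verify-ssl above):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.openshift-storage.svc:443",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    verify=False,  # equivalent of --no-verify-ssl
)

s3.upload_file("sample1.txt", "first.bucket", "sample1.txt")    # succeeds
s3.download_file("first.bucket", "sample1.txt", "sample2.txt")  # fails: connection closed before a valid response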

Comment 12 Jacky Albo 2020-10-06 13:52:54 UTC
With Venkat's help, we disabled compression on a second bucket, and then it worked. So this looks like an issue with snappy compression on the Z arch.

Comment 13 Ulrich Weigand 2020-10-07 10:15:45 UTC
(In reply to Jacky Albo from comment #12)
> So with help of Venkat, we disabled the compression of a second bucket and
> then it works - So it is looking like an issue with the snappy compression
> on top of Z arch

Hi Jacky, is this something we can reproduce stand-alone, just using snappy, or do we need to run all of noobaa? If there's a platform-specific problem with snappy, we'll be happy to help investigate.

Comment 14 Jacky Albo 2020-10-07 10:51:29 UTC
(In reply to Ulrich Weigand from comment #13)
> (In reply to Jacky Albo from comment #12)
> > So with help of Venkat, we disabled the compression of a second bucket and
> > then it works - So it is looking like an issue with the snappy compression
> > on top of Z arch
> 
> Hi Jacky, is this something we can reproduce stand-alone, just using snappy,
> or do we need to run all of noobaa? If there's a platform-specific problem
> with snappy, we'll be happy to help investigate.

Hey Ulrich, we suspect snappy on IBM Z. We were using an old version, 1.1.7, which seems to fail (we want to verify), but we also want to check 1.1.8, as it may fix the issue.
So it would help us if you could:
run the snappy tests on 1.1.7 and check whether they fail (https://github.com/google/snappy#tests-and-benchmarks),
and then run them again on the latest 1.1.8 and see whether that fixes the issue.
Please also mind this: Snappy assumes little-endian throughout, and needs to byte-swap data in several places if running on a big-endian platform.
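A quick stand-alone cross-check, independent of the unit tests, is a round trip through the Python bindings (a minimal sketch, assuming python-snappy is installed over the system libsnappy; the fixed byte vector below should be the raw-format snappy encoding of b"hello", a length varint plus a literal tag, and is worth double-checking):

import snappy  # python-snappy bindings over libsnappy

data = b"hello world " * 1000
assert snappy.decompress(snappy.compress(data)) == data, "same-arch round trip failed"

# A same-machine round trip cannot catch a *consistent* byte-swap bug, so also
# decode a vector as produced on a little-endian build: varint length 5 (0x05),
# then a literal tag ((5-1) << 2 = 0x10) and the 5 payload bytes.
assert snappy.decompress(b"\x05\x10hello") == b"hello", "cross-endian decode failed"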

Comment 15 Ulrich Weigand 2020-10-07 12:41:15 UTC
I've built snappy-1.1.7 on IBM Z and ran the unit tests without issues:

[uweigand@oc3748833570 snappy-1.1.7]$ ./build/snappy_unittest 
Running microbenchmarks.
WARNING: Compiled with assertions enabled, will be slow.
WARNING: Compiled without optimization, will be slow.
Benchmark            Time(ns)    CPU(ns) Iterations
---------------------------------------------------
BM_UFlat/0             273100     264288        743 369.5MB/s  html
BM_UFlat/1            2563600    2553640        100 262.2MB/s  urls
BM_UFlat/2               6305       6305      26143 18.2GB/s  jpg
BM_UFlat/3                623        623     307692 305.7MB/s  jpg_200
BM_UFlat/4              35004      35004       5259 2.7GB/s  pdf
BM_UFlat/5            1056087    1056087        183 369.9MB/s  html4
BM_UFlat/6             967948     967958        215 149.8MB/s  txt1
BM_UFlat/7             832586     832590        237 143.4MB/s  txt2
BM_UFlat/8            2545910    2545930        100 159.9MB/s  txt3
BM_UFlat/9            3507970    3500290        100 131.3MB/s  txt4
BM_UFlat/10            250565     250568        821 451.4MB/s  pb
BM_UFlat/11           1013203    1013213        192 173.5MB/s  gaviota
BM_UIOVec/0            580520     580420        340 168.3MB/s  html
BM_UIOVec/1           4899380    4899150        100 136.7MB/s  urls
BM_UIOVec/2              7294       7294      23228 15.7GB/s  jpg
BM_UIOVec/3              1202       1202     155038 158.6MB/s  jpg_200
BM_UIOVec/4             78722      75328       2719 1.3GB/s  pdf
BM_UValidate/0         121744     117602       1740 830.4MB/s  html
BM_UValidate/1        1367925    1328721        147 503.9MB/s  urls
BM_UValidate/2            554        554     215053 206.9GB/s  jpg
BM_UValidate/3            372        372     555555 511.7MB/s  jpg_200
BM_UValidate/4          10952      10952      17953 8.7GB/s  pdf
BM_ZFlat/0             696946     696957        281 140.1MB/s  html (22.31 %)
BM_ZFlat/1            9398280    9397410        100 71.2MB/s  urls (47.78 %)
BM_ZFlat/2              23858      23858       5748 4.8GB/s  jpg (99.95 %)
BM_ZFlat/3               2880       2880      74349 66.2MB/s  jpg_200 (73.00 %)
BM_ZFlat/4              77167      77168       2760 1.2GB/s  pdf (83.30 %)
BM_ZFlat/5            2670450    2670470        100 146.3MB/s  html4 (22.52 %)
BM_ZFlat/6            2606160    2606190        100 55.7MB/s  txt1 (57.88 %)
BM_ZFlat/7            2225400    2225420        100 53.6MB/s  txt2 (61.91 %)
BM_ZFlat/8            6785170    6785220        100 60.0MB/s  txt3 (54.99 %)
BM_ZFlat/9            9139910    9139950        100 50.3MB/s  txt4 (66.26 %)
BM_ZFlat/10            610166     610175        336 185.3MB/s  pb (19.68 %)
BM_ZFlat/11           2150550    2150580        100 81.7MB/s  gaviota (37.72 %)

Running correctness tests.
Crazy decompression lengths not checked on 64-bit build
All tests passed.


Is there anything else I should be testing?

Comment 16 Venkat 2020-10-07 13:11:27 UTC
I built snappy-1.1.8 on Z and didn't observe any issues in the unit tests:

[root@ocsvm2 snappy]# ./build/snappy_unittest 
Running microbenchmarks.
WARNING: Compiled with assertions enabled, will be slow.
WARNING: Compiled without optimization, will be slow.
Benchmark            Time(ns)    CPU(ns) Iterations
---------------------------------------------------
BM_UFlat/0             322510     285070        699 342.6MB/s  html
BM_UFlat/1            2899060    2877930        100 232.7MB/s  urls
BM_UFlat/2               5723       5685      34722 20.2GB/s  jpg
BM_UFlat/3                735        698     270270 273.2MB/s  jpg_200
BM_UFlat/4              30219      30008       6633 3.2GB/s  pdf
BM_UFlat/5            1151541    1111145        179 351.6MB/s  html4
BM_UFlat/6            1300909    1266677        155 114.5MB/s  txt1
BM_UFlat/7            1070079    1071375        176 111.4MB/s  txt2
BM_UFlat/8            3298810    3279140        100 124.1MB/s  txt3
BM_UFlat/9            4390680    4379370        100 104.9MB/s  txt4
BM_UFlat/10            238867     237990        829 475.2MB/s  pb
BM_UFlat/11           1346576    1373402        144 128.0MB/s  gaviota
BM_UIOVec/0            313127     304832        673 320.4MB/s  html
BM_UIOVec/1           2763230    2778320        100 241.0MB/s  urls
BM_UIOVec/2              6173       5817      25348 19.7GB/s  jpg
BM_UIOVec/3              1283       1289     153846 147.9MB/s  jpg_200
BM_UIOVec/4             34935      33998       5851 2.8GB/s  pdf
BM_UValidate/0         136162     138365       1499 705.8MB/s  html
BM_UValidate/1        1478303    1505015        132 444.9MB/s  urls
BM_UValidate/2            679        674     235294 169.8GB/s  jpg
BM_UValidate/3            433        424     465116 449.4MB/s  jpg_200
BM_UValidate/4          13209      12990      15232 7.3GB/s  pdf
BM_ZFlat/0             809714     797008        249 122.5MB/s  html (22.31 %)
BM_ZFlat/1           10190700   10109470        100 66.2MB/s  urls (47.78 %)
BM_ZFlat/2              26991      27401       7601 4.2GB/s  jpg (99.95 %)
BM_ZFlat/3               3713       3638      51813 52.4MB/s  jpg_200 (73.00 %)
BM_ZFlat/4             100196      99559       2001 980.9MB/s  pdf (83.30 %)
BM_ZFlat/5            3227490    3268850        100 119.5MB/s  html4 (22.52 %)
BM_ZFlat/6            2908660    2874960        100 50.5MB/s  txt1 (57.88 %)
BM_ZFlat/7            2449890    2386580        100 50.0MB/s  txt2 (61.91 %)
BM_ZFlat/8            8021930    7817580        100 52.1MB/s  txt3 (54.99 %)
BM_ZFlat/9           10034460    9834090        100 46.7MB/s  txt4 (66.26 %)
BM_ZFlat/10            767171     754140        263 150.0MB/s  pb (19.68 %)
BM_ZFlat/11           2585760    2578570        100 68.2MB/s  gaviota (37.72 %)
BM_ZFlatAll/0        42188290   41585160        100 67.2MB/s  12 files
BM_ZFlatIncreasingTableSize/0     450571     456592        434 67.9MB/s  7 tables

Running correctness tests.
Crazy decompression lengths not checked on 64-bit build
All tests passed.
[root@ocsvm2 snappy]#

Please suggest next steps.

Comment 17 Ulrich Weigand 2020-10-07 13:29:06 UTC
OK, quick update.  I just built mainline noobaa-core from GitHub (running "npm install" and "npm run build"), and then executed the following command, which appears to reproduce the error we've seen in the ocs-ci test case:

uweigand@m8345019:~/noobaa-core$ node src/tools/coding_speed.js --size 1000 --decode --md5 --sha256 --compare
OpenSSL 1.1.1c  28 May 2019 setting up
init_rand_seed: starting ...
read_rand_seed: opening /dev/random ...
NO LOCAL CONFIG
Arguments: {
  "size": 1000,
  "decode": true,
  "md5": true,
  "sha256": true,
  "compare": true,
  "forks": 1,
  "encode": true,
  "erase": true,
  "ec": false,
  "verbose": false,
  "sse_c": false
}
(node:53580) [DEP0091] DeprecationWarning: crypto.DEFAULT_ENCODING is deprecated.
(node:53580) [DEP0010] DeprecationWarning: crypto.createCredentials is deprecated. Use tls.createSecureContext instead.
(node:53580) [DEP0011] DeprecationWarning: crypto.Credentials is deprecated. Use tls.SecureContext instead.
read_rand_seed: reading 32 bytes from /dev/random ...
read_rand_seed: got 32 bytes from /dev/random, total 32 ...
init_rand_seed: seeding with 32 bytes
rand_seed: OpenSSL 1.1.1c  28 May 2019 seeding randomness
Unhandled rejection Error: had chunk errors

Unhandled rejection Error: had chunk errors
CHUNK ERRORS: nb_snappy_uncompress: invalid data

    at p.promise.then.catch.err (/home/uweigand/noobaa-core/src/tools/coding_speed.js:146:19)
    at tryCatcher (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromise0 (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/promise.js:649:10)
    at Promise._settlePromises (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/promise.js:725:18)
    at _drainQueueStep (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/async.js:93:12)
    at _drainQueue (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/home/uweigand/noobaa-core/node_modules/bluebird/js/release/async.js:15:14)
    at runCallback (timers.js:705:18)
    at tryOnImmediate (timers.js:676:5)
    at processImmediate (timers.js:658:5)

generate_entropy: entropy_avail 2977
init_rand_seed: done


Will continue to investigate.

Comment 18 Ulrich Weigand 2020-10-07 17:21:14 UTC
Found the problem.  As you suggested, it has to do with whether snappy is built using cmake or not.  When building with cmake, the build infrastructure predefines the variable SNAPPY_IS_BIG_ENDIAN on big-endian platforms, causing the snappy code to perform all the appropriate byte swaps.

But when built without cmake, that endianness check is never run and the variable is never defined, so the code always assumes a little-endian host.
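
To illustrate the failure mode, here is a minimal standalone sketch (assumed for illustration, not snappy's actual source) of the kind of little-endian load the decoder depends on:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Sketch: snappy stores varints, literal lengths and copy offsets in
// little-endian wire order and reads them with helpers like this one.
static inline uint32_t LoadLE32(const void* p) {
  uint32_t v;
  std::memcpy(&v, p, sizeof(v));
#if defined(SNAPPY_IS_BIG_ENDIAN)
  v = __builtin_bswap32(v);  // swap back to little-endian wire order
#endif
  return v;
}

int main() {
  const unsigned char wire[4] = {0x01, 0x00, 0x00, 0x00};  // LE encoding of 1
  // Prints 1 on little-endian hosts, and on big-endian hosts compiled
  // with -DSNAPPY_IS_BIG_ENDIAN; without the define, s390x reads this
  // as 16777216, so lengths and offsets are garbage and the decoder
  // reports the stream as invalid data.
  std::printf("%u\n", LoadLE32(wire));
  return 0;
}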

The following quick hack gets it to work correctly on IBM Z:

diff --git a/src/native/common.gypi b/src/native/common.gypi
index 1a8374904..555fe85f8 100644
--- a/src/native/common.gypi
+++ b/src/native/common.gypi
@@ -82,7 +82,7 @@
                 'defines!': ['DEBUG', '_DEBUG'],
                 'cflags!': ['-Os', '-O0', '-O1', '-O2'],
                 'cflags_cc!': ['-Os', '-O0', '-O1', '-O2'],
-                'cflags': ['-O3'],
+                'cflags': ['-O3', '-DSNAPPY_IS_BIG_ENDIAN'],
                 'xcode_settings': {
                     'GCC_GENERATE_DEBUGGING_SYMBOLS': 'NO',
                     'GCC_INLINES_ARE_PRIVATE_EXTERN': 'YES',

Of course this is not a correct fix, but I'm not really familiar with the node-gyp infrastructure here.  Is there a place where defines that would otherwise have been set by cmake-time checks are set instead?  The variable should be added there.
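
For illustration only, a scoped variant of the hack might look like this in common.gypi, assuming gyp's standard target_arch variable (untested; the list of big-endian arches would need review):

'conditions': [
    # Sketch: set the define only for big-endian targets instead of
    # forcing it into cflags on every platform.
    ['target_arch=="s390x" or target_arch=="s390"', {
        'defines': ['SNAPPY_IS_BIG_ENDIAN'],
    }],
],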


With the quick hack above applied, I now see the following output instead:

uweigand@m8345019:~/noobaa-core$ node src/tools/coding_speed.js --size 1000 --decode --md5 --sha256 --compare 
OpenSSL 1.1.1c  28 May 2019 setting up
init_rand_seed: starting ...
read_rand_seed: opening /dev/random ...
NO LOCAL CONFIG
Arguments: {
  "size": 1000,
  "decode": true,
  "md5": true,
  "sha256": true,
  "compare": true,
  "forks": 1,
  "encode": true,
  "erase": true,
  "ec": false,
  "verbose": false,
  "sse_c": false
}
(node:477) [DEP0091] DeprecationWarning: crypto.DEFAULT_ENCODING is deprecated.
(node:477) [DEP0010] DeprecationWarning: crypto.createCredentials is deprecated. Use tls.createSecureContext instead.
(node:477) [DEP0011] DeprecationWarning: crypto.Credentials is deprecated. Use tls.SecureContext instead.
read_rand_seed: reading 32 bytes from /dev/random ...
read_rand_seed: got 32 bytes from /dev/random, total 32 ...
init_rand_seed: seeding with 32 bytes
rand_seed: OpenSSL 1.1.1c  28 May 2019 seeding randomness
generate_entropy: entropy_avail 3071
init_rand_seed: done
Chunk Coder Speed: 222.8 MB/sec (average 222.8)
Chunk Coder Speed: 263.9 MB/sec (average 242.9)
Chunk Coder Speed: 272.2 MB/sec (average 252.6)
AVERAGE CHUNK SIZE 4032985
MD5 = GSi8JUxoJgTmxU5e9CASPw==
SHA256 = KnAqG31ltDQqRkoSRLHfAN5M+pMGidUzn6TdL/7nJgY=

Comment 22 Venkat 2020-10-21 20:59:58 UTC
I've verified this issue on the latest OCS builds and it is working as expected.

[root@ocplnx31 ~]# oc get csv
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.6.0-607.ci   OpenShift Container Storage   4.6.0-607.ci              Succeeded
[root@ocplnx31 ~]# oc version
Client Version: 4.5.16
Server Version: 4.5.15
Kubernetes Version: v1.18.3+2fbd7c7
[root@ocplnx31 ~]# 

13:47:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod ls -A1 /original
13:47:49 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Downloaded objects: ['airbus.jpg', 'apple.mp4', 'bolder.jpg', 'book.txt', 'canada.jpg', 'danny.webm', 'danny2.webm', 'danny3.webm', 'enwik8', 'goldman.webm', 'random1.txt', 'random10.txt', 'random2.txt', 'random3.txt', 'random4.txt', 'random5.txt', 'random6.txt', 'random7.txt', 'random8.txt', 'random9.txt', 'rome.jpg', 'steve.webm']
13:47:49 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Syncing all objects and directories from /original to s3://s3-bucket-d817b354a1d1446ca203169b09df140f
13:47:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3 --endpoint=***** sync /original s3://s3-bucket-d817b354a1d1446ca203169b09df140f"
13:47:59 - MainThread - tests.manage.mcg.test_object_integrity - INFO - Downloading all objects from MCG bucket to awscli pod
13:47:59 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Syncing all objects and directories from s3://s3-bucket-d817b354a1d1446ca203169b09df140f to /result
13:47:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3 --endpoint=***** sync s3://s3-bucket-d817b354a1d1446ca203169b09df140f /result"
13:48:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/airbus.jpg /result/airbus.jpg
13:48:09 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/airbus.jpg and /result/airbus.jpg
13:48:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/apple.mp4 /result/apple.mp4
13:48:11 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/apple.mp4 and /result/apple.mp4
13:48:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/bolder.jpg /result/bolder.jpg
13:48:13 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/bolder.jpg and /result/bolder.jpg
13:48:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/book.txt /result/book.txt
13:48:15 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/book.txt and /result/book.txt
13:48:15 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/canada.jpg /result/canada.jpg
13:48:16 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/canada.jpg and /result/canada.jpg
13:48:16 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/danny.webm /result/danny.webm
13:48:18 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/danny.webm and /result/danny.webm
13:48:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/danny2.webm /result/danny2.webm
13:48:20 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Passed: MD5 comparison for /original/danny2.webm and /result/danny2.webm
13:48:20 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n default --kubeconfig /root/ocp4-workdir-tb4/auth/kubeconfig rsh awscli-relay-pod md5sum /original/danny3.webm /result/danny3.webm

Comment 23 Prasad Desala 2020-10-22 09:57:08 UTC
Thank you, Venkat. Moving this BZ to the Verified state based on comment 22.

Comment 24 Mudit Agarwal 2020-10-28 14:51:16 UTC
Nimrod, do we need doc_text here? The flag was set automatically because the severity was high.

Comment 25 Nimrod Becker 2020-10-28 16:04:52 UTC
We don't; it's part of the multiarch effort. When OCS is released on Z, there will be enough docs :)

Comment 28 errata-xmlrpc 2020-12-17 06:24:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605