Red Hat Bugzilla – Attachment 1973842 Details for Bug 2219311
[ocs-ci] test_multipart_upload_operations failed on MCG only environment
Description: test console output
Filename:    test-multipart-upload-operations.log
MIME Type:   text/plain
Creator:     Bob Liu
Created:     2023-07-03 08:19:31 UTC
Size:        65.71 KB
============================= test session starts ==============================
platform linux -- Python 3.9.16, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /root/ocs-ci, configfile: pytest.ini
plugins: flaky-3.7.0, repeat-0.9.1, ordering-0.6, metadata-1.11.0, logger-0.5.1, marker-bugzilla-0.9.4, html-3.1.1
collected 1 item

tests/manage/mcg/test_multipart_upload.py::TestS3MultipartUpload::test_multipart_upload_operations
-------------------------------- live log setup --------------------------------
14:45:23 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: OCS4-10-Downstream-OCP4-10-BAREMETAL-UPI-1AZ-RHCOS-3M-3W
14:45:23 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: OCS4-10-Downstream-OCP4-10-BAREMETAL-UPI-1AZ-RHCOS-3M-3W
14:45:23 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc version -o json
14:45:24 - MainThread - ocs_ci.utility.utils - INFO - Retrieving the authentication config dictionary
14:45:24 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get lvmcluster -n openshift-storage -o yaml
14:45:26 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get cephcluster -n openshift-storage -o yaml
14:45:27 - MainThread - ocs_ci.ocs.cluster - INFO - Detected CephCluster is installed
14:45:27 - MainThread - ocs_ci.ocs.utils - INFO - Skipping Ceph toolbox setup due to running in MCG only mode
14:45:27 - MainThread - tests.conftest - INFO - All logs located at /tmp/ocs-ci-logs-1688366716
14:45:27 - MainThread - tests.conftest - INFO - Skipping client download
14:45:27 - MainThread - tests.conftest - INFO - Skipping version reporting for development mode.
14:45:27 - MainThread - tests.conftest - INFO - PagerDuty service is not created because platform from ['openshiftdedicated', 'rosa'] is not used
14:45:27 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: OCS4-10-Downstream-OCP4-10-BAREMETAL-UPI-1AZ-RHCOS-3M-3W
14:45:27 - MainThread - tests.conftest - INFO - Trying to create the AWS CLI service CA
14:45:27 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding ConfigMap with name session-awscli-service-ca-ffc537796dd94f
14:45:27 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    service.beta.openshift.io/inject-cabundle: 'true'
  name: session-awscli-service-ca-ffc537796dd94f
  namespace: openshift-storage

14:45:27 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage create -f /tmp/ConfigMapkb5speb6 -o yaml
14:45:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get ConfigMap session-awscli-service-ca-ffc537796dd94f -n openshift-storage -o yaml
14:45:29 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding StatefulSet with name s3cli
14:45:29 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: s3cli
  namespace: openshift-storage
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: s3cli
  template:
    metadata:
      labels:
        app: s3cli
    spec:
      containers:
      - command:
        - /bin/sh
        image: quay.io/ocsci/s3-cli-with-test-objects-multiarch:1.0
        name: s3cli
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /cert/service-ca.crt
          name: service-ca
          subPath: service-ca.crt
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          name: session-awscli-service-ca-ffc537796dd94f
        name: service-ca
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates: []

14:45:29 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage create -f /tmp/StatefulSet98742vri -o yaml
14:45:30 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get StatefulSet s3cli -n openshift-storage -o yaml
14:45:30 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=s3cli -o yaml
14:45:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Proxy cluster -o yaml
14:45:32 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind ConfigMap identified by name 'session-awscli-service-ca-ffc537796dd94f' using selector None at column name DATA to reach desired condition 1
14:45:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get ConfigMap session-awscli-service-ca-ffc537796dd94f -n openshift-storage -o yaml
14:45:33 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get ConfigMap session-awscli-service-ca-ffc537796dd94f -n openshift-storage
14:45:34 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get ConfigMap -n openshift-storage -o yaml
14:45:36 - MainThread - ocs_ci.ocs.ocp - INFO - status of session-awscli-service-ca-ffc537796dd94f at DATA reached condition!
14:45:36 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name 's3cli-0' using selector None at column name STATUS to reach desired condition Running
14:45:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod s3cli-0 -n openshift-storage -o yaml
14:45:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod s3cli-0 -n openshift-storage
14:45:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml
14:45:41 - MainThread - ocs_ci.ocs.ocp - INFO - status of s3cli-0 at STATUS reached condition!
14:45:41 - MainThread - ocs_ci.helpers.helpers - INFO - Pod s3cli-0 reached state Running
14:45:41 - MainThread - tests.conftest - INFO - Looking for RGW service to expose
14:45:41 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Service -n openshift-storage --selector=app=rook-ceph-rgw -o yaml
14:45:41 - MainThread - tests.conftest - INFO - RGW service is not available
14:45:41 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/cloud_manager.py - INFO - Trying to load credentials from ocs-ci-data. This flow is only relevant when running under OCS-QE environments.
14:45:41 - MainThread - /root/ocs-ci/ocs_ci/utility/aws.py - INFO - Fetching authentication credentials from ocs-ci-data
14:45:43 - MainThread - /root/ocs-ci/ocs_ci/utility/aws.py - WARNING - Failed to fetch auth.yaml from ocs-ci-data
14:45:43 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/cloud_manager.py - WARNING - Failed to load credentials from ocs-ci-data.
Your local AWS credentials might be misconfigured.
Trying to load credentials from local auth.yaml instead
14:45:43 - MainThread - ocs_ci.utility.utils - INFO - Retrieving the authentication config dictionary
14:45:43 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/cloud_manager.py - WARNING - Local auth.yaml not found, or failed to load. All cloud clients will be instantiated as None.
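The setup above relies on OpenShift's service CA injection: the service.beta.openshift.io/inject-cabundle annotation makes the service-ca operator populate the ConfigMap with the cluster's service CA bundle, and the s3cli StatefulSet mounts it at /cert/service-ca.crt so the AWS CLI inside the pod can trust the in-cluster NooBaa S3 endpoint. A minimal boto3 sketch of the client configuration this enables (illustrative, not ocs-ci code; the endpoint URL matches the one masked as ***** later in this log, and the credentials come from the noobaa-admin secret fetched below):

import boto3

# Sketch: talk to the in-cluster NooBaa S3 endpoint while trusting the
# injected service CA, as the AWS CLI in the s3cli pod does via AWS_CA_BUNDLE.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.openshift-storage.svc:443",
    aws_access_key_id="...",       # from the noobaa-admin secret
    aws_secret_access_key="...",   # from the noobaa-admin secret
    region_name="us-east-2",
    verify="/cert/service-ca.crt", # CA bundle mounted from the injected ConfigMap
)
s3.list_buckets()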
14:45:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get StorageCluster -n openshift-storage -o yaml
14:45:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get storageclass ocs-storagecluster-ceph-rgw -o yaml
14:45:45 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Error from server (NotFound): storageclasses.storage.k8s.io "ocs-storagecluster-ceph-rgw" not found

14:45:45 - MainThread - ocs_ci.ocs.ocp - WARNING - Failed to get resource: ocs-storagecluster-ceph-rgw of kind: storageclass, selector: None, Error: Error during execution of command: oc get storageclass ocs-storagecluster-ceph-rgw -o yaml.
Error is Error from server (NotFound): storageclasses.storage.k8s.io "ocs-storagecluster-ceph-rgw" not found

14:45:45 - MainThread - ocs_ci.ocs.ocp - WARNING - Number of attempts to get resource reached!
14:45:45 - Dummy-2 - ocs_ci.utility.utils - INFO - Executing command: oc get Pod -A -o yaml
14:45:45 - Dummy-3 - ocs_ci.utility.utils - INFO - Executing command: oc get StorageClass -A -o yaml
14:45:45 - Dummy-4 - ocs_ci.utility.utils - INFO - Executing command: oc get CephFileSystem -A -o yaml
14:45:45 - Dummy-5 - ocs_ci.utility.utils - INFO - Executing command: oc get CephBlockPool -A -o yaml
14:45:45 - Dummy-6 - ocs_ci.utility.utils - INFO - Executing command: oc get PersistentVolume -A -o yaml
14:45:45 - Dummy-7 - ocs_ci.utility.utils - INFO - Executing command: oc get PersistentVolumeClaim -A -o yaml
14:45:45 - Dummy-8 - ocs_ci.utility.utils - INFO - Executing command: oc get Namespace -A -o yaml
14:45:45 - Dummy-9 - ocs_ci.utility.utils - INFO - Executing command: oc get volumesnapshot -A -o yaml
14:46:25 - MainThread - tests.conftest - INFO - Skipping health checks for MCG only mode
14:46:25 - MainThread - tests.conftest - INFO - Skipping alert check for development mode
14:46:25 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get storagecluster -n openshift-storage -o yaml
14:46:26 - MainThread - tests.conftest - INFO - Changing minimum Noobaa endpoints to 2
14:46:26 - MainThread - ocs_ci.ocs.ocp - INFO - Command: patch storagecluster ocs-storagecluster -n openshift-storage -p '{"spec":{"multiCloudGateway":{"endpoints":{"minCount":2}}}}' --type merge
14:46:26 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage patch storagecluster ocs-storagecluster -n openshift-storage -p '{"spec":{"multiCloudGateway":{"endpoints":{"minCount":2}}}}' --type merge
14:46:27 - MainThread - tests.conftest - INFO - Changing maximum Noobaa endpoints to 2
14:46:27 - MainThread - ocs_ci.ocs.ocp - INFO - Command: patch storagecluster ocs-storagecluster -n openshift-storage -p '{"spec":{"multiCloudGateway":{"endpoints":{"maxCount":2}}}}' --type merge
14:46:27 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage patch storagecluster ocs-storagecluster -n openshift-storage -p '{"spec":{"multiCloudGateway":{"endpoints":{"maxCount":2}}}}' --type merge
14:46:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-s3=noobaa -o yaml
14:46:30 - MainThread - tests.conftest - INFO - Waiting for the NooBaa endpoints to stabilize. Current ready count: 10
14:46:30 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
14:47:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-s3=noobaa -o yaml
14:47:01 - MainThread - tests.conftest - INFO - Waiting for the NooBaa endpoints to stabilize. Current ready count: 10
14:47:01 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
14:47:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-s3=noobaa -o yaml
14:47:33 - MainThread - tests.conftest - INFO - NooBaa endpoints stabilized. Ready endpoints: 2
14:47:33 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-operator=deployment -o yaml
14:47:34 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-core=noobaa -o yaml
14:47:35 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name 'noobaa-operator-75d6996c65-wxbbh' using selector None at column name STATUS to reach desired condition Running
14:47:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-75d6996c65-wxbbh -n openshift-storage -o yaml
14:47:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-75d6996c65-wxbbh -n openshift-storage
14:47:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml
14:47:40 - MainThread - ocs_ci.ocs.ocp - INFO - status of noobaa-operator-75d6996c65-wxbbh at STATUS reached condition!
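The two oc patch commands above pin the NooBaa endpoint autoscaler to exactly two endpoints (minCount == maxCount == 2), after which the harness polls the noobaa-s3 pods every 30 seconds until the ready count matches. A rough sketch of that wait loop, with assumed semantics (the real implementation lives in tests/conftest in ocs-ci and differs in detail):

import time
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def ready_noobaa_s3_pods(namespace="openshift-storage"):
    # Count pods matching the same selector the log queries with
    pods = core.list_namespaced_pod(namespace, label_selector="noobaa-s3=noobaa")
    return sum(
        1
        for pod in pods.items
        if pod.status.container_statuses
        and all(cs.ready for cs in pod.status.container_statuses)
    )

expected = 2  # minCount == maxCount == 2 after the patches above
while ready_noobaa_s3_pods() != expected:
    time.sleep(30)  # matches the 30-second sleep between iterations in the log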
14:47:40 - MainThread - ocs_ci.helpers.helpers - INFO - Pod noobaa-operator-75d6996c65-wxbbh reached state Running
14:47:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-75d6996c65-wxbbh -n openshift-storage -o yaml
14:47:41 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-operator-75d6996c65-wxbbh bash -c "md5sum /usr/local/bin/noobaa-operator"
14:47:44 - MainThread - ocs_ci.ocs.resources.pod - INFO - md5sum of file /usr/local/bin/noobaa-operator: 3b9df4f5df08e22208aa5ff5b3cb4d5d
14:47:44 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Remote noobaa cli md5 hash: 3b9df4f5df08e22208aa5ff5b3cb4d5d
14:47:44 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Local noobaa cli md5 hash: 3b9df4f5df08e22208aa5ff5b3cb4d5d
14:47:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-ingress-operator get secret router-ca -n openshift-ingress-operator -o yaml
14:47:45 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get noobaa -n openshift-storage -o yaml
14:47:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get secret noobaa-admin -n openshift-storage -o yaml
14:47:47 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Sending MCG RPC query:
auth_api create_auth {'role': 'admin', 'system': 'noobaa', 'email': 'admin@noobaa.io', 'password': 'phht0vDIB6IGzj0OktPEug=='}
14:47:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 mkdir -p test_multipart_upload_operations
14:47:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 mkdir -p test_multipart_upload_operations/origin
14:47:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 mkdir -p test_multipart_upload_operations/result
-------------------------------- live log call ---------------------------------
14:47:55 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Creating bucket: oc-bucket-1dbe4c9d4f714be381b393819ed2f2
14:47:55 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding ObjectBucketClaim with name oc-bucket-1dbe4c9d4f714be381b393819ed2f2
14:47:55 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: oc-bucket-1dbe4c9d4f714be381b393819ed2f2
  namespace: openshift-storage
spec:
  bucketName: oc-bucket-1dbe4c9d4f714be381b393819ed2f2
  storageClassName: openshift-storage.noobaa.io

14:47:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage create -f /tmp/ObjectBucketClaim348x2hkz -o yaml
14:47:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get ObjectBucketClaim oc-bucket-1dbe4c9d4f714be381b393819ed2f2 -n openshift-storage -o yaml
14:47:57 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Waiting for oc-bucket-1dbe4c9d4f714be381b393819ed2f2 to be healthy
14:47:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get obc oc-bucket-1dbe4c9d4f714be381b393819ed2f2 -n openshift-storage -o yaml
14:47:58 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - oc-bucket-1dbe4c9d4f714be381b393819ed2f2 status is Bound
14:47:58 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - oc-bucket-1dbe4c9d4f714be381b393819ed2f2 is healthy
14:47:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "dd if=/dev/urandom of=test_multipart_upload_operations/origin/ObjKey-982b6298fafc4df6bc836b365eca407d bs=1MB count=500; split -a 1 -b 41m test_multipart_upload_operations/origin/ObjKey-982b6298fafc4df6bc836b365eca407d test_multipart_upload_operations/result/part"
14:48:11 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: 500+0 records in
500+0 records out
500000000 bytes (500 MB) copied, 10.5355 s, 47.5 MB/s

14:48:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "ls -1 test_multipart_upload_operations/result"
14:48:13 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Aborting any Multipart Upload on bucket:oc-bucket-1dbe4c9d4f714be381b393819ed2f2
14:48:14 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Aborting4 uploads
14:48:14 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Initiating Multipart Upload on Bucket: oc-bucket-1dbe4c9d4f714be381b393819ed2f2 with Key ObjKey-982b6298fafc4df6bc836b365eca407d
14:48:15 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Listing the Multipart Upload : {'ResponseMetadata': {'RequestId': 'ljmi21vh-4qp77n-j8i', 'HostId': 'ljmi21vh-4qp77n-j8i', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'ljmi21vh-4qp77n-j8i', 'x-amz-id-2': 'ljmi21vh-4qp77n-j8i', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '585', 'date': 'Mon, 03 Jul 2023 06:48:15 GMT', 'keep-alive': 'timeout=5', 'set-cookie': '1a4aa612fe797ac8466d7ee00e5520d5=3620695ae4f33a628f840f17130975cd; path=/; HttpOnly', 'cache-control': 'private'}, 'RetryAttempts': 0}, 'Bucket': 'oc-bucket-1dbe4c9d4f714be381b393819ed2f2', 'MaxUploads': 1000, 'IsTruncated': False, 'Uploads': [{'UploadId': '64a26f2e3bdb72000ed3c7cd', 'Key': 'ObjKey-982b6298fafc4df6bc836b365eca407d', 'Initiated': datetime.datetime(2023, 7, 3, 6, 48, 14, tzinfo=tzutc()), 'StorageClass': 'STANDARD', 'Owner': {'DisplayName': 'NooBaa', 'ID': '123'}, 'Initiator': {'ID': '123', 'DisplayName': 'NooBaa'}}]}
14:48:15 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Uploading individual parts to the bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2
14:48:15 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 1 --body test_multipart_upload_operations/result/parta --upload-id 64a26f2e3bdb72000ed3c7cd"
14:48:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 2 --body test_multipart_upload_operations/result/partb --upload-id 64a26f2e3bdb72000ed3c7cd"
14:48:29 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 3 --body test_multipart_upload_operations/result/partc --upload-id 64a26f2e3bdb72000ed3c7cd"
14:48:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 4 --body test_multipart_upload_operations/result/partd --upload-id 64a26f2e3bdb72000ed3c7cd"
14:48:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 5 --body test_multipart_upload_operations/result/parte --upload-id 64a26f2e3bdb72000ed3c7cd"
14:48:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 6 --body test_multipart_upload_operations/result/partf --upload-id 64a26f2e3bdb72000ed3c7cd"
14:48:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 7 --body test_multipart_upload_operations/result/partg --upload-id 64a26f2e3bdb72000ed3c7cd"
14:49:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 8 --body test_multipart_upload_operations/result/parth --upload-id 64a26f2e3bdb72000ed3c7cd"
14:49:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 9 --body test_multipart_upload_operations/result/parti --upload-id 64a26f2e3bdb72000ed3c7cd"
14:49:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 10 --body test_multipart_upload_operations/result/partj --upload-id 64a26f2e3bdb72000ed3c7cd"
14:49:24 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 11 --body test_multipart_upload_operations/result/partk --upload-id 64a26f2e3bdb72000ed3c7cd"
14:49:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3api --endpoint=***** upload-part --bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2 --key ObjKey-982b6298fafc4df6bc836b365eca407d --part-number 12 --body test_multipart_upload_operations/result/partl --upload-id 64a26f2e3bdb72000ed3c7cd"
14:49:36 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Listing the individual parts : {'ResponseMetadata': {'RequestId': 'ljmi3swh-fzqqz4-c67', 'HostId': 'ljmi3swh-fzqqz4-c67', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'ljmi3swh-fzqqz4-c67', 'x-amz-id-2': 'ljmi3swh-fzqqz4-c67', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '2576', 'date': 'Mon, 03 Jul 2023 06:49:36 GMT', 'keep-alive': 'timeout=5', 'set-cookie': '1a4aa612fe797ac8466d7ee00e5520d5=900324da599bb59ee68ecd3b0ba3b987; path=/; HttpOnly', 'cache-control': 'private'}, 'RetryAttempts': 0}, 'Bucket': 'oc-bucket-1dbe4c9d4f714be381b393819ed2f2', 'Key': 'ObjKey-982b6298fafc4df6bc836b365eca407d', 'UploadId': '64a26f2e3bdb72000ed3c7cd', 'PartNumberMarker': 0, 'MaxParts': 1000, 'IsTruncated': False, 'Parts': [{'PartNumber': 1, 'LastModified': datetime.datetime(2023, 7, 3, 6, 48, 21, tzinfo=tzutc()), 'ETag': '"6cac5516e573bf3e7cd989c25dd9c425"', 'Size': 42991616}, {'PartNumber': 2, 'LastModified': datetime.datetime(2023, 7, 3, 6, 48, 29, tzinfo=tzutc()), 'ETag': '"f160fba30acaaef942756fc62f97a21e"', 'Size': 42991616}, {'PartNumber': 3, 'LastModified': datetime.datetime(2023, 7, 3, 6, 48, 36, tzinfo=tzutc()), 'ETag': '"43530a13f02ad7b3bb8a46a255c091ea"', 'Size': 42991616}, {'PartNumber': 4, 'LastModified': datetime.datetime(2023, 7, 3, 6, 48, 42, tzinfo=tzutc()), 'ETag': '"ec295b228996353d292357b7c89ab3fb"', 'Size': 42991616}, {'PartNumber': 5, 'LastModified': datetime.datetime(2023, 7, 3, 6, 48, 49, tzinfo=tzutc()), 'ETag': '"c645ce989ac8a9f5c935b234c25fb658"', 'Size': 42991616}, {'PartNumber': 6, 'LastModified': datetime.datetime(2023, 7, 3, 6, 48, 58, tzinfo=tzutc()), 'ETag': '"9f4b0a0b8c74d1dcad51cb7d00c725c9"', 'Size': 42991616}, {'PartNumber': 7, 'LastModified': datetime.datetime(2023, 7, 3, 6, 49, 5, tzinfo=tzutc()), 'ETag': '"d7450e3c122fa42d1b0f7a0edadf77fa"', 'Size': 42991616}, {'PartNumber': 8, 'LastModified': datetime.datetime(2023, 7, 3, 6, 49, 11, tzinfo=tzutc()), 'ETag': '"3e136002760371515e47d8a1b563bbb9"', 'Size': 42991616}, {'PartNumber': 9, 'LastModified': datetime.datetime(2023, 7, 3, 6, 49, 17, tzinfo=tzutc()), 'ETag': '"49ba6b59c3c8bc29b9b04fad4420643d"', 'Size': 42991616}, {'PartNumber': 10, 'LastModified': datetime.datetime(2023, 7, 3, 6, 49, 24, tzinfo=tzutc()), 'ETag': '"026d63d7a41279b1aaa4b1841b3d4c61"', 'Size': 42991616}, {'PartNumber': 11, 'LastModified': datetime.datetime(2023, 7, 3, 6, 49, 31, tzinfo=tzutc()), 'ETag': '"cbd0cd0b2b5af0744ee9f11b0f190724"', 'Size': 42991616}, {'PartNumber': 12, 'LastModified': datetime.datetime(2023, 7, 3, 6, 49, 36, tzinfo=tzutc()), 'ETag': '"af9fbf7f86cf32c0af8fc40600d6e896"', 'Size': 27092224}], 'Initiator': {'ID': '123', 'DisplayName': 'NooBaa'}, 'Owner': {'DisplayName': 'NooBaa', 'ID': '123'}, 'StorageClass': 'STANDARD'}
14:49:36 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Completing the Multipart Upload on bucket: oc-bucket-1dbe4c9d4f714be381b393819ed2f2
14:49:37 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - {'ResponseMetadata': {'RequestId': 'ljmi3t2w-3bozof-15zq', 'HostId': 'ljmi3t2w-3bozof-15zq', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-request-id': 'ljmi3t2w-3bozof-15zq', 'x-amz-id-2': 'ljmi3t2w-3bozof-15zq', 'access-control-allow-origin': '*', 'access-control-allow-credentials': 'true', 'access-control-allow-methods': 'GET,POST,PUT,DELETE,OPTIONS', 'access-control-allow-headers': 'Content-Type,Content-MD5,Authorization,X-Amz-User-Agent,X-Amz-Date,ETag,X-Amz-Content-Sha256', 'access-control-expose-headers': 'ETag,X-Amz-Version-Id', 'content-type': 'application/xml', 'content-length': '452', 'date': 'Mon, 03 Jul 2023 06:49:37 GMT', 'keep-alive': 'timeout=5', 'set-cookie': '1a4aa612fe797ac8466d7ee00e5520d5=3620695ae4f33a628f840f17130975cd; path=/; HttpOnly'}, 'RetryAttempts': 0}, 'Location': '/oc-bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d?uploadId=64a26f2e3bdb72000ed3c7cd', 'Bucket': 'oc-bucket-1dbe4c9d4f714be381b393819ed2f2', 'Key': 'ObjKey-982b6298fafc4df6bc836b365eca407d', 'ETag': '"95537d12dbc49122067d279acc257f1f-12"'}
14:49:37 - MainThread - tests.manage.mcg.test_multipart_upload - INFO - Downloading the completed multipart object from MCG bucket to awscli pod
14:49:37 - MainThread - ocs_ci.ocs.bucket_utils - INFO - Syncing all objects and directories from s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2 to test_multipart_upload_operations/result
14:49:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3 --endpoint=***** sync s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2 test_multipart_upload_operations/result"
14:49:52 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: download failed: s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d to test_multipart_upload_operations/result/ObjKey-982b6298fafc4df6bc836b365eca407d Connection was closed before we received a valid response from endpoint URL: "*****/oc-bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d".
command terminated with exit code 1
14:49:53 - MainThread - ocs_ci.ocs.utils - INFO - Must gather image: quay.io/rhceph-dev/ocs-must-gather:latest-4.10 will be used.
14:49:53 - MainThread - ocs_ci.ocs.utils - INFO - OCS logs will be placed in location /tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/ocs_must_gather
14:49:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.10 --dest-dir=/tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/ocs_must_gather
15:06:39 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr:

Error running must-gather collection:
    gather did not start for pod must-gather-6ffr9: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"

Falling back to `oc adm inspect clusteroperators.v1.config.openshift.io` to collect basic cluster information.
I0703 14:50:12.766073 17520 request.go:665] Waited for 1.173714568s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
I0703 15:01:12.294604 17520 request.go:665] Waited for 1.180227402s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
error running backup collection: errors occurred while gathering data:
    [skipping gathering clusterroles.rbac.authorization.k8s.io/system:registry due to error: clusterroles.rbac.authorization.k8s.io "system:registry" not found, skipping gathering clusterrolebindings.rbac.authorization.k8s.io/registry-registry-role due to error: clusterrolebindings.rbac.authorization.k8s.io "registry-registry-role" not found, skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering endpoints/host-etcd-2 due to error: endpoints "host-etcd-2" not found, skipping gathering sharedconfigmaps.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedconfigmaps", skipping gathering sharedsecrets.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedsecrets"]
error: gather did not start for pod must-gather-6ffr9: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"

15:06:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pods -o name
15:06:41 - MainThread - ocs_ci.ocs.utils - ERROR -
15:06:41 - MainThread - ocs_ci.ocs.utils - ERROR - Failed during must gather logs! Error: Error during execution of command: oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.10 --dest-dir=/tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/ocs_must_gather.
Error is

Error running must-gather collection:
    gather did not start for pod must-gather-6ffr9: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"

Falling back to `oc adm inspect clusteroperators.v1.config.openshift.io` to collect basic cluster information.
I0703 14:50:12.766073 17520 request.go:665] Waited for 1.173714568s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
I0703 15:01:12.294604 17520 request.go:665] Waited for 1.180227402s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
error running backup collection: errors occurred while gathering data:
    [skipping gathering clusterroles.rbac.authorization.k8s.io/system:registry due to error: clusterroles.rbac.authorization.k8s.io "system:registry" not found, skipping gathering clusterrolebindings.rbac.authorization.k8s.io/registry-registry-role due to error: clusterrolebindings.rbac.authorization.k8s.io "registry-registry-role" not found, skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering endpoints/host-etcd-2 due to error: endpoints "host-etcd-2" not found, skipping gathering sharedconfigmaps.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedconfigmaps", skipping gathering sharedsecrets.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedsecrets"]
error: gather did not start for pod must-gather-6ffr9: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"
Must-Gather Output:
15:06:41 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
15:06:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > /tmp/nbcore.gz"
15:06:45 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)

15:06:45 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage cp noobaa-db-pg-0:/tmp/nbcore.gz /tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/noobaa_db_dump/nbcore.gz
FAILED
------------------------------ live log teardown -------------------------------
15:06:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh s3cli-0 rm -rf test_multipart_upload_operations
15:06:51 - MainThread - tests.conftest - INFO - Cleaning up bucket oc-bucket-1dbe4c9d4f714be381b393819ed2f2
15:06:51 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Deleting bucket: oc-bucket-1dbe4c9d4f714be381b393819ed2f2
15:06:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete obc oc-bucket-1dbe4c9d4f714be381b393819ed2f2
15:06:53 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - Verifying deletion of oc-bucket-1dbe4c9d4f714be381b393819ed2f2
15:06:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get obc -n openshift-storage -o yaml
15:06:54 - MainThread - /root/ocs-ci/ocs_ci/ocs/resources/objectbucket.py - INFO - oc-bucket-1dbe4c9d4f714be381b393819ed2f2 was deleted successfully
15:06:54 - Dummy-2 - ocs_ci.utility.utils - INFO - Executing command: oc get Pod -A -o yaml
15:06:54 - Dummy-8 - ocs_ci.utility.utils - INFO - Executing command: oc get StorageClass -A -o yaml
15:06:54 - Dummy-6 - ocs_ci.utility.utils - INFO - Executing command: oc get CephFileSystem -A -o yaml
15:06:54 - Dummy-10 - ocs_ci.utility.utils - INFO - Executing command: oc get CephBlockPool -A -o yaml
15:06:54 - Dummy-11 - ocs_ci.utility.utils - INFO - Executing command: oc get PersistentVolume -A -o yaml
15:06:54 - Dummy-7 - ocs_ci.utility.utils - INFO - Executing command: oc get PersistentVolumeClaim -A -o yaml
15:06:54 - Dummy-12 - ocs_ci.utility.utils - INFO - Executing command: oc get Namespace -A -o yaml
15:06:54 - Dummy-13 - ocs_ci.utility.utils - INFO - Executing command: oc get volumesnapshot -A -o yaml
15:07:32 - MainThread - tests.conftest - INFO - aws_client secret not found
15:07:32 - MainThread - tests.conftest - INFO - gcp_client secret not found
15:07:32 - MainThread - tests.conftest - INFO - azure_client secret not found
15:07:32 - MainThread - tests.conftest - INFO - ibmcos_client secret not found
15:07:32 - MainThread - tests.conftest - INFO - rgw_client secret not found
15:07:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete StatefulSet s3cli
15:07:33 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete ConfigMap session-awscli-service-ca-ffc537796dd94f
15:07:35 - MainThread - ocs_ci.ocs.utils - INFO - Must gather image: quay.io/rhceph-dev/ocs-must-gather:latest-4.10 will be used.
15:07:35 - MainThread - ocs_ci.ocs.utils - INFO - OCS logs will be placed in location /tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/ocs_must_gather
15:07:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.10 --dest-dir=/tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/ocs_must_gather
15:24:11 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr:

Error running must-gather collection:
    gather did not start for pod must-gather-nxbjh: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"

Falling back to `oc adm inspect clusteroperators.v1.config.openshift.io` to collect basic cluster information.
I0703 15:07:55.152276 17645 request.go:665] Waited for 1.184381451s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
I0703 15:18:43.111438 17645 request.go:665] Waited for 1.179532467s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
error running backup collection: errors occurred while gathering data:
    [skipping gathering clusterroles.rbac.authorization.k8s.io/system:registry due to error: clusterroles.rbac.authorization.k8s.io "system:registry" not found, skipping gathering clusterrolebindings.rbac.authorization.k8s.io/registry-registry-role due to error: clusterrolebindings.rbac.authorization.k8s.io "registry-registry-role" not found, skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering endpoints/host-etcd-2 due to error: endpoints "host-etcd-2" not found, skipping gathering sharedconfigmaps.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedconfigmaps", skipping gathering sharedsecrets.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedsecrets"]
error: gather did not start for pod must-gather-nxbjh: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"

15:24:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pods -o name
15:24:13 - MainThread - ocs_ci.ocs.utils - ERROR -
15:24:13 - MainThread - ocs_ci.ocs.utils - ERROR - Failed during must gather logs! Error: Error during execution of command: oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.10 --dest-dir=/tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/ocs_must_gather.
Error is

Error running must-gather collection:
    gather did not start for pod must-gather-nxbjh: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"

Falling back to `oc adm inspect clusteroperators.v1.config.openshift.io` to collect basic cluster information.
I0703 15:07:55.152276 17645 request.go:665] Waited for 1.184381451s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
I0703 15:18:43.111438 17645 request.go:665] Waited for 1.179532467s due to client-side throttling, not priority and fairness, request: GET:https://api.isf-rackemc.rtp.raleigh.ibm.com:6443/apis/storage.isf.ibm.com/v1?timeout=32s
error running backup collection: errors occurred while gathering data:
    [skipping gathering clusterroles.rbac.authorization.k8s.io/system:registry due to error: clusterroles.rbac.authorization.k8s.io "system:registry" not found, skipping gathering clusterrolebindings.rbac.authorization.k8s.io/registry-registry-role due to error: clusterrolebindings.rbac.authorization.k8s.io "registry-registry-role" not found, skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering endpoints/host-etcd-2 due to error: endpoints "host-etcd-2" not found, skipping gathering sharedconfigmaps.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedconfigmaps", skipping gathering sharedsecrets.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedsecrets"]
error: gather did not start for pod must-gather-nxbjh: unable to pull image: ImagePullBackOff: Back-off pulling image "quay.io/rhceph-dev/ocs-must-gather:latest-4.10"
Must-Gather Output:
15:24:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
15:24:14 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > /tmp/nbcore.gz"
15:24:17 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)

15:24:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage cp noobaa-db-pg-0:/tmp/nbcore.gz /tmp/failed_testcase_ocs_logs_1688366716/test_multipart_upload_operations_ocs_logs/noobaa_db_dump/nbcore.gz

tests/manage/mcg/test_multipart_upload.py::TestS3MultipartUpload::test_multipart_upload_operations ERROR

==================================== ERRORS ====================================
_ ERROR at teardown of TestS3MultipartUpload.test_multipart_upload_operations __

exclude_labels = ['must-gather', 's3cli', 'must-gather-helper-pod']

    def get_status_after_execution(exclude_labels=None):
        """
        Set the environment status and assign it into ENV_STATUS_PRE dictionary.
        In addition compare the dict before the execution and after using DeepDiff

        Args:
            exclude_labels (list): App labels to ignore leftovers

        Raises:
            ResourceLeftoversException: In case there are leftovers in the
                environment after the execution
        """
        get_environment_status(config.RUN["ENV_STATUS_POST"], exclude_labels=exclude_labels)

        pod_diff = compare_dicts(
            config.RUN["ENV_STATUS_PRE"]["pod"], config.RUN["ENV_STATUS_POST"]["pod"]
        )
        sc_diff = compare_dicts(
            config.RUN["ENV_STATUS_PRE"]["sc"], config.RUN["ENV_STATUS_POST"]["sc"]
        )
        pv_diff = compare_dicts(
            config.RUN["ENV_STATUS_PRE"]["pv"], config.RUN["ENV_STATUS_POST"]["pv"]
        )
        pvc_diff = compare_dicts(
            config.RUN["ENV_STATUS_PRE"]["pvc"], config.RUN["ENV_STATUS_POST"]["pvc"]
        )
        namespace_diff = compare_dicts(
            config.RUN["ENV_STATUS_PRE"]["namespace"],
            config.RUN["ENV_STATUS_POST"]["namespace"],
        )
        volumesnapshot_diff = compare_dicts(
            config.RUN["ENV_STATUS_PRE"]["vs"], config.RUN["ENV_STATUS_POST"]["vs"]
        )
        if config.RUN["cephcluster"]:
            cephfs_diff = compare_dicts(
                config.RUN["ENV_STATUS_PRE"]["cephfs"],
                config.RUN["ENV_STATUS_POST"]["cephfs"],
            )
            cephbp_diff = compare_dicts(
                config.RUN["ENV_STATUS_PRE"]["cephbp"],
                config.RUN["ENV_STATUS_POST"]["cephbp"],
            )
            diffs_dict = {
                "pods": pod_diff,
                "storageClasses": sc_diff,
                "cephfs": cephfs_diff,
                "cephbp": cephbp_diff,
                "pvs": pv_diff,
                "pvcs": pvc_diff,
                "namespaces": namespace_diff,
                "vs": volumesnapshot_diff,
            }
        elif config.RUN["lvm"]:
            lv_diff = compare_dicts(
                config.RUN["ENV_STATUS_PRE"]["lv"],
                config.RUN["ENV_STATUS_POST"]["lv"],
            )
            diffs_dict = {
                "pods": pod_diff,
                "storageClasses": sc_diff,
                "pvs": pv_diff,
                "pvcs": pvc_diff,
                "namespaces": namespace_diff,
                "vs": volumesnapshot_diff,
                "lv": lv_diff,
            }

        leftover_detected = False

        leftovers = {"Leftovers added": [], "Leftovers removed": []}
        for kind, kind_diff in diffs_dict.items():
            if not kind_diff:
                continue
            if kind_diff[0]:
                leftovers["Leftovers added"].append({f"***{kind}***": kind_diff[0]})
                leftover_detected = True
            if kind_diff[1]:
                leftovers["Leftovers removed"].append({f"***{kind}***": kind_diff[1]})
                leftover_detected = True
        if leftover_detected:
>           raise exceptions.ResourceLeftoversException(
                f"\nThere are leftovers in the environment after test case:"
                f"\nResources added:\n{yaml.dump(leftovers['Leftovers added'])}"
                f"\nResources "
                f"removed:\n {yaml.dump(leftovers['Leftovers removed'])}"
            )
E           ocs_ci.ocs.exceptions.ResourceLeftoversException:
E           There are leftovers in the environment after test case:
E           Resources added:
E           - '***pods***':
E             - apiVersion: v1
E               kind: Pod
E               metadata:
E                 annotations:
E                   k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.128.9.30/23"],"mac_address":"0a:58:0a:80:09:1e","gateway_ips":["10.128.8.1"],"ip_address":"10.128.9.30/23","gateway_ip":"10.128.8.1"}}'
E                   k8s.v1.cni.cncf.io/network-status: "[{\n \"name\": \"ovn-kubernetes\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.128.9.30\"\n ],\n \"mac\": \"0a:58:0a:80:09:1e\",\n \"default\": true,\n \"dns\": {}\n}]"
E                   k8s.v1.cni.cncf.io/networks-status: "[{\n \"name\": \"ovn-kubernetes\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.128.9.30\"\n ],\n \"mac\": \"0a:58:0a:80:09:1e\",\n \"default\": true,\n \"dns\": {}\n}]"
E                   openshift.io/scc: restricted
E                 creationTimestamp: '2023-07-03T06:46:02Z'
E                 generateName: noobaa-tester-546b4cd5cd-
E                 labels:
E                   app: noobaa
E                   noobaa-tester: deployment
E                   pod-template-hash: 546b4cd5cd
E                 name: noobaa-tester-546b4cd5cd-k9znf
E                 namespace: openshift-storage
E                 ownerReferences:
E                 - apiVersion: apps/v1
E                   blockOwnerDeletion: true
E                   controller: true
E                   kind: ReplicaSet
E                   name: noobaa-tester-546b4cd5cd
E                   uid: d8b5a117-5da4-49bb-843f-aa5cab95b221
E                 resourceVersion: '152046718'
E                 uid: 196a3f47-9d02-489f-ba2d-d13d37ae798c
E               spec:
E                 containers:
E                 - command:
E                   - /bin/bash
E                   - -c
E                   - while true; do sleep 10; done;
E                   envFrom:
E                   - secretRef:
E                       name: noobaa-admin
E                   image: 9.115.251.75/www/noobaa-tester:s3-tests
E                   imagePullPolicy: IfNotPresent
E                   name: noobaa-tester
E                   resources:
E                     limits:
E                       cpu: 250m
E                       memory: 512Mi
E                     requests:
E                       cpu: 250m
E                       memory: 512Mi
E                   securityContext:
E                     capabilities:
E                       drop:
E                       - KILL
E                       - MKNOD
E                       - SETGID
E                       - SETUID
E                     runAsUser: 1000690000
E                   terminationMessagePath: /dev/termination-log
E                   terminationMessagePolicy: File
E                   volumeMounts:
E                   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
E                     name: kube-api-access-xhrf6
E                     readOnly: true
E                 dnsPolicy: ClusterFirst
E                 enableServiceLinks: true
E                 imagePullSecrets:
E                 - name: noobaa-dockercfg-8h2dk
E                 nodeName: compute-1-ru9.isf-rackemc.rtp.raleigh.ibm.com
E                 preemptionPolicy: PreemptLowerPriority
E                 priority: 0
E                 restartPolicy: Always
E                 schedulerName: default-scheduler
E                 securityContext:
E                   fsGroup: 1000690000
E                   seLinuxOptions:
E                     level: s0:c26,c20
E                 serviceAccount: noobaa
E                 serviceAccountName: noobaa
E                 terminationGracePeriodSeconds: 30
E                 tolerations:
E                 - effect: NoExecute
E                   key: node.kubernetes.io/not-ready
E                   operator: Exists
E                   tolerationSeconds: 300
E                 - effect: NoExecute
E                   key: node.kubernetes.io/unreachable
E                   operator: Exists
E                   tolerationSeconds: 300
E                 - effect: NoSchedule
E                   key: node.kubernetes.io/memory-pressure
E                   operator: Exists
E                 volumes:
E                 - name: kube-api-access-xhrf6
E                   projected:
E                     defaultMode: 420
E                     sources:
E                     - serviceAccountToken:
E                         expirationSeconds: 3607
E                         path: token
E                     - configMap:
E                         items:
E                         - key: ca.crt
E                           path: ca.crt
E                         name: kube-root-ca.crt
E                     - downwardAPI:
E                         items:
E                         - fieldRef:
E                             apiVersion: v1
E                             fieldPath: metadata.namespace
E                           path: namespace
E                     - configMap:
E                         items:
E                         - key: service-ca.crt
E                           path: service-ca.crt
E                         name: openshift-service-ca.crt
E               status:
E                 conditions:
E                 - lastProbeTime: null
E                   lastTransitionTime: '2023-07-03T06:46:02Z'
E                   status: 'True'
E                   type: Initialized
E                 - lastProbeTime: null
E                   lastTransitionTime: '2023-07-03T06:46:05Z'
E                   status: 'True'
E                   type: Ready
E                 - lastProbeTime: null
E                   lastTransitionTime: '2023-07-03T06:46:05Z'
E                   status: 'True'
E                   type: ContainersReady
E                 - lastProbeTime: null
E                   lastTransitionTime: '2023-07-03T06:46:02Z'
E                   status: 'True'
E                   type: PodScheduled
E                 containerStatuses:
E                 - containerID: cri-o://7b6e2f850693ce430139ce68e2ec8b9d2708b385e64a515ec28b571532377d46
E                   image: 9.115.251.75/www/noobaa-tester:s3-tests
E                   imageID: 9.115.251.75/www/noobaa-tester@sha256:1387021e07fba40284dd598770572867fec544e1e7b56e6487c52966f3fdcb91
E                   lastState: {}
E                   name: noobaa-tester
E                   ready: true
E                   restartCount: 0
E                   started: true
E                   state:
E                     running:
E                       startedAt: '2023-07-03T06:46:05Z'
E                 hostIP: 9.42.107.221
E                 phase: Running
E                 podIP: 10.128.9.30
E                 podIPs:
E                 - ip: 10.128.9.30
E                 qosClass: Guaranteed
E                 startTime: '2023-07-03T06:46:02Z'
E
E           Resources removed:
E           []

ocs_ci/utility/environment_check.py:255: ResourceLeftoversException
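The teardown error is separate from the test failure: the environment check snapshots cluster resources before and after the test and flags the leftover noobaa-tester-546b4cd5cd-k9znf pod, which is unrelated to the multipart flow. A minimal sketch of the added/removed comparison with assumed semantics (the real helper in ocs_ci/utility/environment_check.py uses DeepDiff, per the docstring above):

def compare_snapshots(pre, post):
    """Return (added, removed) resource names between two {name: spec} snapshots."""
    added = sorted(set(post) - set(pre))
    removed = sorted(set(pre) - set(post))
    return added, removed

# Illustrative data only; names taken from the log above
pre = {"s3cli-0": "...", "noobaa-operator-75d6996c65-wxbbh": "..."}
post = {"s3cli-0": "...", "noobaa-operator-75d6996c65-wxbbh": "...",
        "noobaa-tester-546b4cd5cd-k9znf": "..."}  # the leftover flagged above
assert compare_snapshots(pre, post) == (["noobaa-tester-546b4cd5cd-k9znf"], [])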
=================================== FAILURES ===================================
____________ TestS3MultipartUpload.test_multipart_upload_operations ____________

self = <tests.manage.mcg.test_multipart_upload.TestS3MultipartUpload object at 0x7f0741a0fc70>
mcg_obj = <ocs_ci.ocs.resources.mcg.MCG object at 0x7f07141dad30>
awscli_pod_session = <ocs_ci.ocs.resources.pod.Pod object at 0x7f07419b05b0>
bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x7f071127bdc0>
test_directory_setup = SetupDirs(origin_dir='test_multipart_upload_operations/origin', result_dir='test_multipart_upload_operations/result')

    @pytest.mark.polarion_id("OCS-1387")
    @tier1
    def test_multipart_upload_operations(
        self, mcg_obj, awscli_pod_session, bucket_factory, test_directory_setup
    ):
        """
        Test Multipart upload operations on bucket and verifies the integrity of the downloaded object
        """
        bucket, key, origin_dir, res_dir, object_path, parts = setup(
            awscli_pod_session, bucket_factory, test_directory_setup
        )

        # Abort all Multipart Uploads for this Bucket (optional, for starting over)
        logger.info(f"Aborting any Multipart Upload on bucket:{bucket}")
        abort_all_multipart_upload(mcg_obj, bucket, key)

        # Create & list Multipart Upload on the Bucket
        logger.info(f"Initiating Multipart Upload on Bucket: {bucket} with Key {key}")
        upload_id = create_multipart_upload(mcg_obj, bucket, key)
        logger.info(
            f"Listing the Multipart Upload : {list_multipart_upload(mcg_obj, bucket)}"
        )

        # Uploading individual parts to the Bucket
        logger.info(f"Uploading individual parts to the bucket {bucket}")
        uploaded_parts = upload_parts(
            mcg_obj, awscli_pod_session, bucket, key, res_dir, upload_id, parts
        )

        # Listing the Uploaded parts
        logger.info(
            f"Listing the individual parts : {list_uploaded_parts(mcg_obj, bucket, key, upload_id)}"
        )

        # Completing the Multipart Upload
        logger.info(f"Completing the Multipart Upload on bucket: {bucket}")
        logger.info(
            complete_multipart_upload(mcg_obj, bucket, key, upload_id, uploaded_parts)
        )

        # Checksum Validation: Downloading the object after completing Multipart Upload and verifying its integrity
        logger.info(
            "Downloading the completed multipart object from MCG bucket to awscli pod"
        )
>       sync_object_directory(awscli_pod_session, object_path, res_dir, mcg_obj)

tests/manage/mcg/test_multipart_upload.py:106:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
ocs_ci/ocs/bucket_utils.py:344: in sync_object_directory
    podobj.exec_cmd_on_pod(
ocs_ci/ocs/resources/pod.py:174: in exec_cmd_on_pod
    return self.ocp.exec_oc_cmd(
ocs_ci/ocs/ocp.py:163: in exec_oc_cmd
    out = run_cmd(
ocs_ci/utility/utils.py:477: in run_cmd
    completed_process = exec_cmd(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

cmd = ['oc', '-n', 'openshift-storage', 'rsh', 's3cli-0', 'sh', ...]
secrets = ['JsBbMYIPhqEv17Fsvm0L', 'e9cORmxCU7FryC/aXIlwyaiQPWEeUZ7gwa4tR+YS', 'https://s3.openshift-storage.svc:443']
timeout = 600, ignore_error = False, threading_lock = None, silent = False
kwargs = {}
masked_cmd = 'oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCE...-2 aws s3 --endpoint=***** sync s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2 test_multipart_upload_operations/result"'
completed_process = CompletedProcess(args=['oc', '-n', 'openshift-storage', 'rsh', 's3cli-0', 'sh', '-c', 'AWS_CA_BUNDLE=/cert/service-ca....ucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d".\ncommand terminated with exit code 1\n')
masked_stdout = ''
masked_stderr = 'download failed: s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d to test_multip...bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d".\ncommand terminated with exit code 1\n'

    def exec_cmd(
        cmd,
        secrets=None,
        timeout=600,
        ignore_error=False,
        threading_lock=None,
        silent=False,
        **kwargs,
    ):
        """
        Run an arbitrary command locally

        If the command is grep and matching pattern is not found, then this function
        returns "command terminated with exit code 1" in stderr.

        Args:
            cmd (str): command to run
            secrets (list): A list of secrets to be masked with asterisks
                This kwarg is popped in order to not interfere with
                subprocess.run(``**kwargs``)
            timeout (int): Timeout for the command, defaults to 600 seconds.
            ignore_error (bool): True if ignore non zero return code and do not
                raise the exception.
            threading_lock (threading.Lock): threading.Lock object that is used
                for handling concurrent oc commands
            silent (bool): If True will silent errors from the server, default false

        Raises:
            CommandFailed: In case the command execution fails

        Returns:
            (CompletedProcess) A CompletedProcess object of the command that was executed
            CompletedProcess attributes:
            args: The list or str args passed to run().
            returncode (str): The exit code of the process, negative for signals.
            stdout (str): The standard output (None if not captured).
            stderr (str): The standard error (None if not captured).

        """
        masked_cmd = mask_secrets(cmd, secrets)
        log.info(f"Executing command: {masked_cmd}")
        if isinstance(cmd, str):
            cmd = shlex.split(cmd)
        if threading_lock and cmd[0] == "oc":
            threading_lock.acquire()
        completed_process = subprocess.run(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            stdin=subprocess.PIPE,
            timeout=timeout,
            **kwargs,
        )
        if threading_lock and cmd[0] == "oc":
            threading_lock.release()
        masked_stdout = mask_secrets(completed_process.stdout.decode(), secrets)
        if len(completed_process.stdout) > 0:
            log.debug(f"Command stdout: {masked_stdout}")
        else:
            log.debug("Command stdout is empty")

        masked_stderr = mask_secrets(completed_process.stderr.decode(), secrets)
        if len(completed_process.stderr) > 0:
            if not silent:
                log.warning(f"Command stderr: {masked_stderr}")
        else:
            log.debug("Command stderr is empty")
        log.debug(f"Command return code: {completed_process.returncode}")
        if completed_process.returncode and not ignore_error:
            if (
                "grep" in masked_cmd
                and b"command terminated with exit code 1" in completed_process.stderr
            ):
                log.info(f"No results found for grep command: {masked_cmd}")
            else:
>               raise CommandFailed(
                    f"Error during execution of command: {masked_cmd}."
                    f"\nError is {masked_stderr}"
                )
E               ocs_ci.ocs.exceptions.CommandFailed: Error during execution of command: oc -n openshift-storage rsh s3cli-0 sh -c "AWS_CA_BUNDLE=/cert/service-ca.crt AWS_ACCESS_KEY_ID=***** AWS_SECRET_ACCESS_KEY=***** AWS_DEFAULT_REGION=us-east-2 aws s3 --endpoint=***** sync s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2 test_multipart_upload_operations/result".
E               Error is download failed: s3://oc-bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d to test_multipart_upload_operations/result/ObjKey-982b6298fafc4df6bc836b365eca407d Connection was closed before we received a valid response from endpoint URL: "*****/oc-bucket-1dbe4c9d4f714be381b393819ed2f2/ObjKey-982b6298fafc4df6bc836b365eca407d".
E               command terminated with exit code 1

ocs_ci/utility/utils.py:640: CommandFailed
=============================== warnings summary ===============================
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:121
  /root/ocs-ci/venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:121: DeprecationWarning: pkg_resources is deprecated as an API
    warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)

venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
  /root/ocs-ci/venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
  /root/ocs-ci/venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.cloud')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2349
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2349
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2349
  /root/ocs-ci/venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2349: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(parent)

venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
  /root/ocs-ci/venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.logging')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870
  /root/ocs-ci/venv/lib64/python3.9/site-packages/pkg_resources/__init__.py:2870: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('zope')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

venv/lib64/python3.9/site-packages/google/rpc/__init__.py:20
  /root/ocs-ci/venv/lib64/python3.9/site-packages/google/rpc/__init__.py:20: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.rpc')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    pkg_resources.declare_namespace(__name__)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/manage/mcg/test_multipart_upload.py::TestS3MultipartUpload::test_multipart_upload_operations
ERROR tests/manage/mcg/test_multipart_upload.py::TestS3MultipartUpload::test_multipart_upload_operations
============= 1 failed, 16 warnings, 1 error in 2337.37s (0:38:57) =============