Description of problem (please be as detailed as possible and provide log snippets):

NooBaa does not follow the latest AWS API guidelines [1] for bucket lifecycle retention rules.

1. For an expiration rule, NooBaa requires the "Prefix" attribute, which is deprecated in recent AWS APIs, and does not accept the newer "Filter" element.

Steps:

A. Create an S3 bucket:

---
AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3 mb s3://test-bucket-cli
---

B. Add an expiration rule with the Filter option:

---
$ cat > expire-lfrule-withoutprefix.json << EOF
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": false
            },
            "ID": "data-expire-withoutprefix",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled"
        }
    ]
}
EOF
---

With the Filter attribute present and the Prefix attribute absent, the put-bucket-lifecycle-configuration API call fails:

---
AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule-withoutprefix.json

An error occurred (InternalError) when calling the PutBucketLifecycleConfiguration operation (reached max retries: 2): We encountered an internal error. Please try again.
---

As per the documentation [1]: "Prefix identifying one or more objects to which the rule applies. This is no longer used; use Filter instead."

The following logs are observed in noobaa-endpoint:

---
Dec-3 5:43:13.017 [Endpoint/13] [ERROR] core.endpoint.s3.s3_rest:: S3 ERROR <?xml version="1.0" encoding="UTF-8"?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Resource>/test-bucket-cli?lifecycle</Resource><RequestId>kwpynwc7-jsaof-l7v</RequestId></Error> PUT /test-bucket-cli?lifecycle {"accept-encoding":"identity","content-md5":"JyB3vvVh83U0l7yFyMOaSQ==","user-agent":"aws-cli/2.4.0 Python/3.8.8 Linux/4.18.0-348.el8.x86_64 exe/x86_64.rhel.8 prompt/off command/s3api.put-bucket-lifecycle-configuration","x-amz-date":"20211203T054312Z","x-amz-content-sha256":"78f6ea36f30eb65161876599fc8064ded628eacd966c3876b69f470dd20c07e3","authorization":"AWS4-HMAC-SHA256 Credential=qp9UiupeK4BMomgGzEXA/20211203/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=a0e0cd848d497ce37b7b84be014493045240bcbb65944258b6f1bfca497c094c","content-length":"294","host":"s3-openshift-storage.apps.kjosysdspnq.ceeindia.support","x-forwarded-host":"s3-openshift-storage.apps.kjosysdspnq.ceeindia.support","x-forwarded-port":"443","x-forwarded-proto":"https","forwarded":"for=49.36.238.231;host=s3-openshift-storage.apps.kjosysdspnq.ceeindia.support;proto=https","x-forwarded-for":"49.36.238.231"}
TypeError: Cannot read property '0' of undefined
    at /root/node_modules/noobaa-core/src/endpoint/s3/ops/s3_put_bucket_lifecycle.js:29:32
    at arrayMap (/root/node_modules/noobaa-core/node_modules/lodash/lodash.js:639:23)
    at Function.map (/root/node_modules/noobaa-core/node_modules/lodash/lodash.js:9580:14)
    at Object.put_bucket_lifecycle [as handler] (/root/node_modules/noobaa-core/src/endpoint/s3/ops/s3_put_bucket_lifecycle.js:22:31)
    at handle_request (/root/node_modules/noobaa-core/src/endpoint/s3/s3_rest.js:183:28)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at async s3_rest (/root/node_modules/noobaa-core/src/endpoint/s3/s3_rest.js:102:9)
---

C.
After removing the Filter parameter and adding the Prefix parameter, it works fine:

---
$ cat > expire-lfrule.json << EOF
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": false
            },
            "ID": "data-expire",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}
EOF
---

---
AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule.json

AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3api get-bucket-lifecycle-configuration --bucket test-bucket-cli
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": false
            },
            "ID": "data-expire",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}
---

2. For the delete marker, NooBaa still requires a non-zero positive number in the Days parameter, which the AWS API does not require.

A. Add an expiration rule without the Days parameter:

---
cat > expire-lfrule-delmark.json << EOF
{
    "Rules": [
        {
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "data-expire-deletemark",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}
EOF
---

---
AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule-delmark.json

An error occurred (InternalError) when calling the PutBucketLifecycleConfiguration operation (reached max retries: 2): We encountered an internal error. Please try again.
---

The following logs are observed in noobaa-endpoint:

---
Dec-3 6:55:24.979 [Endpoint/13] [ERROR] core.endpoint.s3.s3_rest:: S3 ERROR <?xml version="1.0" encoding="UTF-8"?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Resource>/test-bucket-cli?lifecycle</Resource><RequestId>kwq18qwi-ggv1r0-jca</RequestId></Error> PUT /test-bucket-cli?lifecycle {"accept-encoding":"identity","content-md5":"JyB3vvVh83U0l7yFyMOaSQ==","user-agent":"aws-cli/2.4.0 Python/3.8.8 Linux/4.18.0-348.el8.x86_64 exe/x86_64.rhel.8 prompt/off command/s3api.put-bucket-lifecycle-configuration","x-amz-date":"20211203T065524Z","x-amz-content-sha256":"78f6ea36f30eb65161876599fc8064ded628eacd966c3876b69f470dd20c07e3","authorization":"AWS4-HMAC-SHA256 Credential=qp9UiupeK4BMomgGzEXA/20211203/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=a68b9b778c169d67d160feee2efb184b60ffec2853f72c5d9a76e553d75b892c","content-length":"294","host":"s3-openshift-storage.apps.kjosysdspnq.ceeindia.support","x-forwarded-host":"s3-openshift-storage.apps.kjosysdspnq.ceeindia.support","x-forwarded-port":"443","x-forwarded-proto":"https","forwarded":"for=49.36.238.231;host=s3-openshift-storage.apps.kjosysdspnq.ceeindia.support;proto=https","x-forwarded-for":"49.36.238.231"}
TypeError: Cannot read property '0' of undefined
    at /root/node_modules/noobaa-core/src/endpoint/s3/ops/s3_put_bucket_lifecycle.js:29:32
    at arrayMap (/root/node_modules/noobaa-core/node_modules/lodash/lodash.js:639:23)
    at Function.map (/root/node_modules/noobaa-core/node_modules/lodash/lodash.js:9580:14)
    at Object.put_bucket_lifecycle [as handler] (/root/node_modules/noobaa-core/src/endpoint/s3/ops/s3_put_bucket_lifecycle.js:22:31)
    at handle_request (/root/node_modules/noobaa-core/src/endpoint/s3/s3_rest.js:183:28)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at async s3_rest (/root/node_modules/noobaa-core/src/endpoint/s3/s3_rest.js:102:9)
---

B. When the "ExpiredObjectDeleteMarker" parameter is used together with the Days field, it works fine:

---
cat > expire-lfrule-delmarkdays.json << EOF
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "data-expire-deletemarkdays",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}
EOF

$ AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule-delmarkdays.json
urllib3/connectionpool.py:1013: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3-openshift-storage.apps.kjosysdspnq.ceeindia.support'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings

$ AWS_ACCESS_KEY_ID=qp9UiupeK4BMomgGzEXA AWS_SECRET_ACCESS_KEY=N4tZLm0X5l5NDxBdIKoGN/dzWXWPWvfbgEUGw937 aws --endpoint https://s3-openshift-storage.apps.kjosysdspnq.ceeindia.support --no-verify-ssl s3api get-bucket-lifecycle-configuration --bucket test-bucket-cli
urllib3/connectionpool.py:1013: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3-openshift-storage.apps.kjosysdspnq.ceeindia.support'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": false
            },
            "ID": "data-expire",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}
---

[1] https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html

Version of all relevant components (if applicable): OCS 4.7

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge? NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)? 2

Is this issue reproducible? Yes, steps are mentioned above.

Can this issue be reproduced from the UI? No

If this is a regression, please provide more details to justify this: NA

Steps to Reproduce: Mentioned above

Actual results: NooBaa is not compatible with the AWS API rules for bucket retention.

Expected results: Lifecycle rule creation should follow the latest AWS APIs for compatibility with all S3 storages.

Additional info: NA
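For context on the failure mode: the TypeError above ("Cannot read property '0' of undefined" at s3_put_bucket_lifecycle.js:29) is consistent with handler code that unconditionally indexes element [0] of a parsed-XML Prefix field, which is absent when the client sends the newer Filter element. The sketch below is illustrative only; the function names and dict shapes are hypothetical stand-ins, not NooBaa's actual code.

```python
# Illustrative sketch: an XML parser that returns each child element as a
# list can break a handler that assumes every lifecycle rule carries the
# deprecated top-level Prefix element.

def parse_rule_strict(rule):
    # Mirrors the suspected failing code path: indexing [0] of a field
    # that is missing when the client sends Filter instead of Prefix.
    return {"prefix": rule["Prefix"][0]}  # raises KeyError when absent

def parse_rule_tolerant(rule):
    # Accepts either the deprecated top-level Prefix or the newer
    # Filter.Prefix form, defaulting to an empty prefix.
    if "Prefix" in rule:
        return {"prefix": rule["Prefix"][0]}
    filt = rule.get("Filter", [{}])[0]
    return {"prefix": filt.get("Prefix", [""])[0]}

# Rules shaped the way a list-per-child XML parser might deliver them:
old_style = {"Prefix": [""], "Status": ["Enabled"]}
new_style = {"Filter": [{"Prefix": ["logs/"]}], "Status": ["Enabled"]}
```

Here parse_rule_strict(new_style) raises KeyError (the Python analogue of the TypeError in the endpoint log), while parse_rule_tolerant handles both shapes.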
Hi Liran,

I have verified the bug with an OCP (4.11) + ODF (4.11.0-66) build, but found that it is only partially fixed.

Observations:

1. With the Prefix attribute inside the Filter attribute, the put-lifecycle API call now passes as expected.

$ cat expire-lfrule-withoutprefix.json
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": false
            },
            "ID": "data-expire-withoutprefix",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled"
        }
    ]
}

[auth]$ AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://s3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule-withoutprefix.json
/usr/lib/python3.9/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(

[auth]$ AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://s3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com --no-verify-ssl s3api get-bucket-lifecycle-configuration --bucket test-bucket-cli
/usr/lib/python3.9/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1
            },
            "ID": "data-expire-withoutprefix",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled"
        }
    ]
}

2. For the delete marker, NooBaa gives an error whether the Days parameter is added or not.
[auth]$ cat expire-lfrule-delmark.json
{
    "Rules": [
        {
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "data-expire-deletemark",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}

[auth]$ AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://s3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule-delmark.json
/usr/lib/python3.9/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(

An error occurred (InvalidArgument) when calling the PutBucketLifecycleConfiguration operation: Invalid Argument

-----------------------------------------------------

After updating the file with the Days parameter:

[auth]$ AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://s3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com --no-verify-ssl s3api put-bucket-lifecycle-configuration --bucket test-bucket-cli --lifecycle-configuration file://expire-lfrule-delmark.json
/usr/lib/python3.9/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3-openshift-storage.apps.asagare-bug202929.qe.rh-ocs.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  warnings.warn(

An error occurred (InvalidArgument) when calling the PutBucketLifecycleConfiguration operation: Invalid Argument

[auth]$ cat expire-lfrule-delmark.json
{
    "Rules": [
        {
            "Expiration": {
                "Days": 1,
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "data-expire-deletemarkdays",
            "Prefix": "",
            "Status": "Enabled"
        }
    ]
}

Hence, creating a new BZ for the delete marker issue.
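One caveat worth noting for the delete-marker testing above: per the AWS PutBucketLifecycleConfiguration reference, ExpiredObjectDeleteMarker cannot be specified together with Days or Date in an Expiration block, so the "Days plus delete marker" variant would be rejected by AWS itself. A small client-side pre-check along those lines may help separate configurations AWS rejects from those only NooBaa rejects; this is a sketch only, and validate_expiration is a hypothetical helper, not part of any SDK.

```python
# Minimal pre-check of a lifecycle Expiration block against the constraints
# in the AWS PutBucketLifecycleConfiguration reference: exactly one of Days,
# Date, or ExpiredObjectDeleteMarker=true should be set. Not exhaustive.

def validate_expiration(exp):
    has_days = "Days" in exp
    has_date = "Date" in exp
    has_marker = exp.get("ExpiredObjectDeleteMarker", False)
    if has_marker and (has_days or has_date):
        return (False, "ExpiredObjectDeleteMarker cannot be combined with Days or Date")
    if has_days and has_date:
        return (False, "Days and Date are mutually exclusive")
    if not (has_days or has_date or has_marker):
        return (False, "Expiration needs Days, Date, or ExpiredObjectDeleteMarker=true")
    return (True, "ok")

# The delete-marker-only rule from this report is valid per the AWS spec:
ok, reason = validate_expiration({"ExpiredObjectDeleteMarker": True})
```

Running such a check before calling put-bucket-lifecycle-configuration would have flagged the delmarkdays variant as invalid against AWS while confirming the delete-marker-only rule as one NooBaa should accept.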
Thanks all for clarifying. I understand it was decided not to support ExpiredObjectDeleteMarker: the scope of this epic was prefix and tag filters only, and delete markers were out of scope. Marking this bug as verified, since with the Prefix attribute inside the Filter attribute the put-lifecycle API call passes as expected.
Hi Sonal,

Thanks for the confirmation. I have also tested in a VMware environment.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156