Bug 1827317 - [NooBaa] BackingStore state doesn't update when access is removed; it only updates once a data write is attempted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 4.5.0
Assignee: Romy Ayalon
QA Contact: Ben Eli
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-23 16:06 UTC by Ben Eli
Modified: 2020-09-23 09:04 UTC
CC List: 7 users

Fixed In Version: ocs-olm-operator:4.5.0-421.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-15 10:16:49 UTC
Embargoed:




Links:
  GitHub noobaa/noobaa-core pull 6003 (closed) - Fixed test_store_validity - Last Updated 2020-12-07 09:33:19 UTC
  Red Hat Product Errata RHBA-2020:3754 - Last Updated 2020-09-15 10:17:21 UTC

Description Ben Eli 2020-04-23 16:06:08 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When IO to a target bucket is blocked with a bucket policy, the NooBaa backingstore should be reported as unhealthy.
However, it remains healthy for at least 30 minutes (longer intervals were not checked).
Only when new objects are written to the NooBaa bucket does the backingstore move into an IO_ERROR state.

The bucket policy I use - 
{
    "Version": "2012-10-17",
    "Id": "DenyReadWrite",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKETNAME/*",
                "arn:aws:s3:::BUCKETNAME"
            ]
        }
    ]
}
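
For illustration, a minimal boto3 sketch that applies the policy above to the target bucket; the bucket name and the locally configured AWS credentials are placeholders, not values from this report:

import json
import boto3

TARGET_BUCKET = "BUCKETNAME"  # placeholder: the AWS bucket backing the backingstore

# Same deny-all policy as above, parameterized on the bucket name
deny_policy = {
    "Version": "2012-10-17",
    "Id": "DenyReadWrite",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{TARGET_BUCKET}/*",
                f"arn:aws:s3:::{TARGET_BUCKET}",
            ],
        }
    ],
}

s3 = boto3.client("s3")  # uses whatever AWS credentials are configured locally
s3.put_bucket_policy(Bucket=TARGET_BUCKET, Policy=json.dumps(deny_policy))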


Version of all relevant components (if applicable):
ocs-operator.v4.4.0-413.ci
Also happened on 4.3.0-407.ci, if I'm not mistaken


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Writing to the NooBaa bucket forces the status to update, after which the backingstore is shown as unhealthy (see the sketch below)
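
A minimal sketch of that workaround, assuming a NooBaa S3 route and object bucket claim credentials; the endpoint, keys, and bucket name below are placeholders, not values from this report:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-openshift-storage.apps.example.com",  # assumed NooBaa S3 route
    aws_access_key_id="NOOBAA_ACCESS_KEY",        # placeholder OBC credentials
    aws_secret_access_key="NOOBAA_SECRET_KEY",
    verify=False,  # test clusters often use a self-signed route certificate
)

# Any write through the NooBaa endpoint is enough to trigger the IO check
# that flips the backingstore into IO_ERROR.
s3.put_object(Bucket="my-noobaa-bucket", Key="health-probe.txt", Body=b"probe")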

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Can this issue be reproduced?
Yes


If this is a regression, please provide more details to justify this:
In the past, the backingstore used to move to an `AUTH_ERROR` status after a few minutes; now it just remains healthy

Steps to Reproduce:
1. Create a backingstore
2. Block all IO on the target bucket using a bucket policy
3. See that the backingstore remains healthy (a status-check sketch follows this list)
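
For step 3, here is a sketch of polling the BackingStore phase with the Kubernetes Python client; the CRD group/version, namespace, and resource name are assumptions for illustration, not values taken from this report:

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

bs = api.get_namespaced_custom_object(
    group="noobaa.io",            # assumed BackingStore CRD group
    version="v1alpha1",           # assumed CRD version
    namespace="openshift-storage",
    plural="backingstores",
    name="my-backingstore",       # placeholder resource name
)

# On an affected build this stays "Ready"; once fixed, the phase should leave
# Ready (e.g. report an auth/IO error) shortly after access is revoked.
print(bs.get("status", {}).get("phase"))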


Actual results:
Backingstore is shown as healthy

Expected results:
Backingstore is shown as unhealthy

Additional info:

Comment 11 Ben Eli 2020-06-11 06:02:59 UTC
BackingStore status once again changes to AUTH_ERROR in a timely manner in the tested case.

Verified.
ocs-operator.v4.5.0-448.ci
4.5.0-0.nightly-2020-06-10-201008

Comment 14 errata-xmlrpc 2020-09-15 10:16:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3754

