Bug 1827317

Summary: [NooBaa] BackingStore state doesn't update when access is removed; it only updates once a data write is attempted
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Ben Eli <belimele>
Component: Multi-Cloud Object Gateway
Assignee: Romy Ayalon <rayalon>
Status: CLOSED ERRATA
QA Contact: Ben Eli <belimele>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.3
CC: assingh, ebenahar, etamir, nbecker, ocs-bugs, ratamir, rayalon
Target Milestone: ---
Keywords: AutomationBackLog, Regression
Target Release: OCS 4.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ocs-olm-operator:4.5.0-421.ci
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-09-15 10:16:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Ben Eli 2020-04-23 16:06:08 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When blocking a target bucket's IO using a bucket policy, the NooBaa backingstore should show as unhealthy.
However, it remains healthy for at least 30 minutes (longer intervals were not checked).
Only when new objects are written to the NooBaa bucket does the backingstore move into an IO_ERROR state.

The bucket policy I use:
{
    "Version": "2012-10-17",
    "Id": "DenyReadWrite",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKETNAME/*",
                "arn:aws:s3:::BUCKETNAME"
            ]
        }
    ]
}
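For reference, a minimal sketch of applying this policy with boto3; the bucket name and the configured AWS credentials are assumptions, not values from this report:

import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Id": "DenyReadWrite",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-target-bucket/*",  # hypothetical bucket name
                "arn:aws:s3:::my-target-bucket",
            ],
        }
    ],
}

s3 = boto3.client("s3")
# Once applied, the policy denies reads, writes, and listing on the target bucket.
s3.put_bucket_policy(Bucket="my-target-bucket", Policy=json.dumps(policy))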


Version of all relevant components (if applicable):
ocs-operator.v4.4.0-413.ci
Also happened on 4.3.0-407.ci, if I'm not mistaken.


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Writing to the NooBaa bucket forces the status to update, after which the backingstore is shown as unhealthy (see the sketch below).
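A minimal sketch of that workaround, assuming boto3; the NooBaa S3 endpoint, credentials, and bucket name are placeholders for deployment-specific values:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-openshift-storage.apps.example.com",  # hypothetical NooBaa S3 route
    aws_access_key_id="<noobaa-access-key>",
    aws_secret_access_key="<noobaa-secret-key>",
)

try:
    # The write is expected to fail, since the target bucket denies all IO;
    # the attempt itself is what forces the backingstore status to update.
    s3.put_object(Bucket="my-noobaa-bucket", Key="probe.txt", Body=b"status probe")
except ClientError as err:
    print("Write failed as expected:", err)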

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes


If this is a regression, please provide more details to justify this:
In the past, the backingstore used to move to an `AUTH_ERROR` status after a few minutes; now it just remains healthy.

Steps to Reproduce:
1. Create a backingstore
2. Block all IO on the target bucket using a bucket policy
3. See that the backingstore remains healthy (a polling sketch follows these steps)
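A minimal sketch for observing step 3, assuming the kubernetes Python client and that the BackingStore custom resource is served at noobaa.io/v1alpha1 in the openshift-storage namespace; the backingstore name is a placeholder:

import time

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Poll the BackingStore phase once a minute for 30 minutes; with this bug,
# the phase stays "Ready" even though the target bucket denies all IO.
for _ in range(30):
    bs = api.get_namespaced_custom_object(
        group="noobaa.io",
        version="v1alpha1",
        namespace="openshift-storage",
        plural="backingstores",
        name="my-backingstore",  # hypothetical backingstore name
    )
    phase = bs.get("status", {}).get("phase")
    print(phase)
    if phase != "Ready":
        break
    time.sleep(60)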


Actual results:
Backingstore is shown as healthy

Expected results:
Backingstore is shown as unhealthy

Additional info:

Comment 11 Ben Eli 2020-06-11 06:02:59 UTC
BackingStore status once again changes to AUTH_ERROR in a timely manner in the tested case.

Verified.
ocs-operator.v4.5.0-448.ci
4.5.0-0.nightly-2020-06-10-201008

Comment 14 errata-xmlrpc 2020-09-15 10:16:49 UTC
Since the problem described in this bug report should be resolved in a
recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Container Storage 4.5.0
bug fix and enhancement update), and where to find the updated files,
follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3754