Created attachment 1674499 [details]
bs still appears after was deleted

Description of problem (please be detailed as possible and provide log snippets):
A noobaa buckets cleanup was performed on all existing clusters, but the default backing store still appears in the OpenShift UI. When clicking on it, an error message is shown: "Oh no! Something went wrong." (screenshots attached)

Version of all relevant components (if applicable):
OCS 4.3

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
no

Is there any workaround available to the best of your knowledge?
no

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Can this issue be reproduced?
yes

Can this issue be reproduced from the UI?
yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Delete noobaa-default-backing-store using the AWS CLI
2. Operators -> Installed Operators -> OpenShift Container Storage -> Backing Store
3. Click on noobaa-default-backing-store

Actual results:
The backing store is still there, and clicking on it shows an error

Expected results:
The backing store shouldn't be there

Additional info:
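For context, the resource still shown in the UI is a NooBaa BackingStore custom resource roughly like the sketch below (field values are illustrative, not taken from the affected cluster). Deleting the target bucket on the AWS side leaves this CR in place:

```yaml
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: noobaa-default-backing-store
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: <target-bucket>   # the bucket removed during cleanup
    secret:                         # reference to the credentials secret
      name: <credentials-secret>
      namespace: openshift-storage
```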
Hi,
Deleting the target bucket won't cause deletion of the backingstore; the problematic behaviour here is the error message and the backingstore page that won't load. Can you please add screenshots of the error, including the "show more" details? Logs would help as well.
*NOT AN AWS BUG - HAPPENS ON VMWARE ONLY WITH 4.3 OPERATOR*

Error: Minified React error #31; visit https://reactjs.org/docs/error-decoder.html?invariant=31&args[]=object%20with%20keys%20%7Bname%2C%20namespace%7D&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
    at Yo (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:49028)
    at https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:53835
    at sa (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:61413)
    at Ku (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:100271)
    at Bu (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:84008)
    at Fu (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:81035)
    at Mu (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:79608)
    at https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:41759
    at t.unstable_runWithPriority (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:126:3878)
    at so (https://console-openshift-console.apps.ebenahar-26.qe.rh-ocs.com/static/vendors~main-chunk-5ba017a3a9c53cdb69d4.min.js:118:41488)
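For reference, minified React error #31 decodes to "Objects are not valid as a React child", and the decoder args show the offending object had keys {name, namespace}. A hypothetical sketch of that failure pattern (not the actual console code):

```typescript
// React error #31 fires when a plain object is handed to the renderer as a
// child. The error args above suggest an object reference with keys
// {name, namespace} was rendered directly. Names here are illustrative.

interface ObjectRef {
  name: string;
  namespace: string;
}

const ref: ObjectRef = {
  name: "noobaa-default-backing-store",
  namespace: "openshift-storage",
};

// Buggy pattern: in JSX this would be `{ref}` inside an element,
// which triggers error #31 because an object is not a renderable child.
const buggyChild: unknown = ref;

// Fixed pattern: derive a renderable primitive from the object first.
function refToLabel(r: ObjectRef): string {
  return `${r.name} (${r.namespace})`;
}

console.log(typeof buggyChild); // an object, hence not a valid React child
console.log(refToLabel(ref));
```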
Hi Anna, can you please update the reproduction steps? The currently described steps are not correct (you can't delete a backing store with the AWS CLI).
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1437