Bug 2021313 - [GSS] Cannot delete pool
Summary: [GSS] Cannot delete pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Kfir Payne
QA Contact: Mahesh Shetty
URL:
Whiteboard:
Depends On: 2049029
Blocks:
 
Reported: 2021-11-08 19:14 UTC by khover
Modified: 2023-08-09 16:49 UTC (History)
CC: 11 users

Fixed In Version: 4.10.0-79
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-13 18:49:43 UTC
Embargoed:




Links:
- Github red-hat-storage/ocs-ci pull 5421 (Merged): Automate MCG admission control webhooks tests (last updated 2022-06-14 13:12:16 UTC)
- Red Hat Product Errata RHSA-2022:1372 (last updated 2022-04-13 18:50:25 UTC)

Description khover 2021-11-08 19:14:25 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

% noobaa api pool_api delete_pool '{"name":"mcg-pv-pool-bs"}'

INFO[0001] ✅ Exists: NooBaa "noobaa"                    
INFO[0001] dbType was not supplied. according to image (registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:ffc67a5d76944cbb073862a61b9928400bb948ca0cb220d6718116b9f5637c24) setting dbType to mongodb 
INFO[0001] ✅ Exists: Service "noobaa-mgmt"              
INFO[0001] ✅ Exists: Secret "noobaa-operator"           
INFO[0002] ✅ Exists: Secret "noobaa-admin"              
INFO[0002] ✈️  RPC: pool.delete_pool() Request: map[name:mcg-pv-pool-bs] 
WARN[0002] RPC: GetConnection creating connection to wss://localhost:52243/rpc/ 0xc0000b8af0 
INFO[0002] RPC: Connecting websocket (0xc0000b8af0) &{RPC:0xc00003d180 Address:wss://localhost:52243/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s} 
INFO[0003] RPC: Connected websocket (0xc0000b8af0) &{RPC:0xc00003d180 Address:wss://localhost:52243/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s} 
ERRO[0003] ⚠️  RPC: pool.delete_pool() Response Error: Code=CONNECTED_BUCKET_DELETING Message=Cannot delete pool 
FATA[0003] ❌ Cannot delete pool 

Version of all relevant components (if applicable):

OCS Version is : 4.6.7

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

User cannot delete pool 

Is there any workaround available to the best of your knowledge?

Unknown

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Unknown

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 3 khover 2021-11-09 13:49:16 UTC
Re:

The buckets need to be updated to a new resource; then the data would move and the pool would be deletable
(see Code=CONNECTED_BUCKET_DELETING)

Is there a documented process for that ?
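
In the absence of a documented process, the migration described above could look roughly like the following sketch. The replacement store name, volume count, size, and the bucketclass placeholder are hypothetical; the commands assume the ODF 4.6-era noobaa CLI and the openshift-storage namespace from this case:

```shell
# Sketch only: names and sizes are hypothetical; adjust to the environment.
# 1. Create a replacement PV-pool backing store to receive the data:
noobaa backingstore create pv-pool mcg-pv-pool-bs-new \
    --num-volumes 3 --pv-size-gb 100 -n openshift-storage

# 2. Repoint the bucket class(es) that use the old pool, so MCG
#    migrates the data onto the new backing store:
oc patch bucketclass <bucketclass-name> -n openshift-storage --type merge \
    -p '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["mcg-pv-pool-bs-new"]}]}}}'

# 3. Once migration finishes, retry the delete that failed above:
noobaa api pool_api delete_pool '{"name":"mcg-pv-pool-bs"}'
```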

I don't see this as a blocker, but the customer has the following warning and wants to fix it.


How can I resolve this warning: A NooBaa resource mcg-pv-pool-bs is in error state
I uninstalled MTC and the BackingStore mcg-pv-pool-bs is still there.


"A NooBaa resource mcg-pv-pool-bs is in error state for more than 5m"

Comment 5 khover 2021-11-09 14:33:55 UTC
I didn't uninstall anything; the customer did. (MTC is the Migration Toolkit for Containers.)


What is the desired outcome for the customer? Why did he try to delete this to begin with?


[A] The customer has the following warning and wants to fix it:

"A NooBaa resource mcg-pv-pool-bs is in error state for more than 5m"

Comment 6 Nimrod Becker 2021-11-09 14:48:39 UTC
Yes, but that is after trying to delete.
I'm trying to understand what the customer was originally trying to do. What was the initial intent?

Comment 7 khover 2021-11-09 15:05:51 UTC
As far as I know, the initial intent is in the original case description.

I believe they started trial and error to get rid of the warning before opening the case.

Problem Statement: warning: A NooBaa resource mcg-pv-pool-bs is in error state
Description:
How can I resolve this warning: A NooBaa resource mcg-pv-pool-bs is in error state
I uninstalled MTC and the BackingStore mcg-pv-pool-bs is still there.

Is there anything I can ask the customer to clarify?

Comment 8 Nimrod Becker 2021-11-09 15:08:08 UTC
I believe we would want to know why they tried to delete the pool.

Comment 9 khover 2021-11-09 15:25:49 UTC
For context,

the customer deleted the following PVs:

pvc-3662644d-e5d5-4764-b58f-04c72ab07f07  100Gi     RWO           Delete          Bound   openshift-storage/mcg-pv-pool-bs-noobaa-pvc-bfd6ba50                thin                                 36d
pvc-bfc981a7-b68b-40f1-aa8b-5eca1394eb71  100Gi     RWO           Delete          Bound   openshift-storage/mcg-pv-pool-bs-noobaa-pvc-83d86cb7                thin                                 36d
pvc-dd83efb5-ac63-4f09-b131-99e2aca7a093  100Gi     RWO           Delete          Bound   openshift-storage/mcg-pv-pool-bs-noobaa-pvc-5ce43f09                thin                                 36d


+ On reviewing the NooBaa operator logs, we found that the 'mcg-pv-pool-bs' backingstore is still getting reconciled:
-------------
2021-10-18T14:30:35.134034763Z time="2021-10-18T14:30:35Z" level=info msg="Create event detected for mcg-pv-pool-bs (openshift-storage), queuing Reconcile"
2021-10-18T14:30:35.134034763Z time="2021-10-18T14:30:35Z" level=info msg="checking which bucketclasses to reconcile. mapping backingstore openshift-storage/mcg-pv-pool-bs to bucketclasses"
2021-10-18T14:30:35.134130290Z time="2021-10-18T14:30:35Z" level=info msg="Create event detected for mcg-pv-pool-bs (openshift-storage), queuing Reconcile"
2021-10-18T14:30:35.135327397Z time="2021-10-18T14:30:35Z" level=info msg="Start ..." backingstore=openshift-storage/mcg-pv-pool-bs
2021-10-18T14:30:35.137800483Z time="2021-10-18T14:30:35Z" level=info msg="✅ Exists: BackingStore \"mcg-pv-pool-bs\"\n"
2021-10-18T14:30:35.143367690Z time="2021-10-18T14:30:35Z" level=info msg="❌ Not Found:  \"mcg-pv-pool-bs-noobaa-noobaa\"\n"
2021-10-18T14:30:35.145511646Z time="2021-10-18T14:30:35Z" level=info msg="❌ Not Found: Secret \"backing-store-pv-pool-mcg-pv-pool-bs\"\n"
2021-10-18T14:30:35.163928330Z time="2021-10-18T14:30:35Z" level=info msg="✅ Created: Secret \"backing-store-pv-pool-mcg-pv-pool-bs\"\n"
2021-10-18T14:30:35.165995045Z time="2021-10-18T14:30:35Z" level=info msg="✅ Exists:  \"backing-store-pv-pool-mcg-pv-pool-bs\"\n"
2021-10-18T14:30:35.165995045Z time="2021-10-18T14:30:35Z" level=info msg="SetPhase: Verifying" backingstore=openshift-storage/mcg-pv-pool-bs
2021-10-18T14:30:35.165995045Z time="2021-10-18T14:30:35Z" level=info msg="SetPhase: Connecting" backingstore=openshift-storage/mcg-pv-pool-bs
2021-10-18T14:30:35.262280571Z time="2021-10-18T14:30:35Z" level=info msg="✈️  RPC: host.list_hosts() Request: {Query:{Pools:[mcg-pv-pool-bs]}}"
2021-10-18T14:30:35.281600571Z time="2021-10-18T14:30:35Z" level=info msg="SetPhase: Rejected" backingstore=openshift-storage/mcg-pv-pool-bs
2021-10-18T14:30:35.281600571Z time="2021-10-18T14:30:35Z" level=error msg="❌ Persistent Error: Scaling down the number of nodes is not currently supported" backingstore=openshift-storage/mcg-pv-pool-bs


+ We reviewed the attached NooBaa DB and found instances of the 'mcg-pv-pool-bs' backingstore in the node details,
  as well as 'mcg-pv-pool-bs' in the pool details.

+ Can you please execute the following commands to delete the 'mcg-pv-pool-bs' instances:
----------
# noobaa api node_api delete_node '{"name":"mcg-pv-pool-bs-noobaa-pod-bfd6ba50-noobaa_storage-cd9bff84"}'
# noobaa api node_api delete_node '{"name":"mcg-pv-pool-bs-noobaa-pod-5ce43f09-noobaa_storage-5676ba22"}'
# noobaa api node_api delete_node '{"name":"mcg-pv-pool-bs-noobaa-pod-83d86cb7-noobaa_storage-6d7ec927"}'
# noobaa api node_api delete_node '{"name":"mcg-pv-pool-bs-noobaa-pod-97db87b8-noobaa_storage-f88e7ba2"}'
# noobaa api node_api delete_node '{"name":"mcg-pv-pool-bs-noobaa-pod-78d1f007-noobaa_storage-1d3d158d"}'
# noobaa api node_api delete_node '{"name":"mcg-pv-pool-bs-noobaa-pod-afc37473-noobaa_storage-12a3d7c3"}'

# noobaa api pool_api delete_pool '{"name":"mcg-pv-pool-bs"}'


Please let me know if this is helpful or if you need more information from the customer.


-------------
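
Once the node and pool deletes succeed, a quick sanity check along these lines should confirm the cleanup (a sketch; read_pool is expected to return an error once the pool is gone):

```shell
# Confirm the pool no longer exists on the NooBaa side:
noobaa api pool_api read_pool '{"name":"mcg-pv-pool-bs"}'

# And that no BackingStore CR is left behind in the namespace:
oc get backingstore -n openshift-storage
```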

Comment 10 Nimrod Becker 2021-11-09 15:29:17 UTC
It is not.

Why did the customer try to delete the pool? What were they trying to achieve? Setting aside for a moment that they attempted something that cannot be done, and so the system rejected it, which is exactly as expected.

Comment 29 errata-xmlrpc 2022-04-13 18:49:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

