Bug 2267606 - [4.14.z clone] csi-addons-controller-manager pod is reset after running the must-gather command
Summary: [4.14.z clone] csi-addons-controller-manager pod is reset after running the m...
Keywords:
Status: CLOSED DUPLICATE of bug 2278642
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Nikhil Ladha
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-03-04 06:03 UTC by Nikhil Ladha
Modified: 2024-09-26 04:25 UTC
CC List: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-05-28 09:34:06 UTC
Embargoed:



Description Nikhil Ladha 2024-03-04 06:03:16 UTC
This bug was initially created as a copy of Bug #2257259

I am copying this bug because: 



Description of problem (please be as detailed as possible and provide log
snippets):

The csi-addons-controller-manager pod is restarted after running the must-gather command.
The csi-addons controller exits with the following messages:
```
2024-01-08T12:50:48.279Z	INFO	Stopping and waiting for non leader election runnables
2024-01-08T12:50:48.279Z	INFO	Stopping and waiting for leader election runnables
2024-01-08T12:50:48.279Z	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "persistentvolumeclaim", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim"}
2024-01-08T12:50:48.279Z	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "volumereplication", "controllerGroup": "replication.storage.openshift.io", "controllerKind": "VolumeReplication"}
2024-01-08T12:50:48.279Z	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "reclaimspacecronjob", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "ReclaimSpaceCronJob"}
2024-01-08T12:50:48.279Z	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "reclaimspacejob", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "ReclaimSpaceJob"}
2024-01-08T12:50:48.279Z	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "csiaddonsnode", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "CSIAddonsNode"}
2024-01-08T12:50:48.279Z	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "networkfence", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "NetworkFence"}
2024-01-08T12:50:48.279Z	INFO	All workers finished	{"controller": "volumereplication", "controllerGroup": "replication.storage.openshift.io", "controllerKind": "VolumeReplication"}
2024-01-08T12:50:48.279Z	INFO	All workers finished	{"controller": "reclaimspacejob", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "ReclaimSpaceJob"}
2024-01-08T12:50:48.279Z	INFO	All workers finished	{"controller": "reclaimspacecronjob", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "ReclaimSpaceCronJob"}
2024-01-08T12:50:48.279Z	INFO	All workers finished	{"controller": "persistentvolumeclaim", "controllerGroup": "", "controllerKind": "PersistentVolumeClaim"}
2024-01-08T12:50:48.279Z	INFO	All workers finished	{"controller": "csiaddonsnode", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "CSIAddonsNode"}
2024-01-08T12:50:48.279Z	INFO	All workers finished	{"controller": "networkfence", "controllerGroup": "csiaddons.openshift.io", "controllerKind": "NetworkFence"}
2024-01-08T12:50:48.279Z	INFO	Stopping and waiting for caches
2024-01-08T12:50:48.279Z	INFO	Stopping and waiting for webhooks
2024-01-08T12:50:48.279Z	INFO	Stopping and waiting for HTTP servers
2024-01-08T12:50:48.279Z	INFO	controller-runtime.metrics	Shutting down metrics server with timeout of 1 minute
2024-01-08T12:50:48.279Z	INFO	shutting down server	{"kind": "health probe", "addr": "[::]:8081"}
2024-01-08T12:50:48.279Z	INFO	Wait completed, proceeding to shutdown the manager
```
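
For reference, a quick way to confirm the restart and to pull the shutdown log above from the terminated container instance (a sketch assuming the openshift-storage namespace and the csi-addons-controller-manager deployment name from this report):
```
# List the csi-addons controller pod; a fresh pod age or an increased RESTARTS
# count indicates the controller was restarted.
oc get pods -n openshift-storage | grep csi-addons

# Fetch the logs of the previous (terminated) container instance, where the
# shutdown messages above are emitted. This only applies if the container
# restarted in place; a newly created pod has no previous instance.
POD=$(oc get pods -n openshift-storage -o name | grep csi-addons-controller-manager | head -n 1)
oc logs -n openshift-storage "$POD" --previous
```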

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?



Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
yes


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:

1. Check the pod: oc get pods -n openshift-storage | grep csi-addons
2. Run must-gather
3. Look for the csi-addons pod again (a consolidated command sketch follows below)
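
A consolidated command-line sketch of these steps (the ODF must-gather image is release-specific, so it is left as a placeholder here):
```
# 1. Record the csi-addons controller pod name, age, and restart count.
oc get pods -n openshift-storage | grep csi-addons

# 2. Run must-gather; substitute the ODF must-gather image for your release.
oc adm must-gather --image=<odf-must-gather-image>

# 3. Check the csi-addons pod again; a new pod name/age or a higher RESTARTS
#    count shows that the controller was restarted by the collection run.
oc get pods -n openshift-storage | grep csi-addons
```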


Actual results:
must-gather gets collected, but it restarts the csi-addons controller

Expected results:
must-gather gets collected without restarting the csi-addons controller


Additional info:

Comment 2 krishnaram Karthick 2024-04-15 07:00:28 UTC
Moving the bug to 4.14.8 as we had exceeded the number of fixes to be taken in 4.14.7.

Comment 8 Red Hat Bugzilla 2024-09-26 04:25:07 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

