Bug 2022308 - Pods in unstable state on node shutdown
Summary: Pods in unstable state on node shutdown
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.9
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: yati padia
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-11-11 10:06 UTC by Sonia Garudi
Modified: 2023-08-09 16:37 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-23 00:41:01 UTC
Embargoed:



Description Sonia Garudi 2021-11-11 10:06:44 UTC
Description of problem:
The pods hosted on a worker node become unstable after the kubelet service on that node is stopped. The node goes into 'NotReady' status and the pods move to Pending, Terminating, or Init states. The older pods are not removed, and the newer pods do not reach the Running state. On restarting the kubelet service on the node, the pods return to normal.


Version-Release number of selected component (if applicable):
4.9


How reproducible:


Steps to Reproduce:
1. SSH into the worker node and stop the kubelet service
[core@syd04-worker-1 ~]$ sudo systemctl stop kubelet.service

[root@rdr-son49a-syd04-bastion-0 ~]# oc get nodes
NAME                                STATUS     ROLES    AGE   VERSION
syd04-master-0.rdr-son49a.ibm.com   Ready      master   19d   v1.22.0-rc.0+75ee307
syd04-master-1.rdr-son49a.ibm.com   Ready      master   19d   v1.22.0-rc.0+75ee307
syd04-master-2.rdr-son49a.ibm.com   Ready      master   19d   v1.22.0-rc.0+75ee307
syd04-worker-0.rdr-son49a.ibm.com   Ready      worker   19d   v1.22.0-rc.0+75ee307
syd04-worker-1.rdr-son49a.ibm.com   NotReady   worker   19d   v1.22.0-rc.0+75ee307
syd04-worker-2.rdr-son49a.ibm.com   Ready      worker   19d   v1.22.0-rc.0+75ee307

2. Check the pods in the openshift-storage namespace that were hosted on the node stopped in the previous step

[root@rdr-son49a-syd04-bastion-0 ~]# oc get pod -n openshift-storage -o wide
NAME                                                              READY   STATUS        RESTARTS       AGE     IP               NODE                                NOMINATED NODE   READINESS GATES
csi-cephfsplugin-88kqq                                            3/3     Running       0              19d     192.168.25.135   syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
csi-cephfsplugin-crzzr                                            3/3     Running       0              19d     192.168.25.196   syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
csi-cephfsplugin-provisioner-755c7569f4-5dxx5                     6/6     Terminating   0              19d     10.128.2.15      syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
csi-cephfsplugin-provisioner-755c7569f4-gs9nb                     6/6     Running       0              19d     10.131.0.167     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
csi-cephfsplugin-provisioner-755c7569f4-hgjkp                     6/6     Running       0              17m     10.129.3.214     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
csi-cephfsplugin-zt28k                                            3/3     Running       0              19d     192.168.25.69    syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
csi-rbdplugin-cc7r9                                               3/3     Running       0              19d     192.168.25.196   syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
csi-rbdplugin-n7m5v                                               3/3     Running       0              19d     192.168.25.135   syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
csi-rbdplugin-ncb69                                               3/3     Running       0              19d     192.168.25.69    syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
csi-rbdplugin-provisioner-6f7859cb4d-8jcnx                        6/6     Running       0              3h55m   10.131.0.187     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
csi-rbdplugin-provisioner-6f7859cb4d-ms4pr                        6/6     Terminating   0              19d     10.128.2.14      syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
csi-rbdplugin-provisioner-6f7859cb4d-zn6kp                        6/6     Running       0              17m     10.129.3.212     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
noobaa-core-0                                                     1/1     Running       0              23m     10.129.3.206     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
noobaa-db-pg-0                                                    0/1     Init:0/1      0              23m     <none>           syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
noobaa-endpoint-86f78bcf76-8zfjl                                  1/1     Running       0              23m     10.129.3.207     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
noobaa-operator-c9654854-kb74q                                    1/1     Running       0              23m     10.129.3.208     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
ocs-metrics-exporter-7566789b65-tqzkh                             1/1     Running       0              19d     10.131.0.164     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
ocs-operator-8588554d5-fpnnp                                      1/1     Running       0              19d     10.131.0.163     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
odf-console-5c7446d49f-xqc6n                                      1/1     Running       0              17m     10.131.1.19      syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
odf-console-5c7446d49f-zsqhk                                      1/1     Terminating   0              19d     10.128.2.13      syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
odf-operator-controller-manager-67fc478859-qhdp8                  2/2     Terminating   0              19d     10.128.2.12      syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
odf-operator-controller-manager-67fc478859-xwwv7                  2/2     Running       0              17m     10.129.3.215     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-crashcollector-syd04-worker-0.rdr-son49a.ibm.comqwcdg   1/1     Running       0              19d     10.131.0.174     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-crashcollector-syd04-worker-1.rdr-son49a.ibm.com6cm9x   0/1     Pending       0              22m     <none>           <none>                              <none>           <none>
rook-ceph-crashcollector-syd04-worker-1.rdr-son49a.ibm.comwpnkp   1/1     Terminating   0              4h      10.128.3.209     syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-crashcollector-syd04-worker-2.rdr-son49a.ibm.comjpmt7   1/1     Running       0              22m     10.129.3.210     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-576585cffv698   2/2     Running       0              19d     10.131.0.173     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5d58d55bhpq5f   2/2     Terminating   1 (137m ago)   4h      10.128.3.208     syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5d58d55bxj5vd   2/2     Running       0              22m     10.129.3.209     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mgr-a-5f4fb67c46-2b668                                  2/2     Terminating   17 (81m ago)   4h      10.128.3.207     syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mgr-a-5f4fb67c46-clppv                                  2/2     Running       0              22m     10.129.3.211     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mon-a-6454fb77b4-nbsf8                                  2/2     Running       0              19d     10.131.0.169     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mon-b-8674f5ddd5-kqfz4                                  2/2     Running       0              3h55m   10.129.3.197     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-mon-f-6684677958-6w6dw                                  0/2     Pending       0              17m     <none>           <none>                              <none>           <none>
rook-ceph-mon-f-6684677958-kf98m                                  2/2     Terminating   0              47h     10.128.2.101     syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-operator-749d46bd8-vxs45                                1/1     Running       0              3h55m   10.131.0.186     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-osd-0-794c8fc484-bl5dj                                  2/2     Running       0              3h55m   10.129.3.198     syd04-worker-2.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-osd-1-8496dfc98f-7f8n2                                  2/2     Running       0              19d     10.131.0.172     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-osd-2-599fd57db6-t7fgj                                  2/2     Terminating   0              47h     10.128.2.111     syd04-worker-1.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-osd-2-599fd57db6-zxrj7                                  0/2     Pending       0              17m     <none>           <none>                              <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data---1-x6mw9   0/1     Completed     0              19d     10.131.0.170     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-7667477b9lzb   2/2     Running       0              19d     10.131.0.175     syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>
rook-ceph-tools-f57d97cc6-chv86                                   1/1     Running       0              19d     192.168.25.69    syd04-worker-0.rdr-son49a.ibm.com   <none>           <none>


The pods from the stopped node are left in Terminating, Pending, or Init status.

3. The pod noobaa-db-pg-0 remains in Init:0/1 state

The pod description shows the following error events:
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               32m                   default-scheduler        Successfully assigned openshift-storage/noobaa-db-pg-0 to syd04-worker-2.rdr-son49a.ibm.com
  Warning  FailedAttachVolume      32m                   attachdetach-controller  Multi-Attach error for volume "pvc-c799bef5-bda4-4c5d-b87f-7ab2cdd246b9" Volume is already exclusively attached to one node and can't be attached to another
  Normal   SuccessfulAttachVolume  26m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c799bef5-bda4-4c5d-b87f-7ab2cdd246b9"
  Warning  FailedMount             12m (x2 over 23m)     kubelet                  Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[kube-api-access-s6ljv noobaa-postgres-config-volume db]: timed out waiting for the condition
  Warning  FailedMount             3m36s (x3 over 28m)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[noobaa-postgres-config-volume db kube-api-access-s6ljv]: timed out waiting for the condition
  Warning  FailedMount             2m26s (x14 over 25m)  kubelet                  MountVolume.MountDevice failed for volume "pvc-c799bef5-bda4-4c5d-b87f-7ab2cdd246b9" : rpc error: code = Internal desc = rbd image ocs-storagecluster-cephblockpool/csi-vol-e846c26e-334f-11ec-9846-0a580a80020e is still being used
  Warning  FailedMount             80s (x9 over 30m)     kubelet                  Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[db kube-api-access-s6ljv noobaa-postgres-config-volume]: timed out waiting for the condition
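For reference, the stale attachment can be confirmed from the cluster's VolumeAttachment objects and from the RBD watchers. A minimal sketch, reusing the volume handle and tools pod from the output above (standard oc/rbd commands, not taken from the original report):

# The VolumeAttachment for the PV should still point at syd04-worker-1
[root@rdr-son49a-syd04-bastion-0 ~]# oc get volumeattachment | grep pvc-c799bef5-bda4-4c5d-b87f-7ab2cdd246b9

# From the toolbox pod, list the watchers still holding the RBD image
[root@rdr-son49a-syd04-bastion-0 ~]# oc rsh -n openshift-storage rook-ceph-tools-f57d97cc6-chv86
sh-4.4$ rbd status ocs-storagecluster-cephblockpool/csi-vol-e846c26e-334f-11ec-9846-0a580a80020e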

On restarting the kubelet service on the stopped node, all pods return to normal and the noobaa-db-pg-0 pod starts successfully on worker-2, where it had been rescheduled.
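For completeness, the recovery here is simply starting the service again on the affected node, mirroring step 1:

[core@syd04-worker-1 ~]$ sudo systemctl start kubelet.service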

Actual results:


Expected results:
The Terminating pods should be removed, and the Pending pods should move to the Running state.

Additional info:

Comment 2 Sonia Garudi 2021-11-11 10:08:09 UTC
Link to must-gather logs: https://drive.google.com/drive/folders/1F9aqyIRiao4z5yTciYyGv0OxinFakXi4?usp=sharing

Comment 5 Niels de Vos 2021-11-17 11:54:01 UTC
If you really need to stop kubelet for some reason, you need to make sure that PVCs are unmounted beforehand. You can do this by draining the pods from the node (moving it into maintenance mode).

Because volumes are still mounted while application pods are running when `systemctl stop kubelet.service` is run, the checks that detect whether the (Ceph RBD) volumes are in use will not allow mounting the volume on another worker node.

Can you explain the reason for running `systemctl stop kubelet.service` without stopping the application pods in advance? If moving the worker node into maintenance mode is an option, please close this bug. Thanks!
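For illustration, a minimal maintenance-mode sequence along the lines suggested above (standard oc adm commands; the node name is the one from this cluster, and flag names may vary slightly by oc version):

# Cordon the node and evict its pods; DaemonSet pods (e.g. csi-*plugin) stay by design
[root@rdr-son49a-syd04-bastion-0 ~]# oc adm cordon syd04-worker-1.rdr-son49a.ibm.com
[root@rdr-son49a-syd04-bastion-0 ~]# oc adm drain syd04-worker-1.rdr-son49a.ibm.com --ignore-daemonsets --delete-emptydir-data

# Only then stop kubelet on the node
[core@syd04-worker-1 ~]$ sudo systemctl stop kubelet.service

# After maintenance, restart kubelet and make the node schedulable again
[core@syd04-worker-1 ~]$ sudo systemctl start kubelet.service
[root@rdr-son49a-syd04-bastion-0 ~]# oc adm uncordon syd04-worker-1.rdr-son49a.ibm.com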

Comment 6 Sridhar Venkat (IBM) 2021-11-23 00:41:01 UTC
@ndevos Thanks for your comment. We will add a step to execute 'oc drain xxx' to drain the pods running on a specific node before shutting down the kubelet service to simulate node disruption. This BZ can be closed, and we will reopen it if the problem persists after following the new procedure.

