Description of problem:
10 app pods were created with block PVCs as storage. The initiator and target nodes were mutually exclusive. On draining the initiator node, which hosted all 10 application pods, the pods got migrated to one of the target nodes. All the pods were up and running after the drain.

On checking the multipath -ll output on the node which now hosts the app pods, only 7 out of 10 mpath devices were listed:

[root@dhcp46-244 ~]# multipath -ll|grep mpath|wc -l
7

Out of the 7 devices, two had only 2 paths each:

mpathh (3600140501cce7f81cfe4402a2827c3c4) dm-30 LIO-ORG ,TCMU device
size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=10 status=active
| `- 38:0:0:0 sdl 8:176 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 50:0:0:0 sdw 65:96 active ready running

mpathg (3600140526a2ab4ee53643feaa1a711f9) dm-28 LIO-ORG ,TCMU device
size=1.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=10 status=active
| `- 37:0:0:0 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 48:0:0:0 sdv 65:80 active ready running

5 out of 10 app pods did not have mpath devices listed in the df -kh output; the data mount shows a plain /dev/sdX device instead. Output from some of the app pods is given below.

mongodb-9-1-cv5vl
+++++++++++++++++++++++
Filesystem                                                                                      Size  Used  Avail  Use%  Mounted on
/dev/mapper/docker-8:17-12877-555164ce082b80383a82b5412e00d5bd072afb7cdbe78f2f3dd0071358f24e0f  10G   471M  9.6G   5%    /
tmpfs                                                                                           16G   0     16G    0%    /dev
tmpfs                                                                                           16G   0     16G    0%    /sys/fs/cgroup
/dev/sdb1                                                                                       60G   1.4G  59G    3%    /etc/hosts
shm                                                                                             64M   0     64M    0%    /dev/shm
/dev/sdae                                                                                       1014M 233M  782M   23%   /var/lib/mongodb/data
tmpfs                                                                                           16G   16K   16G    1%    /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                           16G   0     16G    0%    /proc/scsi
tmpfs                                                                                           16G   0     16G    0%    /sys/firmware

mongodb-7-1-c9ggt
+++++++++++++++++++++++
Filesystem                                                                                      Size  Used  Avail  Use%  Mounted on
/dev/mapper/docker-8:17-12877-a8822b32719b95660a5f06799b238fb7e9de0a4291386c29751bd8046fb6e43b  10G   471M  9.6G   5%    /
tmpfs                                                                                           16G   0     16G    0%    /dev
tmpfs                                                                                           16G   0     16G    0%    /sys/fs/cgroup
/dev/sdb1                                                                                       60G   1.4G  59G    3%    /etc/hosts
shm                                                                                             64M   0     64M    0%    /dev/shm
/dev/sdr                                                                                        1014M 233M  782M   23%   /var/lib/mongodb/data
tmpfs                                                                                           16G   16K   16G    1%    /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                           16G   0     16G    0%    /proc/scsi
tmpfs                                                                                           16G   0     16G    0%    /sys/firmware

Version-Release number of selected component (if applicable):

# oc rsh glusterfs-storage-hmljv rpm -qa|grep gluster
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-21.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64

# oc rsh heketi-storage-1-wtfvw rpm -qa|grep heketi
python-heketi-7.0.0-3.el7rhgs.x86_64
heketi-client-7.0.0-3.el7rhgs.x86_64
heketi-7.0.0-3.el7rhgs.x86_64

# oc version
oc v3.10.15
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible:
2/2

Steps to Reproduce:
1. Create 10 app pods with block PVCs, hosted on an initiator node that is not one of the target nodes
2. Drain the initiator node
3. Check multipath -ll on the node that now hosts the app pods
4. Check the mpath devices in each pod (df -kh)
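For reference, a minimal sketch of the commands used for steps 2-4; the node and pod names are placeholders (not from this setup) and the drain flags may need adjusting for the local configuration:

# oc adm drain <initiator-node> --ignore-daemonsets --delete-local-data   # step 2: drain the node hosting the app pods
# oc get pods -o wide                                                     # confirm the pods were rescheduled and are Running
# multipath -ll | grep -c mpath                                           # step 3, on the new hosting node: expect 10 mpath devices
# multipath -ll | grep -c "active ready running"                          # expect 3 paths per device, i.e. 30 path lines
# oc rsh <app-pod> df -kh                                                 # step 4: the data mount should be a /dev/mapper/mpath* device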
Actual results:
multipath -ll lists only 7 out of 10 mpath devices, and some of the listed devices have only 2 paths. Some of the app pods don't have mpath devices mounted.

Expected results:
multipath -ll should list all 10 devices with 3 paths each, and all the app pods should have mpath devices mounted on them.

Additional info:
Logs will be attached soon.
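A quick way to check the expected result across all the app pods is a loop like the one below; the "mongodb" name filter and the /var/lib/mongodb/data mount path are assumptions based on the pods shown above. Each pod should report a /dev/mapper/mpath* device; the affected pods instead report a plain /dev/sdX device, as in the df -kh output above.

# for p in $(oc get pods -o name | grep mongodb); do
>   echo -n "$p: "; oc rsh $p df -kh /var/lib/mongodb/data | tail -1 | awk '{print $1}'
> done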