Description of problem:
CNS: the gluster-blockd service doesn't come up after re-creating the gluster pod.

Version-Release number of selected component (if applicable):
atomic-openshift-3.10.10-1.git.0.3431c56.el7.x86_64
heketi-client-7.0.0-2.el7rhgs.x86_64
rhgs3/rhgs-server-rhel7:3.3.1-21

How reproducible:
Once

Steps to Reproduce:
Run the automated testcase dynamic_provisioning_glusterblock_glusterpod_failure; it fails -
https://polarion.engineering.redhat.com/polarion/#/project/ContainerNativeStorage/workitem?id=CNS-437&testrun=ContainerNativeStorage%2Fautomation_ocp3_10-cns3_10

The gluster-blockd service doesn't come up after the gluster pod is re-created. The re-created pod was glusterfs-storage-ggzdl:

[root@dhcp46-165 ~]# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-bn45v   1/1       Running   0          12h
glusterfs-storage-ggzdl                       1/1       Running   0          11h
glusterfs-storage-k24nj                       1/1       Running   0          12h
glusterfs-storage-tbh9s                       1/1       Running   0          12h
heketi-storage-1-zvc5t                        1/1       Running   0          11h

[root@dhcp46-165 ~]# oc rsh glusterfs-storage-ggzdl systemctl status gluster-blockd
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
command terminated with exit code 3

[root@dhcp46-165 ~]# oc rsh glusterfs-storage-k24nj systemctl status gluster-blockd
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-02 18:22:04 UTC; 12h ago
 Main PID: 653 (gluster-blockd)
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb16249c6_7e24_11e8_930b_005056a5f18a.slice/docker-e560055d28b8f02cf1ff29a67708d52a3016d66940c1795815d6f90b09fa2456.scope/system.slice/gluster-blockd.service
           └─653 /usr/sbin/gluster-blockd --glfs-lru-count 15 --log-level INFO

Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Started Gluster block storage utility.
Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com systemd[1]: Starting Gluster block storage utility...
Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com gluster-blockd[653]: Parameter auto_save_on_exit is now 'false'.
Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com gluster-blockd[653]: Parameter logfile is now '/var/log/glusterfs/gluster-block/gluster-block-configshell.log'.
Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com gluster-blockd[653]: Parameter loglevel_file is now 'info'.
Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com gluster-blockd[653]: Parameter auto_enable_tpgt is now 'false'.
Jul 02 18:22:04 dhcp46-122.lab.eng.blr.redhat.com gluster-blockd[653]: Parameter auto_add_default_portal is now 'false'.

[root@dhcp46-165 ~]# oc rsh glusterfs-storage-tbh9s systemctl status gluster-blockd
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-07-02 18:22:03 UTC; 12h ago
 Main PID: 658 (gluster-blockd)
   CGroup: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1645fc7_7e24_11e8_930b_005056a5f18a.slice/docker-2e2c5138501eed629051a0e475c124664457af5cbde11e6ae0e73d30d9487420.scope/system.slice/gluster-blockd.service
           └─658 /usr/sbin/gluster-blockd --glfs-lru-count 15 --log-level INF...

Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Started Gluster...
Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com systemd[1]: Starting Gluste...
Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com gluster-blockd[658]: Parame...
Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com gluster-blockd[658]: Parame...
Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com gluster-blockd[658]: Parame...
Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com gluster-blockd[658]: Parame...
Jul 02 18:22:03 dhcp46-187.lab.eng.blr.redhat.com gluster-blockd[658]: Parame...
Hint: Some lines were ellipsized, use -l to show in full.

Note: I am not able to manually start the gluster-blockd service on that gluster pod either.
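For reference, the per-pod checks above can be collected in one pass. The sketch below is only an illustration built from the commands already shown in this report (`oc get pods`, `oc rsh`, `systemctl`); the helper names `short_name` and `check_blockd` are mine, not part of any product tooling, and `systemctl is-active` is used instead of `status` to get a one-word state per pod. It assumes `oc` is logged in to the cluster with access to the glusterfs namespace.

```shell
#!/bin/sh
# Hypothetical helper: strip the "pod/" prefix that `oc get pods -o name` emits,
# since `oc rsh` expects a bare pod name.
short_name() {
    printf '%s\n' "${1#pod/}"
}

# Hypothetical helper: report the gluster-blockd service state in every
# glusterfs-storage pod. Prints e.g. "glusterfs-storage-ggzdl: inactive".
check_blockd() {
    for pod in $(oc get pods -o name | grep glusterfs-storage); do
        name=$(short_name "$pod")
        # `systemctl is-active` exits non-zero for inactive units; `|| true`
        # keeps the loop going so every pod is reported.
        state=$(oc rsh "$name" systemctl is-active gluster-blockd || true)
        printf '%s: %s\n' "$name" "$state"
    done
}
```

With the cluster from this report, `check_blockd` would be expected to flag glusterfs-storage-ggzdl as the pod where gluster-blockd is not running.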
Please reopen this bug if you think it is different from BZ#1596369.

*** This bug has been marked as a duplicate of bug 1596369 ***