Description of problem:
[This bug is raised based on https://bugzilla.redhat.com/show_bug.cgi?id=1477455#c76]

When the node containing the gluster pod is restarted, the app pod fails and goes into CrashLoopBackOff.

[root@dhcp47-57 ~]# oc get pods
NAME                                  READY     STATUS             RESTARTS   AGE
glusterblock-provisioner-dc-1-v0jd6   1/1       Running            0          1d
glusterfs-30vb6                       1/1       Running            0          1d
glusterfs-bzx4c                       1/1       Running            0          1d
glusterfs-mc1xn                       1/1       Running            1          1d
heketi-1-5ktqm                        1/1       Running            0          1d
mongodb-1-1-7r8rf                     0/1       CrashLoopBackOff   304        1d
storage-project-router-2-56dv2        1/1       Running            2          38d

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-37.el7rhgs.x86_64

How reproducible:
1/1
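A minimal triage sketch for this symptom, using standard oc tooling; the pod names are the ones shown above, and the assumption that the glusterfs pods run systemd internally (as the RHGS server containers do) is not from this report:

# Inspect why the app pod is crashlooping (events plus logs of the previous attempt)
oc describe pod mongodb-1-1-7r8rf
oc logs --previous mongodb-1-1-7r8rf

# Inside the glusterfs pod on the rebooted node, check the block-layer services
oc rsh glusterfs-mc1xn systemctl status gluster-blockd gluster-block-target tcmu-runner
oc rsh glusterfs-mc1xn journalctl -u tcmu-runner --since "1 hour ago"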
Upstream patches related to the gluster-block and gluster-block-target systemd units [1] and the tcmu-runner systemd unit [2]:

[1] https://review.gluster.org/#/c/18029/
[2] https://github.com/open-iscsi/tcmu-runner/pull/296
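For context, a rough sketch of the kind of unit ordering those patches concern; the unit names are the upstream defaults, and the drop-in below is illustrative only, not the packaged fix:

# Check what ordering/dependencies the installed units already declare
systemctl cat tcmu-runner.service gluster-blockd.service gluster-block-target.service | grep -E '^(After|Requires|PartOf)='

# One way to express such an ordering locally via a drop-in (sketch only;
# the real fix ships in the unit files delivered by the packages)
mkdir -p /etc/systemd/system/gluster-blockd.service.d
cat > /etc/systemd/system/gluster-blockd.service.d/50-ordering.conf <<'EOF'
[Unit]
After=tcmu-runner.service gluster-block-target.service
Requires=tcmu-runner.service
EOF
systemctl daemon-reload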
Verified the fix in the below build on a CNS setup.

rpm -qa | grep 'cns'
cns-deploy-5.0.0-43.el7rhgs.x86_64

sh-4.2# rpm -qa | grep 'tcmu'
libtcmu-1.2.0-15.el7rhgs.x86_64
tcmu-runner-1.2.0-15.el7rhgs.x86_64

The following tests were done:
- gluster pod restart
- reboot of the node containing the gluster pod
- app pod restart

The issue reported in this bug was not seen. Moving the bug to VERIFIED.
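For reference, a rough sketch of how the three tests above could be driven from the master node; pod names are from this setup, <node> is a placeholder, it is assumed the glusterfs pods are managed by a DaemonSet (as in a standard cns-deploy deployment) so a deleted pod is recreated automatically, and these are not necessarily the exact commands used during verification:

oc delete pod glusterfs-mc1xn       # gluster pod restart
ssh root@<node> reboot              # reboot the node hosting a gluster pod
oc delete pod mongodb-1-1-7r8rf     # app pod restart
oc get pods -w                      # confirm pods return to Running with no CrashLoopBackOff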
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773