Description of problem:

When the disk is removed from the VM, we get these logs from glusterfsd (the brick process):

```
Broadcast message from systemd-journald@node1 (Mon 2017-03-27 12:14:29 UTC):

var-lib-heketi-mounts-vg_3dc59eba73cb2070a96c0fec5a2b5e82-brick_455ecb9f475cf6bd7dd201457799736c-brick[14090]: [2017-03-27 12:14:29.751829] M [MSGID: 113075] [posix-helpers.c:1841:posix_health_check_thread_proc] 0-vol_71bda80b7a159f08ad795e4f4f244bd4-posix: health-check failed, going down

Message from syslogd@localhost at Mar 27 12:14:29 ...
var-lib-heketi-mounts-vg_3dc59eba73cb2070a96c0fec5a2b5e82-brick_455ecb9f475cf6bd7dd201457799736c-brick[14090]: [2017-03-27 12:14:29.751829] M [MSGID: 113075] [posix-helpers.c:1841:posix_health_check_thread_proc] 0-vol_71bda80b7a159f08ad795e4f4f244bd4-posix: health-check failed, going down
```

When the kill signal is sent to the same brick process as part of replace-brick, we get:

```
Broadcast message from systemd-journald@node1 (Mon 2017-03-27 12:14:59 UTC):

var-lib-heketi-mounts-vg_3dc59eba73cb2070a96c0fec5a2b5e82-brick_455ecb9f475cf6bd7dd201457799736c-brick[14090]: [2017-03-27 12:14:59.752367] M [MSGID: 113075] [posix-helpers.c:1847:posix_health_check_thread_proc] 0-vol_71bda80b7a159f08ad795e4f4f244bd4-posix: still alive! -> SIGTERM

Message from syslogd@localhost at Mar 27 12:14:59 ...
var-lib-heketi-mounts-vg_3dc59eba73cb2070a96c0fec5a2b5e82-brick_455ecb9f475cf6bd7dd201457799736c-brick[14090]: [2017-03-27 12:14:59.752367] M [MSGID: 113075] [posix-helpers.c:1847:posix_health_check_thread_proc] 0-vol_71bda80b7a159f08ad795e4f4f244bd4-posix: still alive! -> SIGTERM

Shared connection to 192.168.21.14 closed.
```

After this, glusterd is found to have crashed on the node, and the system drops into emergency mode.

Removing the disk from the VM might not be the best way to test this; if a better approach shows that Gluster is resilient to disk failures, this bug can be closed.
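For context, posix_health_check_thread_proc is the brick's periodic health-check thread, and the two broadcasts above are 30 seconds apart, which is consistent with a probe-then-escalate pattern: the probe fails once the disk is gone, the brick is asked to go down, and if the process is still alive after a grace period it is sent SIGTERM. The following is only a minimal sketch of that pattern, not the GlusterFS implementation; the names (brick_path, request_graceful_stop) and the 30-second intervals are assumptions for illustration.

```
/*
 * Hypothetical sketch of a brick health-check loop.  NOT the GlusterFS
 * source; request_graceful_stop, PROBE_INTERVAL and GRACE_PERIOD are
 * invented for illustration.
 */
#include <signal.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define PROBE_INTERVAL 30  /* seconds between health probes (assumed) */
#define GRACE_PERIOD   30  /* seconds allowed for a clean shutdown (assumed) */

/* Hypothetical hook: a real brick would tear down its translator stack here. */
static void request_graceful_stop(void)
{
    fprintf(stderr, "health-check failed, going down\n");
}

static void health_check_loop(const char *brick_path)
{
    struct stat st;

    for (;;) {
        sleep(PROBE_INTERVAL);

        /* A pulled disk makes any access to the brick path fail. */
        if (stat(brick_path, &st) != 0) {
            request_graceful_stop();
            sleep(GRACE_PERIOD);

            /* Still running after the grace period? Escalate, mirroring
             * the "still alive! -> SIGTERM" message in the logs above. */
            fprintf(stderr, "still alive! -> SIGTERM\n");
            kill(getpid(), SIGTERM);
            return;
        }
    }
}

int main(int argc, char **argv)
{
    health_check_loop(argc > 1 ? argv[1] : "/bricks/brick1");
    return 0;
}
```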
Closing this bug since the needinfo hasn't been addressed. Please reopen this bug if the issue persists.