Description of problem: If a gluster process or the file-system back-end locks up in the kernel, the gluster posix health check cannot determine the overall health of the system. Expected results: the posix health check should identify issues with the back-end file-system and also watch the latency of the write-timestamp/read-timestamp cycle.
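A minimal sketch (plain C, assuming a hypothetical probe-file path and latency threshold, not the actual posix health-check code) of one such write-timestamp/read-timestamp cycle, timed so that a slow cycle can be reported as unhealthy:

/*
 * Hypothetical sketch of one write-timestamp/read-timestamp cycle
 * against a probe file on the brick. PROBE_FILE and MAX_LATENCY_SEC
 * are assumptions for illustration.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define PROBE_FILE      "/bricks/brick1/.glusterfs/health_check"  /* assumed path */
#define MAX_LATENCY_SEC 5.0                                       /* assumed threshold */

static int
brick_timestamp_cycle(const char *probe)
{
        char            ts[64]  = {0};
        char            buf[64] = {0};
        struct timespec start   = {0};
        struct timespec end     = {0};
        double          elapsed = 0.0;
        int             fd      = -1;
        int             ret     = -1;

        clock_gettime(CLOCK_MONOTONIC, &start);

        fd = open(probe, O_CREAT | O_RDWR | O_TRUNC, 0644);
        if (fd < 0)
                return -1;

        snprintf(ts, sizeof(ts), "%ld", (long)time(NULL));

        /* write the timestamp, then read it back from the same offset */
        if (pwrite(fd, ts, strlen(ts), 0) < 0)
                goto out;
        if (pread(fd, buf, sizeof(buf) - 1, 0) < 0)
                goto out;

        clock_gettime(CLOCK_MONOTONIC, &end);
        elapsed = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;

        /* a slow cycle is treated the same as a failed one */
        ret = (elapsed > MAX_LATENCY_SEC) ? -1 : 0;
out:
        close(fd);
        return ret;
}

The limitation of this synchronous cycle is what the patches below address: if pwrite()/pread() itself hangs in the kernel, the checker thread hangs with it and never gets to report the brick as unhealthy.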
REVIEW: https://review.gluster.org/18872 (posix: Convert posix_fs_health_check asynchronously to save timestamp) posted (#1) for review on master by MOHIT AGRAWAL
COMMIT: https://review.gluster.org/18872 committed in master by "MOHIT AGRAWAL" <moagrawa> with a commit message- posix: Convert posix_fs_health_check asynchronously to save timestamp Problem: Sometimes the posix_fs_health_check thread is blocked on a write/read call when the backend device is deleted abruptly. Solution: Convert the code to update the timestamp asynchronously. BUG: 1501132 Change-Id: Id68ea6a572bf68fbf437e1d9be5221b63d47ff9c Signed-off-by: Mohit Agrawal <moagrawa>
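As a rough illustration of the "update timestamp asynchronously" idea in the commit message, the sketch below uses POSIX AIO (link with -lrt) so the health-check thread waits only up to a timeout instead of blocking on a stuck backend. The function name, the timeout handling, and the choice of AIO are assumptions for illustration, not the contents of the actual patch:

/*
 * Hypothetical asynchronous timestamp write: submit the I/O, then wait
 * only up to a timeout, so a hung backend no longer blocks the
 * health-check thread indefinitely.
 */
#include <aio.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static int
async_timestamp_write(int fd, time_t timeout_sec)
{
        /* static so the buffer stays valid even if a cancelled write
         * completes later; this sketch assumes a single checker thread */
        static char         ts[64];
        struct aiocb        cb;
        const struct aiocb *list[1];
        struct timespec     timeout = {0};

        snprintf(ts, sizeof(ts), "%ld", (long)time(NULL));

        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = ts;
        cb.aio_nbytes = strlen(ts);
        cb.aio_offset = 0;

        if (aio_write(&cb) < 0)
                return -1;

        list[0]        = &cb;
        timeout.tv_sec = timeout_sec;

        /* wait for completion, but only up to the timeout */
        if (aio_suspend(list, 1, &timeout) < 0 && errno == EAGAIN) {
                /* still in progress: the backend is stuck or too slow */
                aio_cancel(fd, &cb);
                return -1;
        }

        if (aio_error(&cb) != 0)
                return -1;

        return (aio_return(&cb) == (ssize_t)strlen(ts)) ? 0 : -1;
}

On timeout the write is cancelled on a best-effort basis and the brick can be reported unhealthy even though the kernel never returned from the I/O.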
REVIEW: https://review.gluster.org/18954 (posix: Convert posix_fs_health_check asynchronously) posted (#1) for review on experimental by Kotresh HR
COMMIT: https://review.gluster.org/18954 committed in experimental by "Kotresh HR" <khiremat> with a commit message- posix: Convert posix_fs_health_check asynchronously Problem: Sometimes the posix_fs_health_check thread is blocked on a write/read call when the backend device is deleted abruptly. Solution: Convert the code to update the timestamp asynchronously. BUG: 1501132 Change-Id: Id68ea6a572bf68fbf437e1d9be5221b63d47ff9c Signed-off-by: Mohit Agrawal <moagrawa>
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.0, please open a new bug report. glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html [2] https://www.gluster.org/pipermail/gluster-users/