Description of problem:
A few workers fail to start without reporting any error.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Seen only while running the upstream regression test case:
prove -v tests/00-geo-rep/georep-basic-dr-rsync.t

Steps to Reproduce:
1. Get the upstream gluster source code
2. Install gluster from source
3. Run prove -v tests/00-geo-rep/georep-basic-dr-rsync.t

Actual results:
One of the workers fails to start without leaving any log.

Expected results:
No worker should fail to start.

Additional info:
REVIEW: https://review.gluster.org/20704 (geo-rep: Fix deadlock during worker start) posted (#1) for review on master by Kotresh HR
COMMIT: https://review.gluster.org/20704 committed in master by "Amar Tumballi" <amarts> with a commit message-

geo-rep: Fix deadlock during worker start

Analysis:
The monitor process spawns monitor threads (one per brick). Each monitor thread forks worker and agent processes. Each monitor thread, while initializing, updates the monitor status file; access to the file is synchronized using flock. The race is that one thread can fork a worker while another thread has the status file open, leaving a reference to the fd held in the worker process.

Cause:
flock is released either by explicitly unlocking it or by closing all duplicate fds referring to the file. The code relied on fd close, so a reference carried into the worker/agent process by fork could cause the deadlock.

Fix:
1. flock is unlocked explicitly.
2. The status file is also updated in appropriate places so that the fd reference is not leaked to the worker/agent process.

With this fix, both the deadlock and the possible fd leaks are solved.

fixes: bz#1614799
Change-Id: I0d1ce93072dab07d0dbcc7e779287368cd9f093d
Signed-off-by: Kotresh HR <khiremat>
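For illustration, below is a minimal Python sketch of the flock-across-fork hazard the commit describes. The file path and function names here are hypothetical and are not the actual gsyncd code; the point is only the difference between relying on close() to release the lock and unlocking explicitly.

```python
import fcntl
import os

# Hypothetical path, for illustration only.
STATUS_FILE = "/tmp/monitor.status"

def update_status_relying_on_close(new_status):
    # Buggy pattern: rely on close() to release the flock.
    fd = os.open(STATUS_FILE, os.O_CREAT | os.O_RDWR, 0o644)
    fcntl.flock(fd, fcntl.LOCK_EX)
    os.write(fd, new_status.encode())
    # If another thread fork()s a worker at this point, the child
    # inherits fd. flock is tied to the open file description, so
    # closing our copy below does NOT release the lock while the
    # long-lived worker keeps its inherited copy open; every later
    # LOCK_EX on the status file then blocks forever.
    os.close(fd)

def update_status_with_explicit_unlock(new_status):
    # Fixed pattern: release the lock explicitly before close, so a
    # duplicate fd leaked into a forked child cannot keep it held.
    fd = os.open(STATUS_FILE, os.O_CREAT | os.O_RDWR, 0o644)
    fcntl.flock(fd, fcntl.LOCK_EX)
    try:
        os.write(fd, new_status.encode())
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)  # releases even if dup fds exist
        os.close(fd)
```

On Linux, a flock lock belongs to the open file description, which fork() shares between parent and child; an explicit LOCK_UN from any duplicate fd releases the lock immediately, which is why the fix unlocks explicitly instead of depending on close().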
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/