Description of problem:
tests/bugs/replicate/bug-1408712.t fails the first time and succeeds on the retry in the upstream regression runs. Logs are not available. There are two runs so far:
1) https://build.gluster.org/job/regression-on-demand-full-run/44/consoleFull
2) https://build.gluster.org/job/line-coverage/460/consoleFull

Both of them failed at the same line numbers:
------------------------------------------------------------------------------
21:27:53 Failed tests: 15, 24, 41-43
21:27:53 dd: closing output file ‘file’: Transport endpoint is not connected
21:27:53 not ok 15 , LINENUM:29
21:27:53 FAILED COMMAND: dd if=/dev/zero of=file bs=1M count=8
21:27:53 ok 23, LINENUM:45
21:27:53 md5sum: /mnt/glusterfs/1/file: Transport endpoint is not connected
21:27:53 not ok 24 , LINENUM:49
21:27:53 not ok 41 Got "4" instead of "^0$", LINENUM:76
21:27:53 FAILED COMMAND: ^0$ get_pending_heal_count patchy
21:27:53 not ok 42 , LINENUM:79
21:27:53 FAILED COMMAND: ! stat /d/backends/patchy1/.glusterfs/indices/entry-changes/be318638-e8a0-4c6d-977d-7a937aa84806
21:27:53 not ok 43 , LINENUM:80
21:27:53 FAILED COMMAND: ! stat /d/backends/patchy2/.glusterfs/indices/entry-changes/be318638-e8a0-4c6d-977d-7a937aa84806
------------------------------------------------------------------------------
Since the very first `dd` after mounting fails with ENOTCONN, my guess is that the client is not yet connected to the bricks. I am adding checks to the .t to verify that the bricks are online and the client is connected to them before the `dd` runs.
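The kind of check described above boils down to polling a status command until it reports the expected value or a timeout expires, which is what the Gluster test framework's EXPECT_WITHIN macro does with helpers such as brick_up_status and afr_child_up_status. A minimal self-contained sketch of that retry pattern (the function name expect_within and its one-second poll interval are my own illustration, not the framework's actual implementation):

```shell
#!/bin/sh
# expect_within TIMEOUT EXPECTED CMD [ARGS...]
# Re-run CMD every second until its output equals EXPECTED,
# or fail once TIMEOUT seconds have elapsed.
expect_within() {
    timeout=$1
    expected=$2
    shift 2
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        actual=$("$@")
        # Success as soon as the command reports the expected value.
        [ "$actual" = "$expected" ] && return 0
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}
```

In the actual .t this would be used along the lines of `EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status ...` before issuing the `dd`, so the test only writes once the mount has live connections to all bricks.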
REVIEW: https://review.gluster.org/20708 (tests: potential fixes to bugs/replicate/bug-1408712.t) posted (#1) for review on master by Ravishankar N
COMMIT: https://review.gluster.org/20708 committed in master by "Shyamsundar Ranganathan" <srangana> with a commit message:

tests: potential fixes to bugs/replicate/bug-1408712.t

See BZ for details.

Change-Id: I2cc2064f14d80271ebcc21747103ce4cee848cbf
fixes: bz#1615078
Signed-off-by: Ravishankar N <ravishankar>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/