Bug 1615078 - tests/bugs/replicate/bug-1408712.t fails.
Summary: tests/bugs/replicate/bug-1408712.t fails.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tests
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-12 06:18 UTC by Ravishankar N
Modified: 2018-10-23 15:16 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-23 15:16:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2018-08-12 06:18:19 UTC
Description of problem:

tests/bugs/replicate/bug-1408712.t fails on the first attempt and succeeds on the retry in the upstream regression runs. Logs are not available.

There are two such runs so far:
1) https://build.gluster.org/job/regression-on-demand-full-run/44/consoleFull
2) https://build.gluster.org/job/line-coverage/460/consoleFull  

Both of them failed at the same line numbers:
------------------------------------------------------------------------------
21:27:53   Failed tests:  15, 24, 41-43

21:27:53 dd: closing output file ‘file’: Transport endpoint is not connected
21:27:53 not ok 15 , LINENUM:29
21:27:53 FAILED COMMAND: dd if=/dev/zero of=file bs=1M count=8


21:27:53 ok 23, LINENUM:45
21:27:53 md5sum: /mnt/glusterfs/1/file: Transport endpoint is not connected
21:27:53 not ok 24 , LINENUM:49


21:27:53 not ok 41 Got "4" instead of "^0$", LINENUM:76
21:27:53 FAILED COMMAND: ^0$ get_pending_heal_count patchy
21:27:53 not ok 42 , LINENUM:79
21:27:53 FAILED COMMAND: ! stat /d/backends/patchy1/.glusterfs/indices/entry-changes/be318638-e8a0-4c6d-977d-7a937aa84806
21:27:53 not ok 43 , LINENUM:80
21:27:53 FAILED COMMAND: ! stat /d/backends/patchy2/.glusterfs/indices/entry-changes/be318638-e8a0-4c6d-977d-7a937aa84806

------------------------------------------------------------------------------
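
For reference, the failing assertions map roughly to test lines of the following shape. This is reconstructed from the FAILED COMMAND output above using the standard test-framework variables ($V0, $B0), so the actual lines in the .t may differ slightly (e.g. the heal-count check could be a plain EXPECT rather than EXPECT_WITHIN):

  TEST dd if=/dev/zero of=file bs=1M count=8                     # test 15, run from the fuse mount
  EXPECT_WITHIN $HEAL_TIMEOUT "^0$" get_pending_heal_count $V0   # test 41, heal count should drop to 0
  TEST ! stat $B0/${V0}1/.glusterfs/indices/entry-changes/be318638-e8a0-4c6d-977d-7a937aa84806   # test 42
  TEST ! stat $B0/${V0}2/.glusterfs/indices/entry-changes/be318638-e8a0-4c6d-977d-7a937aa84806   # test 43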

Comment 1 Ravishankar N 2018-08-12 06:26:48 UTC
Since the very first `dd` after mounting fails with ENOTCONN, I'm guessing the client is not yet connected to the bricks. Adding checks to the .t, before the `dd`, to verify that the bricks are online and that the client is connected to them; a sketch follows.
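
A sketch of the kind of checks intended, using the existing helpers from tests/volume.rc (the exact lines in the posted patch may differ; repeat for every brick and AFR child in the volume):

  EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" brick_up_status $V0 $H0 $B0/${V0}0   # brick process reports online
  EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" afr_child_up_status $V0 0            # fuse client sees the brick as up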

Comment 2 Worker Ant 2018-08-12 06:30:25 UTC
REVIEW: https://review.gluster.org/20708 (tests: potential fixes to bugs/replicate/bug-1408712.t) posted (#1) for review on master by Ravishankar N

Comment 3 Worker Ant 2018-08-13 12:15:18 UTC
COMMIT: https://review.gluster.org/20708 committed in master by "Shyamsundar Ranganathan" <srangana> with a commit message- tests: potential fixes to bugs/replicate/bug-1408712.t

See BZ for details.

Change-Id: I2cc2064f14d80271ebcc21747103ce4cee848cbf
fixes: bz#1615078
Signed-off-by: Ravishankar N <ravishankar>

Comment 4 Shyamsundar 2018-10-23 15:16:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

