$SUMMARY is because the test calls cleanup twice via its bash exit traps:
- First, the test itself sets a trap to cleanup at https://github.com/gluster/glusterfs/blob/master/tests/bugs/core/multiplex-limit-issue-151.t#L30
- Additionally, include.rc sets a trap to cleanup at https://github.com/gluster/glusterfs/blob/master/tests/include.rc#L719

The tar ball is generated in the cleanup routine, which also ensures that no content is carried over in the tar balls between 2 invocations. Thus, calling cleanup twice results in an empty tarball. This can be seen by running the test locally as `./tests/bugs/distribute/bug-1042725.t`.

There are a few things in that test we need clarified:
1. Why trap this: https://github.com/gluster/glusterfs/blob/master/tests/bugs/core/multiplex-limit-issue-151.t#L29
2. Why trap cleanup, rather than invoke it at the end of the test as is normal?

This pattern is repeated across the following tests:
tests/basic/mpx-compat.t
tests/basic/multiplex.t
tests/bugs/core/multiplex-limit-issue-151.t
tests/bugs/glusterd/brick-mux-validation.t

The fix is to revert this pattern to the normal cleanup at the end and remove the traps set by these test cases.
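The empty-tarball effect can be reproduced outside the test framework. The sketch below is illustrative only (the paths and the `cleanup` body are hypothetical stand-ins, not the real include.rc code): cleanup archives the logs and then wipes them, so a second invocation archives an already-empty directory, overwriting the useful tarball.

```shell
#!/bin/bash
# Hypothetical sketch of the double-cleanup problem; paths are made up.
workdir=$(mktemp -d)
logdir="$workdir/logs"
mkdir -p "$logdir"
echo "useful debug output" > "$logdir/test.log"

cleanup() {
    # Archive whatever logs exist, then wipe them -- mirroring how the
    # real cleanup tars logs and clears state between invocations.
    tar -czf "$workdir/logs.tgz" -C "$logdir" . 2>/dev/null
    rm -f "$logdir"/*
}

cleanup        # first call: the tarball contains test.log
first=$(tar -tzf "$workdir/logs.tgz" | grep -c 'test.log')

cleanup        # second call: logs were already wiped, tarball is overwritten empty
second=$(tar -tzf "$workdir/logs.tgz" | grep -c 'test.log')

echo "first=$first second=$second"   # first=1 second=0
rm -rf "$workdir"
```

The key point is that cleanup is destructive between invocations by design, so it must run exactly once per test.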
REVIEW: https://review.gluster.org/20706 (tests: Fix cleanup routine for some mux tests) posted (#1) for review on master by Shyamsundar Ranganathan
COMMIT: https://review.gluster.org/20706 committed in master by "Atin Mukherjee" <amukherj> with a commit message-

tests: Fix cleanup routine for some mux tests

Some of the mux tests set a trap to catch test exit and call cleanup. This causes cleanup to be invoked twice if the test times out, or even otherwise, as include.rc also sets a trap to cleanup on exit (TERM and others). This leads to the tarballs generated on failures for these tests being empty, which does not aid debugging.

This patch corrects this pattern across the tests to the more standard cleanup at the end.

Fixes: bz#1615037
Change-Id: Ib83aeb09fac2aa591b390b9fb9e1f605bfef9a8b
Signed-off-by: ShyamsundarR <srangana>
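The commit message describes why the duplication happens: include.rc traps TERM (and other signals) while the tests additionally trapped EXIT, so a timed-out test runs cleanup once from the signal trap and again from the EXIT trap. The sketch below demonstrates that mechanism with a counter file; the trap setup is a simplified stand-in for the real include.rc, not the actual framework code.

```shell
#!/bin/bash
# Sketch: count cleanup invocations under the old and fixed patterns.
counter=$(mktemp)
export counter

# Old pattern: the framework traps TERM (include.rc-style) and the test
# also traps EXIT, so a simulated timeout (SIGTERM) fires cleanup twice:
# once from the TERM trap, and again from the EXIT trap as the shell exits.
bash -c '
  cleanup() { echo run >> "$counter"; }
  trap cleanup TERM    # stand-in for the include.rc trap
  trap cleanup EXIT    # the extra trap the tests added
  kill -TERM $$        # simulate the harness killing a timed-out test
'
old=$(wc -l < "$counter")

: > "$counter"
# Fixed pattern: no extra EXIT trap; a single explicit cleanup at the end.
bash -c '
  cleanup() { echo run >> "$counter"; }
  trap cleanup TERM    # framework trap stays, but never stacks with EXIT
  cleanup              # normal end-of-test cleanup call
'
new=$(wc -l < "$counter")

echo "old=$old new=$new"   # old=2 new=1
rm -f "$counter"
```

This is why simply removing the tests' own traps and calling cleanup once at the end restores a single cleanup invocation per test run.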
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/