+++ This bug was initially created as a clone of Bug #1608564 +++
The nightly line coverage tests have been failing consistently for a few weeks. The failures are as follows:
2 test(s) failed
1 test(s) generated core
This test is timing out; my thought is to increase the timeout for this test, as the line coverage tests seem to take more time (presumably the lcov instrumentation slows things down).
For example, the times taken for the following tests in centos7 regression builds are:
./tests/bugs/index/bug-1559004-EMLINK-handling.t - 896 seconds
./tests/bugs/core/bug-1432542-mpx-restart-crash.t - 309 seconds
./tests/basic/afr/lk-quorum.t - 225 seconds
In the lcov runs the same tests take:
./tests/bugs/index/bug-1559004-EMLINK-handling.t - 1063 seconds
./tests/bugs/core/bug-1432542-mpx-restart-crash.t - 400 seconds (timeout)
./tests/basic/afr/lk-quorum.t - 267 seconds
As can be seen, each test adds roughly 20-30 seconds for every 100 seconds of a normal run.
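As a quick sanity check on that estimate, the overhead can be computed directly from the timings above (the `overhead` helper is purely illustrative):

```shell
# Compute the lcov overhead, in seconds added per 100 seconds of a plain
# run, from the timings quoted above.
overhead() {  # args: plain-run seconds, lcov-run seconds
    awk -v p="$1" -v l="$2" 'BEGIN { printf "%.0f\n", (l - p) / p * 100 }'
}
overhead 896 1063   # bug-1559004-EMLINK-handling.t
overhead 309 400    # bug-1432542-mpx-restart-crash.t
overhead 225 267    # lk-quorum.t
```

This prints 19, 29, and 19, which is where the 20-30 seconds per 100 estimate comes from.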
Need to reproduce this locally and check whether increasing the timeout for the mpx test resolves the failure.
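Assuming the run-tests.sh harness honors a per-test timeout declared inside the .t file itself (an assumption about the mechanism; the 800-second value and file path below are illustrative), raising the limit amounts to adding one line to the test and having the harness pick it up. A runnable sketch of that extraction:

```shell
# Sketch of a per-test timeout override (an assumption about how the
# harness works): the .t file declares SCRIPT_TIMEOUT, the harness greps
# it out and falls back to its default when absent. Values are illustrative.
cat > /tmp/example.t <<'EOF'
#!/bin/bash
SCRIPT_TIMEOUT=800   # lcov runs need more headroom than the default
EOF
default=200
t=$(sed -n 's/^SCRIPT_TIMEOUT=\([0-9]*\).*/\1/p' /tmp/example.t)
echo "running with timeout: ${t:-$default}s"
```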
REVIEW: https://review.gluster.org/20568 (tests: Increase timeout for mpx restart crash test) posted (#3) for review on master by Shyamsundar Ranganathan
COMMIT: https://review.gluster.org/20568 committed in master by "Atin Mukherjee" <email@example.com> with the commit message: tests: Increase timeout for mpx restart crash test
In lcov based regression testing environments, all tests take
more time than what occurs in centos7 regressions. Possibly
due to code instrumentation for lcov purposes.
Due to this the test, bug-1432542-mpx-restart-crash.t constantly
times out. This patch increases the timeout for the same to enable
lcov tests to pass on a more regular basis.
It was also noted by Nithya that the test at times triggered an
OOM kill on the regression machines. In order to reduce the runtime
memory footprint of the tests, FUSE mounts are unmounted as
soon as the required test is complete.
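In the .t test DSL, that change amounts to unmounting each FUSE client as soon as its assertions pass, rather than leaving every mount up until the final cleanup. A sketch in the harness idiom (TEST, $H0, $V0, and $M0 are harness macros/variables from include.rc; the exact steps shown are illustrative, not the patch itself):

```shell
# Before the patch, all volumes stayed mounted until cleanup, so client
# memory accumulated across the many mpx volumes. Unmounting immediately
# caps the footprint at one active mount at a time.
TEST glusterfs --volfile-server=$H0 --volfile-id=$V0 $M0
TEST stat $M0        # the actual check against this mount goes here
TEST umount $M0      # release client memory before mounting the next volume
```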
Signed-off-by: ShyamsundarR <firstname.lastname@example.org>
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.
glusterfs-5.0 has been announced on the Gluster mailing lists, and packages for several distributions should become available in the near future. Keep an eye on the gluster-users mailing list and the update infrastructure for your distribution.