Bug 1581219 - centos regression fails for tests/bugs/replicate/bug-1292379.t
Summary: centos regression fails for tests/bugs/replicate/bug-1292379.t
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Karthik U S
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On: 1515163
Blocks: 1503137
 
Reported: 2018-05-22 11:38 UTC by Karthik U S
Modified: 2018-09-16 11:50 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.12.2-12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1515163
Environment:
Last Closed: 2018-09-04 06:48:11 UTC


Attachments: None


Links:
Red Hat Product Errata RHSA-2018:2607 (Last Updated: 2018-09-04 06:49:57 UTC)

Description Karthik U S 2018-05-22 11:38:07 UTC
+++ This bug was initially created as a clone of Bug #1515163 +++

Description of problem:
Regression failure observed for tests/bugs/replicate/bug-1292379.t at https://build.gluster.org/job/centos6-regression/7534/console

--- Additional comment from Worker Ant on 2018-01-11 23:55:07 EST ---

REVIEW: https://review.gluster.org/19185 (tests: check volume status for shd being up) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-01-12 00:56:47 EST ---

COMMIT: https://review.gluster.org/19185 committed in master by "Ravishankar N" <ravishankar@redhat.com> with a commit message - tests: check volume status for shd being up

so that glusterd is also aware that shd is up and running.

While not reproducible locally, on the Jenkins slaves 'gluster vol heal patchy'
fails with "Self-heal daemon is not running. Check self-heal daemon log file.",
even though the afr_child_up_status_in_shd() checks before that had passed. The
shd log also shows the shd being up and connected to at least one brick before
the heal is launched.

Change-Id: Id3801fa4ab56a70b1f0bd6a7e240f69bea74a5fc
BUG: 1515163
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
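
For reference, the kind of check the commit above adds looks roughly like the
following in a Gluster .t regression test. This is a minimal sketch, not the
actual hunk from review 19185: the helper shd_online_count and its grep/awk
parsing are assumptions, while EXPECT_WITHIN, TEST, $CLI, $V0 and
$PROCESS_UP_TIMEOUT come from the test framework's include.rc.

#!/bin/bash
# Sketch only: wait until glusterd itself reports the self-heal daemon as
# online before launching the heal, instead of relying only on the
# afr_child_up_status_in_shd() checks that had already passed.

. $(dirname $0)/../../include.rc
. $(dirname $0)/../../volume.rc

# Hypothetical helper: count the shd entries whose Online column is "Y" in
# 'gluster volume status <vol> shd' (the pid is the last field, Online the
# second to last).
shd_online_count () {
        $CLI volume status $V0 shd | grep "Self-heal Daemon" | \
                awk '{print $(NF-1)}' | grep -c '^Y$'
}

# Wait (up to the framework timeout) for glusterd to see the shd as up, and
# only then trigger the heal, so it no longer fails with
# "Self-heal daemon is not running. Check self-heal daemon log file."
EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1" shd_online_count
TEST $CLI volume heal $V0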

--- Additional comment from Shyamsundar on 2018-03-15 07:21:35 EDT ---

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 8 Vijay Avuthu 2018-07-25 10:08:21 UTC
Update:
===========

Build Used: glusterfs-3.12.2-14.el7rhgs.x86_64

Scenario (a shell sketch of this flow follows the checksum output below):

1) Create a 1 x 3 replicate volume and mount it
2) Disable the self-heal daemon and all client-side heals
3) On the mount point, continuously write to a file using dd (4 GB)
4) Bring down brick b0
5) Rename the file from another session
6) After 2 minutes, bring b0 back up
7) Enable the self-heal daemon
8) Check that the self-heal daemon is up and running, then trigger volume heal
9) Wait for the heal to complete
10) Kill the dd if it is still running
11) Calculate the md5sum on all the bricks

> The md5sum of the file is the same on all the bricks as well as on the mount point.

N1:

# md5sum /bricks/brick0/b0/file1_rename 
c25283b4aa3e52b37abbbfb9835bdf81  /bricks/brick0/b0/file1_rename
# 

N2:

# md5sum /bricks/brick0/b1/file1_rename
c25283b4aa3e52b37abbbfb9835bdf81  /bricks/brick0/b1/file1_rename
#

N3:

# md5sum /bricks/brick0/b2/file1_rename
c25283b4aa3e52b37abbbfb9835bdf81  /bricks/brick0/b2/file1_rename
#

Client:

# md5sum file1_rename
c25283b4aa3e52b37abbbfb9835bdf81  file1_rename
#
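
For completeness, a hedged shell sketch of the verification flow above. The
volume name (testvol), node names (N1/N2/N3), file name, and the brick pid
placeholder are illustrative and taken loosely from the output above; the
exact commands used in the run are not recorded in this bug.

# 1) Create and mount a 1 x 3 replicate volume.
gluster volume create testvol replica 3 N1:/bricks/brick0/b0 \
        N2:/bricks/brick0/b1 N3:/bricks/brick0/b2
gluster volume start testvol
mount -t glusterfs N1:/testvol /mnt/testvol

# 2) Disable the self-heal daemon and all client-side heals.
gluster volume set testvol cluster.self-heal-daemon off
gluster volume set testvol cluster.data-self-heal off
gluster volume set testvol cluster.metadata-self-heal off
gluster volume set testvol cluster.entry-self-heal off

# 3) Continuously write a ~4 GB file from the mount point.
dd if=/dev/urandom of=/mnt/testvol/file1 bs=1M count=4096 &

# 4) On N1, bring down brick b0 by killing its glusterfsd process (its pid
#    is shown by 'gluster volume status testvol').
kill -9 <pid-of-b0-brick-process>

# 5) From another session, rename the file while dd is still writing.
mv /mnt/testvol/file1 /mnt/testvol/file1_rename

# 6) After ~2 minutes, bring b0 back up.
gluster volume start testvol force

# 7) Re-enable the self-heal daemon.
gluster volume set testvol cluster.self-heal-daemon on

# 8) Check that the shd is up and running, then trigger the heal.
gluster volume status testvol shd
gluster volume heal testvol

# 9) Wait for the heal to complete (heal info should show 0 pending entries).
gluster volume heal testvol info

# 10) Kill the dd if it is still running.
kill %1 2>/dev/null

# 11) Compare checksums on every brick and on the client.
md5sum /bricks/brick0/b*/file1_rename    # on each node
md5sum /mnt/testvol/file1_rename         # on the client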

Comment 9 errata-xmlrpc 2018-09-04 06:48:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

