Bug 1076348

Summary: Multiple bricks on a node with changelog enabled could cause changelog/journal corruption
Product: [Community] GlusterFS
Component: core
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Venky Shankar <vshankar>
Assignee: Venky Shankar <vshankar>
QA Contact:
Docs Contact:
CC: bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-4.1.3 (or later)
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-08-29 03:37:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Venky Shankar 2014-03-14 07:15:18 UTC
Description of problem:

The changelog translator uses <brick>/.glusterfs/changelog as the default directory for recording journals. This default can be changed using the "changelog-dir" volume set option. When the option is set to a common path and multiple bricks are hosted on the same node, the changelog translator on each brick would write to a common journal (as the path would now be /path/to/journal/CHANGELOG). This leads to journal corruption and incorrect behavior of applications relying on changelogs, such as geo-replication and backup utilities.
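
For illustration only, a minimal C sketch of the path construction at issue (this is not the changelog translator code; the brick paths and buffer sizes are made up). With the default per-brick directory the journal paths differ, but once changelog-dir points both bricks at the same directory they collapse onto one CHANGELOG file:

#include <stdio.h>

/* Journal file name is fixed, so only the directory distinguishes bricks. */
static void
journal_path (const char *changelog_dir, char *out, size_t len)
{
        snprintf (out, len, "%s/CHANGELOG", changelog_dir);
}

int
main (void)
{
        char p1[4096], p2[4096];

        /* Default: per-brick directory, no collision. */
        journal_path ("/export/brick1/.glusterfs/changelog", p1, sizeof (p1));
        journal_path ("/export/brick2/.glusterfs/changelog", p2, sizeof (p2));
        printf ("%s\n%s\n", p1, p2);

        /* changelog-dir overridden: both bricks resolve to /path/to/journal/CHANGELOG. */
        journal_path ("/path/to/journal", p1, sizeof (p1));
        journal_path ("/path/to/journal", p2, sizeof (p2));
        printf ("%s\n%s\n", p1, p2);

        return 0;
}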


Version-Release number of selected component (if applicable):
mainline

How reproducible:
always

Expected results:
The changelog translator on each brick should ensure that the directory used for its journals is unique.

Comment 1 Anand Avati 2014-03-14 08:54:58 UTC
REVIEW: http://review.gluster.org/7274 (features/changelog: use brick hash for changelog directory) posted (#1) for review on master by Venky Shankar (vshankar)
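
A rough sketch of the idea named in the patch title, as I read it (the hash function and directory layout below are hypothetical, not taken from the patch): mix a hash of the brick path into the configured changelog directory so each brick gets a distinct journal location even when changelog-dir is shared.

#include <stdio.h>
#include <stdint.h>

/* DJB2-style string hash -- a stand-in for whatever hash the patch actually uses. */
static uint32_t
brick_hash (const char *brick_path)
{
        uint32_t h = 5381;
        for (; *brick_path; brick_path++)
                h = (h * 33) ^ (uint32_t)*brick_path;
        return h;
}

static void
changelog_dir_for_brick (const char *base_dir, const char *brick_path,
                         char *out, size_t len)
{
        /* e.g. /path/to/journal/changelog-a1b2c3d4 -- unique per brick path. */
        snprintf (out, len, "%s/changelog-%08x", base_dir,
                  (unsigned) brick_hash (brick_path));
}

int
main (void)
{
        char d1[4096], d2[4096];

        changelog_dir_for_brick ("/path/to/journal", "/export/brick1", d1, sizeof (d1));
        changelog_dir_for_brick ("/path/to/journal", "/export/brick2", d2, sizeof (d2));
        printf ("%s\n%s\n", d1, d2);   /* distinct directories per brick */

        return 0;
}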

Comment 2 Mike McCune 2016-03-28 23:45:25 UTC
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.

Comment 3 Amar Tumballi 2018-08-29 03:37:31 UTC
This update is done in bulk based on the state of the patch and the time since last activity. If the issue is still seen, please reopen the bug.