Bug 1076348 - Multiple bricks on a node with changelog enabled could cause changelog/journal corruption
Summary: Multiple bricks on a node with changelog enabled could cause changelog/journal corruption
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Venky Shankar
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-03-14 07:15 UTC by Venky Shankar
Modified: 2018-08-29 03:37 UTC
CC: 1 user

Fixed In Version: glusterfs-4.1.3 (or later)
Clone Of:
Environment:
Last Closed: 2018-08-29 03:37:31 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Venky Shankar 2014-03-14 07:15:18 UTC
Description of problem:

The changelog translator uses <brick>/.glusterfs/changelog as the default directory for recording journals. This directory can be changed with the "changelog-dir" volume set option. When multiple bricks are hosted on the same node and share a configured changelog directory, the changelog translator on each brick writes to a common journal (the path for every brick becomes /path/to/journal/CHANGELOG). This leads to journal corruption and to incorrect behavior in applications that rely on changelogs, such as geo-replication and backup utilities.
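A minimal Python sketch of the path collision described above (the function name and paths are illustrative, not GlusterFS code): with the default per-brick directory the journal paths are distinct, but a shared "changelog-dir" setting makes every brick resolve to the same CHANGELOG file.

```python
import os

def journal_path(brick_path, changelog_dir=None):
    # Default: journal lives under the brick itself, so each brick
    # gets its own directory and there is no collision.
    if changelog_dir is None:
        changelog_dir = os.path.join(brick_path, ".glusterfs", "changelog")
    return os.path.join(changelog_dir, "CHANGELOG")

# Two bricks on one node with the default setting: distinct paths.
assert journal_path("/bricks/b1") != journal_path("/bricks/b2")

# Same two bricks with a shared changelog-dir: both resolve to the
# same file, which is the corruption scenario in this bug.
shared = "/var/lib/journal"
assert journal_path("/bricks/b1", shared) == journal_path("/bricks/b2", shared)
```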


Version-Release number of selected component (if applicable):
mainline

How reproducible:
always

Expected results:
The changelog translator on each brick should ensure that its journal directory is unique.

Comment 1 Anand Avati 2014-03-14 08:54:58 UTC
REVIEW: http://review.gluster.org/7274 (features/changelog: use brick hash for changelog directory) posted (#1) for review on master by Venky Shankar (vshankar)
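The review above names the approach: derive the changelog directory from a hash of the brick path, so bricks sharing a configured directory still get unique journal locations. A hedged sketch of that idea (the hash function and helper name are assumptions for illustration, not the actual patch):

```python
import hashlib
import os

def per_brick_changelog_dir(changelog_dir, brick_path):
    # Hash the brick path and append the digest as a subdirectory;
    # distinct brick paths yield distinct journal directories even
    # when changelog_dir is shared. Digest choice is illustrative.
    digest = hashlib.sha1(brick_path.encode()).hexdigest()[:16]
    return os.path.join(changelog_dir, digest)

d1 = per_brick_changelog_dir("/var/lib/journal", "/bricks/b1")
d2 = per_brick_changelog_dir("/var/lib/journal", "/bricks/b2")
assert d1 != d2  # no collision despite the shared base directory
```

The same brick path always maps to the same directory, so the journal location stays stable across restarts.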

Comment 2 Mike McCune 2016-03-28 23:45:25 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions

Comment 3 Amar Tumballi 2018-08-29 03:37:31 UTC
This update is done in bulk based on the state of the patch and the time since last activity. If the issue is still seen, please reopen the bug.

