Bug 1169331 - Geo-replication slave fills up inodes
Summary: Geo-replication slave fills up inodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Assignee: Aravinda VK
QA Contact:
URL:
Whiteboard:
Duplicates: 1188968 (view as bug list)
Depends On:
Blocks: 1164906
 
Reported: 2014-12-01 10:56 UTC by Andrea Tartaglia
Modified: 2015-05-14 17:35 UTC
CC: 4 users

Fixed In Version: glusterfs-3.7.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-14 17:26:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Andrea Tartaglia 2014-12-01 10:56:45 UTC
Description of problem:
In a master/slave geo-replication setup, the slave node fills up its disk inodes. This happens because the ".processing" directory is never cleaned up on the slave as it is on the master, so that directory accumulates all the CHANGELOG_* files.

Version-Release number of selected component (if applicable):
glusterfs-geo-replication-3.6.0-0.5.beta3.el6.x86_64

How reproducible:
Reproduced with 2 local nodes geo-replicating to 1 or more remote sites.

Actual results:
The slave server saves the CHANGELOG files in
/var/lib/misc/glusterfsd/<VOLNAME>/<REMOTESITE>/<ID>/.processing/
but never deletes them after the changelog is applied.

Expected results:
The .processing directory gets cleaned up as it is on the master (each CHANGELOG file gets moved into the .processed directory).
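For anyone hitting this, the build-up is easy to confirm. A minimal Python sketch (the working-directory path is taken from the report; the function name is ours, not a GlusterFS tool) that counts changelogs still sitting in any .processing directory:

```python
import os

# Geo-replication working directory on the slave, as given in the report;
# the volume/session subdirectories underneath vary per setup.
WORKDIR = "/var/lib/misc/glusterfsd"

def count_pending_changelogs(workdir=WORKDIR):
    """Count CHANGELOG.* files sitting in any .processing directory."""
    total = 0
    for root, _dirs, files in os.walk(workdir):
        if os.path.basename(root) == ".processing":
            total += sum(1 for f in files if f.startswith("CHANGELOG."))
    return total

if __name__ == "__main__":
    print(count_pending_changelogs())
```

On an affected slave this number grows without bound, which is what eventually exhausts the filesystem's inodes (visible with `df -i`).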

Comment 1 Niels de Vos 2014-12-02 12:43:57 UTC
This is a known issue, but I could not find an existing bug report for it yet.

Venky is aware of the problem:
- http://supercolony.gluster.org/pipermail/gluster-devel/2014-November/042887.html

Comment 2 Andrea Tartaglia 2014-12-02 12:53:14 UTC
Yep, that was me asking about it on the mailing list.
He asked me to raise a bug report for it.

Comment 3 Venky Shankar 2014-12-03 06:16:37 UTC
Aravinda,

The passive replica periodically brings its local stime up to the cluster stime. Could we purge the accumulated changelogs at the same time?

We'd also need to purge processed changelogs. A good approach would be to pack them into an archive periodically.

Any thoughts?
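As a rough illustration of the archiving idea floated above (a sketch only, not the eventual implementation; the function name, the append-to-one-tar design, and the argument names are assumptions):

```python
import os
import tarfile

def archive_processed(processed_dir, archive_path):
    """Pack processed CHANGELOG.* files into a tar archive, then delete them.

    Returns the number of changelog files archived.
    """
    entries = [f for f in os.listdir(processed_dir)
               if f.startswith("CHANGELOG.")]
    if not entries:
        return 0
    # Mode "a" appends to an existing tar (creating it if absent), so
    # periodic runs accumulate into a single archive file.
    with tarfile.open(archive_path, "a") as tar:
        for name in entries:
            path = os.path.join(processed_dir, name)
            tar.add(path, arcname=name)
            os.unlink(path)
    return len(entries)
```

Run periodically, this keeps the inode count bounded (one archive file instead of thousands of small changelogs) while still preserving the changelog history.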

Comment 4 Anand Avati 2015-01-15 09:58:52 UTC
REVIEW: http://review.gluster.org/9453 (geo-rep: [WIP] Archive Changelogs and avoid generating empty XSync changelogs) posted (#1) for review on master by Aravinda VK (avishwan)

Comment 5 Anand Avati 2015-01-19 05:47:52 UTC
REVIEW: http://review.gluster.org/9453 (geo-rep: Archive Changelogs and avoid generating empty XSync changelogs) posted (#2) for review on master by Aravinda VK (avishwan)

Comment 6 Anand Avati 2015-02-03 15:55:35 UTC
REVIEW: http://review.gluster.org/9453 (geo-rep: Archive Changelogs and avoid generating empty XSync changelogs) posted (#3) for review on master by Aravinda VK (avishwan)

Comment 7 Anand Avati 2015-02-20 02:56:09 UTC
COMMIT: http://review.gluster.org/9453 committed in master by Venky Shankar (vshankar) 
------
commit 1226083d0ff5fcff21abd16b314effeee49ae770
Author: Aravinda VK <avishwan>
Date:   Thu Jan 15 15:19:50 2015 +0530

    geo-rep: Archive Changelogs and avoid generating empty XSync changelogs
    
    With this patch,
    - Hybrid Crawl will not generate empty Changelogs
    - Archives changelogs when processed (Hybrid (XSync), History,
      and Changelog crawls)
    - Passive worker cleans up its processing directory
    
    BUG: 1169331
    Change-Id: I1383ffaed261cdf50da91b14260b4d43177657d1
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/9453
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>

Comment 8 Aravinda VK 2015-04-02 09:00:37 UTC
*** Bug 1188968 has been marked as a duplicate of this bug. ***

Comment 9 Niels de Vos 2015-05-14 17:26:17 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


