Bug 1031515 - Dist-geo-rep : too much logging in slave gluster logs when there are some 20 million files for xsync to crawl
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard: usability
Depends On: 990558
Blocks: 1202842 1223636
 
Reported: 2013-11-18 07:47 UTC by Vijaykumar Koppad
Modified: 2015-07-29 04:30 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.7.0-2.el6rhs
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:30:12 UTC
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHSA-2015:1495 (normal, SHIPPED_LIVE): Important: Red Hat Gluster Storage 3.1 update (last updated 2015-07-29 08:26:26 UTC)

Description Vijaykumar Koppad 2013-11-18 07:47:22 UTC
Description of problem: Excessive logging in the slave gluster logs when there are some 20 million files for xsync to crawl. The slave gluster log grew to about 2.5 GB within a week while crawling those 20 million files.


Version-Release number of selected component (if applicable): glusterfs-api-3.4.0.43rhs


How reproducible: Didn't try to reproduce. 


Steps to Reproduce (a command-level sketch follows the list):
1. Create a geo-rep relationship between master and slave (6x2).
2. Create some 20 million files on the master.
3. Start the geo-rep session and wait for it to sync.
4. Check the size of the slave gluster logs.
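
A minimal sketch of these steps with the gluster CLI. All names are hypothetical placeholders, not taken from this report: the master volume (mastervol), slave host (slavehost), slave volume (slavevol), and master mount point (/mnt/mastervol).

    # Placeholder names throughout: mastervol, slavehost, slavevol, /mnt/mastervol.
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem

    # Create some 20 million small files on the master mount (illustrative; very slow in practice).
    for i in $(seq 1 20000000); do touch /mnt/mastervol/file_$i; done

    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status

    # On the slave node, watch the gluster log directory grow over the crawl.
    du -sh /var/log/glusterfs/geo-replication-slaves/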

Actual results: The slave gluster log grows to about 2.5 GB within a week.


Expected results: The log should not grow that much.


Additional info:

The log file is flooded with entries like the following:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[2013-11-14 16:25:57.940736] W [fuse-bridge.c:1627:fuse_err_cbk] 0-glusterfs-fuse: 1559950: MKNOD() <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235e~~Y5K6BIYDT6 => -1 (File exists)
[2013-11-14 16:25:57.942768] W [client-rpc-fops.c:256:client3_3_mknod_cbk] 0-slave-client-0: remote operation failed: File exists. Path: <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235f~~XLR61L7W5X
[2013-11-14 16:25:57.943143] W [client-rpc-fops.c:256:client3_3_mknod_cbk] 0-slave-client-1: remote operation failed: File exists. Path: <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235f~~XLR61L7W5X
[2013-11-14 16:25:57.943173] I [fuse-bridge.c:3515:fuse_auxgfid_newentry_cbk] 0-fuse-aux-gfid-mount: failed to create the entry <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235f~~XLR61L7W5X with gfid (42708382-712e-4b38-bdce-0327239ea6fa): File exists
[2013-11-14 16:25
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
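
To gauge how much of the log this one pattern accounts for, the repeated message strings can simply be counted. The slave log path below is a typical default, not taken from this report:

    # Count the two message patterns seen in the excerpt above.
    grep -c "remote operation failed: File exists" /var/log/glusterfs/geo-replication-slaves/slave.log
    grep -c "failed to create the entry" /var/log/glusterfs/geo-replication-slaves/slave.log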

Comment 3 Aravinda VK 2015-04-16 11:37:24 UTC
The dependent bug is in POST state, so moving this bug to POST as well. Upstream patch sent for review: https://bugzilla.redhat.com/show_bug.cgi?id=990558#c3

http://review.gluster.org/#/c/10184/
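
A hedged note on the likely shape of the fix (the linked patch is authoritative): if the expected "File exists" (EEXIST) results of these entry operations are demoted from WARNING to DEBUG rather than removed, they vanish at the default client log level but can still be surfaced for debugging by raising the level on the slave volume. The volume name below is a placeholder:

    # Hypothetical volume name; diagnostics.client-log-level is a standard gluster volume option.
    gluster volume set slavevol diagnostics.client-log-level DEBUG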

Comment 5 Rahul Hinduja 2015-07-15 13:15:38 UTC
Verified with the build: glusterfs-3.7.1-9.el6rhs.x86_64

[root@georep1 ~]# grep -i "client3_3_symlink_cbk" /var/log/glusterfs/geo-replication/master/* 
[root@georep1 ~]# grep -i "newentry_cbk" /var/log/glusterfs/geo-replication/master/* 
[root@georep1 ~]# 
[root@georep1 ~]# grep -i "mknod_cbk" /var/log/glusterfs/geo-replication/master/*
[root@georep1 ~]#
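
For completeness, the same checks could also be run on the slave node, where the flood was originally observed; the log path is illustrative:

    grep -i "mknod_cbk" /var/log/glusterfs/geo-replication-slaves/*
    grep -i "newentry_cbk" /var/log/glusterfs/geo-replication-slaves/*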


Moving this bug to the VERIFIED state.

Comment 7 errata-xmlrpc 2015-07-29 04:30:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

