Description of problem:
Excessive logging in the slave gluster logs when there are some 20 million files for xsync to crawl. The slave gluster log grew to about 2.5GB while crawling those 20 million files over the course of a week.

Version-Release number of selected component (if applicable): glusterfs-api-3.4.0.43rhs

How reproducible: Didn't try to reproduce.

Steps to Reproduce:
1. Create a geo-rep relationship between master and slave (6x2).
2. Create some 20 million files on the master.
3. Start the geo-rep session and wait for it to sync.
4. Check the size of the slave gluster logs.

Actual results: The slave gluster log grows to about 2.5GB within a week.

Expected results: It shouldn't grow that much.

Additional info:
The log file contains a very large number of entries like the following:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[2013-11-14 16:25:57.940736] W [fuse-bridge.c:1627:fuse_err_cbk] 0-glusterfs-fuse: 1559950: MKNOD() <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235e~~Y5K6BIYDT6 => -1 (File exists)
[2013-11-14 16:25:57.942768] W [client-rpc-fops.c:256:client3_3_mknod_cbk] 0-slave-client-0: remote operation failed: File exists. Path: <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235f~~XLR61L7W5X
[2013-11-14 16:25:57.943143] W [client-rpc-fops.c:256:client3_3_mknod_cbk] 0-slave-client-1: remote operation failed: File exists. Path: <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235f~~XLR61L7W5X
[2013-11-14 16:25:57.943173] I [fuse-bridge.c:3515:fuse_auxgfid_newentry_cbk] 0-fuse-aux-gfid-mount: failed to create the entry <gfid:74aaaf05-f170-4df2-b12b-203a5c36827e>/5270235f~~XLR61L7W5X with gfid (42708382-712e-4b38-bdce-0327239ea6fa): File exists
[2013-11-14 16:25
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
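To quantify the noise on the slave node, something along these lines can be used (a minimal sketch: the log path and volume name are placeholders for this setup, and the diagnostics.client-log-level setting is only a possible interim mitigation, not the fix tracked by this bug):

#!/bin/bash
# Hypothetical helper: measure how large the slave gluster log has grown and
# how many of the repeated "File exists" messages it contains.
# SLAVE_LOG is a placeholder; the actual path depends on how the slave volume
# is mounted for geo-replication.
SLAVE_LOG=/var/log/glusterfs/geo-replication-slaves/slave.gluster.log

du -h "$SLAVE_LOG"
grep -c "client3_3_mknod_cbk.*File exists" "$SLAVE_LOG"
grep -c "fuse_auxgfid_newentry_cbk.*File exists" "$SLAVE_LOG"

# Possible interim mitigation (assumption, not from this report): raise the
# client log level on the slave volume so these WARNING/INFO messages are
# suppressed until a fixed build is installed.
# gluster volume set slavevol diagnostics.client-log-level ERROR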
The dependent bug is in POST state, so moving this bug's status to POST. Upstream patch sent for review:
https://bugzilla.redhat.com/show_bug.cgi?id=990558#c3
http://review.gluster.org/#/c/10184/
Verified with the build: glusterfs-3.7.1-9.el6rhs.x86_64

[root@georep1 ~]# grep -i "client3_3_symlink_cbk" /var/log/glusterfs/geo-replication/master/*
[root@georep1 ~]# grep -i "newentry_cbk" /var/log/glusterfs/geo-replication/master/*
[root@georep1 ~]#
[root@georep1 ~]# grep -i "mknod_cbk" /var/log/glusterfs/geo-replication/master/*
[root@georep1 ~]#

Moving this bug to the verified state.
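To double-check under load, the slave log size can also be watched while the initial xsync crawl is running; this is only a sketch, and the log directory below is a placeholder that may differ per setup:

#!/bin/bash
# Hypothetical check: sample the slave gluster log size once an hour during the
# crawl to confirm it no longer grows toward multiple GB.
LOG_DIR=/var/log/glusterfs/geo-replication-slaves

while true; do
    date
    du -sh "$LOG_DIR"
    sleep 3600
done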
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html