Description of problem:
Excessive logging from the marker translator in the brick log: the log grew to about 36GB within 5 days of running tests on glusterfs.

Version-Release number of selected component (if applicable): RHS-2.0.z

How reproducible: Deterministic

Steps to Reproduce:
1. Start a geo-rep session between the master (dist-rep) and the slave (dist-rep).
2. Run tests such as creating a large number of small files and deleting them.
3. Run this test for about 7 days.

Actual results:
The brick log file grew to about 36GB within 5 days, completely filling the root partition, which in turn caused all geo-rep commands to fail.

Expected results:
The log file should not fill up so quickly.

Additional info:
This is the output of log_analyzer:

 Number   Percentage  Function
       1     0.00     mgmt_getspec_cbk
   18911     0.07     mq_dict_set_contribution
     318     0.00     mq_get_parent_inode_local
       1     0.00     mq_initiate_quota_txn
 9266979    33.19     mq_inspect_directory_xattr
      16     0.00     mq_release_parent_lock
      16     0.00     mq_update_inode_contribution
   62264     0.22     posix_handle_pair
      16     0.00     posix_lookup
   12680     0.05     posix_mkdir
    4229     0.02     posix_mknod
    1049     0.00     posix_setattr
 9263145    33.18     posix_stat
     590     0.00     posix_writev
      23     0.00     _posix_xattr_get_set
     529     0.00     server_connection_destroy
     529     0.00     server_connection_put
    3319     0.01     server_entrylk_cbk
      37     0.00     server_getxattr_cbk
     697     0.00     server_inodelk_cbk
   12681     0.05     server_mkdir_cbk
    4230     0.02     server_mknod_cbk
     529     0.00     server_rpc_notify
    1049     0.00     server_setattr_cbk
     529     0.00     server_setvolume
 9263145    33.18     server_unlink_cbk
     590     0.00     server_writev_cbk

========= Error Functions ========
   18911     0.07     mq_dict_set_contribution
     318     0.00     mq_get_parent_inode_local
       1     0.00     mq_initiate_quota_txn
 9266979    33.19     mq_inspect_directory_xattr
      16     0.00     mq_release_parent_lock
      16     0.00     mq_update_inode_contribution
   62264     0.22     posix_handle_pair
      16     0.00     posix_lookup
   12680     0.05     posix_mkdir
    4229     0.02     posix_mknod
    1049     0.00     posix_setattr
 9263145    33.18     posix_stat
     590     0.00     posix_writev
      23     0.00     _posix_xattr_get_set
       1     0.00     server_unlink_cbk
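For reference, a per-function count like the one above can be produced by parsing the brick log's message prefix, which carries the source file, line, and function name in brackets. This is a minimal sketch, not the actual log_analyzer script; the sample log lines and messages below are made up for illustration, only the general prefix layout ([timestamp] LEVEL [file.c:line:function] ...) follows the usual glusterfs log format.

```python
import re
from collections import Counter

# Hypothetical sample of brick-log lines; real logs follow the shape
# [timestamp] LEVEL [file.c:line:function] component: message
sample_log = """\
[2012-09-10 10:00:01.000000] E [marker-quota.c:812:mq_inspect_directory_xattr] 0-vol-marker: sample message
[2012-09-10 10:00:01.000100] E [posix.c:1731:posix_stat] 0-vol-posix: sample message
[2012-09-10 10:00:01.000200] E [marker-quota.c:812:mq_inspect_directory_xattr] 0-vol-marker: sample message
"""

# Match "[file.c:line:function]" and capture the function name.
func_re = re.compile(r"\[\w[\w.-]*\.c:\d+:(\w+)\]")

counts = Counter()
for line in sample_log.splitlines():
    m = func_re.search(line)
    if m:
        counts[m.group(1)] += 1

total = sum(counts.values())
for func, n in counts.most_common():
    print(f"{n:8d} {100.0 * n / total:6.2f} {func}")
```

Run against the 36GB brick log, output in this shape is what makes it obvious that mq_inspect_directory_xattr, posix_stat, and server_unlink_cbk dominate the volume.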
http://review.gluster.org/3935 fixed the issue upstream.
The patch has been accepted downstream as well.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html