Description of problem:
Slave mount log file is cluttered by logs of multiple active mounts.

The geo-rep worker mounts the slave volume on the slave node. If multiple workers connect to the same slave node, all workers share the same mount log file. This makes debugging very difficult, as logs from different mounts are interleaved in one file.

The log file location is /var/log/glusterfs/geo-replication-slaves/*.gluster.log

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Create a master volume with two bricks.
2. Create a slave volume with a single brick.
3. Establish a geo-rep session between them.
4. Geo-rep will now have two slave mounts (one per master brick). Both log into a single file.

Actual results:
Multiple mounts log to the same file.

Expected results:
Each mount should log to a separate file.

Additional info:
REVIEW: http://review.gluster.org/16384 (geo-rep: Separate slave mount logs for each connection) posted (#1) for review on master by Kotresh HR (khiremat)
*** Bug 1396073 has been marked as a duplicate of this bug. ***
COMMIT: http://review.gluster.org/16384 committed in master by Aravinda VK (avishwan)
------
commit ff5e91a60887d22934fcb5f8a15dd36019d6e09a
Author: Kotresh HR <khiremat>
Date: Tue Jan 10 15:39:55 2017 -0500

    geo-rep: Separate slave mount logs for each connection

    The geo-rep worker mounts the slave volume on the slave node. If
    multiple workers connect to the same slave node, all workers share
    the same mount log file, which makes debugging very difficult as
    logs from different mounts are interleaved. Hence, create a
    separate mount log file for each connection from a worker. Each
    connection from a worker is identified uniquely using
    'mastervol uuid', 'master host', 'master brickpath', and
    'slave vol'. The log file name is a combination of the above.

    Change-Id: I67871dc8e8ea5864e2ad55e2a82063be0138bf0c
    BUG: 1412689
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/16384
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
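The commit above derives a unique per-connection log file from the tuple (mastervol uuid, master host, master brickpath, slave vol). A minimal sketch of that idea in Python follows; the function name, the use of an MD5 digest, and the "mnt-<digest>.log" naming are illustrative assumptions, not the exact gsyncd implementation:

```python
import hashlib
import os

def slave_mount_log_path(mastervol_uuid, master_host, master_brickpath,
                         slave_vol,
                         log_dir="/var/log/glusterfs/geo-replication-slaves"):
    """Build a per-connection slave mount log path (illustrative sketch).

    The connection identity is the tuple (mastervol uuid, master host,
    master brick path, slave volume). Hashing the joined tuple keeps the
    filename short and free of path separators, while guaranteeing that
    two different connections never share a log file.
    """
    ident = ":".join([mastervol_uuid, master_host, master_brickpath,
                      slave_vol])
    digest = hashlib.md5(ident.encode("utf-8")).hexdigest()
    return os.path.join(log_dir, "mnt-%s.log" % digest)
```

With two master bricks on the same host, the two workers now get distinct log files because their brick paths differ, while repeated connections from the same worker keep reusing one stable file name.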
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/