Bug 1412689 - [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts
Summary: [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Duplicates: 1396073 (view as bug list)
Depends On:
Blocks: 1414750
 
Reported: 2017-01-12 14:54 UTC by Kotresh HR
Modified: 2017-03-06 17:43 UTC
CC: 2 users

Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1414750 (view as bug list)
Environment:
Last Closed: 2017-03-06 17:43:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Kotresh HR 2017-01-12 14:54:02 UTC
Description of problem:
Slave mount log file is cluttered by logs of multiple active mounts

Geo-rep workers mount the slave volume on the slave node. If multiple workers connect to the same slave node, they all share the same mount log file. This makes debugging very difficult, because logs from different mounts are interleaved in one file.

The location of log file is
/var/log/glusterfs/geo-replication-slaves/*.gluster.log

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Create master volume with two bricks
2. Create slave volume with single brick
3. Establish a geo-rep session between them.
4. Geo-rep now creates one slave mount per master brick (two in total). Both mounts log to a single file.

Actual results:
Multiple mount logs to same file

Expected results:
Each mount should log to separate file

Additional info:

Comment 1 Worker Ant 2017-01-12 14:57:57 UTC
REVIEW: http://review.gluster.org/16384 (geo-rep: Separate slave mount logs for each connection) posted (#1) for review on master by Kotresh HR (khiremat@redhat.com)

Comment 2 Aravinda VK 2017-01-13 04:49:56 UTC
*** Bug 1396073 has been marked as a duplicate of this bug. ***

Comment 3 Worker Ant 2017-01-18 08:47:05 UTC
COMMIT: http://review.gluster.org/16384 committed in master by Aravinda VK (avishwan@redhat.com) 
------
commit ff5e91a60887d22934fcb5f8a15dd36019d6e09a
Author: Kotresh HR <khiremat@redhat.com>
Date:   Tue Jan 10 15:39:55 2017 -0500

    geo-rep: Separate slave mount logs for each connection
    
    Geo-rep workers mount the slave volume on the slave
    node. If multiple workers connect to the same slave node,
    all workers share the same mount log file. This
    is very difficult to debug as logs are interleaved from
    different mounts. Hence creating separate mount log
    file for each connection from worker. Each connection
    from worker is identified uniquely using 'mastervol uuid',
    'master host', 'master brickpath', 'slave vol'. The log
    file name will be combination of the above.
    
    Change-Id: I67871dc8e8ea5864e2ad55e2a82063be0138bf0c
    BUG: 1412689
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/16384
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
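
The naming scheme described in the commit message can be sketched roughly as follows. This is a hypothetical illustration, not the actual GlusterFS code: the function name, separator characters, and example values are assumptions; only the inputs (master volume UUID, master host, master brick path, slave volume) and the log directory come from the report.

```python
import os

# Hypothetical sketch: derive a per-connection slave mount log file name
# from the four identifiers named in the commit message. The real
# GlusterFS implementation may encode these differently.
def slave_mount_log_file(mastervol_uuid, master_host, master_brickpath, slave_vol):
    # Flatten the brick path so it is safe to embed in a file name.
    brick_id = master_brickpath.strip(os.sep).replace(os.sep, "-")
    name = "%s:%s:%s.%s.gluster.log" % (mastervol_uuid, master_host,
                                        brick_id, slave_vol)
    return os.path.join("/var/log/glusterfs/geo-replication-slaves", name)

# Two workers for different master bricks now get distinct log files,
# instead of both writing to a single shared *.gluster.log:
log1 = slave_mount_log_file("uuid1234", "master1", "/bricks/b1", "slavevol")
log2 = slave_mount_log_file("uuid1234", "master1", "/bricks/b2", "slavevol")
```

With the pre-fix behavior, both workers would have shared one file matching /var/log/glusterfs/geo-replication-slaves/*.gluster.log; keying the name on the connection identifiers keeps each mount's logs separate.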

Comment 4 Shyamsundar 2017-03-06 17:43:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

