Bug 1414750 - [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts
Summary: [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On: 1412689
Blocks: 1417138
 
Reported: 2017-01-19 11:16 UTC by Kotresh HR
Modified: 2017-09-21 04:56 UTC
CC: 8 users

Fixed In Version: glusterfs-3.8.4-19
Doc Type: Enhancement
Doc Text:
Geo-replication workers now have separate slave mount logs to make debugging easier. Log files are named according to the following format: '<mastervol-uuid>:<master-host>:<master brickpath>:<slavevol>.gluster.log'.
Clone Of: 1412689
Environment:
Last Closed: 2017-09-21 04:30:55 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2017:2774 (normal, SHIPPED_LIVE): glusterfs bug fix and enhancement update, last updated 2017-09-21 08:16:29 UTC

Description Kotresh HR 2017-01-19 11:16:31 UTC
+++ This bug was initially created as a clone of Bug #1412689 +++

Description of problem:
The slave mount log file is cluttered by logs from multiple active mounts.

A geo-rep worker mounts the slave volume on the slave node. If multiple workers connect to the same slave node, all of them share the same mount log file. This makes debugging very difficult, as logs from different mounts are interleaved.

The log file location is:
/var/log/glusterfs/geo-replication-slaves/*.gluster.log
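
A quick way to observe the clutter on the slave node (the directory path is from this report; the ps filter mirrors the transcripts further below):

# List the slave mount logs and the gsyncd aux mounts that write to them:
ls /var/log/glusterfs/geo-replication-slaves/
ps -eaf | grep gsyncd | grep tmp
# Before the fix, every aux mount on the node passes the same --log-file
# option, so messages from all mounts interleave in one *.gluster.log.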

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Create a master volume with two bricks.
2. Create a slave volume with a single brick.
3. Establish a geo-rep session between them (a sketch follows below).
4. Geo-rep now has one slave mount per master brick, i.e. two mounts in total, and both log to a single file.
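
A minimal sketch of these steps with the gluster CLI, assuming hypothetical host names 'master-node' and 'slave-node' and brick paths under /bricks:

# Master cluster: a volume with two bricks, so two geo-rep workers start
# (add 'force' if the bricks sit on the root partition)
gluster volume create mastervol master-node:/bricks/b1 master-node:/bricks/b2
gluster volume start mastervol
# Slave cluster: a single-brick volume
gluster volume create slavevol slave-node:/bricks/b1
gluster volume start slavevol
# From the master: establish and start the geo-rep session
gluster volume geo-replication mastervol slave-node::slavevol create push-pem
gluster volume geo-replication mastervol slave-node::slavevol start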

Actual results:
Multiple mounts log to the same file.

Expected results:
Each mount should log to a separate file.

Additional info:

--- Additional comment from Worker Ant on 2017-01-12 09:57:57 EST ---

REVIEW: http://review.gluster.org/16384 (geo-rep: Separate slave mount logs for each connection) posted (#1) for review on master by Kotresh HR (khiremat)

--- Additional comment from Worker Ant on 2017-01-18 03:47:05 EST ---

COMMIT: http://review.gluster.org/16384 committed in master by Aravinda VK (avishwan) 
------
commit ff5e91a60887d22934fcb5f8a15dd36019d6e09a
Author: Kotresh HR <khiremat>
Date:   Tue Jan 10 15:39:55 2017 -0500

    geo-rep: Separate slave mount logs for each connection
    
    Geo-rep workers mount the slave volume on the slave
    node. If multiple workers connect to the same slave
    node, all of them share the same mount log file. This
    makes debugging very difficult, as logs from different
    mounts are interleaved. Hence, create a separate mount
    log file for each connection from a worker. Each worker
    connection is identified uniquely by 'mastervol uuid',
    'master host', 'master brickpath', and 'slave vol'; the
    log file name is a combination of these.
    
    Change-Id: I67871dc8e8ea5864e2ad55e2a82063be0138bf0c
    BUG: 1412689
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/16384
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
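
As an illustration of the naming scheme the commit describes (a sketch, not the actual gsyncd code; the sample values are taken from the verification output in comment 9 below), the per-connection log path combines the four identifiers, with '/' in the brick path percent-encoded:

mastervol_uuid="e3e9e535-8938-49b8-8ef1-42488b13c6b9"
master_host="10.70.43.91"
master_brick="/rhs/brick1/b1"
slavevol="slave"
enc_brick="${master_brick//\//%2F}"   # '/' -> '%2F', as in the verified file names
echo "/var/log/glusterfs/geo-replication-slaves/${mastervol_uuid}:${master_host}.${enc_brick}.${slavevol}.gluster.log"
# -> .../e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.gluster.log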

Comment 2 Atin Mukherjee 2017-01-20 04:09:13 UTC
upstream patch : http://review.gluster.org/16384

Comment 5 Kotresh HR 2017-02-22 06:59:40 UTC
It is in upstream 3.10 as part of the branch-out from master.

Comment 7 Atin Mukherjee 2017-03-24 09:03:01 UTC
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/101292/

Comment 9 Rahul Hinduja 2017-04-24 13:18:20 UTC
Verified with the build: glusterfs-geo-replication-3.8.4-22.el7rhgs.x86_64

With the latest changes, the log file names contain 'mastervol uuid', 'master host', 'master brickpath', and 'slave vol'.

In 3.2.0:
=========

[root@dhcp37-155 geo-replication-slaves]# ls
d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log
d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.log
mbr
[root@dhcp37-155 geo-replication-slaves]# ps -eaf | grep gsyncd | grep tmp
root      1768     1  0 18:42 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-s3F0xe
root      1806     1  0 18:42 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-C1KsAJ
[root@dhcp37-155 geo-replication-slaves]# 


In 3.3.0:
=========

[root@dhcp43-94 geo-replication-slaves]# ps -eaf | grep gsyncd | grep tmp
root     19977     1  0 13:05 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-GPp4F0
root     20015     1  0 13:05 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.93.%2Frhs%2Fbrick1%2Fb2.slave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-lYyEBg
[root@dhcp43-94 geo-replication-slaves]# 
[root@dhcp43-94 geo-replication-slaves]# 
[root@dhcp43-94 geo-replication-slaves]# ls
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.gluster.log
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.log
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.93.%2Frhs%2Fbrick1%2Fb2.slave.gluster.log
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.93.%2Frhs%2Fbrick1%2Fb2.slave.log
mbr
[root@dhcp43-94 geo-replication-slaves]# 

Moving this RFE to the Verified state; any further issues will be tracked by separate bugs.

Comment 11 Kotresh HR 2017-08-16 03:48:50 UTC
Nit: it is the slave mount logs that are affected; the master mount logs were fine.

Geo-replication workers now have separate slave mount logs to make debugging easier. Log files are named according to the following format: '<mastervol-uuid>:<master-host>:<master brickpath>:<slavevol>.gluster.log'.

Comment 13 errata-xmlrpc 2017-09-21 04:30:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774


