Bug 1414750
Summary: | [Geo-rep] Slave mount log file is cluttered by logs of multiple active mounts | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Kotresh HR <khiremat> |
Component: | geo-replication | Assignee: | Kotresh HR <khiremat> |
Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.1 | CC: | amukherj, asrivast, avishwan, bugs, csaba, khiremat, rhs-bugs, storage-qa-internal |
Target Milestone: | --- | ||
Target Release: | RHGS 3.3.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-3.8.4-19 | Doc Type: | Enhancement |
Doc Text: |
Geo-replication workers now have separate slave mount logs to make debugging easier. Log files are named according to the following format: '<mastervol-uuid>:<master-host>:<master brickpath>:<slavevol>.gluster.log'.
|
Story Points: | --- |
Clone Of: | 1412689 | Environment: | |
Last Closed: | 2017-09-21 04:30:55 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1412689 | ||
Bug Blocks: | 1417138 |
Description
Kotresh HR 2017-01-19 11:16:31 UTC
upstream patch : http://review.gluster.org/16384
It's in upstream 3.10 as part of the branch out from master.

downstream patch : https://code.engineering.redhat.com/gerrit/#/c/101292/

Verified with the build: glusterfs-geo-replication-3.8.4-22.el7rhgs.x86_64

With the latest changes, the slave mount log file names contain the 'mastervol uuid', 'master host', 'master brickpath', and 'slave vol'.

In 3.2.0:
=========

```
[root@dhcp37-155 geo-replication-slaves]# ls
d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log
d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.log
mbr
[root@dhcp37-155 geo-replication-slaves]# ps -eaf | grep gsyncd | grep tmp
root      1768     1  0 18:42 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-s3F0xe
root      1806     1  0 18:42 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/d0beefb8-495e-41ae-b545-1e819f7e3b01:gluster%3A%2F%2F127.0.0.1%3Aslave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-C1KsAJ
[root@dhcp37-155 geo-replication-slaves]#
```

Note that both active workers write to the same slave mount log file.

In 3.3.0:
=========

```
[root@dhcp43-94 geo-replication-slaves]# ps -eaf | grep gsyncd | grep tmp
root     19977     1  0 13:05 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-GPp4F0
root     20015     1  0 13:05 ?        00:00:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.93.%2Frhs%2Fbrick1%2Fb2.slave.gluster.log --volfile-server=localhost --volfile-id=slave --client-pid=-1 /tmp/gsyncd-aux-mount-lYyEBg
[root@dhcp43-94 geo-replication-slaves]# ls
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.gluster.log
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.log
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.93.%2Frhs%2Fbrick1%2Fb2.slave.gluster.log
e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.93.%2Frhs%2Fbrick1%2Fb2.slave.log
mbr
[root@dhcp43-94 geo-replication-slaves]#
```

Each worker now logs to its own slave mount log file. Moving this RFE to verified state; any further issues will be tracked by separate bugs.

Nit: it is the slave mount logs that were cluttered; the master mount logs were fine.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
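The 3.3.0 per-worker naming scheme seen in the verification output can be illustrated with a short sketch. The helper below is hypothetical (gsyncd's actual implementation differs); it only shows how a flat log file name can be composed from the master volume UUID, master host, master brick path, and slave volume name, percent-encoding the '/' characters in the brick path the way they appear in the `ls` listing above.

```python
# Hypothetical sketch, not gsyncd's actual code: compose a per-worker
# slave mount log file name in the style observed in RHGS 3.3.0.
from urllib.parse import quote

def slave_mount_log_name(mastervol_uuid, master_host, brick_path, slavevol):
    # safe='' makes quote() escape '/' as %2F, so the brick path
    # collapses into a single flat file-name component.
    escaped_brick = quote(brick_path, safe='')
    return "%s:%s.%s.%s.gluster.log" % (
        mastervol_uuid, master_host, escaped_brick, slavevol)

print(slave_mount_log_name(
    "e3e9e535-8938-49b8-8ef1-42488b13c6b9",
    "10.70.43.91", "/rhs/brick1/b1", "slave"))
# -> e3e9e535-8938-49b8-8ef1-42488b13c6b9:10.70.43.91.%2Frhs%2Fbrick1%2Fb1.slave.gluster.log
```

Because the brick path is part of the name, the two workers on different bricks (b1 and b2 above) get distinct log files instead of interleaving their messages in one shared file.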