Bug 998943
Summary: | Dist-geo-rep: logrotate utility command rotates the geo-replication log file but doesn't open a new one to write. | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | M S Vishwanath Bhat <vbhat> | |
Component: | geo-replication | Assignee: | Aravinda VK <avishwan> | |
Status: | CLOSED ERRATA | QA Contact: | M S Vishwanath Bhat <vbhat> | |
Severity: | high | Docs Contact: | ||
Priority: | high | |||
Version: | 2.1 | CC: | aavati, amarts, csaba, grajaiya, kparthas, mzywusko, rhs-bugs, shaines, vagarwal | |
Target Milestone: | --- | Keywords: | ZStream | |
Target Release: | --- | |||
Hardware: | x86_64 | |||
OS: | Linux | |||
Whiteboard: | ||||
Fixed In Version: | Doc Type: | Bug Fix | ||
Doc Text: |
Previously, the geo-replication log-rotate utility rotated the existing log files but did not open a new file for writing, so subsequent log messages were lost. With this update, the geo-replication log-rotate module creates a new file after rotating the existing one, so no logs are lost.
|
Story Points: | --- | |
Clone Of: | ||||
: | 1012776 (view as bug list) | Environment: | ||
Last Closed: | 2013-11-27 15:31:57 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | ||||
Bug Blocks: | 1012776 |
Description
M S Vishwanath Bhat
2013-08-20 11:51:11 UTC
MS, could you do the operations by hand and see whether the logs are rotated and gsyncd works as expected? That should tell us whether the problem is with logrotate or with gsyncd's handling of rotation signals.

It doesn't work as expected even when I send SIGSTOP and SIGCONT manually. The log is rotated, but a new file is not opened for writing. There seems to be an issue with gsyncd's signal handling. Aravinda, can you look into this?

Fixed-in version, please.

There are two logs in the geo-replication log directory: one written by the geo-rep monitor and worker processes, and the other by the auxiliary mount process used by geo-rep. Currently, on the master volume both log files are rotated (that is, renamed from *.log to *.log.1), but only one of them (the gsyncd log file) is re-opened for writing. The other (the auxiliary gluster client log) is not re-opened. I am not sure whether this is a bug or whether the process is idle and has nothing to write to the log file. On the slave side, the log files are rotated but none of them are re-opened. So re-opening the bug.

The *slave.gluster.log file is used by the glusterfs --aux-gfid-mount process, so the rotate message needs to be sent to the glusterfs process. I will update the /etc/logrotate.d/glusterfs-georep file. I am still looking into the logrotate issue with *slave.log. gsyncd spawns the glusterfs aux-mount, and the Python logrotate handling does not work for external processes; SIGHUP is required for the auxiliary mount process.

Patch sent by Vishwanath Bhat: https://code.engineering.redhat.com/gerrit/#/c/14265/

This issue is now fixed.
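The fix amounts to re-opening the log file when the rotation signal arrives. A minimal, hypothetical Python sketch of that pattern (this is not the actual gsyncd code; `LOG_PATH`, the logger name, and `reopen_log` are illustrative):

```python
import logging
import signal

LOG_PATH = "/tmp/example-georep.log"  # hypothetical path for illustration

logger = logging.getLogger("georep-example")
handler = logging.FileHandler(LOG_PATH)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def reopen_log(signum, frame):
    """On SIGHUP, drop the handle to the rotated file and open a fresh one."""
    global handler
    logger.removeHandler(handler)
    handler.close()
    # FileHandler in append mode creates a new file at the original path,
    # so logging continues after logrotate has renamed the old file away.
    handler = logging.FileHandler(LOG_PATH)
    logger.addHandler(handler)

signal.signal(signal.SIGHUP, reopen_log)
```

For a process that is not under our control (such as the auxiliary glusterfs mount), the equivalent would be a `postrotate` script in the logrotate config sending SIGHUP to that process, since an in-process handler like this only covers the Python side.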
Tested in version: glusterfs-3.4.0.38rhs-1.el6rhs.x86_64

Before executing the logrotate command:

```
[root@spitfire ~]# find /var/log//glusterfs/geo-replication* -type f
/var/log//glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.42.224%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log
/var/log//glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.42.224%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Frhs%2Fbricks%2Fbrick0.gluster.log
/var/log//glusterfs/geo-replication-slaves/slave.log
```

After executing the logrotate command and waiting long enough for gsyncd to have something to write to the log file:

```
[root@spitfire glusterfs-deploy-scripts]# find /var/log/glusterfs/geo-replication/master/
/var/log/glusterfs/geo-replication/master/
/var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.42.224%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Frhs%2Fbricks%2Fbrick0.gluster.log.1
/var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.42.224%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log
/var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.42.224%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log.1
/var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.42.224%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Frhs%2Fbricks%2Fbrick0.gluster.log
```

The original log files have been rotated and new log files have been opened. This works for all the logs on the master. On the slave, however, only the log file opened by the glusterfs process is rotated; the other log file, opened by the python process, is not. Amar, is it okay not to rotate that log file on the slave?

Can we open another bug for the slave log-rotate behavior and VERIFY this bug?

(In reply to Amar Tumballi from comment #12)
> Can we open another bug for slave log-rotate behavior and VERIFY this bug?

From my discussion with Aravinda, it seems the slave files are written to only once, during the initial setup; nothing is logged afterward, since no gsyncd process keeps running on the slave.
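A check like the manual verification above can be automated: after triggering rotation, confirm that the rotated copy and a freshly created file both exist at the expected paths and are genuinely different files. A small illustrative helper (a sketch; `rotated_and_reopened` is a hypothetical name, not part of any gluster tooling):

```python
import os

def rotated_and_reopened(path):
    """Return True if the log at `path` was rotated (old file moved to
    `path.1`) and a fresh file was re-opened at the original path."""
    rotated = os.path.exists(path + ".1")
    reopened = os.path.exists(path)
    if not (rotated and reopened):
        return False
    # A genuinely re-opened log is a new file, so its inode differs
    # from the rotated copy's inode.
    return os.stat(path).st_ino != os.stat(path + ".1").st_ino
```

Comparing inodes matters because a writer that never re-opens its file descriptor keeps appending to the renamed `.log.1` file, which is exactly the failure mode this bug describes.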
So moving this bug to verified; will open a new bug if required. Tested in version: glusterfs-3.4.0.38rhs-1.el6rhs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1769.html