Bug 1985474 - foreman-proxy keeps holding deleted log files open after rotation
Summary: foreman-proxy keeps holding deleted log files open after rotation
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Logging
Version: 6.9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Assignee: Lukas Zapletal
QA Contact: Satellite QE Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-23 16:20 UTC by Joniel Pasqualetto
Modified: 2023-04-06 17:10 UTC
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-04-06 17:10:05 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 2168516 0 unspecified CLOSED [RFE] Switch logging to journal by default 2024-04-24 09:53:28 UTC

Description Joniel Pasqualetto 2021-07-23 16:20:48 UTC
Description of problem:
We apparently had similar issues about this in the past, but it is still happening.

After logrotate rotates the foreman-proxy logs, the deleted log files are kept open by the daemon, consuming disk space.

The postrotate script (called by logrotate) runs, but the daemon sometimes ignores the SIGUSR1 and does not reopen its log files as expected.
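The rotate-and-signal pattern the postrotate script relies on can be sketched in a few lines of Ruby. This is a minimal illustration, not smart-proxy's actual implementation; the class and variable names are invented for the example.

```ruby
require "tmpdir"

# Minimal sketch of the reopen-on-SIGUSR1 pattern (illustrative names,
# not smart-proxy's actual code).
class ReopeningLogger
  def initialize(path)
    @path = path
    @io = File.open(@path, "a")
    @reopen = false
    # Only set a flag in the trap handler; doing I/O inside a Ruby
    # signal handler is unsafe.
    Signal.trap("USR1") { @reopen = true }
  end

  def write(line)
    if @reopen
      @reopen = false
      @io.close
      @io = File.open(@path, "a") # reopen the path -> the new inode
    end
    @io.puts(line)
    @io.flush
  end
end

Dir.mktmpdir do |dir|
  log = File.join(dir, "proxy.log")
  logger = ReopeningLogger.new(log)
  logger.write("before rotation")
  File.rename(log, "#{log}.1")       # what logrotate's rename step does
  Process.kill("USR1", Process.pid)  # what the postrotate script does
  sleep 0.1                          # give the trap handler time to run
  logger.write("after rotation")
  $current_log = File.read(log)      # freshly created file
  $rotated_log = File.read("#{log}.1")
end
```

If the daemon misses (or defers handling of) the SIGUSR1, the second write goes to the renamed file instead, which is exactly the symptom described above.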

Version-Release number of selected component (if applicable):
Seen on Satellite 6.9 (internal) and on a customer's Satellite 6.7, at least.

How reproducible:

Not exactly sure.

Steps to Reproduce:
1.
2.
3.

Actual results:
Logrotate creates new files, but the daemon keeps the old (deleted) files open.

Expected results:

After rotation, the daemon releases the deleted files.

Additional info:
Restarting the foreman-proxy service is a workaround.
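The underlying POSIX behavior (a deleted file keeps its disk space as long as any process holds it open) can be demonstrated in a few lines of Ruby; against the real daemon, something like `lsof -p <pid>` would show the same handles marked as deleted. The paths and sizes below are illustrative.

```ruby
require "tmpdir"

Dir.mktmpdir do |dir|
  path = File.join(dir, "proxy.log")
  io = File.open(path, "w")    # daemon-style long-lived file handle
  io.write("x" * 1024)
  io.flush
  File.unlink(path)            # deleting removes the name...
  $name_exists = File.exist?(path)
  io.write("y" * 1024)         # ...but writes through the open FD still succeed
  io.flush
  $inode_size = io.stat.size   # and the inode still holds all the data
  io.close                     # only closing the FD releases the space
end
```

This is why restarting the service works as a workaround: the restart closes every stale descriptor at once.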

Comment 3 Lukas Zapletal 2021-08-12 09:18:03 UTC
Hey,

can you give me:

rpm -qa | grep rubygem-logging

output? This has been identified and fixed upstream, and version 2.3.0 of the Ruby logging library was released, which includes the fix.

Note that Satellite 6.8 still contains the old version 2.2 with the bug. Satellite 6.9 contains 2.3.0, which is supposed to include the fix.

Comment 5 Lukas Zapletal 2021-08-12 10:14:18 UTC
One idea: copying is a slow operation, and logrotate might compress the file too early; the process does not guarantee that the rotate operation finishes quickly.

Please add the delaycompress option to the logrotate configuration and restart, then test it and get back to me.

Comment 6 Lukas Zapletal 2021-08-12 10:54:02 UTC
For more context, on your instance I see that file_rolling_size is set to the default value (0) which means smart proxy rotation is turned off completely and everything is handled by logrotate. Its configuration is:

/var/log/foreman-proxy/proxy.log
{
  missingok
  notifempty
  create 0644 foreman-proxy foreman-proxy
  sharedscripts
  rotate 5
  compress
  daily
  postrotate
    /bin/systemctl kill --signal=SIGUSR1 foreman-proxy >/dev/null 2>&1 || true
  endscript
}

On second thought, I do not think delaycompress would solve the problem. I think we need to use copytruncate instead; this makes sure the original file never gets renamed, so the smart proxy can continue writing to it without reopening.
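Under that approach, the stanza above would change roughly like this (a sketch, not a shipped configuration): copytruncate replaces the create/postrotate dance, because the live file is copied aside and truncated in place, so the daemon's file handle stays valid.

```
/var/log/foreman-proxy/proxy.log
{
  missingok
  notifempty
  copytruncate
  sharedscripts
  rotate 5
  compress
  daily
}
```

The trade-off is a small window between the copy and the truncate in which log lines can be lost, which is usually acceptable for proxy logs.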

Here is my insight into the problem: https://community.theforeman.org/t/foreman-proxy-open-fd-of-deleted-log-files-hdd-out-of-space/14481

Comment 9 Leos Stejskal 2022-07-20 08:38:00 UTC
Hi,
is there any update on the proposed fix? Can we close the BZ?

Comment 12 Brad Buckingham 2023-02-01 19:24:14 UTC
Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for a resolution in the near term and is planning to close it in a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat Account team. Thank you.

Comment 13 Ewoud Kohl van Wijngaarden 2023-02-02 19:18:29 UTC
Adam: You've proposed to move to journald logging by default. That would solve this bug, correct? Is there a BZ we should refer to?

Comment 14 Adam Ruzicka 2023-02-07 10:16:02 UTC
So far I've only proposed that in the upstream community forum; as far as I know, there is no issue tracking this anywhere yet. And yes, it would solve this bug.

Comment 15 Adam Ruzicka 2023-02-09 09:37:57 UTC
Opened BZ #2168516 to track the switch to journald.

Comment 16 Brad Buckingham 2023-03-06 11:11:12 UTC
Hi Adam, 

Does bug 2168516 replace this one? If so, should we close this one as a duplicate?

Thanks!

Comment 17 Brad Buckingham 2023-03-06 11:37:55 UTC
Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for a resolution in the near term and is planning to close it in a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat Account team. Thank you.

Comment 18 Adam Ruzicka 2023-03-06 13:25:07 UTC
I'd be a little hesitant to close this as a dupe. This BZ describes an issue; the other BZ proposes changes which would, among other things, resolve the issue described here. The other BZ was never primarily intended as a fix for this one. While the other BZ should completely rule out issues like the one described here, I'd still rather keep both open so this one can be verified on its own.

Comment 19 Brad Buckingham 2023-04-06 17:10:05 UTC
Thank you for your interest in Red Hat Satellite. We have evaluated this request, and while we recognize that it is a valid request, we do not expect this to be implemented in the product in the foreseeable future. This is due to other priorities for the product, and not a reflection on the request itself. We are therefore closing this out as WONTFIX. If you have any concerns about this feel free to contact your Red Hat Account Team. Thank you.

