Bug 1185862 - 27 lines of boilerplate for cron jobs in journal

Product: Fedora
Component: systemd
Version: 21
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Reporter: Kamil Páral <kparal>
Assignee: systemd-maint
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: auxsvr, johannbg, jsynacek, lnykryn, msekleta, s, systemd-maint, vpavlin, zbyszek
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-01-27 22:03:12 UTC

Description Kamil Páral 2015-01-26 13:18:16 UTC
Description of problem:
I have a package redhat-ddns-client which is called every 5 minutes from cron.

$ cat /etc/cron.d/redhat-ddns.cron 
*/5 * * * * root /usr/bin/redhat-ddns-client &> /dev/null

In system journal, I see this every 5 minutes:

Jan 26 13:35:01 medusa kernel: SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
Jan 26 13:35:01 medusa systemd[1285]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Jan 26 13:35:01 medusa systemd[1285]: Starting Paths.
Jan 26 13:35:01 medusa systemd[1285]: Reached target Paths.
Jan 26 13:35:01 medusa systemd[1285]: Starting Timers.
Jan 26 13:35:01 medusa systemd[1285]: Reached target Timers.
Jan 26 13:35:01 medusa systemd[1285]: Starting Sockets.
Jan 26 13:35:01 medusa systemd[1285]: Reached target Sockets.
Jan 26 13:35:01 medusa systemd[1285]: Starting Basic System.
Jan 26 13:35:01 medusa systemd[1285]: Reached target Basic System.
Jan 26 13:35:01 medusa systemd[1285]: Starting Default.
Jan 26 13:35:01 medusa systemd[1285]: Reached target Default.
Jan 26 13:35:01 medusa systemd[1285]: Startup finished in 10ms.
Jan 26 13:35:01 medusa CROND[1289]: (root) CMD (/usr/bin/redhat-ddns-client &> /dev/null)
Jan 26 13:35:07 medusa systemd[1285]: Stopping Default.
Jan 26 13:35:07 medusa systemd[1285]: Stopped target Default.
Jan 26 13:35:07 medusa systemd[1285]: Stopping Basic System.
Jan 26 13:35:07 medusa systemd[1285]: Stopped target Basic System.
Jan 26 13:35:07 medusa systemd[1285]: Stopping Paths.
Jan 26 13:35:07 medusa systemd[1285]: Stopped target Paths.
Jan 26 13:35:07 medusa systemd[1285]: Stopping Timers.
Jan 26 13:35:07 medusa systemd[1285]: Stopped target Timers.
Jan 26 13:35:07 medusa systemd[1285]: Stopping Sockets.
Jan 26 13:35:07 medusa systemd[1285]: Stopped target Sockets.
Jan 26 13:35:07 medusa systemd[1285]: Starting Shutdown.
Jan 26 13:35:07 medusa systemd[1285]: Reached target Shutdown.
Jan 26 13:35:07 medusa systemd[1285]: Starting Exit the Session...
Jan 26 13:35:07 medusa systemd[1285]: Received SIGRTMIN+24 from PID 1299 (kill).
Jan 26 13:35:07 medusa systemd[1287]: pam_unix(systemd-user:session): session closed for user root


That is an awfully long block of text (27 systemd lines for 1 cron line), and it spams my journal every 5 minutes. It's hard to search for anything useful in such a log.

I don't think I saw this in F20, so it seems to be a "new feature" in F21.

Is this really intended? If it is not, can you fix it? If it is, can you make it less verbose? I'm not running with systemd debug flags or anything, yet it feels like I am.

Thanks.

Version-Release number of selected component (if applicable):
cronie-1.4.12-1.fc21.x86_64
cronie-anacron-1.4.12-1.fc21.x86_64
crontabs-1.11-9.20130830git.fc21.noarch
systemd-216-17.fc21.x86_64

How reproducible:
seems 100%, I see it all the time

Comment 1 Zbigniew Jędrzejewski-Szmek 2015-01-27 22:03:12 UTC
The problem is that cron starts a full PAM session in each of those cases, and we still haven't figured out the proper solution. You can cut down the number of messages by running 'loginctl enable-linger root', which will prevent the user service from being started and stopped repeatedly.
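The workaround above can be sketched as follows; this is a hedged illustration, not part of the original comment, and it assumes a systemd host (systemd/loginctl present, e.g. systemd 216 as in this report) and root privileges:

```shell
# Enable "lingering" for root: systemd keeps the per-user service manager
# (systemd --user) running permanently instead of starting it for each
# cron-initiated PAM session and tearing it down afterwards. This removes
# the repeated "Starting/Stopping ... target" lines from the journal.
loginctl enable-linger root

# Verify: the Linger property should now report "yes".
loginctl show-user root --property=Linger
```

Note the trade-off: the user manager for root then stays resident between cron runs, which costs a small amount of memory but silences the per-session startup/shutdown messages.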

*** This bug has been marked as a duplicate of bug 995792 ***