Red Hat Bugzilla – Bug 131707
policycoreutils fixfiles leaves tmp files after running
Last modified: 2007-11-30 17:10:48 EST
Description of problem:
Fixfiles leaves tmp files lying around after it's used. The file
permissions are safe, but the files are big and waste disk space.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. fixfiles check
2. ls /var/tmp /tmp
Actual Results: tmp files are listed
Expected Results: no tmp files
I will attach a patch that fixes the problem.
Created attachment 103433 [details]
patch that fixes the problem
Added in policycoreutils-1.17.5-2
I just checked 1.17.5-4. The last chunk of my patch didn't get applied:
@@ -176,7 +180,11 @@
 if [ $logfileFlag = 0 ]; then
-    LOGFILE=`mktemp /var/tmp/fixfiles.XXXXXXXXXX` || exit 1
+    if [ ! -w $LOGFILE ] ; then
+        rm -f $FCFILE
+        exit 1
+    fi
 if [ $checkFlag = 1 ]; then
Without this chunk being applied, it will automatically open a logfile
every time you use the utility. I would have to go add
if [ $logfileFlag = 0 ]; then
    rm -f $LOGFILE
fi
everywhere there's an exit to make sure the log file is deleted. The
simplest solution is to point LOGFILE to /dev/null so that the rest of
the code doesn't need to be touched.
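The idea above can be sketched as follows. This is a hypothetical reduction, not the real fixfiles script; the variable names (LOGFILE, logfileFlag) mirror the script, but the option parsing is simplified for illustration:

```shell
#!/bin/sh
# Sketch of the proposed fix: LOGFILE defaults to /dev/null, so no exit
# path ever leaves a temp file behind; a real log is opened only when
# the user explicitly asks for one with -l.
logfileFlag=0
LOGFILE=/dev/null

while getopts l: opt; do
    case "$opt" in
    l) LOGFILE=$OPTARG; logfileFlag=1 ;;
    *) exit 1 ;;
    esac
done

# Every later `exit 1` can now leave LOGFILE alone: writing to
# /dev/null creates nothing that needs cleaning up afterwards.
echo "setfiles output would go here" >> "$LOGFILE"
echo "LOGFILE=$LOGFILE"
```

Run with no arguments, nothing is created on disk; run with `-l /var/tmp/mylog`, the output lands in the file the user named and is the user's to clean up.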
Did you have an issue with this or was it accidentally left out?
Reassigning to make sure the maintainer sees the new comment about the
patch chunk not being applied.
Yes, I changed it back and then changed fixfiles.cron to pass in
/dev/null for the log file. I also changed fixfiles to no longer tee
the output, so if you want the output to go to the screen you need
to run with -l `tty`. So now, on a nightly basis, we don't end up
sending massive mail messages.
Yes...IMHO this is now wrong. The issue in this PR is that temp files
get left lying around, turning me into a janitor every time fixfiles is
run. If I do not give the -l option, why create a log that I have to
clean up?
I think the usage in fixfiles.cron is backwards. fixfiles -l /dev/null
is basically trying to fix the problem from the wrong end. You didn't
want a log file, so you set it to /dev/null. It should default to
/dev/null unless the -l option is given. That's what my patch does. :)
Secondly, I don't get the problem you were trying to solve. There is a
check on the output file's size before sending it in the mail:
if [ $size -lt 100 ]; then
Is this not working?
Or is it the stdout stuff that you were fixing? Maybe this helps:
/sbin/fixfiles -o $OUTFILE $CRONTYPE >/dev/null 2>&1
There are probably several solutions to whatever the cron problem was,
but temp files need to go if they are not requested. :)
OK, let's step back in history. I am willing to re-look at it.
Fixfiles was originally a script that allowed you to run setfiles
without having to find the file_context file. It would just output
the errors to the screen. The problem with this was that things would
scroll off the screen and get lost, so the script was modified to tee
off the output, saving it to a log file while still showing it on the
screen. Eventually we added fixfiles.cron, which runs it from a cron
job on a nightly basis. The problem was that the stdout caused a huge
email to be sent, so we added the size check. But the size check was not
enough, because it was still grabbing stdout and sending that in the email.
So now we have this mess.
fixfiles now defaults to saving setfiles output to a log file unless
the log file is overridden with -l.
fixfiles.cron passes -l /dev/null to eliminate the creation of the log
file and just saves the output file of files with the wrong context.
If a user wants to see the output from fixfiles, they need to redirect
it, e.g. by running with -l `tty`.
In your solution, 'fixfiles check' will output nothing.
In my solution, 'fixfiles check' will create a logfile in /var/tmp/.
>In your solution, 'fixfiles check' will output nothing.
Well, my patch was against an earlier version, which used to put the
output on the screen.
I see several issues surrounding fixfiles + cron helper. This PR may
not be the correct place to air them.
I believe that the correct action is for setfiles and restorecon to
log all errors/"corrective actions taken" to syslog, in addition to
outputting them to the screen. Boot messages get lost otherwise.
I think it's the Unix way for programs to output gobs of information.
They can be redirected with > or | to a file or a pager program. If
you are running check, then no harm is done; you just re-run it and
redirect it. If not in check and an error flies off the screen, it
should be in syslog where you can see it.
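A minimal sketch of the dual-output idea: tee keeps one copy of each message on the screen and hands a second copy to another consumer. In the proposal above that consumer would be syslog, via something like `logger -t fixfiles`; here a temp file stands in for it so the sketch is self-contained and cleans up after itself:

```shell
# Duplicate a message: one copy to stdout (the screen), one to a log.
log=$(mktemp /tmp/fixfiles-demo.XXXXXX) || exit 1
echo "restorecon: relabeled /etc/passwd" | tee "$log"
copies=$(grep -c "relabeled" "$log")   # verify the second copy landed
rm -f "$log"
echo "copies captured: $copies"
```

With syslog as the second leg, a message that scrolls off the screen during boot would still be recoverable later from the system log.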
I think it's expected that programs do not require you to clean up
after them unless you ask them to make something you have to clean up,
e.g. temp files.
Programs like tmpwatch can erase reports that have accumulated in /tmp
or /var/tmp before you ever get to them.
Maybe a real log should be set up in /var/log, with logrotate keeping
five or ten of them around as .gz files. This is for the cron run of
fixfiles, not the "on demand" one.
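Such a setup could look like the following logrotate snippet. The path /var/log/fixfiles.log and the rotation count are assumptions for illustration; nothing like this ships with the package:

```
# Hypothetical /etc/logrotate.d/fixfiles fragment
/var/log/fixfiles.log {
    weekly
    rotate 5
    compress
    missingok
    notifempty
}
```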
People who administer a lot of systems like to have a central logger
machine that digests everything and produces a report. The way it
currently is, when a mail indicates a problem they need to go log in
to that machine and find the report. Whereas if the errors were in
syslog, they could see them right in the report, and logwatch rules
could be written for it.
This is probably not the place for this kind of discussion. I wish we
could work out the requirements and then make the software follow them.
I'm not sure what you want to do with the PR at this point.
Eliminating fixfiles.cron, since it really is not necessary now and is
more trouble than it is worth.