Bug 177017 - faillog doesn't handle large UIDs well
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: shadow-utils
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Peter Vrabec
QA Contact: David Lawrence
Status: Reopened
Reported: 2006-01-05 06:24 EST by Bastien Nocera
Modified: 2008-04-30 16:02 EDT (History)
CC: 5 users

Fixed In Version: RHSA-2007-0276
Doc Type: Bug Fix
Last Closed: 2008-04-15 07:13:15 EDT

Attachments
shadow-faster-faillog.patch (456 bytes, patch)
2006-01-06 05:53 EST, Bastien Nocera

Description Bastien Nocera 2006-01-05 06:24:57 EST

1. touch /var/log/faillog
2. faillog -m 32767 -r

The second faillog command checks every UID from 0 up to the maximum UID, a
horrendously large number on 64-bit systems: reset_one() is called for each
and every UID, which makes faillog unusable there. It might be better to
modify it to reset ranges of UIDs.
Comment 1 Bastien Nocera 2006-01-06 05:53:11 EST
Created attachment 122866 [details]

This will reset the failures only for existing users (of which there should be
far fewer than 2^32).
Idea from Chris Snook <csnook@redhat.com>
Comment 3 Peter Vrabec 2006-01-10 04:23:29 EST
It looks like the patch is going to be applied upstream too.
Comment 15 Red Hat Bugzilla 2007-05-01 13:31:22 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

Comment 17 Calvin Smith 2007-06-14 15:11:39 EDT
We are still seeing this exact issue using shadow-utils-4.0.3-61.RHEL4
Comment 19 Matthew Whitehead 2007-06-28 14:03:41 EDT
Using faillog with either the -p or -r flags doesn't generate a huge file. The
file is probably sparse. It isn't until you use the -m flag that the file
explodes to 128G.

I think the problem is that every record has an individual fail_max field
that the -m flag fills in, which explodes the sparse file:

 struct faillog {
         short   fail_cnt;
         short   fail_max;
         char    fail_line[12];
         time_t  fail_time;
 };
Instead, it should have a header record with system-wide defaults. After that,
you should be able to fill in user-specific fail_max entries as needed, in a
sparse manner.
Comment 22 RHEL Product and Program Management 2007-11-28 23:26:30 EST
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 23 Peter Vrabec 2008-03-04 06:06:55 EST
Why is this bug still open? It was fixed in shadow-utils-4.0.3-61.RHEL4, shipped
in 4.5 errata 2007:0276.
Comment 24 Peter Vrabec 2008-03-04 08:02:15 EST
# touch /var/log/faillog
# time faillog -m 32767 -r
real    0m1.262s
user    0m0.039s
sys     0m0.193s

# rpm -q shadow-utils
# uname -a
Linux x86-64-4as-6-m1.lab.boston.redhat.com 2.6.9-67.0.4.ELsmp #1 SMP Fri Jan 18 05:00:00 EST 2008 x86_64 x86_64 x86_64 GNU/Linux

I don't know what the problem is here. According to comment #11, it should be
OK on the other side too.
