Bug 177017 - faillog doesn't handle large UIDs well
Summary: faillog doesn't handle large UIDs well
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: shadow-utils
Version: 4.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Peter Vrabec
QA Contact: David Lawrence
URL:
Whiteboard:
Depends On:
Blocks: FAST4.5APPROVED
 
Reported: 2006-01-05 11:24 UTC by Bastien Nocera
Modified: 2008-04-30 20:02 UTC (History)

Fixed In Version: RHSA-2007-0276
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-04-15 11:13:15 UTC
Target Upstream Version:
Embargoed:


Attachments
shadow-faster-faillog.patch (456 bytes, patch)
2006-01-06 10:53 UTC, Bastien Nocera


Links
Red Hat Product Errata RHSA-2007:0276 (Private: 0, Priority: normal, Status: SHIPPED_LIVE): Low: shadow-utils security and bug fix update. Last Updated: 2007-05-01 17:30:42 UTC

Description Bastien Nocera 2006-01-05 11:24:57 UTC
shadow-utils-4.0.3-58.RHEL4

1. touch /var/log/faillog
2. faillog -m 32767 -r

The second faillog invocation checks every UID from 0 up to the maximum UID, which is a horrendously large number on 64-bit systems and makes faillog unusable there: reset_one() is called for each and every UID. It might be better to modify it to reset ranges of UIDs.
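For illustration only, a minimal sketch of the loop pattern described above (not the actual shadow-utils source; reset_one() is the helper named in the report, and uid_limit is a hypothetical stand-in for whatever upper bound faillog derives):

#include <sys/types.h>

extern void reset_one(uid_t uid);       /* real helper in faillog.c */

static void reset_all_uids(uid_t uid_limit)
{
    uid_t uid;

    for (uid = 0; uid < uid_limit; uid++)
        reset_one(uid);                 /* one record touched per possible UID */
}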

Comment 1 Bastien Nocera 2006-01-06 10:53:11 UTC
Created attachment 122866 [details]
shadow-faster-faillog.patch

This resets the failures only for existing users (which should be far fewer than the full 32-bit UID range).
Idea from Chris Snook <csnook>
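The attached patch is not reproduced here; the following is only a sketch of the idea as described, under the assumption that the fix walks the passwd database with getpwent() and resets just those records (reset_one() again stands for the real reset helper in faillog.c):

#include <pwd.h>
#include <sys/types.h>

extern void reset_one(uid_t uid);       /* real helper in faillog.c */

static void reset_existing_users(void)
{
    struct passwd *pw;

    setpwent();
    while ((pw = getpwent()) != NULL)
        reset_one(pw->pw_uid);          /* touch only records for real users */
    endpwent();
}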

Comment 3 Peter Vrabec 2006-01-10 09:23:29 UTC
It looks like the patch is going to be applied upstream too.

Comment 15 Red Hat Bugzilla 2007-05-01 17:31:22 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2007-0276.html


Comment 17 Calvin Smith 2007-06-14 19:11:39 UTC
We are still seeing this exact issue using shadow-utils-4.0.3-61.RHEL4

Comment 19 Matthew Whitehead 2007-06-28 18:03:41 UTC
Using faillog with either the -p or -r flag doesn't generate a huge file; the file is probably sparse. It isn't until you use the -m flag that the file explodes to 128G.

I think the problem is that every record has an individual fail_max field which the -m flag fills in, exploding the sparse file:

 struct faillog {
         short   fail_cnt;
         short   fail_max;
         char    fail_line[12];
         time_t  fail_time;
 };

Instead, it should have a header record with system-wide defaults. After that,
you should be able to fill in user-specific fail_max entries as needed, in a
sparse manner.
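A back-of-the-envelope sketch of the size: /var/log/faillog is indexed by UID, so the record for UID N sits at offset N * sizeof(struct faillog). If the on-disk record also carries the fail_locktime field that shadow's faillog.h defines for the -l option (an assumption here, since the snippet above omits it), the record is 32 bytes on a 64-bit box, and filling in every record across the full 32-bit UID space lands at roughly 128 GiB, in line with the size reported above:

/* Rough size estimate only; the record layout (including fail_locktime)
 * and the width of time_t are assumptions about the 64-bit build. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

struct faillog {
    short   fail_cnt;
    short   fail_max;
    char    fail_line[12];
    time_t  fail_time;      /* 8 bytes on x86_64 */
    long    fail_locktime;  /* assumed present, per shadow's faillog.h */
};

int main(void)
{
    uint64_t records = (uint64_t)UINT32_MAX + 1;        /* 32-bit uid_t */
    uint64_t bytes   = records * sizeof(struct faillog);

    printf("record size: %zu bytes\n", sizeof(struct faillog));
    printf("worst case : %llu bytes (~%llu GiB)\n",
           (unsigned long long)bytes, (unsigned long long)(bytes >> 30));
    return 0;
}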

Comment 22 RHEL Program Management 2007-11-29 04:26:30 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 23 Peter Vrabec 2008-03-04 11:06:55 UTC
Why is this bug still open? It was fixed in shadow-utils-4.0.3-61.RHEL4, shipped in the 4.5 erratum RHSA-2007:0276.

Comment 24 Peter Vrabec 2008-03-04 13:02:15 UTC
# touch /var/log/faillog
# time faillog -m 32767 -r
real    0m1.262s
user    0m0.039s
sys     0m0.193s

# rpm -q shadow-utils
shadow-utils-4.0.3-63.RHEL4.x86_64
# uname -a
Linux x86-64-4as-6-m1.lab.boston.redhat.com 2.6.9-67.0.4.ELsmp #1 SMP Fri Jan 18 05:00:00 EST 2008 x86_64 x86_64 x86_64 GNU/Linux

I don't know what the problem is here. According to comment #11, it should be OK on the other side too.


