Bug 431490 - [BETA RHEL5.2] i386 wrong hugepages info shown after allocate and deallocate
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Hardware: i386 (Linux)
Priority: medium   Severity: medium
Target Milestone: rc
Assigned To: Eric Paris
QA Contact: Martin Jenner
Depends On: 173617
Reported: 2008-02-04 17:40 EST by Mike Gahagan
Modified: 2008-07-08 09:54 EDT (History)
CC: 3 users

Doc Type: Bug Fix
Last Closed: 2008-07-08 09:54:19 EDT

Attachments
test case (685 bytes, application/x-bzip2), 2008-06-19 13:01 EDT, Mike Gahagan

Comment 2 Eric Paris 2008-03-04 18:58:45 EST
This is a different issue, since my earlier fix was accepted upstream, but I'll
take a look at it. Do we know if it is still reproducible on Rawhide kernels?
Comment 3 Eric Paris 2008-06-17 17:17:00 EDT

The brew task number above should have a patch that I think will close this
race. hugetlb_report_meminfo() and hugetlb_report_node_meminfo() both read
nr_huge_pages and the related counters without holding the hugetlb_lock. It's
possible for these counters to get out of sync during the snprintf operation
that builds the output buffer. The problem appears to be purely cosmetic, as
all of the actual accounting is done under the lock. The patch takes the lock
while building the /proc output buffer.
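
For context, a minimal sketch of the kind of change described, modeled on the
2.6-era mm/hugetlb.c (the counter names and output lines are taken from that
code; this is an assumption, not the actual RHEL patch):

/*
 * Sketch only: take hugetlb_lock while formatting the /proc/meminfo
 * output so nr_huge_pages and free_huge_pages are read as a
 * consistent pair instead of two unsynchronized loads.
 */
int hugetlb_report_meminfo(char *buf)
{
	int len;

	spin_lock(&hugetlb_lock);	/* assumed: global lock from mm/hugetlb.c */
	len = sprintf(buf,
			"HugePages_Total: %5lu\n"
			"HugePages_Free:  %5lu\n"
			"Hugepagesize:    %5lu kB\n",
			nr_huge_pages, free_huge_pages, HPAGE_SIZE / 1024);
	spin_unlock(&hugetlb_lock);

	return len;
}

The trade-off is exactly the one debated later in this bug: every /proc reader
now contends on the same lock the allocation paths use.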
Comment 4 Mike Gahagan 2008-06-18 17:07:04 EDT
OK, after trying this again with both the -92 kernel and the test kernel on
ibm-hermes-n1, I'm not able to reproduce the bug anymore with either kernel.

If this is just /proc accounting and we aren't leaking hugepages or anything
like that, I'm OK with closing this (or taking the fix, for that matter).
Comment 5 Eric Paris 2008-06-18 17:24:03 EDT
I just asked Mike to try running a slightly different test:

2 threads setting the number of hugepages up and down
1 thread per core (16 cores on his test machine) reading /proc/meminfo

The most interesting part is the threads reading /proc, since I think two
threads changing the sysctls will probably be about enough to saturate the
system...
Comment 6 Mike Gahagan 2008-06-18 17:57:00 EDT
I hacked up the test to have 2 processes set nr_huge_pages and 15 to read the
values and report any time free hugepages > total hugepages. I'll run it
overnight with the test kernel.
Comment 7 Mike Gahagan 2008-06-19 11:16:53 EDT
I let the modified test case run overnight (2 processes setting nr_huge_pages
and 15 reading the values from /proc/meminfo). I have not seen any accounting
discrepancies. The -92 kernel typically showed free hugepages > total
hugepages after approximately 5 minutes of run time. I'd say our race is very
likely fixed, but I'll be glad to test it more if anyone wants to see more
results.

I'll go ahead and propose it for 5.3 and set the QE ack.

Comment 8 Mike Gahagan 2008-06-19 13:01:37 EDT
Created attachment 309864 [details]
test case

Multi-threaded test case, minus the RHTS-specific stuff.
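
The attachment itself is not inlined here. A hypothetical reconstruction of
the test described in comments 5-7 might look like the following; everything
beyond "2 writers flipping nr_hugepages, 15 readers flagging free > total"
(the pool sizes, loop structure, and error handling) is an assumption:

/* Hypothetical stress test: NOT the actual attachment. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void writer(void)
{
	unsigned n = 0;

	for (;;) {
		/* Flip the hugepage pool between 0 and 64 pages. */
		FILE *f = fopen("/proc/sys/vm/nr_hugepages", "w");
		if (!f)
			exit(1);
		fprintf(f, "%u\n", (n++ & 1) ? 64 : 0);
		fclose(f);
	}
}

static void reader(void)
{
	char line[128];

	for (;;) {
		long total = 0, free_pages = 0;
		FILE *f = fopen("/proc/meminfo", "r");
		if (!f)
			exit(1);
		while (fgets(line, sizeof(line), f)) {
			sscanf(line, "HugePages_Total: %ld", &total);
			sscanf(line, "HugePages_Free: %ld", &free_pages);
		}
		fclose(f);
		if (free_pages > total)	/* the impossible state this bug reports */
			printf("race: free %ld > total %ld\n", free_pages, total);
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 2; i++)		/* two writers, per comment 6 */
		if (fork() == 0)
			writer();
	for (i = 0; i < 15; i++)	/* fifteen readers, per comment 6 */
		if (fork() == 0)
			reader();
	pause();			/* parent waits; kill the process group to stop */
	return 0;
}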
Comment 9 Eric Paris 2008-06-19 14:23:53 EDT
Patch sent to lkml for laughs.

Comment 10 Eric Paris 2008-07-08 09:54:19 EDT
Upstream told me to go fly a kite. Locking here could allow a normal user to
significantly degrade the system's use of hugetlb pages, since every process
that wanted to free or take a hugetlb page would have to wait for whichever
process was building its /proc output to drop the lock. /proc is inherently
racy, and they will not take a fix for this.

I'd suggest changing the test case to look for free - total > 2, or for two
invocations in a row both showing incorrect numbers.
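
A sketch of that relaxed check, meant to replace the bare free_pages > total
test in the reader loop of the test case (names here are illustrative, not
from the attachment):

/*
 * Illustrative only: tolerate a one-off, off-by-a-little reading,
 * since that is the expected /proc race; fail only on large skew
 * or on incorrect numbers twice in a row.
 */
static int prev_bad;

static int is_failure(long total, long free_pages)
{
	int bad = free_pages > total;

	if (free_pages - total > 2)
		return 1;		/* skew too large to be the known race */
	if (bad && prev_bad)
		return 1;		/* incorrect numbers on consecutive reads */
	prev_bad = bad;
	return 0;
}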

closing as WONTFIX
