Bug 524512 (CVE-2009-3238) - CVE-2009-3238 kernel: random: add robust get_random_u32, remove weak get_random_int
Keywords:
Status: CLOSED ERRATA
Alias: CVE-2009-3238
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Red Hat Product Security
QA Contact:
URL:
Whiteboard:
Depends On: 499776 499783 499785 499787 504082 519692 524515
Blocks:
 
Reported: 2009-09-21 01:17 UTC by Eugene Teo (Security Response)
Modified: 2019-09-29 12:32 UTC (History)
3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-24 16:04:15 UTC
Embargoed:



Description Eugene Teo (Security Response) 2009-09-21 01:17:25 UTC
Description of problem:
"It's a really simple patch that basically just open-codes the current
"secure_ip_id()" call, but when open-coding it we now use a _static_ hashing
area, so that it gets updated every time.

And to make sure somebody can't just start from the same original seed of
all-zeroes, and then do the "half_md4_transform()" over and over until they get
the same sequence as the kernel has, each iteration also mixes in the same old
"current->pid + jiffies" we used - so we should now have a regular strong
pseudo-number generator, but we also have one that doesn't have a single seed.

Note: the "pid + jiffies" is just meant to be a tiny tiny bit of noise. It has
no real meaning. It could be anything. I just picked the previous seed, it's
just that now we keep the state in between calls and that will feed into the
next result, and that should make all the difference.

I made that hash be a per-cpu data just to avoid cache-line ping-pong: having
multiple CPU's write to the same data would be fine for randomness, and add yet
another layer of chaos to it, but since get_random_int() is supposed to be a
fast interface I did it that way instead. I considered using
"__raw_get_cpu_var()" to avoid any preemption overhead while still getting the
hash be _mostly_ ping-pong free, but in the end good taste won out."

Upstream commit:
http://git.kernel.org/linus/8a0a9bd4db63bc45e3017bedeafbd88d0eb84d02

Note: this is not addressed in 2.6.29.4.

--- Additional comment from eteo on 2009-05-08 02:28:26 EDT ---

http://patchwork.kernel.org/patch/21766/

