Bug 1765272 (CVE-2019-18218) - CVE-2019-18218 file: heap-based buffer overflow in cdf_read_property_info in cdf.c
Alias: CVE-2019-18218
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Target Milestone: ---
Assignee: Red Hat Product Security
QA Contact:
Depends On: 1765273 1773624
Blocks: 1765274
Reported: 2019-10-24 17:36 UTC by Guilherme de Almeida Suckevicz
Modified: 2021-11-09 18:29 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2021-10-25 22:13:01 UTC
kdudka: needinfo-

Attachments

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:4374 0 None None None 2021-11-09 18:29:18 UTC

Description Guilherme de Almeida Suckevicz 2019-10-24 17:36:03 UTC
cdf_read_property_info in cdf.c in file through 5.37 does not restrict the number of CDF_VECTOR elements, which allows a heap-based buffer overflow (4-byte out-of-bounds write).


Upstream patch:

Comment 1 Guilherme de Almeida Suckevicz 2019-10-24 17:36:16 UTC
Created file tracking bugs for this issue:

Affects: fedora-all [bug 1765273]

Comment 2 Stefan Cornelius 2019-11-18 13:21:21 UTC
A git bisect shows that this was introduced by https://github.com/file/file/commit/393555f2f3a6ba16cdedf6d65ac373700afdd769

Comment 3 Stefan Cornelius 2019-11-18 14:36:05 UTC
The root issue is an integer overflow in the cdf_grow_info() function:
"size_t newcount = *maxcount + incr;" can wrap around, making newcount smaller than it should be, so the "if (newcount > CDF_PROP_LIMIT)" check can be bypassed. I believe this limits the issue to 32-bit architectures; on 64-bit systems, newcount is wide enough to hold any result (the input read from the file is limited to 32 bits).

 cdf_grow_info(cdf_property_info_t **info, size_t *maxcount, size_t incr)
 {
 	cdf_property_info_t *inp;
 	size_t newcount = *maxcount + incr;	/* can wrap around on 32-bit */
 	if (newcount > CDF_PROP_LIMIT) {	/* bypassed once newcount wraps */
 		DPRINTF(("exceeded property limit %zu > %zu\n",
 		    newcount, CDF_PROP_LIMIT));
 		goto out;
 	}
 	...
Comment 4 Stefan Cornelius 2019-11-18 15:07:02 UTC

This issue affects file as shipped with Red Hat Enterprise Linux 8. However, the flaw is only exploitable when the 32-bit build is used, for example when an application uses the 32-bit version of libmagic.so.

Comment 6 Karel Volný 2021-04-20 11:34:31 UTC
Please, is there any simpler reproducer that doesn't involve oss-fuzz?

Comment 7 Kamil Dudka 2021-04-25 20:30:39 UTC
One can simply download the sample file and run the `file` utility locally on it:

$ curl -JLO 'https://oss-fuzz.com/download?testcase_id=5743444592427008'

The main obstacle is that the bug reproduces on 32-bit arches only (see comment #3).  The following works reliably for me on an x86_64 VM:

$ sudo yum install file-libs.i686 valgrind.i686
$ rpm2cpio http://download.eng.bos.redhat.com/brewroot/vol/rhel-8/packages/file/5.33/16.el8/i686/file-5.33-16.el8.i686.rpm | cpio -div
$ valgrind ./usr/bin/file clusterfuzz-testcase-minimized-magic_fuzzer-5743444592427008

The valgrind output is clean with file-libs-5.33-18.el8.i686, while there are invalid writes with file-libs-5.33-16.el8.i686.  The old version also crashes without valgrind in my testing environment.

Comment 8 Karel Volný 2021-05-17 09:23:54 UTC
(In reply to Kamil Dudka from comment #7)
> One can simply download the sample file and run the `file` utility locally on it:

I have tried, but ...

> The main obstacle is that the bug reproduces on 32bit arches only (see
> comment #3).

Looks like forcing 32-bit is the important part I had missed, thanks.

Comment 10 errata-xmlrpc 2021-11-09 18:29:16 UTC
This issue has been addressed in the following products:

  Red Hat Enterprise Linux 8

Via RHSA-2021:4374 https://access.redhat.com/errata/RHSA-2021:4374
