Red Hat Bugzilla – Bug 474770
CVE-2008-4316 glib2: integer overflows in the base64 handling functions (oCERT-2008-015)
Last modified: 2016-03-04 07:20:18 EST
The oCERT team notified us about an issue discovered by Diego Pettenò in glib:
The Base64 encoding and decoding functions in glib suffer from integer overflow vulnerabilities in their memory allocations, which may result in arbitrary code execution when processing large strings.
g_base64_encode and g_base64_decode allocate memory for their output using a similar pattern:
ret = g_malloc0 (input_length * 3 / 4);
In g_base64_decode, input_length is typed as a gint and its value is derived directly from a strlen(3) call over an input string. Since the multiplication is evaluated before the division, an input_length greater than UINT_MAX / 3, i.e. 1431655765, will wrap, resulting in a smaller allocation than required. It is also possible to achieve the same effect with a carefully calculated signedness overflow, such as -1 / 4 = 0.
Reference (once the issue is public):
Base64 handling functions were introduced upstream in glib 2.11.0 (unstable) or 2.12 (stable, as noted in docs and source code). Therefore, this issue does not affect glib and glib2 packages as shipped in Red Hat Enterprise Linux 2.1, 3 and 4, which are based on older upstream versions.
Created attachment 325825 [details]
Here's my take on possible fix...
It swaps the order of the multiplication and division operations, so no overflow check is needed for decode, and the overflow in the encode case becomes rather theoretical (you would need a 3+ GiB input on 32-bit, which is beyond the usual 2:2 or 3:1 user-space:kernel-space split, and way too high for today's (and tomorrow's, too) 64-bit machines).
I also changed the length-requirement descriptions in the comments to first divide, then multiply. This should make it into the docs as well, so that anyone using the *_step functions directly is less likely to make the same mistake (omitting the overflow check in the divide-then-multiply case should not be a big deal, as described above).
It also changes input_length type to avoid this signedness flaw.
Matthias, Alex, can you please review?
So I was looking at what may be using those functions in EL5, but I don't see anything. Newer versions of evolution and vino in Fedora use them; in vino's case only for trusted inputs, as far as I can see.
However, glib's code is based on the camel library used by evolution / evolution-data-server. The RHEL versions of evolution have the same flaw in camel_base64_encode_simple() in camel/camel-mime-utils.c. camel_base64_decode_simple() is not affected, as it does the decoding in-place. I have not had time to investigate the uses of camel_base64_encode_simple yet, but I'd guess it's more likely to be used on trusted inputs rather than untrusted ones. Suggestions are welcome.
Another possible problem lies in new gstreamer-plugins-base in 5.3 (not in current version in 5.2):
gst_vorbis_tag_add_coverart() in gst-libs/gst/tag/gstvorbistag.c:
img_data = g_try_malloc0 (base64_len * 3 / 4);
if (img_data == NULL)
img_len = g_base64_decode_step (img_data_base64, base64_len, img_data,
So it makes the same mistake as g_base64_decode, and fixing the issue in glib2 will not fix gstreamer. I don't see any good reason why g_base64_decode_step is used here (possibly due to easier malloc error handling); g_base64_decode could be used just as well, making the code even a bit simpler. Again, I'm not yet sure whether there may be some other restriction preventing this function from being reached with large inputs. It may still be easiest to fix this before we release 5.3 (avoiding the overhead of an extra RHSA for what seems to be a moderate-impact issue for gstreamer). Changing the allocation to (base64_len / 4) * 3 should be enough. Adam, any thoughts?
Created attachment 328547 [details]
Make sure g_malloc0 is always called with an argument > 0 in g_base64_decode(), so that g_base64_decode() does not return NULL for very short inputs (fewer than 4 chars).
I didn't follow the exact calculations that you changed in the comments, but the code changes look good to me, with one exception:
g_return_val_if_fail (len < ((G_MAXSIZE - 1) / 4 - 1) * 3, NULL);
should be an explicit
if (len < ((G_MAXSIZE - 1) / 4 - 1) * 3)
since g_return_val_if_fail() warnings are no-ops when configured with --enable-debug=no
And of course, I managed to invert the sense of the condition; it should be:
if (len >= ((G_MAXSIZE - 1) / 4 - 1) * 3)
Also, probably the reason gstreamer uses g_base64_decode_step is that they can then use g_try_malloc0, which doesn't exit if out of memory (which g_malloc0 does).
Created attachment 328728 [details]
(In reply to comment #6)
> g_return_val_if_fail (len < ((G_MAXSIZE - 1) / 4 - 1) * 3, NULL);
> should be an explicit
> if (len < ((G_MAXSIZE - 1) / 4 - 1) * 3)
> return NULL;
Well, probably g_error is better, just as in the case of a g_malloc failure. The documentation says an allocated buffer is always returned.
> since g_return_val_if_fail() warnings are no-ops when configured with
Ok, good note, thank you.
The attached updated patch incorporates that comment, just using g_error instead of returning NULL.
Created attachment 328729 [details]
(In reply to comment #8)
> Also, probably the reason gstreamer uses g_base64_decode_step is that they can
> then use g_try_malloc0, which doesn't exit if out of memory (which g_malloc0
Correct, that was my guess too. So it might make sense to provide an API for in-place Base64 decoding, similar to what the original camel_base64_decode_simple did. Something like this.
There is already a patch for in-place-decoding in some upstream bug.
(In reply to comment #11)
> There is already a patch for in-place-decoding in some upstream bug.
And you've already committed the patch:
So I gave evolution and evolution-data-server a closer look. The versions shipped in RHEL5 use about eight different implementations of Base64 encoding/decoding routines, one of them external (from libsoup).
Of those, some are affected by a similar flaw (these are from evolution-data-server):
- camel_base64_encode_simple() (in camel/camel-mime-utils.c)
- This one was mentioned previously. Looking at where it is used, it seems
quite possible that large enough untrusted input can be passed to it from
the SASL authentication code, when NTLM auth mechanism is used. Other
mechs only pass short inputs to it.
- Current upstream versions of EDS use base64 functions from glib.
- _evc_base64_encode_simple() (in addressbook/libebook/e-vcard.c)
- Used in ebook, in setter functions for setting photo and certificate / key
details for contacts. It's probably not too likely for users to attach
huge random files to contacts, but I do not see any place where they would be
blocked or size-limited. Can anyone more familiar with the code have a look?
- Again, current upstream versions use the glib functions.
- groupwise backend (in servers/groupwise/*)
- libsoup's base64 functions are used in multiple places, possibly on data
that may be untrusted.
Other implementations are either unaffected (they decode in place, use the (len / X) * Y allocation order, or use dynamic buffers), or are only ever passed short enough inputs.
Affected functions in libsoup:
- Current upstream versions are not affected: 2.24.x no longer provides the
Base64 functions, and the latest 2.2.x versions (e.g. the current 2.2.105)
have that function implemented as a wrapper around g_base64_encode.
- Does its own allocation and uses the _step function directly. Unlike EDS'
NTLM code, it does better validation of NTLM packets and does not accept
inputs large enough to trigger the integer overflow.
Notes on seahorse:
- Seahorse contained a verbatim copy of the g_base64_* functions. As of 2.23.6,
glib is used instead. In the affected versions, the code was used to decode SSH
keys to be loaded, so the input is trusted.
(In reply to comment #13)
> - camel_base64_encode_simple() (in camel/camel-mime-utils.c)
> - This one was mentioned previously. Looking at where it is used, it seems
> quite possible that large enough untrusted input can be passed to it from
> the SASL authentication code, when NTLM auth mechanism is used. Other
> mechs only pass short inputs to it.
In the case of NTLM authentication, the challenge reply packet gets base64 encoded prior to being sent to the server. That reply can contain a piece of data of possibly arbitrary length that is directly controlled by the remote server: the domain name. It is sent by the server in base64-encoded form in the challenge packet. To be large enough to trigger the integer overflow during base64 encoding of the reply, the domain has to be at least 1/4 of the address space, so its base64-encoded representation as received from the server needs to be at least 1/3 of the address space. The same amount of memory is later allocated in camel_sasl_challenge_base64() (camel-sasl.c) to store the decoded challenge, and another copy of the domain name is made in ntlm_challenge() (camel-sasl-ntlm.c), via:
ntlm_set_string (ret, NTLM_RESPONSE_DOMAIN_OFFSET,
token->data + NTLM_CHALLENGE_DOMAIN_OFFSET,
atoi (token->data + NTLM_CHALLENGE_DOMAIN_LEN_OFFSET));
which results in a call to g_byte_array_append(). So if the server sends a long enough challenge, the client is very likely to run out of memory before it even tries to base64 encode the reply.
However, there is another problem in the NTLM authentication code. It does not check whether the domain length value read from the received packet is sane and matches the actual amount of data received from the server. The server can send a short challenge that claims to contain a long domain name. Depending on the current memory layout, this can lead to an out-of-bounds read and a crash of the client, or, if there is enough memory mapped in the process address space after token->data, the ntlm_set_string call above can actually succeed and lead to an integer overflow and insufficient memory allocation during base64 encoding of the reply.
The server should be able to use this insufficient checking of the NTLM challenge packet to steal portions of the client's memory, independently of the base64 encoding bug.
The code snippet above contains another strangeness: the use of atoi to read the domain length. According to the specifications, it seems it should read a 32-bit integer value rather than an ASCII representation. The values of NTLM_CHALLENGE_DOMAIN_OFFSET and NTLM_CHALLENGE_DOMAIN_LEN_OFFSET suggest that as well.
> - _evc_base64_encode_simple() (in addressbook/libebook/e-vcard.c)
> - Used in ebook, in setter functions for setting photo and certificate / key
> details for contacts. It's probably not too likely for users to attach
> huge random files to contacts, but I do not see any place they would be
> blocked or size-limited.
These issues do not seem to be triggerable via the evolution GUI. The current contact editor allows adding pictures to contacts, but those images are rescaled before being added, so even when a user tries to add a huge image, it is down-sized before photo_setter is reached. There is currently no GUI interface for adding certificates, as Milan pointed out to me (thank you!).
It seems that both photo_setter and cert_setter can be called with large inputs when contact info is extracted from a malicious LDAP address book backend.
(In reply to comment #14)
> These issues does not seem triggerable via evolution GUI. Current contact
> editor allows adding pictures to contacts, but those images are rescaled before
> being added, so even when user tries to add huge image, it is down-sized before
> photo_setter is reached.
There is a question dialog asking whether the user wants to resize the image, and one could answer "no, keep it as is". It appears for images larger than 96 pixels in any direction.
Fix is now committed in upstream SVN:
Bugs for other applications affected by similar flaw:
libsoup - bug #488026 (CVE-2009-0585)
gstreamer-plugins-base - bug #488208 (CVE-2009-0586)
evolution-data-server - bug #488226 (CVE-2009-0587)
This issue has been addressed in the following products:
Red Hat Enterprise Linux 5
Via RHSA-2009:0336 https://rhn.redhat.com/errata/RHSA-2009-0336.html
glib2-2.16.6-3.fc9 has been pushed to the Fedora 9 stable repository. If problems still persist, please make note of it in this bug report.
glib2-2.18.4-2.fc10 has been pushed to the Fedora 10 stable repository. If problems still persist, please make note of it in this bug report.