Bug 981124 - (CVE-2013-2236) CVE-2013-2236 Quagga: OSPFD Potential remote code exec (stack based buffer overflow)
Status: NEW
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: Red Hat Product Security
Whiteboard: impact=low,public=20130702,reported=2...
Keywords: Security
Depends On: 981126 981127 981128 981129 981130 981131
Blocks: 981132
Reported: 2013-07-04 01:29 EDT by Garth Mollett
Modified: 2016-03-04 07:14 EST

Fixed In Version: quagga 0.99.22.2
Doc Type: Bug Fix
Attachments: None
Description Garth Mollett 2013-07-04 01:29:36 EDT
Charlet, Ricky <ricky.charlet@hp.com> reports:

Howdy,

I'm going to describe an ospfd bug that happened to me, along with a proposed patch. But at the top of this email, I want to say that I'm really not familiar enough with this code to claim that this patch is "the right thing"; I'm looking for some double-checking here. On the other hand, I think it is important: this is an on-stack buffer overflow, reachable from input off the network, in code that implements what seems to me to be a very minor feature.

I set up a network that, upon a single network event, sent over router LSAs representing 150 networks. The actual traffic broke down into about 100 ls-update packets, many of which are len=1480, and some of which are IP-fragmented. The ls-updates tend to grow in size; the largest (reassembled) one was the last, at 1840 bytes.

While processing the received LSAs, we crash; the gdb backtrace points to memcpy called from new_msg_lsa_change_notify. By code review, I see that we memcpy into a buffer with a length learned from the input, not governed by the length of the available buffer. In my patch, I suggest that we govern the memcpy by the length of the available buffer.

<patch>
Index: ospfd/ospf_api.c
===================================================================
--- ospfd/ospf_api.c    (revision 10875)
+++ ospfd/ospf_api.c    (working copy)
@@ -639,7 +639,7 @@
   nmsg->area_id = area_id;
   nmsg->is_self_originated = is_self_originated;
   memset (&nmsg->pad, 0, sizeof (nmsg->pad));
-  memcpy (&nmsg->data, data, ntohs (data->length));
+  memcpy (&nmsg->data, data, sizeof(struct lsa_header));

   return msg_new (msgtype, nmsg, seqnum, len);
 }</patch>

I tested the above, passing in the same large router LSAs, and it survives/passes.

Sssooo.... I'm not one to pretend I understand which 'clients' might be interested in this lsa-update message. I looked at ospfclient.c: lsa_update_callback(), and it apparently does not care about anything beyond the header. Certainly, in our implementation, we have no clients at all listening for these messages. Is it appropriate to truncate this ls-update message to the header only, or should a solution be built for passing arbitrarily large messages to clients?


Additionally, how about the code that is sending the ls-updates? I did not look into it myself, but we may be a poor citizen if we are sending ls-updates which need to be reassembled into large buffers.
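
For readers who want to see the pattern in isolation, here is a minimal sketch of the unbounded copy described above and of the reporter's header-only workaround. It is simplified for illustration: the buffer size, the abbreviated struct lsa_header, and the helper names are assumptions, not the actual quagga sources.

/* Illustrative sketch only: FIXED_BUF_SIZE, the abbreviated lsa_header,
 * and the helper names are assumptions, not the actual quagga code. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define FIXED_BUF_SIZE 1500            /* assumed size of the on-stack message buffer */

struct lsa_header {                    /* abbreviated: only the length field matters here */
  uint16_t length;                     /* total LSA length from the wire, network byte order */
};

/* Vulnerable pattern: the copy length is taken from the received LSA,
 * so a large advertised length overruns the fixed-size destination. */
static void copy_unbounded (uint8_t dst[FIXED_BUF_SIZE], const struct lsa_header *data)
{
  memcpy (dst, data, ntohs (data->length));
}

/* Reporter's workaround: copy only the fixed-size LSA header,
 * regardless of the length advertised on the wire. */
static void copy_header_only (uint8_t dst[FIXED_BUF_SIZE], const struct lsa_header *data)
{
  memcpy (dst, data, sizeof (struct lsa_header));
}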
Comment 1 Garth Mollett 2013-07-04 01:36:12 EDT
Created quagga tracking bugs for this issue:

Affects: fedora-all [bug 981126]
Comment 3 Garth Mollett 2013-07-04 01:44:58 EDT
Please note: the above patch, while preventing the memcpy overflow, may not be a complete fix, as len may still be used from the wire in other places without sanity checking.
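
For illustration, a minimal sketch of the kind of additional sanity check this comment alludes to, assuming the caller knows the size of its destination buffer; the helper name and constant are hypothetical, not the upstream fix:

/* Hypothetical helper, not the upstream fix: validate the length taken
 * from the wire before using it for any copy or allocation. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define OSPF_LSA_HEADER_SIZE 20        /* an OSPF LSA header is 20 bytes (RFC 2328) */

/* Returns the number of bytes copied, or 0 if the advertised length is
 * implausible: shorter than an LSA header or larger than the destination. */
static size_t copy_lsa_checked (uint8_t *dst, size_t dst_size,
                                const void *lsa, uint16_t wire_len_net)
{
  size_t len = ntohs (wire_len_net);

  if (len < OSPF_LSA_HEADER_SIZE || len > dst_size)
    return 0;

  memcpy (dst, lsa, len);
  return len;
}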
Comment 12 Florian Weimer 2013-07-04 09:19:45 EDT
I proposed another fix: <http://lists.quagga.net/pipermail/quagga-dev/2013-July/010625.html>
Comment 18 Huzaifa S. Sidhpurwala 2013-09-05 05:27:29 EDT
Statement:

This issue affects the versions of quagga as shipped with Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this issue as having low security impact; a future update may address this flaw.
