Bug 1788421 - CVE-2019-20382 virt:8.1/qemu-kvm: QEMU: vnc: memory leakage upon disconnect [rhel-av-8]
Keywords:
Status: CLOSED ERRATA
Alias: None
Deadline: 2021-03-15
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: 8.1
Assignee: Gerd Hoffmann
QA Contact: Guo, Zhiyi
URL:
Whiteboard:
Depends On:
Blocks: CVE-2019-20382
 
Reported: 2020-01-07 06:57 UTC by Han Han
Modified: 2020-12-20 06:49 UTC
CC List: 8 users

Fixed In Version: qemu-kvm-4.2.0-12.module+el8.2.0+5858+afd073bc
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:55:17 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
valgrind log (32.75 KB, text/plain)
2020-01-07 06:57 UTC, Han Han


Links
Red Hat Product Errata RHBA-2020:2017 (Last Updated: 2020-05-05 09:56:58 UTC)

Description Han Han 2020-01-07 06:57:24 UTC
Created attachment 1650300
valgrind log

Description of problem:
QEMU's built-in VNC server leaks memory every time a client disconnects (see summary).

Version-Release number of selected component (if applicable):
qemu-kvm-4.1.0-20.module+el8.1.1+5309+6d656f05.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start QEMU with VNC enabled, under valgrind:
# valgrind --log-file=vnc-memleak.log --leak-check=full /usr/libexec/qemu-kvm -vnc 0.0.0.0:0


2. Use remote-viewer to connect to the VNC server several times:
# for i in {1..10};do timeout -s INT 2 remote-viewer vnc://[HOST_IP]:5900;done

You can also start QEMU directly (without valgrind), connect to VNC several times, and check that the RSS usage of the QEMU process keeps increasing, as shown below.
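
For example, a minimal way to watch the RSS grow (a hypothetical one-liner, assuming a single qemu-kvm process on the host):
# pid=$(pgrep -f qemu-kvm | head -1); while sleep 1; do ps -o rss= -p "$pid"; done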

3. Memory-leak records from the valgrind log:
==24942== 131,072 bytes in 1 blocks are possibly lost in loss record 4,662 of 4,683
==24942==    at 0x4C331EA: calloc (vg_replace_malloc.c:762)
==24942==    by 0x556122D: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==24942==    by 0x78B66B3: deflateInit2_ (in /usr/lib64/libz.so.1.2.11)
==24942==    by 0x68FBAD: tight_init_stream (vnc-enc-tight.c:798)
==24942==    by 0x68FBAD: tight_compress_data (vnc-enc-tight.c:850)
==24942==    by 0x69192F: send_palette_rect (vnc-enc-tight.c:1132)
==24942==    by 0x69192F: send_sub_rect_nojpeg (vnc-enc-tight.c:1414)
==24942==    by 0x69192F: send_sub_rect (vnc-enc-tight.c:1514)
==24942==    by 0x69292E: find_large_solid_color_rect (vnc-enc-tight.c:1614)
==24942==    by 0x69292E: tight_send_framebuffer_update (vnc-enc-tight.c:1675)
==24942==    by 0x69294E: find_large_solid_color_rect (vnc-enc-tight.c:1617)
==24942==    by 0x69294E: tight_send_framebuffer_update (vnc-enc-tight.c:1675)
==24942==    by 0x692AB0: find_large_solid_color_rect (vnc-enc-tight.c:1633)
==24942==    by 0x692AB0: tight_send_framebuffer_update (vnc-enc-tight.c:1675)
==24942==    by 0x692AB0: find_large_solid_color_rect (vnc-enc-tight.c:1633)
==24942==    by 0x692AB0: tight_send_framebuffer_update (vnc-enc-tight.c:1675)
==24942==    by 0x692AB0: find_large_solid_color_rect (vnc-enc-tight.c:1633)
==24942==    by 0x692AB0: tight_send_framebuffer_update (vnc-enc-tight.c:1675)
==24942==    by 0x68A104: vnc_send_framebuffer_update (vnc.c:910)
==24942==    by 0x69D3C0: vnc_worker_thread_loop (vnc-jobs.c:262)
==24942== 
==24942== 262,144 bytes in 2 blocks are possibly lost in loss record 4,665 of 4,683
==24942==    at 0x4C331EA: calloc (vg_replace_malloc.c:762)
==24942==    by 0x556122D: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.5600.4)
==24942==    by 0x78B66DD: deflateInit2_ (in /usr/lib64/libz.so.1.2.11)
==24942==    by 0x68FBAD: tight_init_stream (vnc-enc-tight.c:798)
==24942==    by 0x68FBAD: tight_compress_data (vnc-enc-tight.c:850)
==24942==    by 0x691712: send_mono_rect (vnc-enc-tight.c:1015)
==24942==    by 0x691712: send_sub_rect_nojpeg (vnc-enc-tight.c:1412)
==24942==    by 0x691712: send_sub_rect (vnc-enc-tight.c:1514)
==24942==    by 0x692BC8: tight_send_framebuffer_update (vnc-enc-tight.c:1667)
==24942==    by 0x692BC8: tight_send_framebuffer_update (vnc-enc-tight.c:1644)
==24942==    by 0x69294E: find_large_solid_color_rect (vnc-enc-tight.c:1617)
==24942==    by 0x69294E: tight_send_framebuffer_update (vnc-enc-tight.c:1675)
==24942==    by 0x68A104: vnc_send_framebuffer_update (vnc.c:910)
==24942==    by 0x69D3C0: vnc_worker_thread_loop (vnc-jobs.c:262)
==24942==    by 0x69D5AF: vnc_worker_thread (vnc-jobs.c:324)
==24942==    by 0x773953: qemu_thread_start (qemu-thread-posix.c:502)
==24942==    by 0xA0B42DD: start_thread (in /usr/lib64/libpthread-2.28.so)


Actual results:
Memory is leaked on every VNC client disconnect.

Expected results:
No memory leak.

Additional info:
Not reproduced on qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc.x86_64 and qemu-kvm-rhev-2.12.0-38.el7.x86_64

I found a VNC memleak fix in upstream QEMU 4.2:

commit 6bf21f3d83
Author: Li Qiang <liq3ea>
Date:   Sat Aug 31 08:39:22 2019 -0700

    vnc: fix memory leak when vnc disconnect
    
    Currently when qemu receives a vnc connect, it creates a 'VncState' to
    represent this connection. In 'vnc_worker_thread_loop' it creates a
    local 'VncState'. The connection 'VncState' and the local 'VncState'
    exchange data in 'vnc_async_encoding_start' and 'vnc_async_encoding_end'.
    In 'zrle_compress_data' it calls 'deflateInit2' to allocate the libz
    library opaque data; the 'VncState' used in 'zrle_compress_data' is the
    local 'VncState'. In 'vnc_zrle_clear' it calls 'deflateEnd' to free the
    libz library opaque data, but the 'VncState' used in 'vnc_zrle_clear' is
    the connection 'VncState'. In the current implementation this leaks
    memory on every vnc disconnect. Following is the ASan backtrace:
    
    Direct leak of 29760 byte(s) in 5 object(s) allocated from:
        0 0xffffa67ef3c3 in __interceptor_calloc (/lib64/libasan.so.4+0xd33c3)
        1 0xffffa65071cb in g_malloc0 (/lib64/libglib-2.0.so.0+0x571cb)
        2 0xffffa5e968f7 in deflateInit2_ (/lib64/libz.so.1+0x78f7)
        3 0xaaaacec58613 in zrle_compress_data ui/vnc-enc-zrle.c:87
        4 0xaaaacec58613 in zrle_send_framebuffer_update ui/vnc-enc-zrle.c:344
        5 0xaaaacec34e77 in vnc_send_framebuffer_update ui/vnc.c:919
        6 0xaaaacec5e023 in vnc_worker_thread_loop ui/vnc-jobs.c:271
        7 0xaaaacec5e5e7 in vnc_worker_thread ui/vnc-jobs.c:340
        8 0xaaaacee4d3c3 in qemu_thread_start util/qemu-thread-posix.c:502
        9 0xffffa544e8bb in start_thread (/lib64/libpthread.so.0+0x78bb)
        10 0xffffa53965cb in thread_start (/lib64/libc.so.6+0xd55cb)
    
    This is because the opaque data allocated in 'deflateInit2' is not freed
    by 'deflateEnd'. 'deflateEnd' calls 'deflateStateCheck', which checks
    whether 's->strm != strm' (libz's internal back-pointer to the stream it
    was initialized with). That check is true here, so 'deflateEnd' just
    returns 'Z_STREAM_ERROR' without freeing the data allocated in
    'deflateInit2'.

    This happens because 'VncState' contains the whole 'VncZrle' by value,
    so when 'deflateInit2' is called, 's->strm' records the address of the
    local 'VncState''s stream. When 'deflateEnd' later runs on the
    connection 'VncState''s copy, the addresses differ and
    's->strm != strm' is true.

    To fix this issue, we need to make the 'zrle' field of 'VncState' a
    pointer. Then the exchange mechanism between the connection 'VncState'
    and the local 'VncState' works as expected. The 'tight' field of
    'VncState' has the same issue, so turn it into a pointer as well.
    
    Reported-by: Ying Fang <fangying1>
    Signed-off-by: Li Qiang <liq3ea>
    Message-id: 20190831153922.121308-1-liq3ea
    Signed-off-by: Gerd Hoffmann <kraxel>
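
To illustrate the mechanism the commit describes, here is a minimal standalone C sketch (not QEMU's actual code; a hypothetical demo, compile with -lz) of why deflateEnd() fails on a by-value copy of a z_stream:

#include <stdio.h>
#include <zlib.h>

int main(void)
{
    z_stream orig = {0};                  /* zalloc/zfree/opaque = Z_NULL */
    if (deflateInit2(&orig, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                     MAX_WBITS, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return 1;

    /* By-value copy, like the VncZrle embedded in a copied VncState. */
    z_stream copy = orig;

    /* zlib's internal state still points back at &orig, so
     * deflateStateCheck() sees 's->strm != strm' and deflateEnd()
     * returns Z_STREAM_ERROR without freeing anything. */
    int ret = deflateEnd(&copy);
    printf("deflateEnd(&copy) = %d (Z_STREAM_ERROR = %d)\n",
           ret, Z_STREAM_ERROR);

    /* Freeing through the original address works, which is why the fix
     * turns the embedded structs into pointers so both sides see the
     * same z_stream. */
    return deflateEnd(&orig) == Z_OK ? 0 : 1;
}

The copy's deflateEnd() returns Z_STREAM_ERROR (-2): the same silent failure path that leaks the zlib state in the bug above.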


We can backport it to downstream 4.1.

I am not sure whether other versions are affected; please check.
As for the impact, this should be treated as a security issue: a malicious VNC client could repeatedly connect and disconnect to exhaust host memory.

Comment 4 Ademar Reis 2020-02-05 23:12:04 UTC
QEMU has recently been split into sub-components. As a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component, if necessary, the next time you review this BZ. Thanks.

Comment 10 Guo, Zhiyi 2020-03-09 01:53:44 UTC
Verified per comment 9

Comment 13 errata-xmlrpc 2020-05-05 09:55:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

