Bug 1452919 - heap-buffer-overflow in gluster-blockd
Summary: heap-buffer-overflow in gluster-blockd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-block
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Pranith Kumar K
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1417151
 
Reported: 2017-05-20 10:40 UTC by Pranith Kumar K
Modified: 2017-09-21 04:19 UTC (History)
7 users

Fixed In Version: gluster-block-0.2.1-1.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:19:33 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:2773 0 normal SHIPPED_LIVE new packages: gluster-block 2017-09-21 08:16:22 UTC

Description Pranith Kumar K 2017-05-20 10:40:52 UTC
Description of problem:

When the command to create a gluster block is executed, gluster-blockd crashes with the following core:

[root@localhost ~]# gluster-blockd 
Cannot register service: RPC: Unable to receive; errno = Connection refused
=================================================================
==22873==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6120000224cf at pc 0x7f073b9829b7 bp 0x7f07367fc400 sp 0x7f07367fbba8
WRITE of size 17 at 0x6120000224cf thread T1
    #0 0x7f073b9829b6 in strcpy (/lib64/libasan.so.3+0x919b6)
    #1 0x424416 in blockStuffMetaInfo /root/gluster-block/rpc/glfs-operations.c:305
    #2 0x424bc6 in blockGetMetaInfo /root/gluster-block/rpc/glfs-operations.c:363
    #3 0x412988 in glusterBlockAuditRequest /root/gluster-block/rpc/block_svc_routines.c:1097
    #4 0x41b4ef in block_create_cli_1_svc /root/gluster-block/rpc/block_svc_routines.c:1772
    #5 0x404bbe in gluster_block_cli_1 /root/gluster-block/rpc/rpcl/block_svc.c:132
    #6 0x7f073a4dc2a0 in svc_getreq_common (/lib64/libc.so.6+0x13a2a0)
    #7 0x7f073a4dc3e6 in svc_getreq_poll (/lib64/libc.so.6+0x13a3e6)
    #8 0x7f073a4dfd00 in svc_run (/lib64/libc.so.6+0x13dd00)
    #9 0x4036ef in glusterBlockCliThreadProc /root/gluster-block/daemon/gluster-blockd.c:101
    #10 0x7f073b6da6c9 in start_thread (/lib64/libpthread.so.0+0x76c9)
    #11 0x7f073a4a9f6e in clone (/lib64/libc.so.6+0x107f6e)

0x6120000224cf is located 0 bytes to the right of 271-byte region [0x6120000223c0,0x6120000224cf)
allocated by thread T1 here:
    #0 0x7f073b9b8020 in calloc (/lib64/libasan.so.3+0xc7020)
    #1 0x42592f in gbAlloc /root/gluster-block/utils/utils.c:98
    #2 0x424347 in blockStuffMetaInfo /root/gluster-block/rpc/glfs-operations.c:302
    #3 0x424bc6 in blockGetMetaInfo /root/gluster-block/rpc/glfs-operations.c:363
    #4 0x412988 in glusterBlockAuditRequest /root/gluster-block/rpc/block_svc_routines.c:1097
    #5 0x41b4ef in block_create_cli_1_svc /root/gluster-block/rpc/block_svc_routines.c:1772
    #6 0x404bbe in gluster_block_cli_1 /root/gluster-block/rpc/rpcl/block_svc.c:132
    #7 0x7f073a4dc2a0 in svc_getreq_common (/lib64/libc.so.6+0x13a2a0)
    #8 0x7f073c8a2db3 in _dl_fixup (/lib64/ld-linux-x86-64.so.2+0xfdb3)

Thread T1 created by T0 here:
    #0 0x7f073b922488 in __interceptor_pthread_create (/lib64/libasan.so.3+0x31488)
    #1 0x404296 in main /root/gluster-block/daemon/gluster-blockd.c:212
    #2 0x7f073a3c2400 in __libc_start_main (/lib64/libc.so.6+0x20400)

SUMMARY: AddressSanitizer: heap-buffer-overflow (/lib64/libasan.so.3+0x919b6) in strcpy
Shadow bytes around the buggy address:
  0x0c247fffc440: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c247fffc450: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c247fffc460: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c247fffc470: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c247fffc480: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c247fffc490: 00 00 00 00 00 00 00 00 00[07]fa fa fa fa fa fa
  0x0c247fffc4a0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x0c247fffc4b0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x0c247fffc4c0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa fa
  0x0c247fffc4d0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x0c247fffc4e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Heap right redzone:      fb
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack partial redzone:   f4
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==22873==ABORTING

typedef struct NodeInfo {
  char addr[255];
  char status[16];
} NodeInfo;

>>> len('CONFIGINPROGRESS')
16
>>> 
[root@localhost block-meta]# cat test1-img 
VOLUME: block-test
GBID: 91d5ace3-e17b-4f02-a792-51c8a8e94940
SIZE: 1073741824
HA: 3
ENTRYCREATE: INPROGRESS
ENTRYCREATE: SUCCESS
192.168.122.61: CONFIGINPROGRESS
192.168.122.123: CONFIGINPROGRESS
192.168.122.113: CONFIGINPROGRESS

Because the status string CONFIGINPROGRESS is exactly 16 characters long, strcpy() writes 17 bytes (16 characters plus the NUL terminator) into the 16-byte info->status field, overflowing it by 1 byte and causing the crash.



Comment 2 Pranith Kumar K 2017-05-20 11:03:09 UTC
Also observed that when the remote ports are not connected, the originator's gluster-blockd crashes with the following trace:
==25588==ABORTING
[root@localhost gluster-block]# gluster-blockd 
Cannot register service: RPC: Unable to receive; errno = Connection refused
ASAN:DEADLYSIGNAL
=================================================================
==25691==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f52abd6e046 bp 0x7f52a80fe300 sp 0x7f52a80fda68 T1)
    #0 0x7f52abd6e045 in strlen (/lib64/libc.so.6+0x8d045)
    #1 0x7f52ad2c8f0b in xdr_string (/lib64/libasan.so.3+0x98f0b)
    #2 0x405555 in xdr_blockResponse /root/gluster-block/rpc/rpcl/block_xdr.c:188
    #3 0x7f52abe1db1f in xdr_union (/lib64/libc.so.6+0x13cb1f)
    #4 0x7f52abe15673 in svcunix_reply (/lib64/libc.so.6+0x134673)
    #5 0x7f52abe1adea in svc_sendreply (/lib64/libc.so.6+0x139dea)
    #6 0x404be8 in gluster_block_cli_1 /root/gluster-block/rpc/rpcl/block_svc.c:133
    #7 0x7f52abe1b2a0 in svc_getreq_common (/lib64/libc.so.6+0x13a2a0)
    #8 0x7f52abe1b3e6 in svc_getreq_poll (/lib64/libc.so.6+0x13a3e6)
    #9 0x7f52abe1ed00 in svc_run (/lib64/libc.so.6+0x13dd00)
    #10 0x4036ef in glusterBlockCliThreadProc /root/gluster-block/daemon/gluster-blockd.c:101
    #11 0x7f52ad0196c9 in start_thread (/lib64/libpthread.so.0+0x76c9)
    #12 0x7f52abde8f6e in clone (/lib64/libc.so.6+0x107f6e)

Because reply->out is NULL.
(gdb) p *reply
$1 = {exit = 255, out = 0x0, offset = 0, xdata = {xdata_len = 0, xdata_val = 0x0}}

Comment 3 Prasanna Kumar Kalever 2017-05-22 06:52:38 UTC
Fix for bug in description:
https://review.gluster.org/#/c/17353/

Comment 2 may have to be fixed in a separate patch; details on how to reproduce it are still needed.

Comment 6 Pranith Kumar K 2017-06-04 17:26:41 UTC
https://review.gluster.org/17430

Comment 11 Sweta Anandpara 2017-07-17 10:56:12 UTC
A round of testing has taken place on glusterfs-3.8.4-33 and gluster-block-0.2.1-6. I do not see any crashes or anything unexpected in the gluster-block logs.

Based on comments 9 and 10 - the developer's inputs as well as the release lead's - moving this bug to (conditionally) verified.

Comment 13 errata-xmlrpc 2017-09-21 04:19:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773

