Bug 1668309 - Fuse mount crashed while creating a preallocated image of size > 1TB
Summary: Fuse mount crashed while creating a preallocated image of size > 1TB
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.5.z Async
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1668304 1669077 1669382
Blocks:
 
Reported: 2019-01-22 12:25 UTC by SATHEESARAN
Modified: 2019-05-20 04:54 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1668304
Environment:
Last Closed: 2019-05-20 04:54:44 UTC
Embargoed:



Description SATHEESARAN 2019-01-22 12:25:20 UTC
Description of problem:
------------------------
Fuse mount crashed while creating a preallocated image larger than 1 TB on an arbitrated replicate volume

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHGS 3.4.3 nightly (glusterfs-3.12.2-38.el7rhgs)
RHV 4.2.8

How reproducible:
-----------------
1 out of 2 times

Steps to Reproduce:
-------------------
1. Create an arbitrated replicate volume (example commands below)
2. Fuse mount it
3. Create a preallocated image larger than 1 TB:
# qemu-img create -f qcow2 -o preallocation=falloc /mnt/test1/vm1.img 1072G
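
For reference, steps 1 and 2 can be run with commands along these lines. The volume name "data" and mount point /mnt/test1 come from the backtrace in comment 1; the host names and brick paths are illustrative:

# gluster volume create data replica 3 arbiter 1 \
      host1:/bricks/brick1/data host2:/bricks/brick1/data host3:/bricks/brick1/data
# gluster volume start data
# mount -t glusterfs localhost:/data /mnt/test1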

Actual results:
---------------
Fuse mount crashed with a segfault

Expected results:
-----------------
Preallocated image should be created successfully

Comment 1 SATHEESARAN 2019-01-22 12:25:51 UTC
Backtrace
-----------

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfs --volfile-server=localhost --volfile-id=/data /mnt/test1'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fb1233e0a7a in __inode_get_xl_index (xlator=0x7fb1100143b0, inode=0x7fb104026258) at inode.c:455
455	        if ((inode->_ctx[xlator->xl_id].xl_key != NULL) &&
(gdb) bt
#0  0x00007fb1233e0a7a in __inode_get_xl_index (xlator=0x7fb1100143b0, inode=0x7fb104026258) at inode.c:455
#1  __inode_ref (inode=inode@entry=0x7fb104026258) at inode.c:537
#2  0x00007fb1233e0b81 in inode_ref (inode=inode@entry=0x7fb104026258) at inode.c:581
#3  0x00007fb1233f5d2b in __fd_create (inode=inode@entry=0x7fb104026258, pid=pid@entry=0) at fd.c:633
#4  0x00007fb1233f6f4a in __fd_anonymous (inode=inode@entry=0x7fb104026258, flags=flags@entry=2) at fd.c:779
#5  0x00007fb1233f729d in fd_anonymous (inode=0x7fb104026258) at fd.c:803
#6  0x00007fb115161534 in shard_post_lookup_fsync_handler (frame=0x7fb0c05eb178, this=0x7fb1100143b0) at shard.c:5936
#7  0x00007fb11514913c in shard_lookup_base_file (frame=frame@entry=0x7fb0c05eb178, this=this@entry=0x7fb1100143b0, loc=loc@entry=0x7fb10800a158, 
    handler=handler@entry=0x7fb115161030 <shard_post_lookup_fsync_handler>) at shard.c:1746
#8  0x00007fb1151544c3 in shard_fsync (frame=0x7fb0c05eb178, this=0x7fb1100143b0, fd=0x7fb0f800eb78, datasync=1, xdata=0x0) at shard.c:6015
#9  0x00007fb114f30189 in wb_fsync_helper (frame=0x7fb0f80022e8, this=0x7fb1100159d0, fd=0x7fb0f800eb78, datasync=1, xdata=0x0) at write-behind.c:1974
#10 0x00007fb1233f5b15 in call_resume_keep_stub (stub=0x7fb0f80250f8) at call-stub.c:2582
#11 0x00007fb114f35a69 in wb_do_winds (wb_inode=wb_inode@entry=0x7fb0f800dd70, tasks=tasks@entry=0x7fb10dae7510) at write-behind.c:1672
#12 0x00007fb114f35b7b in wb_process_queue (wb_inode=wb_inode@entry=0x7fb0f800dd70) at write-behind.c:1709
#13 0x00007fb114f35c57 in wb_fulfill_cbk (frame=0x7fb0f8010a58, cookie=<optimized out>, this=<optimized out>, op_ret=<optimized out>, op_errno=<optimized out>, prebuf=<optimized out>, 
    postbuf=0x7fb0f8003670, xdata=0x7fb0c065ee98) at write-behind.c:1054
#14 0x00007fb115156840 in shard_common_inode_write_success_unwind (fop=<optimized out>, frame=0x7fb0f80019b8, op_ret=65536) at shard.c:903
#15 0x00007fb115156bc0 in shard_common_inode_write_post_update_size_handler (frame=<optimized out>, this=<optimized out>) at shard.c:5214
#16 0x00007fb115147cc0 in shard_update_file_size (frame=frame@entry=0x7fb0f80019b8, this=this@entry=0x7fb1100143b0, fd=0x7fb0f800eb78, loc=loc@entry=0x0, 
    handler=handler@entry=0x7fb115156ba0 <shard_common_inode_write_post_update_size_handler>) at shard.c:1201
#17 0x00007fb11515e811 in shard_common_inode_write_do_cbk (frame=frame@entry=0x7fb0f80019b8, cookie=0x7fb0f800eb78, this=0x7fb1100143b0, op_ret=op_ret@entry=65536, 
    op_errno=op_errno@entry=0, pre=pre@entry=0x7fb0f8029730, post=post@entry=0x7fb0f80297a0, xdata=xdata@entry=0x7fb0c065ee98) at shard.c:5326
#18 0x00007fb1153d467e in dht_writev_cbk (frame=0x7fb0f80021d8, cookie=<optimized out>, this=<optimized out>, op_ret=65536, op_errno=0, prebuf=0x7fb0f8029730, postbuf=0x7fb0f80297a0, 
    xdata=0x7fb0c065ee98) at dht-inode-write.c:119
#19 0x00007fb115630b32 in afr_writev_unwind (frame=frame@entry=0x7fb0f8004888, this=this@entry=0x7fb11000fff0) at afr-inode-write.c:246
#20 0x00007fb11563105e in afr_writev_wind_cbk (frame=0x7fb0f800bd08, cookie=<optimized out>, this=0x7fb11000fff0, op_ret=<optimized out>, op_errno=<optimized out>, 
    prebuf=<optimized out>, postbuf=0x7fb10dae7990, xdata=0x7fb0c065ee98) at afr-inode-write.c:406
#21 0x00007fb1158a7ffa in client3_3_writev_cbk (req=<optimized out>, iov=<optimized out>, count=<optimized out>, myframe=0x7fb0f802d148) at client-rpc-fops.c:838
#22 0x00007fb123198b30 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7fb11004a940, pollin=pollin@entry=0x7fb10bb95520) at rpc-clnt.c:778
#23 0x00007fb123198ed3 in rpc_clnt_notify (trans=<optimized out>, mydata=0x7fb11004a970, event=<optimized out>, data=0x7fb10bb95520) at rpc-clnt.c:971
#24 0x00007fb123194c33 in rpc_transport_notify (this=this@entry=0x7fb11004ac90, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7fb10bb95520) at rpc-transport.c:552
#25 0x00007fb117d89576 in socket_event_poll_in (this=this@entry=0x7fb11004ac90, notify_handled=<optimized out>) at socket.c:2322
#26 0x00007fb117d8bb1c in socket_event_handler (fd=11, idx=4, gen=1, data=0x7fb11004ac90, poll_in=1, poll_out=0, poll_err=0) at socket.c:2474
#27 0x00007fb12342ee84 in event_dispatch_epoll_handler (event=0x7fb10dae7e80, event_pool=0x56476beb1ec0) at event-epoll.c:583
#28 event_dispatch_epoll_worker (data=0x56476bf0b1d0) at event-epoll.c:659
#29 0x00007fb12222fdd5 in start_thread (arg=0x7fb10dae8700) at pthread_create.c:307
#30 0x00007fb121af7ead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
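
Frame #0 shows the crash in __inode_get_xl_index() while dereferencing inode->_ctx, reached from shard's shard_post_lookup_fsync_handler() creating an anonymous fd (frames #3-#6). A backtrace like the above can be regenerated from the core dump with gdb; the core file path here is illustrative:

# gdb /usr/sbin/glusterfs /var/core/core.glusterfs
(gdb) bt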

Comment 2 SATHEESARAN 2019-01-30 11:58:50 UTC
Tested with glusterfs-3.12.2-40.el7rhgs using the steps described in comment 0:

1. Created many fallocated images on gluster volumes
2. Created fallocated images in parallel (see the example loop below)
3. Also tried the steps mentioned in comment 5

No issues observed
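
For item 2, the parallel creation can be driven with a loop of this shape; the image count and size are illustrative:

# for i in 1 2 3 4 5; do
      qemu-img create -f qcow2 -o preallocation=falloc /mnt/test1/vm${i}.img 1072G &
  done; wait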

