Bug 1313290 - [HC] glusterfs mount crashed
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: unspecified
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assigned To: Krutika Dhananjay
QA Contact: SATHEESARAN
Keywords: ZStream
Depends On:
Blocks: Gluster-HC-1 1299184 1313293 1313315
 
Reported: 2016-03-01 05:36 EST by RamaKasturi
Modified: 2016-09-17 10:40 EDT
CC List: 7 users

See Also:
Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1313293
Environment:
RHEV 3.6.3 beta RHEL 7.2 RHEV+RHGS HC
Last Closed: 2016-06-23 01:09:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description RamaKasturi 2016-03-01 05:36:53 EST
Description of problem:
In an HC setup with three nodes, when the first host loses its network connectivity and the node then comes back up, the glusterfs mount for the engine domain is no longer present and the mount process has crashed.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-18.32.giteb76d88.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
1. Install the HC setup.
2. Once the engine VM is up and running, add all the other hosts to the engine.
3. Make sure that none of the interfaces has an IP on the host where the engine VM is currently running.
4. Log in to that host and run dhclient on the interface that has no IP attached.
5. Once the host is back up, log in and try to connect to the engine VM (a supplementary fsync sketch follows these steps).
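
Supplementary sketch (not part of the original steps): the backtrace in comment 2 shows the crash on the fsync path of the FUSE mount, with op_errno=107 (ENOTCONN). The minimal C program below exercises the same client-side write()+fsync() path on a file under the engine-domain mount. The mount path used here is only a placeholder assumption and must be replaced with the actual hosted-engine storage domain path.

/* fsync_check.c -- supplementary sketch, not part of the original steps.
 * Opens a file under the (assumed) engine-domain glusterfs mount and
 * issues write()+fsync(), the same operation that travels the
 * FUSE -> write-behind -> shard -> DHT -> AFR path in the backtrace. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder path -- substitute the real hosted-engine mount. */
    const char *path = "/rhev/data-center/mnt/glusterSD/ENGINE_MOUNT/fsync-check";

    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");                 /* mount missing or unreachable */
        return 1;
    }

    const char buf[] = "fsync path check\n";
    if (write(fd, buf, sizeof(buf) - 1) < 0) {
        perror("write");
        close(fd);
        return 1;
    }

    if (fsync(fd) < 0) {
        /* On a client that has lost its brick connections this can fail
         * with ENOTCONN (107), matching op_errno=107 in frame #0. */
        fprintf(stderr, "fsync failed: %s (errno=%d)\n", strerror(errno), errno);
        close(fd);
        return 1;
    }

    printf("write+fsync succeeded on %s\n", path);
    close(fd);
    return 0;
}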

Actual results:
The user is unable to connect to the VM, the engine volume is no longer mounted, and there is a core dump on the system.

Expected results:
The user should be able to connect to the VM and should not see any crashes.

Additional info:
Comment 2 RamaKasturi 2016-03-01 05:39:16 EST
Backtrace from the system:
==============================
(gdb) bt
#0  shard_fsync_cbk (frame=frame@entry=0x7fe701c2facc, cookie=0x7fe701c10ba0, this=0x7fe6f800d1c0, op_ret=op_ret@entry=-1, op_errno=op_errno@entry=107, prebuf=prebuf@entry=0x0, postbuf=postbuf@entry=0x0,
    xdata=xdata@entry=0x0) at shard.c:3884
#1  0x00007fe6fc4e915f in dht_fsync_cbk (frame=0x7fe701c10ba0, cookie=<optimized out>, this=<optimized out>, op_ret=-1, op_errno=107, prebuf=0x0, postbuf=0x0, xdata=0x0) at dht-inode-read.c:861
#2  0x00007fe6fc74dbe1 in afr_fsync (frame=0x7fe701c33540, this=<optimized out>, fd=0x7fe6f80b9dcc, datasync=1, xdata=0x0) at afr-common.c:2969
#3  0x00007fe6fc4ebb19 in dht_fsync (frame=0x7fe701c10ba0, this=<optimized out>, fd=0x7fe6f80b9dcc, datasync=1, xdata=0x0) at dht-inode-read.c:930
#4  0x00007fe6fc2814d5 in shard_fsync (frame=0x7fe701c2facc, this=0x7fe6f800d1c0, fd=0x7fe6f80b9dcc, datasync=1, xdata=0x0) at shard.c:3894
#5  0x00007fe6fc070935 in wb_fsync_helper (frame=0x7fe701c3d074, this=0x7fe6f800e630, fd=0x7fe6f80b9dcc, datasync=1, xdata=0x0) at write-behind.c:1760
#6  0x00007fe70412b17d in call_resume (stub=0x7fe7016daf94) at call-stub.c:2576
#7  0x00007fe6fc073f29 in wb_do_winds (wb_inode=wb_inode@entry=0x7fe6e8041d20, tasks=tasks@entry=0x7fe6f5dcf990) at write-behind.c:1460
#8  0x00007fe6fc074037 in wb_process_queue (wb_inode=wb_inode@entry=0x7fe6e8041d20) at write-behind.c:1495
#9  0x00007fe6fc074c28 in wb_fsync (frame=0x7fe701c3d074, this=0x7fe6f800e630, fd=0x7fe6f80b9dcc, datasync=1, xdata=0x0) at write-behind.c:1785
#10 0x00007fe7040ff4cd in default_fsync (frame=0x7fe701c3d074, this=0x7fe6f800fa00, fd=0x7fe6f80b9dcc, flags=1, xdata=0x0) at defaults.c:1818
#11 0x00007fe70410b8d5 in default_fsync_resume (frame=0x7fe701c30aec, this=0x7fe6f8010dd0, fd=0x7fe6f80b9dcc, flags=1, xdata=0x0) at defaults.c:1377
#12 0x00007fe70412b17d in call_resume (stub=0x7fe701722a6c) at call-stub.c:2576
#13 0x00007fe6f7bf5648 in open_and_resume (this=this@entry=0x7fe6f8010dd0, fd=fd@entry=0x7fe6f80b9dcc, stub=0x7fe701722a6c) at open-behind.c:242
#14 0x00007fe6f7bf5a62 in ob_fsync (frame=0x7fe701c30aec, this=0x7fe6f8010dd0, fd=0x7fe6f80b9dcc, flag=<optimized out>, xdata=<optimized out>) at open-behind.c:499
#15 0x00007fe6f79dad20 in io_stats_fsync (frame=0x7fe701c3b238, this=0x7fe6f8012180, fd=0x7fe6f80b9dcc, flags=1, xdata=0x0) at io-stats.c:2207
#16 0x00007fe7040ff4cd in default_fsync (frame=0x7fe701c3b238, this=0x7fe6f8013660, fd=0x7fe6f80b9dcc, flags=1, xdata=0x0) at defaults.c:1818
#17 0x00007fe6f77c538b in meta_fsync (frame=0x7fe701c3b238, this=0x7fe6f8013660, fd=0x7fe6f80b9dcc, flags=1, xdata=0x0) at meta.c:176
#18 0x00007fe7012b1697 in fuse_fsync_resume (state=0x7fe6e8046590) at fuse-bridge.c:2489
#19 0x00007fe7012a8ec5 in fuse_resolve_done (state=<optimized out>) at fuse-resolve.c:665
#20 fuse_resolve_all (state=<optimized out>) at fuse-resolve.c:692
#21 0x00007fe7012a8c08 in fuse_resolve (state=0x7fe6e8046590) at fuse-resolve.c:656
#22 0x00007fe7012a8f0e in fuse_resolve_all (state=<optimized out>) at fuse-resolve.c:688
#23 0x00007fe7012a8373 in fuse_resolve_continue (state=state@entry=0x7fe6e8046590) at fuse-resolve.c:708
#24 0x00007fe7012a8ba8 in fuse_resolve_fd (state=0x7fe6e8046590) at fuse-resolve.c:568
#25 fuse_resolve (state=0x7fe6e8046590) at fuse-resolve.c:645
#26 0x00007fe7012a8eee in fuse_resolve_all (state=<optimized out>) at fuse-resolve.c:681
#27 0x00007fe7012a8f30 in fuse_resolve_and_resume (state=0x7fe6e8046590, fn=0x7fe7012b14a0 <fuse_fsync_resume>) at fuse-resolve.c:720
#28 0x00007fe7012bbcde in fuse_thread_proc (data=0x7fe705bceac0) at fuse-bridge.c:4944
#29 0x00007fe702f63dc5 in start_thread (arg=0x7fe6f5dd0700) at pthread_create.c:308
#30 0x00007fe7028aa28d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
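
Frame #0 shows shard_fsync_cbk() entered with op_ret=-1, op_errno=107 (ENOTCONN) and both prebuf and postbuf NULL, i.e. a failed fsync whose callback still receives NULL stat buffers. The sketch below is only a hypothetical illustration of that callback pattern and of the kind of op_ret guard that avoids dereferencing the NULL buffers; it is not the actual shard.c code and not the upstream patch.

/* cbk_guard.c -- hypothetical illustration only, not the actual shard.c
 * code. Shows how a translator-style fsync callback crashes when it
 * touches the stat buffers on a failed call (op_ret < 0), where
 * prebuf/postbuf are NULL -- as in frame #0 above -- and how a guard on
 * op_ret avoids the NULL dereference. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>

struct iatt {                    /* stand-in for gluster's struct iatt */
    uint64_t ia_size;
};

/* Buggy pattern: unconditionally dereferences the stat buffers. */
int fsync_cbk_buggy(int op_ret, int op_errno,
                    struct iatt *prebuf, struct iatt *postbuf)
{
    (void)op_ret; (void)op_errno;
    uint64_t delta = postbuf->ia_size - prebuf->ia_size;  /* SIGSEGV when NULL */
    printf("delta=%llu\n", (unsigned long long)delta);
    return 0;
}

/* Guarded pattern: bail out on failure before touching the buffers. */
int fsync_cbk_guarded(int op_ret, int op_errno,
                      struct iatt *prebuf, struct iatt *postbuf)
{
    if (op_ret < 0) {
        fprintf(stderr, "fsync failed: %s\n", strerror(op_errno));
        return -1;               /* real code would unwind the error */
    }
    printf("delta=%llu\n",
           (unsigned long long)(postbuf->ia_size - prebuf->ia_size));
    return 0;
}

int main(void)
{
    /* Simulate the failed fsync seen in the backtrace. */
    fsync_cbk_guarded(-1, ENOTCONN, NULL, NULL);   /* handled cleanly  */
    /* fsync_cbk_buggy(-1, ENOTCONN, NULL, NULL);     would crash here */
    return 0;
}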
Comment 3 Krutika Dhananjay 2016-03-01 06:51:52 EST
Patches posted upstream: http://review.gluster.org/#/c/13562/
                         http://review.gluster.org/#/c/13563/
Comment 6 SATHEESARAN 2016-04-19 09:39:25 EDT
Tested with RHGS 3.1.3 nightly (glusterfs-3.7.9-1.el7rhgs):
1. Created a sharded replica 3 volume and optimized the volume for virt-store.
2. Created a GlusterFS storage domain in RHEV using this gluster volume.
3. Created application VMs installed with RHEL 7.

No issues or crashes were seen.
Comment 8 errata-xmlrpc 2016-06-23 01:09:35 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
