Bug 1227649 - linux untar hung after the bricks came back up in an 8+4 config
Summary: linux untar hung after the bricks came back up in an 8+4 config
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: disperse
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
: RHGS 3.1.0
Assignee: Pranith Kumar K
QA Contact: Bhaskarakiran
Depends On:
Blocks: 1202842 1223636 1227654 1228160
Reported: 2015-06-03 08:42 UTC by Bhaskarakiran
Modified: 2016-11-23 23:11 UTC (History)
9 users

Fixed In Version: glusterfs-3.7.1-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1227654 (view as bug list)
Last Closed: 2015-07-29 04:55:38 UTC
Target Upstream Version:

Attachments
statedump of the process (261.39 KB, text/plain)
2015-06-03 08:42 UTC, Bhaskarakiran

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Bhaskarakiran 2015-06-03 08:42:48 UTC
Created attachment 1034179 [details]
statedump of the process

Description of problem:
Created a 1x(8+4) disperse volume, FUSE-mounted it on the client, and started a Linux kernel untar. Brought down 2 of the bricks and, after some time, brought them back up with "gluster volume start force". The Linux untar on the client then hangs.

Version-Release number of selected component (if applicable):
[root@transformers ~]# gluster --version
glusterfs 3.7.0 built on Jun  1 2015 07:14:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@transformers ~]# 

How reproducible:

Steps to Reproduce:
1. Create a 1x(8+4) disperse volume and start it.
2. FUSE-mount the volume on a client and start a Linux kernel untar.
3. Bring down 2 of the bricks.
4. After some time, bring the bricks back up with "gluster volume start force".
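The reproduction steps from the description can be sketched as the following command sequence. This is a hypothetical sketch, not the reporter's exact commands: the mount point, tarball path, and the choice of which 2 bricks to kill are assumptions, and it requires a live 3-node cluster with the hosts named in the volume info below.

```shell
#!/bin/sh
# Hypothetical repro sketch for bug 1227649 (assumed paths/hosts).
VOL=vol1
MNT=/mnt/vol1   # assumed client mount point

# 1. Create and start a 1x(8+4) disperse volume (12 bricks, redundancy 4),
#    in the brick order shown in the volume info.
gluster volume create $VOL disperse 12 redundancy 4 \
    interstellar:/rhs/brick1/$VOL transformers:/rhs/brick1/$VOL ninja:/rhs/brick1/$VOL \
    interstellar:/rhs/brick2/$VOL transformers:/rhs/brick2/$VOL ninja:/rhs/brick2/$VOL \
    interstellar:/rhs/brick3/$VOL transformers:/rhs/brick3/$VOL ninja:/rhs/brick3/$VOL \
    interstellar:/rhs/brick4/$VOL transformers:/rhs/brick4/$VOL ninja:/rhs/brick4/$VOL
gluster volume start $VOL

# 2. FUSE-mount on the client and start a kernel untar in the background.
mount -t glusterfs transformers:/$VOL $MNT
( cd $MNT && tar xf /root/linux.tar.xz ) &   # assumed tarball location

# 3. Kill 2 brick processes (within the redundancy of 4, so I/O should
#    continue). Brick PIDs can be read from 'gluster volume status'.
gluster volume status $VOL   # note the PIDs of 2 bricks, then: kill -9 <pid> <pid>

# 4. After some time, bring the bricks back up; the untar then hangs.
gluster volume start $VOL force
```

With an 8+4 configuration the volume tolerates up to 4 simultaneous brick failures, so the hang appears only after the downed bricks rejoin, not while they are down.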

Actual results:
linux untar hangs

Expected results:
no hangs

Additional info:

volume info 
[root@transformers ~]# gluster v info vol1
Volume Name: vol1
Type: Disperse
Volume ID: e7e43939-3d7e-4052-8242-2067cd803d7f
Status: Started
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Brick1: interstellar:/rhs/brick1/vol1
Brick2: transformers:/rhs/brick1/vol1
Brick3: ninja:/rhs/brick1/vol1
Brick4: interstellar:/rhs/brick2/vol1
Brick5: transformers:/rhs/brick2/vol1
Brick6: ninja:/rhs/brick2/vol1
Brick7: interstellar:/rhs/brick3/vol1
Brick8: transformers:/rhs/brick3/vol1
Brick9: ninja:/rhs/brick3/vol1
Brick10: interstellar:/rhs/brick4/vol1
Brick11: transformers:/rhs/brick4/vol1
Brick12: ninja:/rhs/brick4/vol1
Options Reconfigured:
performance.readdir-ahead: off
features.uss: off
features.quota: off
features.inode-quota: off
client.event-threads: 2
server.event-threads: 2
cluster.disperse-self-heal-daemon: enable
[root@transformers ~]# 

A statedump of the process is attached.
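For reference, a client-side statedump like the attached one can be generated by sending SIGUSR1 to the glusterfs FUSE client process; dumps are written under /var/run/gluster by default. This is a generic sketch of that procedure, not the reporter's exact commands, and the pgrep pattern is an assumption.

```shell
#!/bin/sh
# Hypothetical sketch: take a statedump of the FUSE client for vol1.
# Find the glusterfs client process serving the mount (pattern assumed).
CLIENT_PID=$(pgrep -f 'glusterfs.*vol1')

# The client writes a statedump on SIGUSR1.
kill -USR1 "$CLIENT_PID"

# Dump files appear as glusterdump.<pid>.dump.<timestamp>.
ls /var/run/gluster/glusterdump."$CLIENT_PID".*
```

Brick-side statedumps can instead be requested with `gluster volume statedump vol1`.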

Comment 2 Pranith Kumar K 2015-06-04 03:21:50 UTC
Please mark this as blocker+

Comment 3 Bhaskarakiran 2015-06-04 10:28:11 UTC
On a similar note, "du -sh ." on the client mount point also hangs. This looks similar to the Linux untar issue.

Comment 5 Bhaskarakiran 2015-06-17 05:55:25 UTC
Verified this on 3.7.1-3 build and didn't see the issue.

Comment 6 errata-xmlrpc 2015-07-29 04:55:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

