Bug 983061 - [RHS-RHOS] Cinder fuse client crashed in afr_fd_has_witnessed_unstable_write after remove-brick operation
Status: CLOSED DUPLICATE of bug 978802
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Amar Tumballi
QA Contact: Sudhir D
Depends On:
Reported: 2013-07-10 08:03 EDT by Anush Shetty
Modified: 2013-12-18 19:09 EST (History)
8 users (show)

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-07-10 08:33:01 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Anush Shetty 2013-07-10 08:03:25 EDT
Description of problem: On a 6x2 Distributed-Replicate volume, we added 2 more bricks using add-brick, making it a 7x2 Distributed-Replicate cinder volume. We created 10 cinder volumes of 15G each, and 10 Nova instances were created. Two pairs of replica bricks were then removed using remove-brick start, and the removal was committed. While trying to attach the cinder volumes to the instances, the cinder fuse process crashed.

Version-Release number of selected component (if applicable):
RHS: glusterfs-
Cinder: openstack-cinder-2013.1.2-3.el6ost.noarch
Puddle repo:  http://download.lab.bos.redhat.com/rel-eng/OpenStack/Grizzly/2013-07-08.1/puddle.repo

How reproducible: Seen once so far; filing on first occurrence.

Steps to Reproduce:
1. Create 6x2 Distributed-Replicate volume
2. Configure cinder to use RHS 
3. Create cinder volumes
4. Remove brick operations on RHS volume
5. Attach cinder volume to instance
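
A minimal command sketch of the steps above. All brick paths, server names, and instance/volume IDs are placeholders (the actual hostnames were not captured in this report); the volume options match the "Options Reconfigured" list shown in the volume info below.

```shell
# 1. Create and start the initial 6x2 distributed-replicate volume
#    (server{1..4} x 3 bricks = 12 bricks = 6 replica pairs; placeholder paths)
gluster volume create cinder-vol replica 2 \
    server{1,2,3,4}:/bricks/b1/cinder-vol \
    server{1,2,3,4}:/bricks/b2/cinder-vol \
    server{1,2,3,4}:/bricks/b3/cinder-vol
gluster volume start cinder-vol

# 2. Options used for the cinder volume (UID/GID 165 is the cinder user)
gluster volume set cinder-vol storage.owner-uid 165
gluster volume set cinder-vol storage.owner-gid 165
gluster volume set cinder-vol performance.quick-read off
gluster volume set cinder-vol performance.read-ahead off
gluster volume set cinder-vol performance.io-cache off
gluster volume set cinder-vol performance.stat-prefetch off
gluster volume set cinder-vol cluster.eager-lock enable
gluster volume set cinder-vol network.remote-dio on

# 3. Grow to 7x2 with one more replica pair
gluster volume add-brick cinder-vol \
    server1:/bricks/b4/cinder-vol server2:/bricks/b4/cinder-vol

# 4. Remove a replica pair: start migration, wait until complete, then commit
gluster volume remove-brick cinder-vol \
    server1:/bricks/b1/cinder-vol server2:/bricks/b1/cinder-vol start
gluster volume remove-brick cinder-vol \
    server1:/bricks/b1/cinder-vol server2:/bricks/b1/cinder-vol status
gluster volume remove-brick cinder-vol \
    server1:/bricks/b1/cinder-vol server2:/bricks/b1/cinder-vol commit

# 5. Attach a cinder volume to an instance (the fuse client crashed here)
nova volume-attach <instance-id> <cinder-volume-id> auto
```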

Actual results:

Cinder fuse client crashed.

Expected results:

Attaching the cinder volumes to the instances should work seamlessly after the remove-brick operations.

Additional info:

1. RHOS hostname: rhs-client28.lab.eng.blr.redhat.com

2. RHS nodes (hostname and IP address):

3. RHS node from where the gluster commands were executed:

4. Volume info
# gluster volume info
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 19f5abf1-5739-417a-bcff-e56d0a5baa74
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: on

5. Volume status
# gluster volume status cinder-vol
Status of volume: cinder-vol
Gluster process						Port	Online	Pid
Brick				24009	Y	2106
Brick				24009	Y	3243
Brick				24010	Y	2111
Brick				24010	Y	3249
Brick				24009	Y	2683
Brick				24009	Y	14982
Brick				24011	Y	2695
Brick				24011	Y	14992
NFS Server on localhost					38467	Y	15718
Self-heal Daemon on localhost				N/A	Y	15724
NFS Server on				38467	Y	4693
Self-heal Daemon on				N/A	Y	4699
NFS Server on				38467	Y	4660
Self-heal Daemon on				N/A	Y	4666
NFS Server on				38467	Y	25999
Self-heal Daemon on			N/A	Y	26005

6. Mount point on the client: 

7. # tail /var/log/glusterfs/var-lib-nova-mnt-cf55327cba40506e44b37f45f55af5e7.log

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-07-10 15:53:01
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs
Comment 3 shilpa 2013-07-10 08:19:41 EDT
Reproduced the same bug.
Comment 4 Pranith Kumar K 2013-07-10 08:33:01 EDT

*** This bug has been marked as a duplicate of bug 978802 ***
