Bug 1293228 - Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: disperse
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assigned To: Bug Updates Notification Mailing List
QA Contact: Bhaskarakiran
Keywords: ZStream
Depends On: 1293223
Blocks: 1293224
Reported: 2015-12-21 01:55 EST by Ashish Pandey
Modified: 2016-11-23 18:11 EST (History)
CC: 7 users

See Also:
Fixed In Version: glusterfs-3.7.5-13
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1293223
Environment:
Last Closed: 2016-03-01 01:04:40 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ashish Pandey 2015-12-21 01:55:54 EST
+++ This bug was initially created as a clone of Bug #1293223 +++

Description of problem:

The disperse volume crashes while writing multiple files with multiple threads on a FUSE-mounted tier volume.

Version-Release number of selected component (if applicable):
[root@apandey glusterfs]# glusterfs --version
glusterfs 3.8dev built on Dec 21 2015 10:49:16
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.


How reproducible:
100%

Steps to Reproduce:
1. Create a tier volume with a 2 x (4+2) distributed-disperse cold tier and a 6 x 2 distributed-replicate hot tier.
2. Mount it through FUSE.
3. Start writing various files with multiple threads on the mount point:
crefi --multi -n 10 -b 10 -d 10 --max=1024k --min=5k --random -T 5 -t text -I 5 --fop=create /mnt/gfs
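For reference, the steps above can be sketched as a shell session. The brick paths and volume name are taken from the `gluster v info` output below; the single-node hostname `apandey` and the GlusterFS 3.7-era `attach-tier` CLI syntax are assumptions, so adjust for your environment:

```shell
# Sketch of the reproduction setup (assumed hostname: apandey).

# 1. Create the cold tier: 12 bricks as 2 x (4+2) distributed-disperse.
gluster volume create vol disperse 6 redundancy 2 \
    apandey:/brick/gluster/v{1..12} force
gluster volume start vol

# 2. Attach the hot tier: 12 bricks as 6 x 2 distributed-replicate.
gluster volume attach-tier vol replica 2 \
    apandey:/brick/gluster/r{1..12} force

# 3. Mount over FUSE and drive multi-threaded file creation with crefi.
mount -t glusterfs apandey:/vol /mnt/gfs
crefi --multi -n 10 -b 10 -d 10 --max=1024k --min=5k --random \
    -T 5 -t text -I 5 --fop=create /mnt/gfs
```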

Actual results:
After some time (1 to 30 minutes), the disperse volume crashes.


Expected results:
No crash should occur, and all read, write, and modify operations should succeed.

Additional info:
[root@apandey glusterfs]# gluster v info
 
Volume Name: vol
Type: Tier
Volume ID: a9007561-0c50-463c-b37d-59f3992f339e
Status: Started
Number of Bricks: 24
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: apandey:/brick/gluster/r12
Brick2: apandey:/brick/gluster/r11
Brick3: apandey:/brick/gluster/r10
Brick4: apandey:/brick/gluster/r9
Brick5: apandey:/brick/gluster/r8
Brick6: apandey:/brick/gluster/r7
Brick7: apandey:/brick/gluster/r6
Brick8: apandey:/brick/gluster/r5
Brick9: apandey:/brick/gluster/r4
Brick10: apandey:/brick/gluster/r3
Brick11: apandey:/brick/gluster/r2
Brick12: apandey:/brick/gluster/r1
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick13: apandey:/brick/gluster/v1
Brick14: apandey:/brick/gluster/v2
Brick15: apandey:/brick/gluster/v3
Brick16: apandey:/brick/gluster/v4
Brick17: apandey:/brick/gluster/v5
Brick18: apandey:/brick/gluster/v6
Brick19: apandey:/brick/gluster/v7
Brick20: apandey:/brick/gluster/v8
Brick21: apandey:/brick/gluster/v9
Brick22: apandey:/brick/gluster/v10
Brick23: apandey:/brick/gluster/v11
Brick24: apandey:/brick/gluster/v12
Options Reconfigured:
cluster.tier-demote-frequency: 60
cluster.tier-promote-frequency: 60
cluster.write-freq-threshold: 1
cluster.read-freq-threshold: 1
features.record-counters: on
cluster.watermark-hi: 5
cluster.watermark-low: 1
cluster.tier-mode: cache
features.ctr-enabled: on
diagnostics.client-log-level: WARNING
performance.readdir-ahead: on
Comment 4 Bhaskarakiran 2015-12-28 06:45:32 EST
Ran the crefi tool and other loads and did not see the crash. Marking this as fixed.
Comment 6 errata-xmlrpc 2016-03-01 01:04:40 EST
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
