Bug 1293228 - Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
Summary: Disperse: Disperse volume (cold vol) crashes while writing files on tier volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: disperse
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Bug Updates Notification Mailing List
QA Contact: Bhaskarakiran
URL:
Whiteboard:
Depends On: 1293223
Blocks: 1293224
 
Reported: 2015-12-21 06:55 UTC by Ashish Pandey
Modified: 2016-11-23 23:11 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.7.5-13
Doc Type: Bug Fix
Doc Text:
Clone Of: 1293223
Environment:
Last Closed: 2016-03-01 06:04:40 UTC
Embargoed:


Links
Red Hat Product Errata RHBA-2016:0193 (Priority: normal, Status: SHIPPED_LIVE)
Summary: Red Hat Gluster Storage 3.1 update 2
Last Updated: 2016-03-01 10:20:36 UTC

Description Ashish Pandey 2015-12-21 06:55:54 UTC
+++ This bug was initially created as a clone of Bug #1293223 +++

Description of problem:

The disperse volume crashes while writing multiple files with multiple threads on a FUSE-mounted tier volume.

Version-Release number of selected component (if applicable):
[root@apandey glusterfs]# glusterfs --version
glusterfs 3.8dev built on Dec 21 2015 10:49:16
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.


How reproducible:
100%

Steps to Reproduce:
1. Create a tier volume with a 2 x (4+2) distributed-disperse cold tier and a 6 x 2 distributed-replicate hot tier (see the command sketch after these steps).
2. Mount it through FUSE.
3. Start writing files with multiple threads on the mount point:
crefi --multi -n 10 -b 10 -d 10 --max=1024k --min=5k --random -T 5 -t text -I 5 --fop=create /mnt/gfs
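
For reference, a minimal sketch of the volume setup on a single node, assuming brick paths like those shown in the volume info below (the attach-tier syntax varies slightly across glusterfs 3.7.x releases):

# 2 x (4+2) distributed-disperse cold tier; force is needed here because
# all bricks live on one host in this test setup
gluster volume create vol disperse 6 redundancy 2 apandey:/brick/gluster/v{1..12} force
gluster volume start vol
# attach a 6 x 2 distributed-replicate hot tier on top of it
gluster volume attach-tier vol replica 2 apandey:/brick/gluster/r{1..12}
# FUSE-mount the tier volume
mount -t glusterfs apandey:/vol /mnt/gfs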

Actual results:
After some time (1 to 30 minutes), a crash occurs in the disperse (cold tier) volume.


Expected results:
No crash should occur, and all read, write, and modify operations should succeed.

Additional info:
[root@apandey glusterfs]# gluster v info
 
Volume Name: vol
Type: Tier
Volume ID: a9007561-0c50-463c-b37d-59f3992f339e
Status: Started
Number of Bricks: 24
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 6 x 2 = 12
Brick1: apandey:/brick/gluster/r12
Brick2: apandey:/brick/gluster/r11
Brick3: apandey:/brick/gluster/r10
Brick4: apandey:/brick/gluster/r9
Brick5: apandey:/brick/gluster/r8
Brick6: apandey:/brick/gluster/r7
Brick7: apandey:/brick/gluster/r6
Brick8: apandey:/brick/gluster/r5
Brick9: apandey:/brick/gluster/r4
Brick10: apandey:/brick/gluster/r3
Brick11: apandey:/brick/gluster/r2
Brick12: apandey:/brick/gluster/r1
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick13: apandey:/brick/gluster/v1
Brick14: apandey:/brick/gluster/v2
Brick15: apandey:/brick/gluster/v3
Brick16: apandey:/brick/gluster/v4
Brick17: apandey:/brick/gluster/v5
Brick18: apandey:/brick/gluster/v6
Brick19: apandey:/brick/gluster/v7
Brick20: apandey:/brick/gluster/v8
Brick21: apandey:/brick/gluster/v9
Brick22: apandey:/brick/gluster/v10
Brick23: apandey:/brick/gluster/v11
Brick24: apandey:/brick/gluster/v12
Options Reconfigured:
cluster.tier-demote-frequency: 60
cluster.tier-promote-frequency: 60
cluster.write-freq-threshold: 1
cluster.read-freq-threshold: 1
features.record-counters: on
cluster.watermark-hi: 5
cluster.watermark-low: 1
cluster.tier-mode: cache
features.ctr-enabled: on
diagnostics.client-log-level: WARNING
performance.readdir-ahead: on
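
For reference, the tiering thresholds listed above are applied with 'gluster volume set'; a sketch using the values from this setup:

gluster volume set vol cluster.tier-mode cache
gluster volume set vol cluster.tier-demote-frequency 60
gluster volume set vol cluster.tier-promote-frequency 60
gluster volume set vol cluster.watermark-hi 5
gluster volume set vol cluster.watermark-low 1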

Comment 4 Bhaskarakiran 2015-12-28 11:45:32 UTC
Ran the crefi tool and other load tests and did not see the crash. Marking this as fixed.

Comment 6 errata-xmlrpc 2016-03-01 06:04:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

