Bug 1331334

Summary: Ganesha+Tiering: Multiple_files test suite fails during fssanity run on tiered volume
Product: Red Hat Gluster Storage
Reporter: Shashank Raj <sraj>
Component: nfs-ganesha
Assignee: Jiffin <jthottan>
Status: CLOSED WONTFIX
QA Contact: Manisha Saini <msaini>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: jthottan, kkeithle, mzywusko, ndevos, nlevinki, skoduri
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-16 18:19:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Shashank Raj 2016-04-28 10:25:01 UTC
Description of problem:

Ganesha+Tiering: Multiple_files test suite fails during fssanity run on tiered volume

Version-Release number of selected component (if applicable):

[root@dhcp46-247 ~]# rpm -qa|grep glusterfs
glusterfs-debuginfo-3.7.9-1.el7rhgs.x86_64
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
glusterfs-fuse-3.7.9-2.el7rhgs.x86_64
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-cli-3.7.9-2.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-2.el7rhgs.x86_64
glusterfs-rdma-3.7.9-2.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
glusterfs-server-3.7.9-2.el7rhgs.x86_64
glusterfs-ganesha-3.7.9-2.el7rhgs.x86_64
[root@dhcp46-247 ~]# rpm -qa|grep ganesha
nfs-ganesha-2.3.1-4.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-4.el7rhgs.x86_64
glusterfs-ganesha-3.7.9-2.el7rhgs.x86_64


How reproducible:
Once

Steps to Reproduce:

1. Create an 8-node cluster, 4 of which are Ganesha nodes.
2. Create a tiered volume, start it, attach a tier, and enable quota on the volume.

Volume Name: tiervolume
Type: Tier
Volume ID: cf12c2c2-b6c3-43dd-8607-f653c7cc0c83
Status: Started
Number of Bricks: 16
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.46.202:/bricks/brick3/b3
Brick2: 10.70.47.139:/bricks/brick3/b3
Brick3: 10.70.46.26:/bricks/brick3/b3
Brick4: 10.70.46.247:/bricks/brick3/b3
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick5: 10.70.46.247:/bricks/brick0/b0
Brick6: 10.70.46.26:/bricks/brick0/b0
Brick7: 10.70.47.139:/bricks/brick0/b0
Brick8: 10.70.46.202:/bricks/brick0/b0
Brick9: 10.70.46.247:/bricks/brick1/b1
Brick10: 10.70.46.26:/bricks/brick1/b1
Brick11: 10.70.47.139:/bricks/brick1/b1
Brick12: 10.70.46.202:/bricks/brick1/b1
Brick13: 10.70.46.247:/bricks/brick2/b2
Brick14: 10.70.46.26:/bricks/brick2/b2
Brick15: 10.70.47.139:/bricks/brick2/b2
Brick16: 10.70.46.202:/bricks/brick2/b2
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
cluster.watermark-hi: 40
cluster.watermark-low: 10
cluster.tier-mode: cache
features.ctr-enabled: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
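
For reference, the configuration above can be assembled with commands roughly like these. This is a sketch, not the exact commands used in the test: the IPs and brick paths are taken from the volume info listing, the tier/quota/ganesha syntax is the GlusterFS 3.7-era CLI, and the client mount address and mount point are hypothetical.

```shell
# Cold tier: 2 x (4 + 2) distributed-disperse across the four nodes
gluster volume create tiervolume disperse-data 4 redundancy 2 \
    10.70.46.247:/bricks/brick0/b0 10.70.46.26:/bricks/brick0/b0 \
    10.70.47.139:/bricks/brick0/b0 10.70.46.202:/bricks/brick0/b0 \
    10.70.46.247:/bricks/brick1/b1 10.70.46.26:/bricks/brick1/b1 \
    10.70.47.139:/bricks/brick1/b1 10.70.46.202:/bricks/brick1/b1 \
    10.70.46.247:/bricks/brick2/b2 10.70.46.26:/bricks/brick2/b2 \
    10.70.47.139:/bricks/brick2/b2 10.70.46.202:/bricks/brick2/b2
gluster volume start tiervolume

# Hot tier: 2 x 2 distributed-replicate attached on top
gluster volume tier tiervolume attach replica 2 \
    10.70.46.202:/bricks/brick3/b3 10.70.47.139:/bricks/brick3/b3 \
    10.70.46.26:/bricks/brick3/b3 10.70.46.247:/bricks/brick3/b3

# Enable quota and export the volume through NFS-Ganesha
gluster volume quota tiervolume enable
gluster volume set tiervolume ganesha.enable on

# Client-side NFSv4 mount via one of the Ganesha nodes (hypothetical address)
mount -t nfs -o vers=4 10.70.46.247:/tiervolume /mnt/tiervolume
```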

3. Enable Ganesha on the volume and mount it using vers=4.
4. Start the fssanity test suite on the volume.
5. Observe that during the multiple_files test suite, the test hangs.
6. After some time, observe that it fails with the output below:

executing multiple_files
start:20:19:21
end:22:41:06
Creation of 100000 done
Total files created is not 100000
Removing all the files
multiple_files failed

Actual results:

Multiple_files test suite fails during fssanity run on tiered volume

Expected results:

The test suite should pass without any issues.

Additional info:

sosreports, statedumps and ganesha logs will be attached.

Comment 2 Shashank Raj 2016-04-28 10:40:30 UTC
sosreports and logs can be found under http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1331334