Bug 1277523 - tiering: IO error while creating files after attaching tier with quota
Summary: tiering: IO error while creating files after attaching tier with quota
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Vijaikumar Mallikarjuna
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1260783 1260923
 
Reported: 2015-11-03 14:01 UTC by Anil Shah
Modified: 2016-09-17 15:37 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.7.5-6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-10 06:46:57 UTC
Embargoed:



Description Anil Shah 2015-11-03 14:01:26 UTC
Description of problem:

After attaching a tier and creating some files on the hot tier, the following error was seen:

dd: error writing ‘test11’: Structure needs cleaning
dd: closing output file ‘test11’: Bad file descriptor


Version-Release number of selected component (if applicable):

[root@localhost ec01]# rpm -qa  | grep glusterfs
glusterfs-client-xlators-3.7.5-5.el7rhgs.x86_64
glusterfs-api-3.7.5-5.el7rhgs.x86_64
glusterfs-cli-3.7.5-5.el7rhgs.x86_64
glusterfs-libs-3.7.5-5.el7rhgs.x86_64
glusterfs-3.7.5-5.el7rhgs.x86_64
glusterfs-fuse-3.7.5-5.el7rhgs.x86_64
glusterfs-server-3.7.5-5.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-5.el7rhgs.x86_64



How reproducible:

1/1

Steps to Reproduce:
1. Created a 2x2 distributed-replicate volume
2. FUSE-mounted the volume
3. Set a quota on the volume
4. Created files so that the disk quota was exceeded
5. Attached a 2x2 distributed-replicate hot tier
6. Modified the quota limit
7. Created some files on the tiered volume
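
The steps above can be sketched with the gluster CLI roughly as follows. This is a minimal reconstruction, not the reporter's exact commands: the hostnames, brick paths, mount point, and quota values are illustrative, and the `attach-tier` syntax shown is the glusterfs 3.7-era form, which may differ in other releases.

```shell
# 1. Create a 2x2 distributed-replicate volume (this becomes the cold tier)
gluster volume create testvol replica 2 \
    host1:/rhs/brick1/b01 host2:/rhs/brick1/b02 \
    host3:/rhs/brick1/b03 host4:/rhs/brick1/b04
gluster volume start testvol

# 2. FUSE-mount the volume
mount -t glusterfs host1:/testvol /mnt/testvol

# 3. Enable quota and set a limit on the volume root
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 1GB

# 4. Write data until the disk quota is exceeded
dd if=/dev/zero of=/mnt/testvol/fill bs=1M count=1200

# 5. Attach a 2x2 distributed-replicate hot tier
gluster volume attach-tier testvol replica 2 \
    host1:/rhs/brick2/ec01 host2:/rhs/brick2/ec02 \
    host3:/rhs/brick2/ec03 host4:/rhs/brick2/ec04

# 6. Raise the quota limit
gluster volume quota testvol limit-usage / 2GB

# 7. Create files on the tiered volume (the step where the
#    "Structure needs cleaning" error was reported)
dd if=/dev/zero of=/mnt/testvol/test11 bs=1M count=100
```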
 
Actual results:

Got the below error on the mount point:
dd: error writing ‘test11’: Structure needs cleaning
dd: closing output file ‘test11’: Bad file descriptor


Expected results:

There should be no IO errors.

Additional info:

[root@localhost ec01]# gluster v info
 
Volume Name: testvol
Type: Tier
Volume ID: fbee6a2e-39ef-4388-8239-8a148dafdba9
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick2/ec04
Brick2: 10.70.47.2:/rhs/brick2/ec03
Brick3: 10.70.47.145:/rhs/brick2/ec02
Brick4: 10.70.47.143:/rhs/brick2/ec01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick1/b01
Brick6: 10.70.47.145:/rhs/brick1/b02
Brick7: 10.70.47.2:/rhs/brick1/b03
Brick8: 10.70.47.3:/rhs/brick1/b04
Options Reconfigured:
features.barrier: disable
cluster.tier-promote-frequency: 45
cluster.tier-demote-frequency: 45
cluster.write-freq-threshold: 0
cluster.read-freq-threshold: 0
performance.io-cache: off
performance.quick-read: off
features.ctr-enabled: on
features.uss: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

Comment 3 Manikandan 2015-11-19 10:55:22 UTC
Hi,
We are not able to reproduce the problem in the current build. Can you try to reproduce it in the current build?

--
Regards,
Manikandan Selvaganesh.

Comment 4 Nag Pavan Chilakam 2015-11-25 05:40:59 UTC
Hi,
If the bug is not reproducible, we should move it to "works for me".
Please contact the reporter and find out the exact scenario and setup used when raising this bug.
If you are still not able to reproduce it, capture the pertinent logs and move it to "works for me".

Only bugs that have a code fix through a patch for the same problem should be moved to ON_QA.

Comment 6 Anil Shah 2015-12-10 06:46:57 UTC
Unable to reproduce this bug on the latest build. Will open a new bug if this issue is encountered again.

Hence closing this bug as of now.

Comment 7 Nag Pavan Chilakam 2015-12-14 11:28:47 UTC
This has accordingly been closed by QE as not reproducible, but it could be a "potential risk".
Also, if it was moved to POST, why was the fix not included?

