Bug 1396328 - After successfully writing some files to a gluster DFS volume, reading one of them warns that the file stat cannot be obtained, while the other files read fine
Summary: After successfully writing some files to a gluster DFS volume, reading one of ...
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.6.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-18 02:54 UTC by q449278118
Modified: 2016-12-06 01:01 UTC
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-02 06:41:03 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description q449278118 2016-11-18 02:54:46 UTC
Description of problem:

After successfully writing some files to a gluster DFS volume mount point, reading one of these files warns that the file stat cannot be obtained, while the other files can be read.
Version-Release number of selected component (if applicable):

3.6.3
How reproducible:


Steps to Reproduce:
1. Check that the gluster peer and volume status are healthy.
2. Run ls -l on the mount point: the bad file shows a link count of 0, while the good files show 1.
3. After restarting the volume the bad file still exists, but after rebooting the system the bad file disappears.
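The read-back check in the steps above can be driven from a small script: write a batch of files through the mount, then stat each one and count failures. A minimal sketch (the mount path is hypothetical; it falls back to a local temp directory so the loop itself runs without a gluster setup):

```shell
#!/bin/sh
# MOUNT is the gluster mount point; defaults to a local temp
# directory here so the script can be exercised anywhere.
MOUNT=${MOUNT:-$(mktemp -d)}

# Write a batch of files through the mount.
for i in 1 2 3 4 5; do
    echo "payload $i" > "$MOUNT/file$i"
done

# Read each one back; a "bad" file shows up as a stat failure
# even though the neighbouring files are fine.
bad=0
for f in "$MOUNT"/file*; do
    stat "$f" > /dev/null 2>&1 || { echo "cannot stat: $f"; bad=$((bad + 1)); }
done
echo "bad files: $bad"
```

On a healthy volume the script reports zero bad files; in the situation described here, one file in the batch would fail the stat while the rest pass.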

Actual results:


Expected results:

An explanation of what causes the problem, and how to fix or work around it.
Additional info:

Comment 1 q449278118 2016-11-18 03:38:52 UTC
   In the brick directory holding the bad file, ls -l shows a link count of 1 for the bad file and 2 for the good files, and the bad file can be read directly from the brick;
   but at the gluster mount point, ls -l shows a link count of 0 for the bad file and 1 for the good files, and the bad file cannot be read.
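The 2-versus-1 link counts on the brick line up with how gluster bricks normally store files: each regular file on a brick carries a second hard link under the brick's .glusterfs gfid tree, so a healthy file shows a link count of 2, and a count of 1 suggests the gfid hard link is missing (an assumption to verify on the brick, not a confirmed diagnosis). What the hard-link count in ls -l / stat actually measures can be shown with ordinary files, no gluster needed:

```shell
#!/bin/sh
# Demonstrates the hard-link counts from the comment with plain files.
dir=$(mktemp -d)

echo data > "$dir/good"
ln "$dir/good" "$dir/extra-link"  # second hard link, standing in for .glusterfs/<gfid>

echo data > "$dir/bad"            # no second link

# %h is the hard-link count (GNU stat).
good_links=$(stat -c %h "$dir/good")
bad_links=$(stat -c %h "$dir/bad")
echo "good=$good_links bad=$bad_links"
```

The "good" file reports 2 links and the "bad" one reports 1, mirroring the counts seen on the brick.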

Comment 2 Nithya Balachandran 2016-11-23 07:22:58 UTC
(In reply to q449278118 from comment #1)
>    In the brick directory holding the bad file, ls -l shows a link count
> of 1 for the bad file and 2 for the good files, and the bad file can be
> read directly from the brick; but at the gluster mount point, ls -l shows
> a link count of 0 for the bad file and 1 for the good files, and the bad
> file cannot be read.

Can you provide the following:

1. gluster volume info output for the volume
2. How many clients are you using? Are you using the same client to write to and read from the file?
3. The ls output
4. The OS and filesystem you are using on both servers and clients
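For reference, the requested information can be gathered along these lines (the volume name is a placeholder, and the gluster commands are guarded so the script is harmless on a node without the CLI):

```shell
#!/bin/sh
# VOL is a placeholder volume name; substitute the real one.
VOL=${VOL:-myvol}

# 1: volume layout; also useful: per-brick status.
if command -v gluster > /dev/null 2>&1; then
    gluster volume info "$VOL"
    gluster volume status "$VOL"
else
    echo "gluster CLI not found; run this part on a server node"
fi

# 3: ls output, taken both at the mount point and on each brick, e.g.
#    ls -li /mnt/<mountpoint>   and   ls -li /<brickdir> on every server

# 4: OS and filesystem details, on servers and clients alike.
kernel=$(uname -r)
echo "kernel: $kernel"
df -T 2>/dev/null | head -n 5
```

The ls paths are left as comments because the mount point and brick directories are site-specific.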

Comment 3 hari gowtham 2016-12-02 06:41:03 UTC
This bug is being closed because the 3.6 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.

Comment 4 q449278118 2016-12-06 01:01:55 UTC
(In reply to Nithya Balachandran from comment #2)
> (In reply to q449278118 from comment #1)
> >    In the brick directory holding the bad file, ls -l shows a link count
> > of 1 for the bad file and 2 for the good files, and the bad file can be
> > read directly from the brick; but at the gluster mount point, ls -l
> > shows a link count of 0 for the bad file and 1 for the good files, and
> > the bad file cannot be read.
> 
> Can you provide the following:
> 
> 1. gluster volume info output for the volume
> 2. How many clients are you using? Are you using the same client to write
> to and read from the file?
> 3. The ls output
> 4. The OS and filesystem you are using on both servers and clients

1. Three bricks in a distribute volume, without replication.
2. Only one client, used for both reads and writes.
3. The output is as described in the report: on one brick, a file has a hard-link count of 0 while the others show 1.
4. Red Hat 7.0 on both server and client; the client mounts the volume over NFS, and users access the volume through NFS clients.
5. When we remove the brick (the one holding files with hard-link count 0), the error disappears.
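The workaround in point 5 corresponds to the stock remove-brick workflow; a sketch with placeholder volume and brick names (remove-brick migrates data off the brick before the commit, so the status should be checked before committing, and the commands are guarded for nodes without the CLI):

```shell
#!/bin/sh
# Placeholders: substitute the real volume and brick.
VOL=${VOL:-myvol}
BRICK=${BRICK:-server1:/data/brick1}

if command -v gluster > /dev/null 2>&1; then
    gluster volume remove-brick "$VOL" "$BRICK" start
    gluster volume remove-brick "$VOL" "$BRICK" status
    # Commit only once status reports the data migration completed:
    # gluster volume remove-brick "$VOL" "$BRICK" commit
    ran=server
else
    echo "gluster CLI not found; this must run on a server node"
    ran=skipped
fi
```

Note that removing a brick only hides the symptom here; the files whose stat failed would need to be copied back in through the mount afterwards.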

