Description of problem: After successfully writing some files into a mounted Gluster DFS volume, reading certain files warns that the file stat cannot be obtained, while the other files read fine.

Version-Release number of selected component (if applicable): 3.6.3

How reproducible:

Steps to Reproduce:
1. Check that the gluster peer and volume status are healthy.
2. Run ls -l on the mount point; the bad files show a hard link count of 0, while the good files show 1.
3. Restarting the volume does not help (the bad files remain), but after rebooting the system the bad files disappear.

Actual results:

Expected results: An explanation of what causes the problem, and how to fix or work around it.

Additional info:
In the brick that holds a bad file, ls -l shows a hard link count of 1 for the bad files and 2 for the good files, and the bad files can be read. On the gluster mount point, however, ls -l shows a link count of 0 for the bad files and 1 for the good files, and the bad files cannot be read.
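The link counts above are the key symptom: on an ordinary filesystem, the count reported by ls -l or stat is just the number of directory entries pointing at the inode, and a Gluster brick normally keeps one extra hard link to each file under its .glusterfs directory, which is why a healthy file on a brick shows 2 and a damaged one shows 1. A minimal sketch of how link counts behave, using plain files in a throwaway directory (illustrative only; nothing here touches Gluster):

```shell
#!/bin/sh
# Sketch: how hard-link counts change as links are added and removed.
set -e
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

echo data > "$dir/file"           # one directory entry -> link count 1
stat -c '%h' "$dir/file"          # prints: 1

ln "$dir/file" "$dir/file.gfid"   # a second hard link, like a .glusterfs gfid link
stat -c '%h' "$dir/file"          # prints: 2

rm "$dir/file.gfid"               # drop the extra link again
stat -c '%h' "$dir/file"          # prints: 1
```

Note that stat -c '%h' is GNU coreutils syntax; on BSD-style systems the equivalent is stat -f '%l'.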
(In reply to q449278118 from comment #1)
> In the brick that holds a bad file, ls -l shows a hard link count of 1 for
> the bad files and 2 for the good files, and the bad files can be read. On
> the gluster mount point, however, ls -l shows a link count of 0 for the bad
> files and 1 for the good files, and the bad files cannot be read.

Can you provide the following:

1. gluster volume info for the volume
2. How many clients you are using. Are you using the same client to write to and read from the file?
3. The ls output
4. The OS and filesystem you are using for both servers and clients
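The information requested above could be gathered with commands along these lines. This is only a sketch: "testvol" and the brick/mount paths are placeholders, not values taken from the reporter's system.

```shell
# On a server node ("testvol" is a placeholder volume name):
gluster volume info testvol       # volume type, brick list, options
gluster volume status testvol     # brick and process health
gluster peer status               # peer membership

# On the client:
cat /etc/redhat-release           # OS release
mount | grep -E 'gluster|nfs'     # how the volume is mounted
ls -l /mnt/testvol                # link counts as seen through the mount

# On each brick (path is a placeholder):
ls -l /bricks/brick1/
```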
This bug is being closed because version 3.6 is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.
(In reply to Nithya Balachandran from comment #2)
> (In reply to q449278118 from comment #1)
> > In the brick that holds a bad file, ls -l shows a hard link count of 1
> > for the bad files and 2 for the good files, and the bad files can be
> > read. On the gluster mount point, however, ls -l shows a link count of 0
> > for the bad files and 1 for the good files, and the bad files cannot be
> > read.
>
> Can you provide the following:
>
> 1. gluster volume info for the volume
> 2. How many clients you are using. Are you using the same client to write
> to and read from the file?
> 3. The ls output
> 4. The OS and filesystem you are using for both servers and clients

1. Three bricks in a distribute volume, without replication.
2. Only one client, used for both writing and reading.
3. The output is as described in the report: the bad file shows a hard link count of 0, while the other files show 1.
4. Red Hat 7.0 on both server and client; the client exports the volume over NFS, and users access the volume through NFS clients.
5. When we removed the brick containing the files with hard link count 0, the error disappeared.
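One way to investigate the affected brick before removing it: each file on a brick normally has a second hard link under the brick's .glusterfs directory, named after the file's trusted.gfid extended attribute, so a link count of 1 on the brick suggests that gfid link is missing. A hedged sketch of the check (the brick path and file name are placeholders; getfattr comes from the attr package and needs root to read trusted.* xattrs):

```shell
# Run directly on the brick's backend filesystem, as root.
f=/bricks/brick1/badfile          # placeholder path to a suspect file

# Read the file's gfid, stored as a trusted xattr on the brick.
gfid=$(getfattr -n trusted.gfid -e hex --only-values "$f" | sed 's/^0x//')
echo "gfid: $gfid"

# The gfid hard link lives under .glusterfs/<first 2 hex chars>/<next 2>/
# on the same brick; compare the link count to confirm its presence.
stat -c '%h' "$f"                 # 2 = gfid link present, 1 = missing
```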