Created attachment 629332 [details]
Automated bash script to perform the described steps

Description of problem:
When a hardlink to a file is created, the ctime of that file changes because the link count is incremented. However, the new ctime sometimes falls below the previously stored ctime. This happens from the second or third attempt to run the note_ctime --> create_hardlink --> note_ctime --> unlink cycle.

Version-Release number of selected component (if applicable):
RHS 2.0 update 3

How reproducible:
Consistent

Steps to Reproduce:
1. Note the ctime of the file: 'stat -c %Z /mnt/1/file1'
2. Create a hard link to the file: 'ln /mnt/1/file1 /mnt/1/linkf'
3. Note the ctime of the file again, as in step 1
4. The ctime measured in step 3 should be greater than, or at least equal to, the ctime measured in step 1
5. Repeat the test

Actual results:
After the first iteration, the ctime captured after creating the hard link is less than the ctime captured earlier (before creating the hardlink).

Expected results:
The ctime captured after creating the hardlink should not fall behind the ctime captured before creating the hardlink.

Additional info:
1. The issue is not seen when testing a file on an NFS-mounted volume.
2. The issue is seen with or without md-cache (stat-prefetch).
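The reproduction cycle above can be sketched as a loop like the following (the attached script is not reproduced here; this is a hypothetical sketch that defaults to a local temp file for illustration — to see the bug, pass it a path on the glusterfs FUSE mount, e.g. /mnt/1/file1):

```shell
#!/bin/bash
# Sketch of the note_ctime --> create_hardlink --> note_ctime --> unlink cycle.
# On a local filesystem the check should always pass; on the affected
# FUSE-mounted gluster volume, ctime occasionally goes backwards.
TARGET="${1:-$(mktemp)}"     # file under test; pass a path on the mount
LINK="${TARGET}.link"
FAILS=0
for i in $(seq 1 100); do
    before=$(stat -c %Z "$TARGET")   # ctime before creating the hardlink
    ln "$TARGET" "$LINK"             # link-count bump updates ctime
    after=$(stat -c %Z "$TARGET")    # ctime after creating the hardlink
    rm "$LINK"
    if [ "$after" -lt "$before" ]; then
        FAILS=$((FAILS + 1))
        echo "iteration $i: ctime went backwards ($before -> $after)"
    fi
done
echo "failures: $FAILS/100"
```

Note that comment 9's observation also follows from this loop: inserting a `sleep 1` between the `stat` and the `ln` masks the problem, since the 1-second backward deviation is then absorbed.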
What is the type of volume? Can you post the volume info, and possibly the sosreport attachment itself? Without knowing the type of volume, I can't get this assigned to the right person.
Created attachment 629804 [details]
sosreport from one of the servers in the trusted storage pool
Created attachment 629805 [details]
sosreport from the other server in the trusted storage pool
Created attachment 629806 [details] sosreport of the client used to mount the gluster volume
Hi Amar,

The type of volume is distributed-replicate (2x2). I have attached the sosreports of the nodes in the trusted storage pool, and also of the client machine used to mount the volumes with the native glusterfs protocol.

-- Sathees
http://review.gluster.org/3737 should fix the issue for you. We plan to fix it only in RHS 2.1 (no plan to backport as of now). Once we have the downstream build for RHS 2.1, you should be able to verify the fix (it is already fixed upstream).
The patch at the URL in comment #7 has been committed.
Hi Amar/Shishir,

I see an elapse of 1 second between the new ctime and the old ctime; that is, the new ctime is 1 second less than the old ctime, which is unrealistic. However, adding a 1-second sleep between getting the stat of the file and creating the hard link to it clears the problem. Also, across 100 iterations, I see this problem happen roughly 5 to 7 times. Please provide info to decide on this. Note that this happens only on a FUSE-mounted volume.
The frequency has reduced to 1 or 2 times in 100 iterations, and the deviation is only 1 second, which looks acceptable. Verified with glusterfs-3.4.0.12rhs.beta5-2.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html