Bug 867847 - Error in ctime while creating hard links to a file (in a glusterfs-mounted volume) repeatedly
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: x86_64 Linux
Priority: medium
Severity: low
Assigned To: shishir gowda
QA Contact: SATHEESARAN
Depends On:
Blocks:
 
Reported: 2012-10-18 08:15 EDT by SATHEESARAN
Modified: 2013-12-08 20:34 EST
CC List: 5 users

See Also:
Fixed In Version: glusterfs-3.4.0qa5-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-23 18:33:33 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
sasundar: needinfo+


Attachments
Automated bash script to perform the described steps (2.00 KB, application/x-shellscript)
2012-10-18 08:15 EDT, SATHEESARAN

sosreport from one of the servers in the trusted storage pool (4.19 MB, application/x-xz)
2012-10-19 02:36 EDT, SATHEESARAN

sosreport from the other server in the trusted storage pool (3.70 MB, application/x-xz)
2012-10-19 02:37 EDT, SATHEESARAN

sosreport from the client used to mount the gluster volume (2.10 MB, application/x-xz)
2012-10-19 02:39 EDT, SATHEESARAN

Description SATHEESARAN 2012-10-18 08:15:19 EDT
Created attachment 629332 [details]
Automated bash script to perform the described steps

Description of problem:

When a hard link to a file is created, the ctime of that file changes because the link count is incremented. But the new ctime falls below the previously stored ctime.

This happens from the second or third attempt to run the note_ctime -> create_hardlink -> note_ctime -> unlink cycle.

Version-Release number of selected component (if applicable):
RHS 2.0 update 3

How reproducible:
Consistent

Steps to Reproduce:
1. Note the ctime of the file: 'stat -c %Z /mnt/1/file1'
2. Create a hard link to the file: 'ln /mnt/1/file1 /mnt/1/linkf'
3. Note the ctime of the file again, as in step 1
4. The ctime measured in step 3 should be greater than or at least equal to
   the ctime measured in step 1
5. Repeat the test (a scripted version of this cycle is sketched below)
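
As a rough illustration, a minimal bash sketch of one cycle follows; it assumes /mnt/1 is the glusterfs (FUSE) mount and uses the file and link names from the steps above. The attached script (attachment 629332) is the authoritative reproducer.

#!/bin/bash
# Minimal sketch of one note_ctime -> create_hardlink -> note_ctime -> unlink
# cycle; /mnt/1 is assumed to be a glusterfs (FUSE) mount point.
FILE=/mnt/1/file1
LINK=/mnt/1/linkf

touch "$FILE"                      # ensure the test file exists
old_ctime=$(stat -c %Z "$FILE")    # step 1: note the ctime
ln "$FILE" "$LINK"                 # step 2: create a hard link
new_ctime=$(stat -c %Z "$FILE")    # step 3: note the ctime again
rm -f "$LINK"                      # unlink so the cycle can repeat

# Step 4: the new ctime must never be older than the old one.
if [ "$new_ctime" -lt "$old_ctime" ]; then
    echo "ctime went backwards: $old_ctime -> $new_ctime"
fi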
  
Actual results:
After the first iteration, the ctime captured after creating the hard link is less than the ctime captured earlier (before creating the hard link).


Expected results:
The ctime captured after creating the hard link should be greater than or equal to the ctime captured before creating the hard link.

Additional info:
1. The issue is not seen when testing a file on an NFS-mounted volume.
2. The issue is seen both with and without md-cache (stat-prefetch).
Comment 2 Amar Tumballi 2012-10-19 01:04:54 EDT
What is the type of the volume? Can you post the volume info, and possibly attach the sosreport itself? Without knowing the type of the volume, I can't get this assigned to the right person.
Comment 3 SATHEESARAN 2012-10-19 02:36:35 EDT
Created attachment 629804 [details]
sosreport from one of the servers in the trusted storage pool
Comment 4 SATHEESARAN 2012-10-19 02:37:55 EDT
Created attachment 629805 [details]
sosreport from the other server in the trusted storage pool
Comment 5 SATHEESARAN 2012-10-19 02:39:00 EDT
Created attachment 629806 [details]
sosreport from the client used to mount the gluster volume
Comment 6 SATHEESARAN 2012-10-19 02:41:12 EDT
Hi Amar,

The volume type is distributed-replicate (2x2).
I have attached the sosreports of the nodes in the trusted storage pool, and also of the client machine used to mount the volume over the native glusterfs protocol.

-- Sathees
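
[For readers unfamiliar with the layout: a minimal sketch of how a 2x2 distributed-replicate volume like this might be created and mounted. The hostnames (server1, server2), brick paths, and volume name (testvol) are illustrative assumptions, not taken from this report.]

# Four bricks with replica 2 yield two distributed replica pairs: a 2x2 volume.
gluster volume create testvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start testvol
gluster volume info testvol

# On the client, mount over the native glusterfs (FUSE) protocol:
mount -t glusterfs server1:/testvol /mnt/1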
Comment 7 Amar Tumballi 2012-10-22 00:02:41 EDT
http://review.gluster.org/3737 should fix the issue for you. We are planning to fix it only in RHS 2.1 (no plan to backport as of now). Once we have the downstream build for RHS 2.1, you should be able to verify the fix. (It is already fixed upstream.)
Comment 8 Amar Tumballi 2012-10-23 10:12:57 EDT
The change at the URL in comment #7 has been committed.
Comment 9 SATHEESARAN 2013-01-07 06:07:04 EST
Hi Amar/Shishir,

I see a lapse of 1 second between the new ctime and the old ctime.
That is, the new ctime is 1 second less than the old ctime, which is unrealistic.

However, adding a 1-second sleep between taking the stat of the file and creating the hard link to it clears up the problem.

Also, over 100 iterations, I see this problem occur roughly 5-7 times.

Please provide info so we can decide on this.
This also happens only on a FUSE-mounted volume. (A sketch of the iteration test follows.)
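
[Not part of the original comment: a minimal sketch of the 100-iteration test with the optional 1-second-sleep workaround described above. The paths and loop count are assumptions matching the earlier steps.]

#!/bin/bash
# Sketch: count how often ctime goes backwards over 100 hard-link cycles.
# /mnt/1 is assumed to be the glusterfs (FUSE) mount from the steps above.
FILE=/mnt/1/file1
LINK=/mnt/1/linkf
failures=0

touch "$FILE"
for i in $(seq 1 100); do
    old_ctime=$(stat -c %Z "$FILE")
    # sleep 1    # uncomment to apply the 1-second-sleep workaround
    ln "$FILE" "$LINK"
    new_ctime=$(stat -c %Z "$FILE")
    rm -f "$LINK"
    if [ "$new_ctime" -lt "$old_ctime" ]; then
        failures=$((failures + 1))
    fi
done
echo "$failures of 100 iterations saw ctime go backwards"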
Comment 10 SATHEESARAN 2013-07-24 08:30:35 EDT
The frequency has reduced to 1 or 2 times in 100 iterations, and the deviation is only 1 second, which looks acceptable.

Verified it with glusterfs-3.4.0.12rhs.beta5-2
Comment 11 Scott Haines 2013-09-23 18:33:33 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
