Bug 990501 - [perf-xlators/md-cache] md-cache doesn't update ctime of file on the fuse mountpoint, after setting xattrs on that file, till md-cache-timeout expires
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-31 10:13 UTC by SATHEESARAN
Modified: 2015-12-03 17:17 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:17:21 UTC
Embargoed:



Description SATHEESARAN 2013-07-31 10:13:05 UTC
Description of problem:
After setting an xattr on a file on the fuse mountpoint, the ctime of that file does not change until md-cache-timeout expires.

Volume type          : Distribute volume with 2 bricks
Trusted Storage Pool : Cluster of 4 Nodes

Version-Release number of selected component (if applicable):
RHS 2.1 - glusterfs-3.4.0.14rhs-1

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume with 2 bricks
(i.e) gluster volume create <vol-name> <brick1> <brick2>

2. Start the volume
(i.e) gluster volume start <vol-name>

3. Set md-cache-timeout to its maximum value, which is 60 seconds
(i.e) gluster volume set <vol-name> performance.md-cache-timeout 60

4. Fuse-mount the volume on the client [in my case, RHEL 6.4]
(i.e) mount.glusterfs <rhs-server>:<vol-name> <mount-point-on-client>

5. Create a few files on the mountpoint
(i.e) touch <mount-point>/file{1,2,3,4,5,6,7,8,9}

6. Check the atime, mtime and ctime of any one of the files
(i.e) stat <fuse-mount-point>/file1

7. Set an xattr on the file
(i.e) setfattr -n trusted.name -v file file1

8. Check the ctime of the file, since it is expected to have changed now (a consolidated version of these steps is sketched below)
(i.e) stat file1
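
For convenience, the steps above can also be run as one listing. This is a minimal sketch using the volume name, bricks, server and mount point recorded later in this report; the first part runs on one of the RHS nodes, the second on the RHEL 6.4 client, so adjust paths and hosts if your layout differs.

# -- on one of the RHS nodes --
gluster volume create distvol 10.70.37.202:/rhs/brick1/distdir1 10.70.37.154:/rhs/brick1/distdir1
gluster volume start distvol
gluster volume set distvol performance.md-cache-timeout 60

# -- on the RHEL 6.4 client --
mkdir -p /mnt/distvol
mount.glusterfs 10.70.37.205:distvol /mnt/distvol
touch /mnt/distvol/file{1..9}
stat /mnt/distvol/file1                               # note the initial ctime
setfattr -n trusted.name -v file /mnt/distvol/file1
stat /mnt/distvol/file1                               # ctime should change here, but does not until the cache expires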

Actual results:
The ctime of the file does not change immediately; it changes only after 60 seconds, which is exactly the md-cache-timeout.

Expected results:
Since the client itself performed the setxattr on the file, the ctime of the file
should be updated immediately in the md-cache.
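
A sketch of how to show the discrepancy numerically, assuming the volume is mounted at /mnt/distvol as above (stat -c %Z prints the ctime in seconds since the epoch):

sleep 2                                               # ensure the next ctime change lands in a later second
before=$(stat -c %Z /mnt/distvol/file1)               # cached ctime before the setxattr
setfattr -n trusted.name -v file /mnt/distvol/file1
after=$(stat -c %Z /mnt/distvol/file1)                # ctime reported immediately after
echo "before=$before after=$after"
# Expected: after > before.  Observed: after == before until md-cache-timeout (60s) has passed.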

Additional info:

1. RHS Nodes
============
10.70.37.205
10.70.37.52
10.70.37.202
10.70.37.154

2. Volume information
======================
[Wed Jul 31 09:19:07 UTC 2013 root.37.205:~ ] # gluster volume info distvol
 
Volume Name: distvol
Type: Distribute
Volume ID: e6a72b06-52ed-4ee3-adf6-974e11d0dbb8
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.202:/rhs/brick1/distdir1
Brick2: 10.70.37.154:/rhs/brick1/distdir1
Options Reconfigured:
performance.md-cache-timeout: 60

[Wed Jul 31 10:00:45 UTC 2013 root.37.205:~ ] # gluster volume status distvol
Status of volume: distvol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.202:/rhs/brick1/distdir1                 49152   Y       2857
Brick 10.70.37.154:/rhs/brick1/distdir1                 49152   Y       2877
NFS Server on localhost                                 2049    Y       2920
NFS Server on 10.70.37.52                               2049    Y       2868
NFS Server on 10.70.37.154                              2049    Y       2975
NFS Server on 10.70.37.202                              2049    Y       2955
 
There are no active volume tasks

3. Client Information
======================

[Wed Jul 31 09:43:13 UTC 2013 root.36.32:/mnt/distvol ] # df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhsclient8-lv_root
              ext4     50G  3.0G   44G   7% /
tmpfs        tmpfs    7.8G     0  7.8G   0% /dev/shm
/dev/sda1     ext4    485M   65M  396M  14% /boot
/dev/mapper/vg_rhsclient8-lv_home
              ext4    1.8T  196M  1.7T   1% /home
10.70.37.205:distvol
    fuse.glusterfs    170G   69M  170G   1% /mnt/distvol

[Wed Jul 31 10:02:27 UTC 2013 root.36.32:/mnt/distvol ] # cat /etc/issue
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Kernel \r on an \m

[Wed Jul 31 10:04:21 UTC 2013 root.36.32:/mnt/distvol ] # cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)

[Wed Jul 31 10:04:28 UTC 2013 root.36.32:/mnt/distvol ] # uname -r
2.6.32-358.14.1.el6.x86_64

[Wed Jul 31 10:04:40 UTC 2013 root.36.32:/mnt/distvol ] # mount
/dev/mapper/vg_rhsclient8-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_rhsclient8-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
10.70.37.205:distvol on /mnt/distvol type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Observation
============
[Wed Jul 31 10:12:58 UTC 2013 root.36.32:/mnt/distvol ] # getfattr -d -m . file1

[Wed Jul 31 10:13:43 UTC 2013 root.36.32:/mnt/distvol ] # ls -lc file1
-rwxrwxrwx. 1 root root 0 Jul 31 13:22 file1

[Wed Jul 31 10:13:59 UTC 2013 root.36.32:/mnt/distvol ] # setfattr -n trusted.name -v file file1

[Wed Jul 31 10:14:27 UTC 2013 root.36.32:/mnt/distvol ] # getfattr -d -m . file1
# file: file1
trusted.name="file"

[Wed Jul 31 10:14:30 UTC 2013 root.36.32:/mnt/distvol ] # ls -lc file1
-rwxrwxrwx. 1 root root 0 Jul 31 13:22 file1

[Wed Jul 31 10:14:33 UTC 2013 root.36.32:/mnt/distvol ] # while true; do ls -lc file1;sleep 5;done
-rwxrwxrwx. 1 root root 0 Jul 31 13:22 file1
-rwxrwxrwx. 1 root root 0 Jul 31 13:22 file1
-rwxrwxrwx. 1 root root 0 Jul 31 13:22 file1
-rwxrwxrwx. 1 root root 0 Jul 31 13:22 file1
-rwxrwxrwx. 1 root root 0 Jul 31 15:44 file1
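
Since 'ls -lc' only has minute granularity, the same polling can be done against the raw ctime (a sketch on the same mount point, re-setting the xattr with a new value) to see exactly when the cached value is refreshed:

setfattr -n trusted.name -v file2 /mnt/distvol/file1
for i in $(seq 1 14); do echo "$(date +%T)  ctime=$(stat -c %Z /mnt/distvol/file1)"; sleep 5; done
# The printed ctime stays at its old value for roughly 60 seconds (md-cache-timeout)
# and only then jumps to the time of the setfattr.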

Comment 1 Vivek Agarwal 2015-12-03 17:17:21 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

