Bug 985224 - AFR: Mismatch in mtime of directories/files on bricks after self-heal
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Anuradha
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
 
Reported: 2013-07-17 02:42 EDT by spandura
Modified: 2016-09-19 22:01 EDT

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:22:34 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2013-07-17 02:42:33 EDT
Description of problem:
===========================
In a replicated volume, when files/directories get self-healed, the mtime of the files/directories on the sync brick is set to the time at which they were self-healed. The modified time of the files/directories is not self-healed from the source.
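In other words, once heal finishes, the copy on the sync brick should carry the same mtime as the copy on the source brick. As a rough illustration of the intended end state only (not AFR's actual heal code; the paths are the ones from the stat output further below), the heal should leave the sync brick in the state a command like the following would produce:

    # Propagate the source brick's mtime onto the healed copy
    touch -m --reference=/rhs/brick1/vol_rep_b0/testdir/E_new_dir.2 \
          /rhs/brick1/vol_rep_b1/testdir/E_new_dir.2

after which stat on both bricks would report identical Modify times.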

Version-Release number of selected component (if applicable):
==============================================================
root@king [Jul-17-2013-11:55:23] >rpm -qa | grep glusterfs-server
glusterfs-server-3.4.0.12rhs.beta4-1.el6rhs.x86_64

root@king [Jul-17-2013-11:55:24] >gluster --version
glusterfs 3.4.0.12rhs.beta4 built on Jul 11 2013 23:37:17


How reproducible:
==================
Often

Steps to Reproduce:
====================
1. Create a 1 x 2 replicate volume {node1, node2}. Start the volume. 

2. Create a FUSE mount. From the mount point execute: 
" for i in `seq 1 20`; do mkdir E_dir.$i ; for j in `seq 1 15` ; do dd if=/dev/input_file of=E_dir.$i/E_file.$j count=$j bs=1K ; done ; done "

3. Power off node2

4. From mount point execute: 
"for i in `seq 1 20`; do for j in `seq 1 15` ; do mv E_dir.$i/E_file.$j E_dir.$i/E_new_file.$j ; done ; mv E_dir.$i E_new_dir.$i ; done "

5. Power on node2

6. Trigger self-heal from one of the nodes. 

7. Once the self-heal is complete, the time stamps of the files on the two bricks don't match (a consolidated command sketch of these steps follows)
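For reference, the steps above roughly correspond to the command sequence below (the mount point /mnt/vol_rep is assumed here; /dev/input_file is the reporter's placeholder for the input file; powering node2 off and on in steps 3 and 5 remains a manual action):

    # Step 1: create and start a 1 x 2 replicate volume
    gluster volume create vol_rep replica 2 king:/rhs/brick1/vol_rep_b0 hicks:/rhs/brick1/vol_rep_b1
    gluster volume start vol_rep

    # Step 2: fuse-mount the volume and create the data set
    mount -t glusterfs king:/vol_rep /mnt/vol_rep
    cd /mnt/vol_rep
    for i in `seq 1 20`; do mkdir E_dir.$i ; for j in `seq 1 15` ; do dd if=/dev/input_file of=E_dir.$i/E_file.$j count=$j bs=1K ; done ; done

    # Step 4: rename files and directories while node2 (hicks) is powered off
    for i in `seq 1 20`; do for j in `seq 1 15` ; do mv E_dir.$i/E_file.$j E_dir.$i/E_new_file.$j ; done ; mv E_dir.$i E_new_dir.$i ; done

    # Step 6: after node2 is back up, trigger self-heal from one of the nodes
    gluster volume heal vol_rep full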

Actual results:
===================
Stat of the same directory on the source and the sync bricks after self-heal completion: the modified times are not the same. 

stat on a directory from source node (node1):-
==============================================
root@king [Jul-16-2013-17:52:24] >stat /rhs/brick1/vol_rep_b0/testdir/E_new_dir.2
  File: `/rhs/brick1/vol_rep_b0/testdir/E_new_dir.2'
  Size: 297       	Blocks: 0          IO Block: 4096   directory
Device: fd02h/64770d	Inode: 51381563    Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-07-16 17:11:26.764586000 +0530
Modify: 2013-07-16 17:18:58.506457530 +0530
Change: 2013-07-16 17:22:34.027660172 +0530

stat on a directory from sync node (node2):-
==============================================
root@hicks [Jul-16-2013-17:53:07] >stat /rhs/brick1/vol_rep_b1/testdir/E_new_dir.2
  File: `/rhs/brick1/vol_rep_b1/testdir/E_new_dir.2'
  Size: 297       	Blocks: 0          IO Block: 4096   directory
Device: fd02h/64770d	Inode: 1051477     Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-07-16 17:11:26.764586000 +0530
Modify: 2013-07-16 17:22:32.339999955 +0530
Change: 2013-07-16 17:22:32.341999955 +0530
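
To check the whole data set rather than a single directory, one can dump the Modify times (epoch seconds) from each brick and compare them; a sketch, assuming the brick paths above and hypothetical output files under /tmp:

    # On node1 (king): record path + mtime for every entry under the brick
    find /rhs/brick1/vol_rep_b0/testdir -exec stat -c '%n %Y' {} \; | sed 's|vol_rep_b0|BRICK|' | sort > /tmp/node1.mtimes

    # On node2 (hicks): same, with the brick path normalized so diff lines up
    find /rhs/brick1/vol_rep_b1/testdir -exec stat -c '%n %Y' {} \; | sed 's|vol_rep_b1|BRICK|' | sort > /tmp/node2.mtimes

    # Copy one list over and diff; any output indicates an mtime mismatch
    diff /tmp/node1.mtimes /tmp/node2.mtimes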


Expected results:
=================
After self-heal completes, the mtime of files/directories on the sync brick should match the mtime on the source brick.

Additional info:
==================
root@king [Jul-17-2013-12:10:54] >gluster v info vol_rep
 
Volume Name: vol_rep
Type: Replicate
Volume ID: 9cdf3761-067f-43a7-8483-628a83dfaa23
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: king:/rhs/brick1/vol_rep_b0
Brick2: hicks:/rhs/brick1/vol_rep_b1
Comment 2 Vivek Agarwal 2015-12-03 12:22:34 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
