Bug 1412545 - healing must not change the ctime of the file
Summary: healing must not change the ctime of the file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Kotresh HR
QA Contact: Arthy Loganathan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-12 09:22 UTC by Nag Pavan Chilakam
Modified: 2020-12-17 04:50 UTC
CC: 8 users

Fixed In Version: glusterfs-6.0-38
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 04:50:16 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2020:5603 (last updated 2020-12-17 04:50:42 UTC)

Description Nag Pavan Chilakam 2017-01-12 09:22:44 UTC
Description of problem:
=====================
When a heal happens, especially an entry or gfid heal, the ctime of the file changes. This happens even on the source file, which is not correct.
The ctime can be consumed by many applications.
A simple example is our QE automation cases, which calculate arequal checksums before and after heal to verify heal functionality. These checks fail almost every time because the ctime changes with the healing.

Suppose there is a file f1 that was created on a 1x2 volume with one brick down (say b1), and it has a ctime of, say, 9:00 AM.
If b1 is brought back up after some time, say at 9:10 AM, and the heal completes, the file f1 gets a new ctime of 9:10 AM.

This is wrong, given that the new ctime is changed even on the source file (the b2 brick), and the latest ctime (9:10 AM) is reflected on the mount.

Applications that record the ctime of the original file for heal validation will fail, and archival systems that rely on ctime can fail as well.

As one of the dev members discussed with me, we may have to store the ctime in the xattrs of the source file to avoid this problem, or look at other ways to handle it.
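
For illustration, the idea would be to keep the authoritative times in a brick-side xattr so heal-induced backend changes do not leak into the ctime users see. A minimal inspection sketch, assuming a brick path of /bricks/b2; the xattr name trusted.glusterfs.mdata is the one the later upstream ctime feature uses to carry atime/mtime/ctime:

# On the brick node, dump the xattr that would carry the consistent times
# (brick path is assumed for illustration)
getfattr -n trusted.glusterfs.mdata -e hex /bricks/b2/dir1/f1
# file: bricks/b2/dir1/f1
# trusted.glusterfs.mdata=0x...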

Version-Release number of selected component (if applicable):
==============
3.8.4-11

How reproducible:
-------------------
always


Steps to Reproduce:
=====================
Explained for gfid healing (the same can be done for entry healing too).

1) Created a 1x2 volume, say with bricks b1 and b2 on nodes n1 and n2.
2) Fuse mounted the volume and created a file file1 ===> say its gfid is "100".
3) Killed b1.
4) Deleted file1 from the mount and recreated a new file with the same name, i.e. file1 ---> let's say this got gfid "200".
5) Brought b1 back online; healing succeeded.
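
As a condensed sketch of the steps above with concrete commands; the volume name, hostnames n1/n2, brick paths, and mount point are assumptions (the stat output below uses dir1/f1, so the file is created there):

# Assumed names: volume testvol, bricks /bricks/b1 on n1 and /bricks/b2 on n2
gluster volume create testvol replica 2 n1:/bricks/b1 n2:/bricks/b2
gluster volume start testvol
mount -t glusterfs n1:/testvol /mnt/testvol          # step 2: fuse mount
mkdir /mnt/testvol/dir1
echo abc > /mnt/testvol/dir1/f1                      # create the file (gfid "100")
kill -9 $(pgrep -f '/bricks/b1')                     # step 3: kill the b1 brick process (illustrative)
rm /mnt/testvol/dir1/f1                              # step 4: delete ...
echo xyz > /mnt/testvol/dir1/f1                      # ... and recreate (new gfid "200")
stat /mnt/testvol/dir1/f1                            # note ctime before heal
gluster volume start testvol force                   # step 5: bring b1 back online
gluster volume heal testvol                          # trigger heal, then wait for it
stat /mnt/testvol/dir1/f1                            # ctime has moved forward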
But with healing, file1 gets a new ctime, as shown below.

On the mount, before heal:
[root@dhcp47-147 dir1]# stat f1
  File: ‘f1’
  Size: 3               Blocks: 1          IO Block: 131072 regular file
Device: 28h/40d Inode: 9504604878251802295  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2017-01-12 13:04:09.998872000 +0530
Modify: 2017-01-12 13:04:16.626978963 +0530
Change: 2017-01-12 13:04:16.627978970 +0530
 Birth: -

(The above is the ctime of the file on the source brick.)

And after heal:
[root@dhcp47-147 dir1]# stat f1
  File: ‘f1’
  Size: 3               Blocks: 1          IO Block: 131072 regular file
Device: 28h/40d Inode: 9504604878251802295  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2017-01-12 13:04:09.998872000 +0530
Modify: 2017-01-12 13:04:16.626978963 +0530
Change: 2017-01-12 13:07:23.732260887 +0530
 Birth: -
(The above is the new ctime on the source brick file, the destination brick file, and hence at the mount location too.)

Actual results:
The ctime of the file changes with the heal, on the source brick as well as on the healed brick, and therefore on the mount.
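
To confirm that the source copy itself changed, the ctime can be read directly from each brick backend. A sketch, with the brick paths assumed as above:

# On n1 and n2 respectively (assumed brick paths)
stat -c 'ctime=%Z (%z)' /bricks/b1/dir1/f1   # healed (sink) copy
stat -c 'ctime=%Z (%z)' /bricks/b2/dir1/f1   # source copy: also shows the new ctime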

Comment 2 Nag Pavan Chilakam 2017-01-12 09:25:51 UTC
This could be a day-1 issue (hence not proposing it as a blocker), but we may have to fix it ASAP (maybe in the next release).
(But if it is not day-1, we may have to propose it as a blocker.)

Also, the ctime could be changing due to the xattrs getting changed on the source file, because the afr client bits that blame the other brick are removed as part of the heal design.
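
That hypothesis is easy to check in isolation: any xattr update is a metadata change, so it bumps ctime while leaving mtime alone, which is exactly the pattern in the stat output above. A small sketch on a scratch file:

touch /tmp/demo
stat -c 'mtime=%Y ctime=%Z' /tmp/demo
setfattr -n user.demo -v test /tmp/demo     # metadata-only change, like AFR clearing its blame xattrs
stat -c 'mtime=%Y ctime=%Z' /tmp/demo       # ctime advanced, mtime unchanged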

Comment 3 Pranith Kumar K 2017-01-13 04:43:05 UTC
(In reply to nchilaka from comment #2)
> This could be a day-1 issue (hence not proposing it as a blocker), but we
> may have to fix it ASAP (maybe in the next release).
> (But if it is not day-1, we may have to propose it as a blocker.)
> 
> Also, the ctime could be changing due to the xattrs getting changed on the
> source file, because the afr client bits that blame the other brick are
> removed as part of the heal design.

Yes, it is a day-1 issue; the same problem exists with rebalance, quota, and marker+geo-rep as well.

Comment 7 Atin Mukherjee 2018-11-12 02:32:50 UTC
Does this depend on the ctime feature? If so, should the bug status change to POST?

Comment 8 Pranith Kumar K 2018-11-12 11:36:32 UTC
Let me test and do the appropriate status change.

Comment 23 errata-xmlrpc 2020-12-17 04:50:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

