Bug 1761531 - heal not actually healing metadata of a regular file when only timestamps are changed (data heal not required)
Summary: heal not actually healing metadata of a regular file when only timestamps are changed (data heal not required)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Sheetal Pamecha
QA Contact: Arthy Loganathan
URL:
Whiteboard:
Depends On:
Blocks: 1787274
 
Reported: 2019-10-14 15:09 UTC by Nag Pavan Chilakam
Modified: 2020-12-17 04:50 UTC
CC List: 7 users

Fixed In Version: glusterfs-6.0-38
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned to: 1787274
Environment:
Last Closed: 2020-12-17 04:50:17 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2020:5603 (last updated 2020-12-17 04:50:33 UTC)

Description Nag Pavan Chilakam 2019-10-14 15:09:11 UTC
Description of problem:
=======================
When a regular file's timestamps are updated using touch while a brick is down, the file is marked for healing. Once the brick comes back up, healing completes but does not update the timestamps on the sink brick.


Version-Release number of selected component (if applicable):
==========
6.0.15

How reproducible:
============
always

Steps to Reproduce:
1. create a 1x3 volume and mount it over FUSE
2. create a file f1, wait for 2 minutes, and note down the stat from all bricks and the mount point
3. now bring down a brick
4. update the timestamps of the file f1 using the touch command, i.e. "touch f1"
5. observe that the a/m/ctimes are updated on the 2 online bricks, and that heal info lists f1 as an entry to be healed
6. start the volume with force to bring the brick back online and wait for the heal to complete (a condensed command sketch follows)
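
For reference, a condensed command sketch of the above steps (the volume name, host names, brick paths, and brick PID below are illustrative placeholders, not values from this report):

# 1x3 replica volume, FUSE mount
gluster volume create testvol replica 3 host1:/bricks/b0 host2:/bricks/b0 host3:/bricks/b0
gluster volume start testvol
mount -t glusterfs host1:/testvol /mnt/testvol

# create the file and record its timestamps everywhere
touch /mnt/testvol/f1
sleep 120
stat /mnt/testvol/f1               # repeat on each brick backend

# kill one brick process (PID from 'gluster volume status testvol'), then touch again
kill -s 9 <brick-pid>
touch /mnt/testvol/f1              # a/m/ctimes change on the two online bricks
gluster volume heal testvol info   # f1 is listed for heal

# bring the brick back, wait for heal, then compare stat on the healed brick
gluster volume start testvol force
gluster volume heal testvol info   # wait until "Number of entries: 0"
stat /bricks/b0/f1                 # on the previously-down brick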

Actual results:
================
it can be seen that, post healing, the new timestamps are not propagated to the brick that was down, effectively leaving the file unhealed
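
The mismatch can be observed by comparing the file's stat on the brick backends directly (brick paths here are illustrative placeholders):

stat /brick-that-was-down/f1    # atime/mtime still show the old values
stat /online-brick/f1           # atime/mtime show the values set by touch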


Expected results:
===============
metadata heal should restore the timestamps when only the timestamps have changed

Comment 5 Sheetal Pamecha 2020-01-02 06:45:00 UTC
REVIEW: https://review.gluster.org/23953 (afr: restore timestamp of files during metadata heal) posted (#1) for review on master by Sheetal Pamecha
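
Conceptually, the patch makes metadata heal carry the source brick's atime/mtime over to the sink copy. For illustration only (the heal is performed inside the AFR translator; brick backends should never be modified by hand), the effect is equivalent to:

# copy the access and modification times of the source copy onto the sink copy
touch -r /source-brick/path/f1 /sink-brick/path/f1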

Comment 9 Arthy Loganathan 2020-10-29 12:31:02 UTC
Performed the following steps to verify the fix.

1) Create a 1x3 volume and mount it
[root@dhcp46-157 ~]# gluster vol create vol5 replica 3 10.70.46.157:/bricks/brick4/vol5_brick0 10.70.46.56:/bricks/brick4/vol5_brick0 10.70.47.142:/bricks/brick4/vol5_brick0 
volume create: vol5: success: please start the volume to access data

[root@dhcp46-157 ~]# gluster vol start vol5
volume start: vol5: success
[root@dhcp46-157 ~]# 
[root@dhcp46-157 ~]# gluster vol info vol5
 
Volume Name: vol5
Type: Replicate
Volume ID: edfbd61a-2e9f-49e2-84ae-6dfa4406e0e8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.157:/bricks/brick4/vol5_brick0
Brick2: 10.70.46.56:/bricks/brick4/vol5_brick0
Brick3: 10.70.47.142:/bricks/brick4/vol5_brick0
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.brick-multiplex: off

[root@dhcp37-62 ~]# mkdir /mnt/glusterfs_vol5
[root@dhcp37-62 ~]# mount -t glusterfs 10.70.46.157:/vol5 /mnt/glusterfs_vol5
[root@dhcp37-62 ~]# 

2) Note the stat of test/f2
[root@dhcp37-62 glusterfs_vol5]# mkdir test
[root@dhcp37-62 glusterfs_vol5]# touch test/f2
[root@dhcp37-62 glusterfs_vol5]# 
[root@dhcp37-62 glusterfs_vol5]# stat test/f2
  File: ‘test/f2’
  Size: 0         	Blocks: 0          IO Block: 131072 regular empty file
Device: 2ah/42d	Inode: 12013530131641530147  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-10-28 20:10:49.433946814 +0530
Modify: 2020-10-28 20:10:49.433946814 +0530
Change: 2020-10-28 20:10:49.432026368 +0530
 Birth: -

[root@dhcp46-157 ~]# stat /bricks/brick4/vol5_brick0/test/f2 
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file
Device: fd25h/64805d	Inode: 34284610    Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:10:49.433946814 +0530
Modify: 2020-10-28 20:10:49.433946814 +0530
Change: 2020-10-28 20:10:49.433927289 +0530
 Birth: -
[root@dhcp46-157 ~]# 
[root@dhcp46-157 ~]# gluster vol status vol5
Status of volume: vol5
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.157:/bricks/brick4/vol5_bric
k0                                          49156     0          Y       29949
Brick 10.70.46.56:/bricks/brick4/vol5_brick
0                                           49153     0          Y       23750
Brick 10.70.47.142:/bricks/brick4/vol5_bric
k0                                          49154     0          Y       11379
Self-heal Daemon on localhost               N/A       N/A        Y       29966
Self-heal Daemon on 10.70.47.142            N/A       N/A        Y       11552
Self-heal Daemon on 10.70.47.175            N/A       N/A        Y       26227
Self-heal Daemon on 10.70.46.56             N/A       N/A        Y       23979
 
Task Status of Volume vol5
------------------------------------------------------------------------------
There are no active volume tasks
 
3) Bring down b1

[root@dhcp46-157 ~]# kill -s 9 29949
[root@dhcp46-157 ~]# 
[root@dhcp46-157 ~]# gluster vol status vol5
Status of volume: vol5
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.157:/bricks/brick4/vol5_bric
k0                                          N/A       N/A        N       N/A  
Brick 10.70.46.56:/bricks/brick4/vol5_brick
0                                           49153     0          Y       23750
Brick 10.70.47.142:/bricks/brick4/vol5_bric
k0                                          49154     0          Y       11379
Self-heal Daemon on localhost               N/A       N/A        Y       29966
Self-heal Daemon on 10.70.47.175            N/A       N/A        Y       26227
Self-heal Daemon on 10.70.46.56             N/A       N/A        Y       23979
Self-heal Daemon on 10.70.47.142            N/A       N/A        Y       11552
 
Task Status of Volume vol5
------------------------------------------------------------------------------
There are no active volume tasks
 
4) Touch test/f2 to change the stat
[root@dhcp37-62 glusterfs_vol5]# touch test/f2
[root@dhcp37-62 glusterfs_vol5]# stat test/f2
  File: ‘test/f2’
  Size: 0         	Blocks: 0          IO Block: 131072 regular empty file
Device: 2ah/42d	Inode: 12013530131641530147  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:12:38.057733209 +0530
 Birth: -


--------------------------
[root@dhcp46-56 ~]# stat /bricks/brick4/vol5_brick0/test/f2
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file
Device: fd20h/64800d	Inode: 34259010    Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:12:38.057733209 +0530
 Birth: -
[root@dhcp46-56 ~]# 
--------------------------
[root@dhcp47-142 ~]# stat /bricks/brick4/vol5_brick0/test/f2
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file
Device: fd17h/64791d	Inode: 34245186    Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:12:38.057281843 +0530

5) Bring the brick up and complete heal
[root@dhcp46-157 ~]# gluster vol start vol5 force
volume start: vol5: success
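
Heal completion can be confirmed before re-checking stat (this command was not part of the captured transcript):

gluster vol heal vol5 info    # should report "Number of entries: 0" for each brick once the heal finishes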

6) Check the stat of test/f2

[root@dhcp46-157 ~]# 
[root@dhcp46-157 ~]# stat /bricks/brick4/vol5_brick0/test/f2 
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file
Device: fd25h/64805d	Inode: 34284610    Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:13:13.928229649 +0530
 Birth: -

mtime and atime on the previously-down brick are updated to the latest timestamps. (The Change time reflects the moment the heal applied the attributes, since ctime cannot be set explicitly.)

Verified the fix in:

glusterfs-server-6.0-46.el7rhgs.x86_64
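
The installed build can be confirmed with, for example:

rpm -q glusterfs-server    # expected: glusterfs-server-6.0-46.el7rhgs.x86_64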

Comment 11 errata-xmlrpc 2020-12-17 04:50:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

