Bug 1761531
| Field | Value |
| --- | --- |
| Summary | heal not actually healing metadata of a regular file when only time stamps are changed (data heal not required) |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | replicate |
| Version | rhgs-3.5 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Reporter | Nag Pavan Chilakam <nchilaka> |
| Assignee | Sheetal Pamecha <spamecha> |
| QA Contact | Arthy Loganathan <aloganat> |
| CC | pasik, pprakash, puebele, rhs-bugs, rkothiya, sheggodu, storage-qa-internal |
| Target Release | RHGS 3.5.z Batch Update 3 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-6.0-38 |
| Doc Type | No Doc Update |
| Clones | 1787274 (view as bug list) |
| Bug Blocks | 1787274 |
| Type | Bug |
| Last Closed | 2020-12-17 04:50:17 UTC |
Description  Nag Pavan Chilakam  2019-10-14 15:09:11 UTC
REVIEW: https://review.gluster.org/23953 (afr: restore timestamp of files during metadata heal) posted (#1) for review on master by Sheetal Pamecha

Performed the following steps to verify the fix.

1) Create a 1x3 replicate volume and mount it:

```
[root@dhcp46-157 ~]# gluster vol create vol5 replica 3 10.70.46.157:/bricks/brick4/vol5_brick0 10.70.46.56:/bricks/brick4/vol5_brick0 10.70.47.142:/bricks/brick4/vol5_brick0
volume create: vol5: success: please start the volume to access data
[root@dhcp46-157 ~]# gluster vol start vol5
volume start: vol5: success
[root@dhcp46-157 ~]# gluster vol info vol5

Volume Name: vol5
Type: Replicate
Volume ID: edfbd61a-2e9f-49e2-84ae-6dfa4406e0e8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.157:/bricks/brick4/vol5_brick0
Brick2: 10.70.46.56:/bricks/brick4/vol5_brick0
Brick3: 10.70.47.142:/bricks/brick4/vol5_brick0
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.brick-multiplex: off

[root@dhcp37-62 ~]# mkdir /mnt/glusterfs_vol5
[root@dhcp37-62 ~]# mount -t glusterfs 10.70.46.157:/vol5 /mnt/glusterfs_vol5
```

2) Note the stat of test/f2 on the client mount and on a brick:

```
[root@dhcp37-62 glusterfs_vol5]# mkdir test
[root@dhcp37-62 glusterfs_vol5]# touch test/f2
[root@dhcp37-62 glusterfs_vol5]# stat test/f2
  File: ‘test/f2’
  Size: 0         Blocks: 0          IO Block: 131072 regular empty file
Device: 2ah/42d   Inode: 12013530131641530147  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-10-28 20:10:49.433946814 +0530
Modify: 2020-10-28 20:10:49.433946814 +0530
Change: 2020-10-28 20:10:49.432026368 +0530
 Birth: -

[root@dhcp46-157 ~]# stat /bricks/brick4/vol5_brick0/test/f2
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         Blocks: 0          IO Block: 4096   regular empty file
Device: fd25h/64805d  Inode: 34284610  Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:10:49.433946814 +0530
Modify: 2020-10-28 20:10:49.433946814 +0530
Change: 2020-10-28 20:10:49.433927289 +0530
 Birth: -

[root@dhcp46-157 ~]# gluster vol status vol5
Status of volume: vol5
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.157:/bricks/brick4/vol5_brick0  49156     0          Y       29949
Brick 10.70.46.56:/bricks/brick4/vol5_brick0   49153     0          Y       23750
Brick 10.70.47.142:/bricks/brick4/vol5_brick0  49154     0          Y       11379
Self-heal Daemon on localhost                  N/A       N/A        Y       29966
Self-heal Daemon on 10.70.47.142               N/A       N/A        Y       11552
Self-heal Daemon on 10.70.47.175               N/A       N/A        Y       26227
Self-heal Daemon on 10.70.46.56                N/A       N/A        Y       23979

Task Status of Volume vol5
------------------------------------------------------------------------------
There are no active volume tasks
```

3) Bring down brick b1 by killing its brick process:

```
[root@dhcp46-157 ~]# kill -s 9 29949
[root@dhcp46-157 ~]# gluster vol status vol5
Status of volume: vol5
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.46.157:/bricks/brick4/vol5_brick0  N/A       N/A        N       N/A
Brick 10.70.46.56:/bricks/brick4/vol5_brick0   49153     0          Y       23750
Brick 10.70.47.142:/bricks/brick4/vol5_brick0  49154     0          Y       11379
Self-heal Daemon on localhost                  N/A       N/A        Y       29966
Self-heal Daemon on 10.70.47.175               N/A       N/A        Y       26227
Self-heal Daemon on 10.70.46.56                N/A       N/A        Y       23979
Self-heal Daemon on 10.70.47.142               N/A       N/A        Y       11552

Task Status of Volume vol5
------------------------------------------------------------------------------
There are no active volume tasks
```
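Not part of the original transcript, but a useful sanity check at this point: once the file is modified in the next step, the surviving bricks should record a pending heal against the downed brick. A minimal sketch using the standard gluster CLI (volume name as above; output omitted since it varies by build):

```sh
# List entries pending heal per brick; after the touch in step 4,
# test/f2 should be listed under the two surviving bricks (they hold
# the good copy that the downed brick must catch up to).
gluster volume heal vol5 info

# Condensed per-brick counts of pending heals.
gluster volume heal vol5 info summary
```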
4) touch test/f2 to change its timestamps while the brick is down:

```
[root@dhcp37-62 glusterfs_vol5]# touch test/f2
[root@dhcp37-62 glusterfs_vol5]# stat test/f2
  File: ‘test/f2’
  Size: 0         Blocks: 0          IO Block: 131072 regular empty file
Device: 2ah/42d   Inode: 12013530131641530147  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:12:38.057733209 +0530
 Birth: -
```

The two surviving bricks pick up the new timestamps:

```
[root@dhcp46-56 ~]# stat /bricks/brick4/vol5_brick0/test/f2
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         Blocks: 0          IO Block: 4096   regular empty file
Device: fd20h/64800d  Inode: 34259010  Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:12:38.057733209 +0530
 Birth: -

[root@dhcp47-142 ~]# stat /bricks/brick4/vol5_brick0/test/f2
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         Blocks: 0          IO Block: 4096   regular empty file
Device: fd17h/64791d  Inode: 34245186  Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:12:38.057281843 +0530
 Birth: -
```

5) Bring the brick back up and let the heal complete:

```
[root@dhcp46-157 ~]# gluster vol start vol5 force
volume start: vol5: success
```

6) Check the stat of test/f2 on the previously-down brick:

```
[root@dhcp46-157 ~]# stat /bricks/brick4/vol5_brick0/test/f2
  File: ‘/bricks/brick4/vol5_brick0/test/f2’
  Size: 0         Blocks: 0          IO Block: 4096   regular empty file
Device: fd25h/64805d  Inode: 34284610  Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:glusterd_brick_t:s0
Access: 2020-10-28 20:12:38.054795453 +0530
Modify: 2020-10-28 20:12:38.054795453 +0530
Change: 2020-10-28 20:13:13.928229649 +0530
 Birth: -
```

atime and mtime on the healed brick now match the latest timestamps on the other replicas, so metadata heal restored them as expected.

Verified the fix in glusterfs-server-6.0-46.el7rhgs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
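For anyone re-running this verification, the per-host stat calls above can be scripted. A small hypothetical helper, not part of the report: it assumes passwordless root ssh to the three brick hosts and GNU coreutils stat on each (hostnames and brick path taken from this report):

```sh
#!/bin/bash
# Compare f2's timestamps across all three bricks; after a successful
# metadata heal, every brick should report identical atime and mtime.
for host in 10.70.46.157 10.70.46.56 10.70.47.142; do
    echo "== ${host} =="
    ssh "root@${host}" \
        stat -c 'atime=%x  mtime=%y  ctime=%z' \
        /bricks/brick4/vol5_brick0/test/f2
done
```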