Bug 1397286
| Summary: | Wrong value in Last Synced column during Hybrid Crawl | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rahul Hinduja <rhinduja> |
| Component: | geo-replication | Assignee: | Aravinda VK <avishwan> |
| Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.2 | CC: | amukherj, avishwan, bugs, csaba, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.8.4-6 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1396081 | Environment: | |
| Last Closed: | 2017-03-23 06:20:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1396081, 1399468, 1399470 | | |
| Bug Blocks: | 1351528 | | |
Description
Rahul Hinduja 2016-11-22 07:40:01 UTC
Upstream mainline patch http://review.gluster.org/#/c/15869 posted for review.

Upstream patches:
(3.8) http://review.gluster.org/15961
(3.9) http://review.gluster.org/15962

Downstream patch: https://code.engineering.redhat.com/gerrit/91509

Verified with build: glusterfs-geo-replication-3.8.4-15.el7rhgs.x86_64

Following steps were carried out:
=================================
1. Create master and slave volumes
2. Create data on the master volume
3. Create the geo-rep session and change the change_detector to xsync
4. Start the geo-rep session
5. While the sync is in progress, check the geo-rep status

Observation 1: Last Synced remained N/A until the whole sync completed.
Observation 2: Directories were created at the slave, and the stime on the sub-directory differed from the stime on the root. Last Synced stayed the same until the root directory's stime was updated with a new value.

Based on the above two observations, moving this bug to VERIFIED.

[root@dhcp42-7 brick1]# getfattr -d -e hex -m . b1/thread0/level00/level10/level20/level30/level40/level50/level60/level70/level80/level90/
# file: b1/thread0/level00/level10/level20/level30/level40/level50/level60/level70/level80/level90/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0xdeedbf75c0da47feb7d42a02e0647ea5
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.9d20ea13-dd77-407b-8a84-0e7d3a9e3f3e.stime=0x58ab2d72000d47ae
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.xtime=0x58ab2fd9000a3d3f
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff

[root@dhcp42-7 brick1]# getfattr -d -e hex -m . b1/
# file: b1/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.9d20ea13-dd77-407b-8a84-0e7d3a9e3f3e.stime=0x58ab2f45000bc39d
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.xtime=0x58ab2fdc0006eba0
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.volume-id=0x322d54fb12984da8ac92230c4dfdb198

[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave status

MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------
10.70.42.7      master        /rhs/brick1/b1     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:02:45
10.70.42.7      master        /rhs/brick2/b5     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:02:45
10.70.42.7      master        /rhs/brick3/b9     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:02:45
10.70.41.211    master        /rhs/brick1/b2     root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.41.211    master        /rhs/brick2/b6     root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.41.211    master        /rhs/brick3/b10    root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.43.141    master        /rhs/brick1/b3     root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    N/A
10.70.43.141    master        /rhs/brick2/b7     root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    N/A
10.70.43.141    master        /rhs/brick3/b11    root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    N/A
10.70.43.156    master        /rhs/brick1/b4     root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick2/b8     root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick3/b12    root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
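The session setup in steps 1-5 above can be sketched with the gluster CLI. This is a sketch under assumptions: the volume names `master` and `slavehost::slave` are placeholders (not the hosts from this report), and passwordless SSH to the slave node is assumed to be in place.

```shell
# Create the geo-replication session (push-pem distributes the SSH keys).
gluster volume geo-replication master slavehost::slave create push-pem

# Step 3: switch change detection from the changelog to the xsync
# (filesystem-crawl) mode, which triggers the Hybrid Crawl status.
gluster volume geo-replication master slavehost::slave config change_detector xsync

# Steps 4-5: start syncing, then watch the LAST_SYNCED column while
# the crawl is still in progress.
gluster volume geo-replication master slavehost::slave start
gluster volume geo-replication master slavehost::slave status
```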
[root@dhcp42-7 brick1]# getfattr -d -e hex -m . b1/thread0/level00/level10/level20/level30/level40/level50/level60/level70/level80/level90/
# file: b1/thread0/level00/level10/level20/level30/level40/level50/level60/level70/level80/level90/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0xdeedbf75c0da47feb7d42a02e0647ea5
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.9d20ea13-dd77-407b-8a84-0e7d3a9e3f3e.stime=0x58ab2d72000d47ae
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.xtime=0x58ab2fdc0006eba0
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff

[root@dhcp42-7 brick1]# getfattr -d -e hex -m . b1
# file: b1
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.9d20ea13-dd77-407b-8a84-0e7d3a9e3f3e.stime=0x58ab2f45000bc39d
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.xtime=0x58ab2fdc0006eba0
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.volume-id=0x322d54fb12984da8ac92230c4dfdb198

[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave status

MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------
10.70.42.7      master        /rhs/brick1/b1     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:02:45
10.70.42.7      master        /rhs/brick2/b5     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:02:45
10.70.42.7      master        /rhs/brick3/b9     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:02:45
10.70.41.211    master        /rhs/brick1/b2     root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.41.211    master        /rhs/brick2/b6     root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.41.211    master        /rhs/brick3/b10    root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick1/b4     root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick2/b8     root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick3/b12    root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.141    master        /rhs/brick1/b3     root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    N/A
10.70.43.141    master        /rhs/brick2/b7     root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    N/A
10.70.43.141    master        /rhs/brick3/b11    root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    N/A

[root@dhcp42-7 brick1]# getfattr -d -e hex -m . b1
# file: b1
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.9d20ea13-dd77-407b-8a84-0e7d3a9e3f3e.stime=0x58ab2fdc0006eba0
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.xtime=0x58ab2fdc0006eba0
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.volume-id=0x322d54fb12984da8ac92230c4dfdb198

[root@dhcp42-7 brick1]# getfattr -d -e hex -m . b1/thread0/level00/level10/level20/level30/level40/level50/level60/level70/level80/level90/
# file: b1/thread0/level00/level10/level20/level30/level40/level50/level60/level70/level80/level90/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.gfid=0xdeedbf75c0da47feb7d42a02e0647ea5
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.9d20ea13-dd77-407b-8a84-0e7d3a9e3f3e.stime=0x58ab2fdc0006eba0
trusted.glusterfs.322d54fb-1298-4da8-ac92-230c4dfdb198.xtime=0x58ab2fdc0006eba0
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff

[root@dhcp42-7 brick1]# gluster volume geo-replication master 10.70.43.249::slave status

MASTER NODE     MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------
10.70.42.7      master        /rhs/brick1/b1     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:05:16
10.70.42.7      master        /rhs/brick2/b5     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:05:16
10.70.42.7      master        /rhs/brick3/b9     root          10.70.43.249::slave    10.70.43.208    Active     Hybrid Crawl    2017-02-20 18:05:16
10.70.41.211    master        /rhs/brick1/b2     root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.41.211    master        /rhs/brick2/b6     root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.41.211    master        /rhs/brick3/b10    root          10.70.43.249::slave    10.70.43.196    Passive    N/A             N/A
10.70.43.141    master        /rhs/brick1/b3     root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    2017-02-20 18:05:16
10.70.43.141    master        /rhs/brick2/b7     root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    2017-02-20 18:05:16
10.70.43.141    master        /rhs/brick3/b11    root          10.70.43.249::slave    10.70.43.249    Active     Hybrid Crawl    2017-02-20 18:05:16
10.70.43.156    master        /rhs/brick1/b4     root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick2/b8     root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A
10.70.43.156    master        /rhs/brick3/b12    root          10.70.43.249::slave    10.70.41.187    Passive    N/A             N/A

[root@dhcp42-7 brick1]#

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
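Observation 2 can be checked numerically against the logs above. Assuming the 64-bit stime xattr packs epoch seconds in the high 32 bits and a sub-second count in the low 32 bits (an inference from the values in this report, not documented behavior), the root directory's stime decodes to exactly the value shown in the Last Synced column:

```shell
# Root brick stime taken verbatim from the getfattr output on b1 above.
# Assumed layout: high 32 bits = epoch seconds, low 32 bits = sub-second part.
stime=0x58ab2f45000bc39d

secs=$(( (stime >> 32) & 0xffffffff ))   # epoch seconds
sub=$((  stime         & 0xffffffff ))   # sub-second part

echo "$secs $sub"
# → 1487613765 770973
date -u -d "@$secs" '+%Y-%m-%d %H:%M:%S'
# → 2017-02-20 18:02:45
```

That matches the Last Synced value reported for /rhs/brick1/b1 (2017-02-20 18:02:45), supporting the observation that Last Synced tracks the root directory's stime rather than the sub-directory stimes.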