Bug 1318030

Summary: File contents not being updated when file size is not changed
Product: Red Hat Gluster Storage        Reporter: CJ Beck <chris.beck>
Component: geo-replication              Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX                  QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high                          Docs Contact:
Priority: unspecified
Version: unspecified                    CC: chrisw, csaba, nlevinki
Target Milestone: ---                   Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:                       Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:                               Environment:
Last Closed: 2018-04-16 15:58:05 UTC    Type: Bug
Regression: ---                         Mount Type: ---
Documentation: ---                      CRM:
Verified Versions:                      Category: ---
oVirt Team: ---                         RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---                    Target Upstream Version:

Description CJ Beck 2016-03-15 19:22:10 UTC
Description of problem:
We have hit this issue a few times now: a file (specifically, the repomd.xml file for yum) was updated on the source of a geo-replication configuration, but on the destinations the only thing that was updated was the file's timestamp. The contents were not updated.

Version-Release number of selected component (if applicable): 3.7.5


How reproducible: Can't reproduce it in the lab; it occurs only in our production environment. We now have a workaround in place (remove the file, then create it again).


Steps to Reproduce:
1. Create a new file in temp space
2. Copy the new file in place of the old file
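The two steps above can be sketched as a shell snippet. MASTER_MNT and the repodata/repomd.xml path are hypothetical stand-ins for a client mount of the source (master) volume; MASTER_MNT defaults to a scratch directory so the sketch runs stand-alone. Note that the replacement file is deliberately the same size as the old one, so only the mtime changes:

```shell
#!/bin/sh
# Sketch of the reproduction steps (hypothetical paths, not our real layout).
set -e
MASTER_MNT=${MASTER_MNT:-$(mktemp -d)}
mkdir -p "$MASTER_MNT/repodata"
printf 'old contents!' > "$MASTER_MNT/repodata/repomd.xml"

tmp=$(mktemp)                                  # 1. create new file in temp space
printf 'new contents!' > "$tmp"                # same size as the old file
cp "$tmp" "$MASTER_MNT/repodata/repomd.xml"    # 2. copy it in place of the old file
rm -f "$tmp"
cat "$MASTER_MNT/repodata/repomd.xml"
```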

Actual results:
File contents and md5sum are correct on the source of the geo-replication ring. The remote destinations, however, have only the new timestamp, not the new contents of the file.

Expected results:
File contents and md5sum are the same on all gluster geo-replicated volumes.


Additional info:
Here is the configuration of our geo-replication setup. This could be because geo-replication's "default" method of deciding whether a file needs updating compares timestamp and size rather than a checksum. We wanted to verify whether what we are seeing is expected behavior, though.

[root.ourenv.cust.city.wd ~]# gluster vol geo-replication foobar-ourenv-eng ssh://repluser.nprd.loc.wd::foobar-city-eng-az1 config
special_sync_mode: partial
state_socket_unencoded: /var/lib/glusterd/geo-replication/foobar-ourenv-eng_svc0009.svc.nprd.loc.wd_foobar-city-eng-az1/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1.socket
gluster_log_file: /var/log/glusterfs/geo-replication/foobar-ourenv-eng/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
state_file: /var/lib/glusterd/geo-replication/foobar-ourenv-eng_svc0009.svc.nprd.loc.wd_foobar-city-eng-az1/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1.status
remote_gsyncd: /nonexistent/gsyncd
log_file: /var/log/glusterfs/geo-replication/foobar-ourenv-eng/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1.log
changelog_log_file: /var/log/glusterfs/geo-replication/foobar-ourenv-eng/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1-changes.log
socketdir: /var/run/gluster
working_dir: /var/lib/misc/glusterfsd/foobar-ourenv-eng/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1
state_detail_file: /var/lib/glusterd/geo-replication/foobar-ourenv-eng_svc0009.svc.nprd.loc.wd_foobar-city-eng-az1/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1-detail.status
session_owner: c7ccaa18-e2dc-40f5-8d37-4ac79362b267
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file: /var/lib/glusterd/geo-replication/foobar-ourenv-eng_svc0009.svc.nprd.loc.wd_foobar-city-eng-az1/ssh%3A%2F%2Frepluser%4010.180.10.9%3Agluster%3A%2F%2F127.0.0.1%3Afoobar-city-eng-az1.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/foobar-ourenv-eng_svc0009.svc.nprd.loc.wd_foobar-city-eng-az1/
gluster_params: aux-gfid-mount acl
volume_id: c7ccaa18-e2dc-40f5-8d37-4ac79362b267
[root.ourenv.cust.city.wd ~]#