Description of problem:

Programs that set mtime, such as `rsync -a`, don't work correctly on GlusterFS, because it sets the nanoseconds to 000. This creates problems for incremental backups, where files get accidentally copied again and again.

For example, consider `myfile` on an ext4 system, being copied to a GlusterFS volume with `rsync -a` and then `cp -u` in turn. You'd expect that after the first `rsync -a`, `cp -u` agrees that the file need not be copied:

$ cp -u -v myfile /mnt
'myfile' -> '/mnt/myfile'
$ cp -u -v myfile /mnt
$ rsync -a myfile /mnt
$ cp -u -v myfile /mnt
'myfile' -> '/mnt/myfile'

It copied it again!

Version-Release number of selected component (if applicable):
3.9.1

How reproducible:
Always

Steps to Reproduce:
With gluster mounted on /mnt:
1. rm -f /mnt/file && touch /mnt/file
2. # now `stat /mnt/file` shows a nanosecond mtime
3. touch -d '2017-01-01 00:00:00.123456001' /mnt/file

Actual results:
stat shows .123456000; the 001 is gone.

Expected results:
stat shows '2017-01-01 00:00:00.123456001'

Additional info:
JoeJulian on IRC pointed at the code:
https://github.com/gluster/glusterfs/blob/c8a23cc6cd289dd28deb136bf2550f28e2761ef3/libglusterfs/src/common-utils.c#L3800-L3841
with the comment:

/* The granularity is micro seconds as per the current
 * requiremnt. Hence using 'utimes'. This can be updated
 * to 'utimensat' if we need timestamp in nanoseconds. */

Please support nanoseconds! It would unbreak lots of backup tools, avoiding unnecessary copying and surprising behaviour.
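For comparison, here is a small sketch of the expected behaviour on a local filesystem such as ext4 or tmpfs, where the trailing 001 survives (the temp-file path is whatever `mktemp` picks; on the GlusterFS mount above, the same `touch` would come back with .123456000):

```shell
# Set a nanosecond-precision mtime on a local file and read it back.
f=$(mktemp)
touch -d '2017-01-01 00:00:00.123456001' "$f"
# %y prints the mtime in full precision, e.g. ...00:00:00.123456001...
mtime=$(stat -c '%y' "$f")
echo "$mtime"
rm -f "$f"
```

This is the behaviour GlusterFS would gain by switching from `utimes` (microsecond granularity) to `utimensat` (nanosecond granularity).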
I've made a patch that implements this at https://github.com/gluster/glusterfs/compare/master...nh2:nanosecond-timestamps
(In reply to nh2 from comment #1)
> I've made a patch that implements this at
> https://github.com/gluster/glusterfs/compare/master...nh2:nanosecond-
> timestamps

Thanks! I had a quick glance over it and it looks ok to me. Could you send this to our Gerrit instance for review? The workflow to do that is listed here:
http://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/

Make sure to have a single-line subject, followed by a description of the problem and solution. Similar to this:

--
posix: use nanosecond accuracy

Programs that set mtime, such as `rsync -a`, don't work correctly on
GlusterFS, because it sets the nanoseconds to 000. This creates problems
for incremental backups, where files get accidentally copied again and
again. For example, consider `myfile` on an ext4 system, being copied to
a GlusterFS volume, with `rsync -a` and then `cp -u` in turn. You'd
expect that after the first `rsync -a`, `cp -u` agrees that the file
need not be copied.

BUG: 1422074
Signed-off-by: Real Name <email>
--

Note that the BUG: tag points to the bug that was cloned for the master branch. We need patches in the master branch before they get backported to stable releases.

The Gerrit tools will also insert a Change-Id. This is used by Gerrit to track updates/modifications to patches. You are not supposed to change the Change-Id once you posted the change to Gerrit.

Let me know if you hit any problems, or would like another developer to take care of this in your name. Thanks!
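The commit-message shape described above can be sketched locally like this (a throwaway repo and dummy file stand in for the real glusterfs tree; the final Gerrit push is only shown as a comment since it needs review access):

```shell
# Create a throwaway repo to demonstrate the single-line subject,
# body paragraph, BUG: tag, and Signed-off-by (-s) layout.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name 'Real Name'
echo 'nanosecond fix' > posix.c
git add posix.c
git commit -q -s \
  -m 'posix: use nanosecond accuracy' \
  -m 'Programs that set mtime, such as rsync -a, do not work correctly on
GlusterFS, because it sets the nanoseconds to 000.' \
  -m 'BUG: 1422074'
subject=$(git log -1 --format=%s)
echo "$subject"
# ./rfc.sh   # the push-to-Gerrit step from the workflow doc
```

Each `-m` becomes its own paragraph, so the first one stays a clean single-line subject for Gerrit.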
I've asked for feedback about this, but so far nobody has reported any problems.
Should this ticket be closed now that it's fixed with https://bugzilla.redhat.com/show_bug.cgi?id=1422074 ?

I'm not quite sure what the purpose of the ticket clone is. Is it to have the same issue filed once for the master branch and once for the 3.9 version?
(In reply to nh2 from comment #4)
> Should this ticket be closed now that it's fixed with
> https://bugzilla.redhat.com/show_bug.cgi?id=1422074 ?
>
> I'm not quite sure what the purpose of the ticket clone is, is it to have
> the same issue once for the master branch and once for the 3.9 version?

Bug 1422074 is not completely fixed yet. Bugs get closed when the patch is part of a released (minor) version. For that bug, it will be GlusterFS 3.11 (in approx. 3 months).

This bug is intended to cherry-pick the backport to (currently) 3.9. However 3.9 is EOL with the recent release of 3.10. If you want the fix in a next 3.10 update, you can update the version of this bug and send a backport to the release-3.10 branch. Steps to do so are documented on
http://gluster.readthedocs.io/en/latest/Developer-guide/Backport-Guidelines/
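The backport flow just described boils down to cherry-picking the master commit onto the release-3.10 branch. A minimal sketch, using a throwaway stand-in repo rather than the real glusterfs tree (`-x` records the original commit hash in the backport's message):

```shell
# Build a tiny repo with a master-side fix and a release-3.10 branch.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name 'Dev'
git commit -q --allow-empty -m 'initial'
git branch release-3.10
echo 'nanosecond fix' > posix.c
git add posix.c
git commit -q -m 'posix: use nanosecond accuracy'
fix_commit=$(git rev-parse HEAD)

# Backport: switch to the stable branch and cherry-pick with -x.
git checkout -q release-3.10
git cherry-pick -x "$fix_commit" >/dev/null
backported=$(git log -1 --format=%s)
echo "$backported"
# then update the BUG: tag to the 3.10 clone bug and push via ./rfc.sh
```

In the real tree you would also change the BUG: tag to the cloned 3.10 bug before pushing, per the Backport Guidelines linked above.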
This bug is getting closed because GlusterFS-3.9 has reached its end-of-life [1]. Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please open a new bug against the newer release. [1]: https://www.gluster.org/community/release-schedule/