Description of problem:
As per the documented sequence of steps, 'glusterfind create' is to be followed by a 'pre' and a 'post' to generate the list of changed files and to update the timestamp to the latest one, respectively. There may be times when the admin forgets to run the 'post' step, leaving the timestamp stale and resulting in an incorrect list of changed files. When any 'glusterfind pre' command is given, we could check whether the 'glusterfind post' command has been executed, e.g. by checking for the existence of the 'status.pre' file in the $SESSION_DIR location, or by other means. A confirmation message letting the user know that 'glusterfind post' has not been run, and asking whether they would still like to continue, would help.

Version-Release number of selected component (if applicable):
GlusterFS upstream nightly glusterfs-3.7dev-0.777.git2308c07.el6.x86_64

How reproducible:
Easily. This would particularly happen if the user chooses NOT to use the wrapper script that we are going to provide, and goes ahead with the CLI commands instead.

Steps to Reproduce:
1. Have a glusterfs volume and create a session using 'glusterfind create'
2. Create new files in the volume
3. Run the 'glusterfind pre' command
4. Check whether the output file mentions the files created in step 2
5. Create more new files in the volume
6. Run the 'glusterfind pre' command again
7. Check whether the output file mentions the files created in step 2 as well as step 5

Actual results:
No warning is given; the output file in step 7 contains an aggregated list of the files created in step 2 and step 5.

Expected results:
Step 6 should ask the user for a confirmation stating something like this: "glusterfind post command has not been run after running the pre command last time. This would result in an aggregated list of changed files. Would you still like to continue? Y/N"

Additional info:
[root@dhcp43-140 ~]# rpm -qa | grep glusterfs
glusterfs-regression-tests-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-resource-agents-3.7dev-0.777.git2308c07.el6.noarch
glusterfs-debuginfo-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-libs-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-api-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-devel-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-cli-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-server-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-geo-replication-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-rdma-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-ganesha-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-fuse-3.7dev-0.777.git2308c07.el6.x86_64
glusterfs-api-devel-3.7dev-0.777.git2308c07.el6.x86_64
[root@dhcp43-140 ~]#
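The check proposed above could look roughly like the following Python sketch. This is illustrative only, not the actual glusterfind implementation; the 'status.pre' filename and $SESSION_DIR layout are taken from the description, and the helper names are hypothetical.

```python
import os


def post_pending(session_dir):
    # A leftover status.pre file means the last 'pre' was never
    # finalized with 'glusterfind post' (filename as per the
    # description above).
    return os.path.exists(os.path.join(session_dir, "status.pre"))


def confirm_pre(session_dir, ask=input):
    # Warn the admin before running 'pre' again; 'ask' is injectable
    # so the prompt can be scripted or tested.
    if not post_pending(session_dir):
        return True
    reply = ask("glusterfind post has not been run after the last pre "
                "command. This would result in an aggregated list of "
                "changed files. Would you still like to continue? [Y/N] ")
    return reply.strip().lower() in ("y", "yes")
```

A wrapper around the pre command would call confirm_pre() first and abort unless it returns True.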
The pre command always gets the latest information when run more than once. The pre command will not update the session unless the post command is called, so when pre is run again without a post, it picks up the latest information relative to the previous session. Since this is not a feature blocker, we can move it out of the 3.7 tracker.
>> Pre command always gets the latest information when we run more than once

Does it not read the timestamp in the status file (in $SESSION_DIR), and compare the mtime/ctime of files with respect to *that* timestamp?

In the case where the customer has run the pre command, supplied the outfile to the backup tool, and forgotten to run the post command, the next pre command is going to get the list of files with respect to the timestamp present in the status file. This will result in an outfile containing one *big* list of files, which includes all the changed files already backed up by the previously run pre command as well.
(In reply to Sweta Anandpara from comment #2)
> >> Pre command always gets the latest information when we run more than once
>
> Does it not compare the time stamp in the status file (in $SESSION_DIR), and
> compare the mtime/ctime of files wrt to *that* timestamp?
>
> In the case where the customer has run pre command, supplied the outfile to
> the backup tool and forgotten to run the post command - the next pre command
> is going to get the list of files wrt to the time stamp present in the
> status file. And this will result in an outfile which will have one *big*
> list of files - which includes all the changed-files already backed up from
> the previously run pre command as well.

The pre and post commands will be configured as hook scripts in backup utilities, so they will get executed automatically. If the post command failed or was not run, that means there was some failure in consuming the file generated by the pre command. So when we run pre the next time, it should pick up those changes again, since they were not consumed last time.
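The aggregation effect described above can be illustrated with a simplified sketch. Real glusterfind uses the changelog translator (or a stat-based full find) on the bricks; this Python snippet only mimics the timestamp comparison against the session's stored timestamp, with a hypothetical changed_since() helper.

```python
import os


def changed_since(root, last_ts):
    # Return files under 'root' whose mtime/ctime is newer than the
    # session timestamp, mimicking how 'pre' selects changed files.
    # Simplified, stat-based sketch; not the real glusterfind logic.
    out = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if max(st.st_mtime, st.st_ctime) > last_ts:
                out.append(path)
    return out
```

Until a 'post' advances last_ts, every subsequent call returns the old changes plus the new ones, which is exactly the big aggregated outfile described in comment #2.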
This patch addresses this issue. Moving this to POST. http://review.gluster.org/#/c/10320/
REVIEW: http://review.gluster.org/10418 (tools/glusterfind: New option to pre --regenerate-outfile) posted (#1) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/10418 (tools/glusterfind: New option to pre --regenerate-outfile) posted (#2) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/10418 (tools/glusterfind: New option to pre --regenerate-outfile) posted (#3) for review on master by Aravinda VK (avishwan)
COMMIT: http://review.gluster.org/10418 committed in master by Vijay Bellur (vbellur)
------
commit 20353cc323704292753ea6b7b0034362109fef76
Author: Aravinda VK <avishwan>
Date:   Tue Apr 28 14:40:27 2015 +0530

    tools/glusterfind: New option to pre --regenerate-outfile

    When pre command is run twice, it overwrites the outfile.
    Now pre command will fail when executed twice. To force the
    regeneration use --regenerate-outfile

    Change-Id: I0cf7a139522812ece4decdfbcba667a05ce5c35e
    Signed-off-by: Aravinda VK <avishwan>
    BUG: 1207028
    Reviewed-on: http://review.gluster.org/10418
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kotresh HR <khiremat>
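The behavior the commit describes can be sketched as follows. This is not the code from the patch; the function name, error text, and status-file handling are illustrative assumptions that only mirror the described semantics: a second 'pre' without an intervening 'post' fails unless --regenerate-outfile is passed.

```python
import os


def run_pre(session_dir, outfile, regenerate=False):
    # Refuse to overwrite the outfile of an unfinished pre/post cycle
    # unless regeneration is explicitly requested (mirrors the commit
    # message; not the actual glusterfind code or error message).
    status_pre = os.path.join(session_dir, "status.pre")
    if os.path.exists(status_pre) and not regenerate:
        raise SystemExit("pre already run for this session; run post, "
                         "or use --regenerate-outfile to force")
    # ... generate the changed-files list into outfile ...
    open(outfile, "w").close()
    open(status_pre, "w").close()
```

From the CLI this corresponds to running 'glusterfind pre SESSION VOLUME OUTFILE --regenerate-outfile' to force a fresh outfile.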
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user