Bug 1270031 - Moving a WebDAV file makes a copy of the file and deletes the old one.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: gvfs
Version: 6.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Ondrej Holy
QA Contact: Desktop QE
Docs Contact:
Depends On:
Blocks:
Reported: 2015-10-08 16:36 EDT by Benjamin Kahn
Modified: 2016-05-10 15:44 EDT
CC: 2 users

See Also:
Fixed In Version: gvfs-1.4.3-26.el6
Doc Type: Bug Fix
Doc Text:
Cause: The original file was copied and then deleted when the GIO API (e.g. Nautilus) was used to move a file over WebDAV. Consequence: This is a problem for WebDAV clients (e.g. Subversion, Alfresco) that need to track the full document lifecycle. Fix: The move operation is now implemented using the native WebDAV MOVE method. Result: Files can be easily tracked. As a consequence, performance is also improved when moving files.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-10 15:44:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

None
Description Benjamin Kahn 2015-10-08 16:36:16 EDT
Description of problem:
When using a gvfs client to move or rename a file over WebDAV, the file is copied and then the old version is deleted.

This causes problems for WebDAV clients that need to track the full document lifecycle, such as Subversion or Alfresco.

Additionally, the current copy-and-delete pattern is missing error handling: if the user does not have rights to delete the original file, the result is a leftover copy (plus an error box) instead of a move.
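To illustrate the failure mode described above, here is a minimal Python sketch (not the actual gvfs C code) contrasting a copy-and-delete "move" with a native move. The function names are hypothetical; the point is that when the delete step fails, the copy has already happened and a duplicate is left behind:

```python
import os
import shutil

def move_with_fallback(src, dst):
    """Sketch of a copy-and-delete move fallback.

    If deleting the source fails (e.g. no permission), the copy at
    `dst` has already been made, so the user ends up with two files:
    the undeletable original and the fresh copy.
    """
    shutil.copyfile(src, dst)   # step 1: transfer all the data
    try:
        os.unlink(src)          # step 2: delete the original
    except OSError:
        # Without cleanup here, the "move" silently degrades into a
        # copy, which is the duplicate-file symptom reported above.
        raise

def move_native(src, dst):
    """A native move (analogous to a WebDAV MOVE) is a single
    server-side operation: no data transfer, no separate delete."""
    os.rename(src, dst)
```

A native move also preserves the document's identity on the server, which is what lifecycle-tracking clients like Subversion or Alfresco depend on.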

Version-Release number of selected component (if applicable):


How reproducible:
100%

Actual results:


Expected results:


Additional info:
This is fixed by this patch to GVfs, which appears to already be in RHEL 7.2:
https://git.gnome.org/browse/gvfs/commit/daemon/gvfsbackenddav.c?id=b4387906ba3d5e73025e19245b5058e0db17c84f

This is causing some issues for internal systems.
Comment 4 Ondrej Holy 2015-11-20 07:40:12 EST
The patch mentioned in comment 0 was backported along with a few follow-up fixes. However, it fixes only moving. Renaming was already implemented using the native move operation and is not problematic, so I am changing the bug title.
Comment 6 Bohdan Milar 2016-01-12 11:59:55 EST
I used the log output of gvfsd to see what gvfs was doing with the files. It looks like the problem was not as simple as "move replaced by copy (+ delete)". Before the patch (gvfs.x86_64 0:1.4.3-22.el6), gvfs was unable to use even the native "copy":

job_close_write send reply
backend_dbus_handler org.gtk.vfs.Mount:Move
Queued new job 0x1e50570 (GVfsJobMove)
send_reply(0x1e50570), failed=1 (Operation not supported by backend)
backend_dbus_handler org.gtk.vfs.Mount:Copy
Queued new job 0x1e691b0 (GVfsJobCopy)
send_reply(0x1e691b0), failed=1 (Operation not supported by backend)
...

This is what gvfs really did:

...
backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
Queued new job 0x1e670c0 (GVfsJobQueryInfo)
Query info /test.txt
send_reply(0x1e670c0), failed=0 ()
backend_dbus_handler org.gtk.vfs.Mount:OpenForRead
Queued new job 0x1e675c0 (GVfsJobOpenForRead)
send_reply(0x1e675c0), failed=0 ()
Added new job source 0x7f7d9c013990 (GVfsReadChannel)
backend_dbus_handler org.gtk.vfs.Mount:OpenForWrite
Queued new job 0x1e3a540 (GVfsJobOpenForWrite)
send_reply(0x1e3a540), failed=0 ()
Added new job source 0x1e675c0 (GVfsWriteChannel)
Queued new job 0x1e692d0 (GVfsJobRead)
job_read send reply, 0 bytes
Queued new job 0x7f7d94022c10 (GVfsJobCloseRead)
job_close_read send reply
Queued new job 0x7f7d9c00e380 (GVfsJobCloseWrite)
job_close_write send reply
backend_dbus_handler org.gtk.vfs.Mount:QuerySettableAttributes
Queued new job 0x1e69360 (GVfsJobQueryAttributes)
send_reply(0x1e69360), failed=1 (Operation not supported by backend)
backend_dbus_handler org.gtk.vfs.Mount:QueryWritableNamespaces
Queued new job 0x1e693f0 (GVfsJobQueryAttributes)
send_reply(0x1e693f0), failed=1 (Operation not supported by backend)
backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
Queued new job 0x1e67660 (GVfsJobQueryInfo)
Query info /test.txt
send_reply(0x1e67660), failed=0 ()
backend_dbus_handler org.gtk.vfs.Mount:Delete
Queued new job 0x7f7d9c00e180 (GVfsJobDelete)
send_reply(0x7f7d9c00e180), failed=0 ()
Comment 7 Bohdan Milar 2016-01-12 12:03:32 EST
After the patch (gvfs.x86_64 0:1.4.3-26.el6) the log output ends like this:

...
backend_dbus_handler org.gtk.vfs.Mount:Delete
Queued new job 0x7f7d9c00e180 (GVfsJobDelete)
send_reply(0x7f7d9c00e180), failed=0 ()
Comment 8 Ondrej Holy 2016-01-13 03:24:42 EST
Yes, native copy isn't supported either, and the open-read-write-close fallback is used instead...
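That fallback explains the job sequence in the log above (OpenForRead, OpenForWrite, Read, CloseRead, CloseWrite): the client pulls the whole file through itself chunk by chunk instead of asking the server to copy it. A hedged sketch of the pattern, with hypothetical callables standing in for the GVfs read/write channels:

```python
def stream_copy(read_fn, write_fn, chunk_size=65536):
    """Open-read-write-close style fallback: read chunks from the
    source and write them to the destination until EOF. Every byte
    crosses the network twice (download + upload), unlike a native
    server-side copy or move.
    """
    total = 0
    while True:
        chunk = read_fn(chunk_size)
        if not chunk:  # empty read signals end of file
            break
        write_fn(chunk)
        total += len(chunk)
    return total
```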
Comment 9 Ondrej Holy 2016-01-13 03:48:54 EST
(In reply to Bohdan Milar from comment #7)
> After the patch (gvfs.x86_64 0:1.4.3-26.el6) the log output ends like this:
> 
> ...
> backend_dbus_handler org.gtk.vfs.Mount:Delete
> Queued new job 0x7f7d9c00e180 (GVfsJobDelete)
> send_reply(0x7f7d9c00e180), failed=0 ()

I am missing some context, but I realized you are testing GVfs over the FUSE daemon (~/.gvfs) using POSIX commands, which is basically wrong. The FUSE daemon is just a fallback; the GIO API isn't fully compatible with POSIX and thus it might behave differently.

Be sure to use the GVfs equivalents of the POSIX commands (e.g. gvfs-move dav://host/A dav://host/B), or some GIO-based application (e.g. Nautilus) for your tests...

Then you shouldn't see a delete job for a move. I see the following with GVFS_DEBUG=1 and GVFS_HTTP_DEBUG=all using Nautilus:
...
backend_dbus_handler org.gtk.vfs.Mount:Move
Queued new job 0x9c01400 (GVfsJobMove)
...
> MOVE /User41dcd48/Document.pages HTTP/1.1
...
progress_callback 301633/301633
...
Comment 10 Bohdan Milar 2016-03-07 10:18:44 EST
Thanks for your comment. I updated the test case. It now uses gvfs-mkdir and gvfs-move.
Comment 11 Bohdan Milar 2016-03-07 10:21:20 EST
Tested on x86_64, i386, ppc64, s390x using the created test case.
Bug verified as fixed.

Example of the debug output:
...
backend_dbus_handler org.gtk.vfs.Mount:Move
Queued new job 0x2181030 (GVfsJobMove)
progress_callback 7/7
send_reply(0x2181030), failed=0 ()
backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
Queued new job 0x2196a20 (GVfsJobQueryInfo)
Query info /
send_reply(0x2196a20), failed=0 ()
backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
Queued new job 0x218b840 (GVfsJobQueryInfo)
Query info /movetest
send_reply(0x218b840), failed=0 ()
Comment 13 errata-xmlrpc 2016-05-10 15:44:02 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0755.html
