Bug 1270031
| Summary: | Moving a WebDAV file makes a copy of the file and deletes the old one. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Benjamin Kahn <bkahn> |
| Component: | gvfs | Assignee: | Ondrej Holy <oholy> |
| Status: | CLOSED ERRATA | QA Contact: | Desktop QE <desktop-qa-list> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.8 | CC: | bmilar, jkoten |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | gvfs-1.4.3-26.el6 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-05-10 19:44:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text:
Cause: When the GIO API (for example, through Nautilus) was used to move a file over WebDAV, the original file was copied to the destination and then deleted.
Consequence: This was a problem for WebDAV clients (for example, Subversion or Alfresco) that need to track the full document lifecycle.
Fix: The move operation is now implemented using the native WebDAV MOVE method.
Result: Files can be tracked easily, and as a side effect, performance is also improved when moving files.
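The behavior described in the Doc Text can be sketched as follows. This is an illustrative model, not gvfs source code: the backend classes and method names are hypothetical, but the control flow mirrors what the bug describes, namely that a GIO-style move falls back to copy-plus-delete when the backend does not implement a native move.

```python
# Illustrative sketch (not gvfs source): a GIO-style move falls back to
# copy + delete when the backend lacks a native move operation.
# All class and method names here are hypothetical.

class BackendWithoutMove:
    """Models the old dav backend: no native move, so clients must fall back."""
    def move(self, src, dst):
        raise NotImplementedError("Operation not supported by backend")

    def copy(self, src, dst):
        return ("COPY", src, dst)

    def delete(self, path):
        return ("DELETE", path)

class BackendWithMove(BackendWithoutMove):
    """Models the fixed dav backend: move maps to the WebDAV MOVE method."""
    def move(self, src, dst):
        return ("MOVE", src, dst)

def gio_style_move(backend, src, dst):
    """Try the backend's native move; otherwise copy, then delete the source."""
    ops = []
    try:
        ops.append(backend.move(src, dst))
    except NotImplementedError:
        ops.append(backend.copy(src, dst))   # fallback step 1: copy the file
        ops.append(backend.delete(src))      # fallback step 2: delete the original
    return ops

# Old behavior: two operations; the original file is destroyed and recreated,
# so the server cannot correlate the new file with the old one.
print(gio_style_move(BackendWithoutMove(), "/test.txt", "/moved.txt"))
# New behavior: one native MOVE, so servers can track the document lifecycle.
print(gio_style_move(BackendWithMove(), "/test.txt", "/moved.txt"))
```

The single MOVE in the second case is also why the fix improves performance: the file contents no longer travel through the client at all.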
Description (Benjamin Kahn, 2015-10-08 20:36:16 UTC)
The patch mentioned in #c0 was backported with a few follow-up fixes. However, it fixes only moving. Renaming is already implemented using the native move operation and isn't problematic, so I am changing the bug title.

I used the log output of gvfsd to see what gvfs was doing with the files. It looks like the problem was not as simple as "move replaced by copy (+ delete)". Gvfs before the patch (gvfs.x86_64 0:1.4.3-22.el6) was unable to even use "copy":

    job_close_write
    send reply
    backend_dbus_handler org.gtk.vfs.Mount:Move
    Queued new job 0x1e50570 (GVfsJobMove)
    send_reply(0x1e50570), failed=1 (Operation not supported by backend)
    backend_dbus_handler org.gtk.vfs.Mount:Copy
    Queued new job 0x1e691b0 (GVfsJobCopy)
    send_reply(0x1e691b0), failed=1 (Operation not supported by backend)
    ...

This is what gvfs really did:

    ...
    backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
    Queued new job 0x1e670c0 (GVfsJobQueryInfo)
    Query info /test.txt
    send_reply(0x1e670c0), failed=0 ()
    backend_dbus_handler org.gtk.vfs.Mount:OpenForRead
    Queued new job 0x1e675c0 (GVfsJobOpenForRead)
    send_reply(0x1e675c0), failed=0 ()
    Added new job source 0x7f7d9c013990 (GVfsReadChannel)
    backend_dbus_handler org.gtk.vfs.Mount:OpenForWrite
    Queued new job 0x1e3a540 (GVfsJobOpenForWrite)
    send_reply(0x1e3a540), failed=0 ()
    Added new job source 0x1e675c0 (GVfsWriteChannel)
    Queued new job 0x1e692d0 (GVfsJobRead)
    job_read
    send reply, 0 bytes
    Queued new job 0x7f7d94022c10 (GVfsJobCloseRead)
    job_close_read
    send reply
    Queued new job 0x7f7d9c00e380 (GVfsJobCloseWrite)
    job_close_write
    send reply
    backend_dbus_handler org.gtk.vfs.Mount:QuerySettableAttributes
    Queued new job 0x1e69360 (GVfsJobQueryAttributes)
    send_reply(0x1e69360), failed=1 (Operation not supported by backend)
    backend_dbus_handler org.gtk.vfs.Mount:QueryWritableNamespaces
    Queued new job 0x1e693f0 (GVfsJobQueryAttributes)
    send_reply(0x1e693f0), failed=1 (Operation not supported by backend)
    backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
    Queued new job 0x1e67660 (GVfsJobQueryInfo)
    Query info /test.txt
    send_reply(0x1e67660), failed=0 ()
    backend_dbus_handler org.gtk.vfs.Mount:Delete
    Queued new job 0x7f7d9c00e180 (GVfsJobDelete)
    send_reply(0x7f7d9c00e180), failed=0 ()

After the patch (gvfs.x86_64 0:1.4.3-26.el6) the log output ends like this:

    ...
    backend_dbus_handler org.gtk.vfs.Mount:Delete
    Queued new job 0x7f7d9c00e180 (GVfsJobDelete)
    send_reply(0x7f7d9c00e180), failed=0 ()

Yes, native copy isn't supported either, and the open-read-write-close fallback is used instead...

(In reply to Bohdan Milar from comment #7)
> After the patch (gvfs.x86_64 0:1.4.3-26.el6) the log output ends like this:
>
> ...
> backend_dbus_handler org.gtk.vfs.Mount:Delete
> Queued new job 0x7f7d9c00e180 (GVfsJobDelete)
> send_reply(0x7f7d9c00e180), failed=0 ()

I am missing some context, but I realized you are testing GVfs over the fuse daemon (~/.gvfs) using POSIX commands, which is basically wrong. The fuse daemon is just a fallback; the GIO API isn't fully compatible with POSIX and thus may behave differently. Be sure to use the GVfs equivalents of the POSIX commands (e.g. gvfs-move dav://host/A dav://host/B), or some GIO-based application (e.g. Nautilus) for your tests... Then you shouldn't see a delete job for a move. I see the following with GVFS_DEBUG=1 and GVFS_HTTP_DEBUG=all using Nautilus:

    ...
    backend_dbus_handler org.gtk.vfs.Mount:Move
    Queued new job 0x9c01400 (GVfsJobMove)
    ...
    > MOVE /User41dcd48/Document.pages HTTP/1.1
    ...
    progress_callback 301633/301633
    ...

Thanks for your comment. I updated the test case. It now uses gvfs-mkdir and gvfs-move.

Tested on x86_64, i386, ppc64, s390x using the created test case. Bug verified as fixed. Example of the debug output:
    ...
    backend_dbus_handler org.gtk.vfs.Mount:Move
    Queued new job 0x2181030 (GVfsJobMove)
    progress_callback 7/7
    send_reply(0x2181030), failed=0 ()
    backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
    Queued new job 0x2196a20 (GVfsJobQueryInfo)
    Query info /
    send_reply(0x2196a20), failed=0 ()
    backend_dbus_handler org.gtk.vfs.Mount:QueryInfo
    Queued new job 0x218b840 (GVfsJobQueryInfo)
    Query info /movetest
    send_reply(0x218b840), failed=0 ()

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0755.html
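The `MOVE ... HTTP/1.1` line in the debug output above is the server-side move that the fixed backend issues. Per RFC 4918, a WebDAV move is a single MOVE request whose Destination header names the target URL. The helper below is hypothetical (gvfs itself builds the request through libsoup); it only shows the protocol-level shape of such a request.

```python
# Minimal sketch of a WebDAV MOVE request (RFC 4918). The helper name and
# the host/path values are illustrative; gvfs builds this via libsoup.

def format_move_request(host, src_path, dst_path):
    """Build the raw HTTP request text for a WebDAV server-side move."""
    return (
        f"MOVE {src_path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Destination: http://{host}{dst_path}\r\n"
        f"Overwrite: F\r\n"   # ask the server not to clobber an existing target
        f"\r\n"               # empty line: MOVE needs no request body
    )

req = format_move_request("host", "/A", "/B")
print(req)
```

Because the rename happens entirely on the server, the file's identity is preserved and no data is transferred, which matches both the lifecycle-tracking and the performance points in the Doc Text.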