Red Hat Bugzilla – Bug 84518
Writing large file (>4GB) to samba share corrupts file
Last modified: 2014-08-31 19:24:41 EDT
Description of problem:
Writing a large file (>4GB) to a samba share loses data.
Version-Release number of selected component (if applicable):
samba-2.2.1a-4 (most likely - I'll have to double-check when
I get home).
Steps to Reproduce:
Run NTBackup on a Windows XP machine saving >4GB saveset
to a samba-shared directory.
This should also reproduce with a plain file copy; I just
don't have a >4GB file handy.
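For the plain-copy case, a test file whose size crosses the 4GB boundary can be faked cheaply with a mostly-sparse file: seek past 4GB and write a small tail. This is a sketch, not part of the original report; it assumes GNU dd and stat, and the path is hypothetical. Reads from the holes return zeros, which is enough to exercise 64-bit offsets during a copy.

```shell
# Write 1MB of zeros at the 4GB mark; the hole before it stays sparse,
# so this is fast and uses almost no disk, but the file size is >4GB.
dd if=/dev/zero of=/tmp/bigtest.bin bs=1M count=1 seek=4096 2>/dev/null

# Apparent size should be 4GiB + 1MiB = 4296015872 bytes.
stat -c '%s' /tmp/bigtest.bin
```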
Actual results:
After 4GB has been written, the Red Hat box jumps to 50% _system_
CPU usage, with negligible user CPU usage.
After the backup job finishes, ls -l shows a file of the correct
size. However, ls -s and du report the file as only 20KB, so it
has become a large, sparse file.
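The ls -l vs. ls -s/du discrepancy above can be checked directly by comparing a file's apparent size against the bytes actually allocated on disk. A minimal sketch, assuming GNU coreutils (truncate and stat); the path is hypothetical and the file is made fully sparse just to mimic the symptom:

```shell
f=/tmp/sparse_demo
truncate -s 4G "$f"              # fully sparse 4GB file, for illustration

apparent=$(stat -c '%s' "$f")    # size as ls -l reports it
# allocated blocks times block size = bytes on disk, as du sees them
allocated=$(( $(stat -c '%b' "$f") * $(stat -c '%B' "$f") ))

echo "apparent=$apparent allocated=$allocated"
```

A healthy 4GB file allocates roughly as many bytes as its apparent size; the corrupted file here allocates almost none.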
Worst thing is that it's not obvious to a user that the
file has been completely corrupted and the data lost.
Expected results:
The >4GB file should be stored intact.
Additional info:
Filesystem is ext3.
Kernel is 2.4.18-19.7.x.
Apparently, this is actually fixed in samba 2.2.5. Earlier
versions didn't have large file support.
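The following is an illustration, not something from the report: one plausible reason missing large-file support corrupts data at exactly the 4GB mark is that a 64-bit write offset truncated to a 32-bit off_t wraps around, so writes intended for positions past 4GB land near the start of the file and the tail is never allocated.

```shell
# Intended write position just past the 4GB boundary.
offset=$(( 4 * 1024 * 1024 * 1024 + 12345 ))

# What a 32-bit offset would hold: only the low 32 bits survive.
wrapped=$(( offset & 0xFFFFFFFF ))

echo "intended=$offset wrapped=$wrapped"   # wrapped comes out as 12345
```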