Bug 84518 - Writing large file (>4GB) to samba share corrupts file
Product: Red Hat Linux
Classification: Retired
Component: samba
Hardware: i686 Linux
Priority: medium
Severity: medium
Assigned To: Jay Fenlason
QA Contact: David Lawrence
Reported: 2003-02-18 05:26 EST by Kenn Humborg
Modified: 2014-08-31 19:24 EDT

Doc Type: Bug Fix
Last Closed: 2003-02-27 13:04:46 EST

Attachments: None
Description Kenn Humborg 2003-02-18 05:26:31 EST
Description of problem:

Writing a large file (>4GB) to a samba share loses data.

Version-Release number of selected component (if applicable):

samba-2.2.1a-4 (most likely - I'll have to double-check when
I get home).

How reproducible:

Tried once.

Steps to Reproduce:

   Run NTBackup on a Windows XP machine, saving a >4GB saveset
   to a samba-shared directory.

   This should also reproduce with a plain file copy, but I
   don't have a >4GB file handy.
Actual results:

   After 4GB is written, the Red Hat box jumps to 50% _system_
   CPU usage, with negligible user CPU usage.

   After the backup job finished, ls -l shows a file of the
   correct size.  However, ls -s and du report the file as being
   only 20KB.  So it's become a large, sparse file.
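
   A minimal sketch of the same comparison ls -l vs. du/ls -s are
   making, using stat(2)'s st_size (apparent length) and st_blocks
   (allocated 512-byte blocks).  The path argument is whatever
   file the backup wrote; build with -D_FILE_OFFSET_BITS=64 so
   that stat() itself can handle a >4GB file on i686:

      #include <stdio.h>
      #include <sys/stat.h>

      int main(int argc, char **argv)
      {
          struct stat st;

          if (argc != 2 || stat(argv[1], &st) != 0) {
              perror("stat");
              return 1;
          }

          long long apparent  = (long long)st.st_size;         /* what ls -l shows */
          long long allocated = (long long)st.st_blocks * 512;  /* what du counts */

          printf("apparent:  %lld bytes\n", apparent);
          printf("allocated: %lld bytes\n", allocated);

          /* Far fewer blocks allocated than the length implies
           * means the file is sparse: holes where data should be. */
          if (allocated < apparent)
              printf("file is sparse\n");
          return 0;
      }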

   The worst thing is that it's not obvious to a user that the
   file has been completely corrupted and the data lost.

Expected results:

   >4GB file should be stored intact.

Additional info:

   Filesystem is ext3
   Kernel is 2.4.18-19.7.x.
Comment 1 Kenn Humborg 2003-02-27 13:04:46 EST
Apparently, this is actually fixed in samba 2.2.5.  Earlier
versions didn't have large file support.
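
For reference, "large file support" means building with a 64-bit
off_t (-D_FILE_OFFSET_BITS=64 with glibc).  A minimal sketch, with
a made-up file name, of why that matters: without it, a 32-bit
off_t cannot even represent an offset past 4GB, so offsets get
truncated and data lands in the wrong place, leaving holes like
the ones above.

   /* Build both ways on i686 to compare:
    *    gcc demo.c                          (32-bit off_t, no LFS)
    *    gcc -D_FILE_OFFSET_BITS=64 demo.c   (64-bit off_t, LFS)
    */
   #include <stdio.h>
   #include <fcntl.h>
   #include <unistd.h>

   int main(void)
   {
       long long want = 4LL * 1024 * 1024 * 1024 + 1;  /* 1 byte past 4GB */
       off_t target = (off_t)want;        /* truncated if off_t is 32-bit */

       if ((long long)target != want) {
           fprintf(stderr, "off_t is only %u bytes: this build "
                   "cannot address past 4GB\n", (unsigned)sizeof(off_t));
           return 1;
       }

       int fd = open("bigfile", O_WRONLY | O_CREAT, 0644);
       if (fd < 0) { perror("open"); return 1; }

       /* With a 64-bit off_t this deliberately creates a sparse
        * file with one byte at offset 4GB+1; without LFS we never
        * get here. */
       if (lseek(fd, target, SEEK_SET) == (off_t)-1 || write(fd, "x", 1) != 1)
           perror("seek/write past 4GB");
       close(fd);
       return 0;
   }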
