Bug 84518

Summary: Writing large file (>4GB) to samba share corrupts file
Product: [Retired] Red Hat Linux Reporter: Kenn Humborg <kenn>
Component: samba    Assignee: Jay Fenlason <fenlason>
Status: CLOSED NOTABUG QA Contact: David Lawrence <dkl>
Severity: medium Docs Contact:
Priority: medium    
Version: 7.2    CC: jfeeney
Target Milestone: ---   
Target Release: ---   
Hardware: i686   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2003-02-27 18:04:46 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Kenn Humborg 2003-02-18 10:26:31 UTC
Description of problem:

Writing a large file (>4GB) to a samba share loses data.

Version-Release number of selected component (if applicable):

samba-2.2.1a-4 (most likely - I'll have to double-check when
I get home).

How reproducible:

Tried once.

Steps to Reproduce:

   Run NTBackup on a Windows XP machine, saving a >4GB saveset
   to a samba-shared directory.

   Should also reproduce with a plain file copy, only I 
   don't have a >4GB file handy.
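
   For the plain-copy case, a minimal Python sketch for generating a
   >4GB test file (the file name and exact size here are arbitrary,
   not from the original report):

      # generate_bigfile.py - write ~4.5 GB of zero-filled 1 MB chunks,
      # safely past the 4 GB boundary
      CHUNK = b"\0" * (1024 * 1024)           # 1 MB buffer
      TARGET_BYTES = 4608 * 1024 * 1024       # ~4.5 GB

      with open("bigfile.bin", "wb") as f:
          written = 0
          while written < TARGET_BYTES:
              f.write(CHUNK)
              written += len(CHUNK)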
    
Actual results:

   After 4GB is written, the Red Hat box jumps to 50% _system_ CPU
   usage, with negligible user CPU usage.

   After the backup job finished, ls -l shows a file of the correct
   size.  However, ls -s and du report the file as occupying only
   20KB.  So it has become a large, sparse file.
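
   For reference, a small Python sketch of the same check that
   comparing ls -l with ls -s/du performs: look at the apparent size
   versus the blocks actually allocated (script name and wording are
   illustrative only):

      # check_sparse.py - report apparent vs. allocated size of a file
      import os
      import sys

      st = os.stat(sys.argv[1])
      apparent = st.st_size                  # size that ls -l reports
      allocated = st.st_blocks * 512         # st_blocks is in 512-byte units
      print("apparent size:  %d bytes" % apparent)
      print("allocated size: %d bytes" % allocated)
      if allocated < apparent:
          print("file is sparse - most of its range was never written")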

   The worst thing is that it's not obvious to a user that the
   file has been completely corrupted and the data lost.

Expected results:

   >4GB file should be stored intact.

Additional info:

   Filesystem is ext3
   Kernel is 2.4.18-19.7.x.

Comment 1 Kenn Humborg 2003-02-27 18:04:46 UTC
Apparently, this is actually fixed in samba 2.2.5.  Earlier
versions didn't have large file support.
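
For what it's worth, a rough sketch of how missing large file support
can produce this kind of damage: if file offsets are held in a 32-bit
type, any offset past 4GB silently wraps around, so data intended for
the tail of the file lands at low offsets and the rest of the range is
never written.  (This is an assumption about the mechanism, not taken
from the samba 2.2.1a source.)

   # illustrate 32-bit offset wraparound
   offset = 4 * 1024**3 + 12345             # intended write position, past 4 GB
   wrapped = offset & 0xFFFFFFFF            # what a 32-bit off_t can hold
   print(hex(offset), "->", hex(wrapped))   # 0x100003039 -> 0x3039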