Bug 84518 - Writing large file (>4GB) to samba share corrupts file
Summary: Writing large file (>4GB) to samba share corrupts file
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: samba
Version: 7.2
Hardware: i686
OS: Linux
Severity: medium
Priority: medium
Target Milestone: ---
Assignee: Jay Fenlason
QA Contact: David Lawrence
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2003-02-18 10:26 UTC by Kenn Humborg
Modified: 2014-08-31 23:24 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2003-02-27 18:04:46 UTC



Description Kenn Humborg 2003-02-18 10:26:31 UTC
Description of problem:

Writing a large file (>4GB) to a samba share loses data.

Version-Release number of selected component (if applicable):

samba-2.2.1a-4 (most likely - I'll have to double-check when
I get home).

How reproducible:

Tried once.

Steps to Reproduce:

   Run NTBackup on a Windows XP machine saving >4GB saveset
   to a samba-shared directory.

   Should also reproduce with a plain file copy, only I 
   don't have a >4GB file handy.
    
Actual results:

   After 4GB is written, Red Hat box jumps to 50% _system_ CPU
   usage, negligible user CPU usage.  

   After backup job finished, ls -l shows a file of the correct
   size.  However, ls -s and du report the file as being only
   20KB.  So it's become a large, sparse file.

   Worst thing is that it's not obvious to a user that the
   file has been completely corrupted and the data lost.

Expected results:

   >4GB file should be stored intact.

Additional info:

   Filesystem is ext3
   Kernel is 2.4.18-19.7.x.

Comment 1 Kenn Humborg 2003-02-27 18:04:46 UTC
Apparently, this is actually fixed in samba 2.2.5.  Earlier
versions didn't have large file support.


