Bug 126775 - [PATCH] compress does not work if the file size is greater than 2GB
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: ncompress
Version: 3.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Jeff Johnson
QA Contact: Ben Levenson
Blocks: 126776
 
Reported: 2004-06-26 09:14 EDT by Bernd Schmidt
Modified: 2007-11-30 17:07 EST
CC: 1 user

Doc Type: Bug Fix
Last Closed: 2004-07-14 13:23:11 EDT

Attachments
A patch which seems to fix the problem (295 bytes, patch)
2004-06-26 09:21 EDT, Bernd Schmidt
Description Bernd Schmidt 2004-06-26 09:14:29 EDT
From Issue Tracker (41696):

Problem:
  When the file being compressed is larger than 2GB, compress crashes
with a segmentation fault.

This problem has been seen at a customer site and duplicated in our lab
on two systems.

Also see https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=66311 -
possibly the same symptom/problem.
Comment 1 Bernd Schmidt 2004-06-26 09:18:42 EDT
Customer reported this against AS2.1, but RHEL3 still seems to have
the problem.  We'll need the RHEL3 package moved into AS2.1 once the
fix has been applied; I'll open a separate bugzilla for this.
Comment 2 Bernd Schmidt 2004-06-26 09:21:17 EDT
Created attachment 101437 [details]
A patch which seems to fix the problem

There was one file-size-related variable ("checkpoint") still declared as
"long"; changing it to "long long" appears to fix the segfault.  I can't
really follow the algorithm, but apparently it got confused once "checkpoint"
became negative and tried to write past the end of an array.

Note that to reproduce the problem you seem to need a file containing at
least 2GB of actual data; I failed to cause a segfault with a test file
that consisted only of holes below 2GB and a bit of data above that.
Comment 3 Jeff Johnson 2004-07-14 12:39:28 EDT
This test is in progress; packages will follow immediately after it succeeds:

sudo dd if=/dev/sda1 bs=1M | compress -c | uncompress -c > /dev/null
Comment 4 Jeff Johnson 2004-07-14 13:23:11 EDT
Fixed in
    ncompress-4.2.4-37 in AS2.1-errata-candidate
    ncompress-4.2.4-38 in 3.0-U3-HEAD
Apologies for the delay.
