Bug 810469 - IO::Uncompress::Unzip->getHeaderInfo returns wrong (un)compressed member size for zip64 archives
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: perl
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Petr Pisar
QA Contact: Martin Kyral
Keywords: Patch, Rebase
Blocks: 947784
Reported: 2012-04-06 05:47 EDT by Jan Holcapek
Modified: 2013-11-20 23:40 EST
CC: 5 users

Fixed In Version: perl-5.10.1-133.el6
Doc Type: Rebase: Bug Fixes Only
Doc Text:
Rebase package(s) to version: The Compress-Raw-Bzip2, Compress-Raw-Zlib, and IO-Compress Perl distributions have been rebased to version 2.021. Highlights and important bug fixes: support for 64-bit ZIP archives has been improved; in particular, sizes of files larger than 2^32 bytes are now reported correctly.
Last Closed: 2013-11-20 23:40:18 EST
Type: Bug
Attachments
Reproducer (1012 bytes, text/plain), attached 2012-04-11 11:14 EDT by Petr Pisar
Description Jan Holcapek 2012-04-06 05:47:12 EDT
Description of problem:
IO::Uncompress::Unzip->getHeaderInfo returns wrong (un)compressed member size for zip64 archives.
This bug appears to be fixed in IO::Uncompress::Unzip version 2.021.

Version-Release number of selected component (if applicable):
perl-5.10.1-115.el6
IO::Uncompress::Unzip version 2.020

How reproducible:
Always

Steps to Reproduce:
1. create large (>4 GB) file: dd if=/dev/zero of=/tmp/bigfile bs=1M count=6000
2. zip this file: zip /tmp/bigfile.zip /tmp/bigfile
3. check the (un)compressed member sizes using getHeaderInfo:

perl -MData::Dumper -MIO::Uncompress::Unzip -e '
my $z = IO::Uncompress::Unzip->new("/tmp/bigfile.zip");
print Dumper([ @{ $z->getHeaderInfo }{ qw(UncompressedLength CompressedLength) } ]);
'

Actual results:
$VAR1 = [
          bless( [
                   4294967295,
                   0
                 ], 'U64' ),
          bless( [
                   4294967295,
                   0
                 ], 'U64' )
        ];

Expected results:
$VAR1 = [
          bless( [
                   1996488704,
                   1
                 ], 'U64' ),
          bless( [
                   6105689,
                   0
                 ], 'U64' )
        ];
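The U64 objects above appear to store a 64-bit value as a blessed [low 32 bits, high 32 bits] pair; this is an assumption, but it matches the numbers shown. A short Python sketch (the helper name u64_value is illustrative) shows that the buggy output is just the 32-bit zip64 sentinel 0xFFFFFFFF, while the expected pair decodes to the real size of the 6000 MB test file:

```python
def u64_value(low, high):
    # Combine a [low, high] pair of 32-bit halves into one integer.
    return (high << 32) | low

# Buggy output: both sizes are the 32-bit zip64 sentinel, not real sizes.
print(u64_value(4294967295, 0))   # 4294967295 (0xFFFFFFFF)

# Expected output: the true uncompressed size of the test file.
print(u64_value(1996488704, 1))   # 6291456000, i.e. 6000 * 1024 * 1024
```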


Additional info:
Comment 2 Petr Pisar 2012-04-11 11:14:39 EDT
Created attachment 576807: Reproducer

We cannot use a purely memory-based test because of another bug (https://rt.cpan.org/Public/Bug/Display.html?id=76495), so this reproducer uses the `zip' tool to create the ZIP archive. Fortunately, zip compresses the test data very efficiently: it packs 4 GB of zeros into a 4 MB archive ;)
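As an aside, a purely in-memory zip64 round-trip is possible in other toolchains; for example, Python's zipfile module (Python 3.6+ for the force_zip64 parameter) can force zip64 record structures even for a small member and read the recorded size back. This is only a sketch of the idea, not a drop-in replacement for the Perl reproducer; the member name "bigfile" is illustrative.

```python
import io
import zipfile

buf = io.BytesIO()
payload = b"\0" * 1024

# force_zip64=True writes zip64 structures even for a small member.
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    with zf.open("bigfile", mode="w", force_zip64=True) as member:
        member.write(payload)

# Reopen the archive and check the size recorded for the member.
with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("bigfile")
    print(info.file_size)      # 1024
```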
Comment 3 Petr Pisar 2012-04-11 11:45:55 EDT
I propose upgrading IO-Compress from 2.020 to 2.021, because 2.021 brings many fixes for 64-bit ZIP support on both the compression and decompression sides. It also speeds up seeking in big files. Upstream changelog:

2.021 30 August 2009

      * IO::Compress::Base.pm
        - Fewer warnings when reading from a closed filehandle.
          [RT# 48350]
        - Fixed minor typo in an error message.
          [RT# 39719]

      * Makefile.PL
        The PREREQ_PM dependency on Scalar::Util got dropped when
        IO-Compress was created in 2.017.
        [RT# 47509]

      * IO::Compress::Zip.pm
        - Removed restriction that zip64 is only supported in streaming
          mode.
        - The "version made by" and "extract" fields in the zip64 end
          central record were swapped.
        - In the End Central Header record the "offset to the start of the
          central directory" will now always be set to 0xFFFFFFFF when
          zip64 is enabled.
        - In the End Central Header record the "total entries in the
          central directory" field will be set to 0xFFFF if zip64 is
          enabled AND there are more than 0xFFFF entries present.

      * IO::Uncompress::Unzip.pm
        - Don't consume lots of memory when walking a zip file. This makes
          life more bearable when dealing with zip64.

      * Compress::Zlib.pm
        - documented that memGunzip cannot cope with concatenated gzip data
          streams.

      * Changed test harness so that it can cope with PERL5OPT=-MCarp=verbose
        [RT# 47225]

      * IO::Compress::Gzip::Constants.pm
        - GZIP_FEXTRA_MAX_SIZE was set to 0xFF. Should be 0xFFFF.  This
          issue came up when attempting to unzip a file created by MS
          Office 2007.
Comment 5 Petr Pisar 2013-06-05 08:59:13 EDT
The IO-Compress upgrade requires upgrading Compress-Raw-Bzip2 and Compress-Raw-Zlib to the same version, 2.021. There are only cosmetic changes in their upstream test suites:

2.021 30 August 2009

      * Changed test harness so that it can cope with PERL5OPT=-MCarp=verbose
        [RT# 47225]

So I'm going to rebase these two modules as well.
Comment 14 errata-xmlrpc 2013-11-20 23:40:18 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1534.html
