Red Hat Bugzilla – Bug 810469
IO::Uncompress::Unzip->getHeaderInfo returns wrong (un)compressed member size for zip64 archives
Last modified: 2013-11-20 23:40:18 EST
Description of problem:
IO::Uncompress::Unzip->getHeaderInfo returns the wrong (un)compressed member
size for zip64 archives. This bug seems to be fixed in IO::Uncompress::Unzip
version 2.021.

Version-Release number of selected component (if applicable):
perl-5.10.1-115.el6 (IO::Uncompress::Unzip version 2.020)

How reproducible:
Always

Steps to Reproduce:
1. Create a large (>4 GB) file:
   dd if=/dev/zero of=/tmp/bigfile bs=1M count=6000
2. Zip this file:
   zip /tmp/bigfile.zip /tmp/bigfile
3. Check the (un)compressed member sizes using getHeaderInfo:
   perl -MData::Dumper -MIO::Uncompress::Unzip
   my $z = IO::Uncompress::Unzip->new("/tmp/bigfile.zip");
   print Dumper([ @{ $z->getHeaderInfo }{ qw(UncompressedLength CompressedLength) } ]);
   ^D

Actual results:
$VAR1 = [
          bless( [ 4294967295, 0 ], 'U64' ),
          bless( [ 4294967295, 0 ], 'U64' )
        ];

Expected results:
$VAR1 = [
          bless( [ 1996488704, 1 ], 'U64' ),
          bless( [ 6105689, 0 ], 'U64' )
        ];

Additional info:
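For reference, a U64 object stores its value as a [low, high] pair of 32-bit words, i.e. value = low + high * 2**32. A quick check (sketched in Python, used here only for the arithmetic) confirms that the expected pair matches the 6000 MiB test file, while the actual pair is the 32-bit all-ones sentinel that zip64 stores in the fixed-size header fields:

```python
# A U64 value is a [low, high] pair of 32-bit words: value = low + (high << 32).
def u64(low, high):
    return low + (high << 32)

# Expected UncompressedLength [1996488704, 1] is exactly the 6000 MiB input file.
assert u64(1996488704, 1) == 6000 * 1024 * 1024  # 6291456000 bytes

# The buggy value [4294967295, 0] is 0xFFFFFFFF, the zip64 sentinel meaning
# "the real size is stored in the zip64 extra field".
assert u64(4294967295, 0) == 0xFFFFFFFF
```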
Created attachment 576807 [details]
Reproducer

We cannot use a pure in-memory test because of another bug (https://rt.cpan.org/Public/Bug/Display.html?id=76495), so this reproducer uses the `zip' tool to create the ZIP archive. zip is inefficient -- it packs 4 GB of zeros into a 4 MB archive ;)
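For context on the underlying format: in a zip64 archive the 32-bit size fields in the member header are set to 0xFFFFFFFF, and the real 64-bit sizes live in the "zip64 extended information" extra field (header ID 0x0001, per the ZIP APPNOTE). Version 2.020 returned the sentinel instead of reading the extra field. A minimal sketch of that layout in Python (using the struct module; this is an illustration, not the module's actual code):

```python
import struct

ZIP64_EXTRA_ID = 0x0001
SENTINEL = 0xFFFFFFFF  # value placed in the 32-bit header size fields

# Build a zip64 extended-information extra field for a member whose real
# sizes are: uncompressed first, then compressed (APPNOTE field order).
uncompressed, compressed = 6291456000, 6105689
extra = struct.pack("<HHQQ", ZIP64_EXTRA_ID, 16, uncompressed, compressed)

def real_sizes(size32, csize32, extra_field):
    """Return the true (uncompressed, compressed) sizes of a member.

    When a 32-bit field holds the 0xFFFFFFFF sentinel, the 64-bit value
    is read from the 0x0001 extra field instead.
    """
    pos = 0
    while pos + 4 <= len(extra_field):
        tag, size = struct.unpack_from("<HH", extra_field, pos)
        if tag == ZIP64_EXTRA_ID:
            data = extra_field[pos + 4 : pos + 4 + size]
            fields = list(struct.unpack("<%dQ" % (len(data) // 8), data))
            if size32 == SENTINEL:
                size32 = fields.pop(0)
            if csize32 == SENTINEL:
                csize32 = fields.pop(0)
            break
        pos += 4 + size
    return size32, csize32

print(real_sizes(SENTINEL, SENTINEL, extra))  # -> (6291456000, 6105689)
```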
I propose upgrading IO-Compress from 2.020 to 2.021, because 2.021 brings a lot of fixes on both the compression and decompression sides of 64-bit ZIP support. It also speeds up seeking in big files. Upstream changelog:

  2.021 30 August 2009

      * IO::Compress::Base.pm
        - Fewer warnings when reading from a closed filehandle.
          [RT# 48350]
        - Fixed minor typo in an error message. [RT# 39719]

      * Makefile.PL
        The PREREQ_PM dependency on Scalar::Util got dropped when
        IO-Compress was created in 2.017. [RT# 47509]

      * IO::Compress::Zip.pm
        - Removed restriction that zip64 is only supported in
          streaming mode.
        - The "version made by" and "extract" fields in the zip64 end
          central record were swapped.
        - In the End Central Header record the "offset to the start of
          the central directory" will now always be set to 0xFFFFFFFF
          when zip64 is enabled.
        - In the End Central Header record the "total entries in the
          central directory" field will be set to 0xFFFF if zip64 is
          enabled AND there are more than 0xFFFF entries present.

      * IO::Uncompress::Unzip.pm
        - Don't consume lots of memory when walking a zip file. This
          makes life more bearable when dealing with zip64.

      * Compress::Zlib.pm
        - Documented that memGunzip cannot cope with concatenated gzip
          data streams.

      * Changed test harness so that it can cope with
        PERL5OPT=-MCarp=verbose [RT# 47225]

      * IO::Compress::Gzip::Constants.pm
        - GZIP_FEXTRA_MAX_SIZE was set to 0xFF. Should be 0xFFFF. This
          issue came up when attempting to unzip a file created by MS
          Office 2007.
The IO-Compress upgrade requires upgrading Compress-Raw-Bzip2 and Compress-Raw-Zlib to the same version, 2.021. There are only cosmetic changes in the upstream test suites:

  2.021 30 August 2009

      * Changed test harness so that it can cope with
        PERL5OPT=-MCarp=verbose [RT# 47225]

So I'm going to rebase these two modules as well.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1534.html