Description of problem:
There is a checksum mismatch on the last two blocks of a VHD/VHDX if the disk is 1 GB (disks larger than 1 GB show no error; see Additional info).
Tested by writing data to 30 blocks (in the case of a 1 GB disk) and verifying afterwards that it is not corrupted.
Version-Release number of selected component (if applicable):
btrfs-progs v4.4.1
How reproducible: 100%
Steps to Reproduce:
1. Attach a 1 GB VHD to a SCSI controller
2. Format it, create a btrfs filesystem and mount it
3. Perform a data integrity check by creating a file that fills the mounted device. This amounts to 30 blocks for a 1 GB VHD/VHDX.
For example:
targetDevice="/dev/sdb1"                      # no trailing slash, or blockdev/mount will fail
testFile="/dev/shm/testsource"
blockSize=$((32*1024*1024))                   # 32 MiB test blocks
_gb=$((1*1024*1024*1024))                     # cap the tested size at 1 GiB
targetSize=$(blockdev --getsize64 "$targetDevice")
blocks=$((targetSize / blockSize))
if [ "$targetSize" -gt "$_gb" ] ; then
    targetSize=$_gb
    blocks=$((targetSize / blockSize))
fi
blocks=$((blocks-1))                          # drop the last, partial block
mount "$targetDevice" /mnt/
targetDevice="/mnt/1"                         # from here on, write to a file on the filesystem
dd if=/dev/urandom of="$testFile" bs=$blockSize count=1 status=noxfer 2> /dev/null
4. Calculate the source file's checksum, then write it to each block and verify each block's checksum individually.
checksum=$(sha1sum "$testFile" | cut -d " " -f 1)   # reference checksum of the source block
for ((y=0 ; y<blocks ; y++)) ; do
    # write the test pattern to block $y, then read it back and compare
    dd if="$testFile" of="$targetDevice" bs=$blockSize count=1 seek=$y status=noxfer 2> /dev/null
    echo -n "Checking block $y ..."
    testChecksum=$(dd if="$targetDevice" bs=$blockSize count=1 skip=$y status=noxfer 2> /dev/null | sha1sum | cut -d " " -f 1)
    if [ "$checksum" == "$testChecksum" ] ; then
        echo "Checksum matched for block $y"
    else
        echo "Checksum mismatch at block $y"
        exit 80
    fi
done
5. Observe that the last two blocks have different checksums.
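For reference, the block arithmetic in step 3 works out as follows (a sketch; the real partition is slightly smaller than a full 1 GiB, which together with the explicit blocks-1 decrement in the script yields the 30 blocks mentioned above):

```shell
blockSize=$((32*1024*1024))      # 32 MiB test block
gb=$((1*1024*1024*1024))         # 1 GiB cap used by the script
echo $((gb / blockSize))         # 32 for a full, unpartitioned 1 GiB
```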
Actual results:
[...]
Checking block 24 ...checksum: facb2ef7b500779e3857c7787a237b693714c2c9, testChecksum: facb2ef7b500779e3857c7787a237b693714c2c9
Checksum matched for block 24
Checking block 25 ...checksum: facb2ef7b500779e3857c7787a237b693714c2c9, testChecksum: facb2ef7b500779e3857c7787a237b693714c2c9
Checksum matched for block 25
Checking block 26 ...checksum: facb2ef7b500779e3857c7787a237b693714c2c9, testChecksum: facb2ef7b500779e3857c7787a237b693714c2c9
Checksum matched for block 26
Checking block 27 ...checksum: facb2ef7b500779e3857c7787a237b693714c2c9, testChecksum: facb2ef7b500779e3857c7787a237b693714c2c9
Checksum matched for block 27
Checking block 28 ...checksum: facb2ef7b500779e3857c7787a237b693714c2c9, testChecksum: 8ef29351cce142235e7eeba8ffe913341d714ccb
Checksum mismatch at block 28
Checking block 29 ...checksum: facb2ef7b500779e3857c7787a237b693714c2c9, testChecksum: da39a3ee5e6b4b0d3255bfef95601890afd80709
Checksum mismatch at block 29
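Note that the testChecksum reported for block 29 (da39a3ee...) is the SHA-1 of empty input, which suggests the final read returned no data at all:

```shell
# SHA-1 of zero bytes of input; matches the block-29 testChecksum above.
printf '' | sha1sum
# da39a3ee5e6b4b0d3255bfef95601890afd80709  -
```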
Expected results:
Checksum should match on all blocks.
Additional info:
- This could not be reproduced with any other filesystem tested (ext3, ext4, xfs).
- The issue does not occur on disks larger than 1 GB (tested with 2 GB, 3 GB and 2 TB).
- Could not reproduce on RHEL 7.2 with btrfs-progs v3.19.1.