Bug 54371
| Summary: | mke2fs (e2fsprogs) appears to have an end condition bug | | |
|---|---|---|---|
| Product: | [Retired] Red Hat Linux | Reporter: | Andrew Smith <rhbz> |
| Component: | kernel | Assignee: | Arjan van de Ven <arjanv> |
| Status: | CLOSED NOTABUG | QA Contact: | Aaron Brown <abrown> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.1 | CC: | sarsenault, sct |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2002-02-19 22:48:56 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
**Description** (Andrew Smith, 2001-10-04 23:13:54 UTC)
OK, just to put more belief in the end condition bug idea: I have just installed ANOTHER Red Hat 7.1 machine and AGAIN it has a few bad blocks at the end of the / partition:

```
**** Sun Oct 14 05:42:00 EST 2001
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hdc5              3637796   2441168   1011836  71% /
/dev/hdc1                49838      3832     43433   9% /boot
/dev/hda1              4208784    261284   3947500   7% /dos

**** Sun Oct 14 05:53:14 EST 2001  badblocks /=/dev/hdc5
Checking for bad blocks in read-only mode
From block 0 to 3695863
3695860
3695861
3695862
Pass completed, 3 bad blocks found.

**** Sun Oct 14 06:02:45 EST 2001  badblocks /boot=/dev/hdc1
Checking for bad blocks in read-only mode
From block 0 to 51471
Pass completed, 0 bad blocks found.
```

Compaq Deskpro. Fuji 17G drive. If you need more info let me know.

---

sarsenault:

```
[root@... /root]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda6             5.6G  1.1G  4.2G  20% /
/dev/hda7             1.8G   33M  1.7G   2% /backup
/dev/hda1              53M  9.1M   41M  18% /boot
/dev/hda5             7.7G  751M  6.5G  11% /home
none                  109M     0  108M   0% /dev/shm

[root@... /root]# badblocks -v /dev/hda1
Checking for bad blocks in read-only mode
From block 0 to 56196
Pass completed, 0 bad blocks found.

[root@... /root]# badblocks -v /dev/hda5
Checking for bad blocks in read-only mode
From block 0 to 8193118
8193116
8193117
Pass completed, 2 bad blocks found.

[root@... /root]# badblocks -v /dev/hda6
Checking for bad blocks in read-only mode
From block 0 to 5919921
5919920
Pass completed, 1 bad blocks found.

[root@... /root]# badblocks -v /dev/hda7
Checking for bad blocks in read-only mode
From block 0 to 1951866
1951864
1951865
Pass completed, 2 bad blocks found.
```

This is not reassuring, but I am not believing the results because of this post. I sure hope someone comes up with an answer.

---

This is a kernel problem. I think a partial fix is in our current errata kernel; a real clean fix can only go into the development kernel 2.5.x.

cu,
Florian La Roche

---

Can you check if your partition is an odd number of sectors in size?
Which kernel are you using, exactly? This sounds to me as if there's a block size problem manifesting on filesystems using a 1k blocksize. If the buffered IO during the badblocks test happens to use a 4k blocksize by default, you'd get exactly these symptoms.

---

OK, the partitions were created with the standard 7.1 install, so I'd guess my current kernel is not going to explain anything, but anyway it is 2.4.9-12 (from the Red Hat updates).

Having looked at this again, I see two actual problems:

1) The number of blocks on each partition (reported by df) is quite a bit smaller than the number used by badblocks to check the whole partition. Is this expected behaviour? Should df report the true size of the partition, or does ext2 not use the whole partition and thus waste about 1% or 2% of it? Or is this extra space used for something else?

2) badblocks is checking past the size specified by df.

Output of /proc/partitions (for machine 1 or 2 at top; they are the same):

```
major minor  #blocks  name  rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

   3     0  19938240  hda   4414849 15753092 160234604 7482097 324656 2487693 22511544 5061971 -20 15050850 9646918
   3     1   1028128  hda1  294 0 304 210 0 0 0 0 0 210 210
   3     2     56227  hda2  10907 173918 369650 10170 5 8 32 510 0 9890 10680
   3     3         1  hda3  0 0 0 0 0 0 0 0 0 0 0
   3     5  18322101  hda5  4403538 15578289 159856690 7469557 324595 2486748 22503568 5057831 0 14774260 12797228
   3     6    530113  hda6  110 885 7960 2160 56 937 7944 3630 0 3250 5790
```

---

"df" counts usable blocks, but there are reserved blocks, over and above that count, plus the inode tables, so you would expect "df" to show less than the partition size. "tune2fs -l" will list the superblock on a device and will tell you the true total size (block count) that the filesystem has been created with.

/dev/hda5 is exactly 18322101 blocks long. That's quite large, so hda5 is going to have a blocksize of 4k.
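The gap between the df-reported filesystem size and the raw partition size can be sketched numerically. A rough model, not taken from the report: the ~2% metadata share (inode tables, bitmaps, superblock copies) is an assumed figure, while the 5% reserved-blocks fraction is the well-known mke2fs default (`-m 5`). `ext2_df_view` is a hypothetical helper name.

```python
# Sketch: why `df` shows fewer 1 KiB blocks than the raw partition holds.
# The 2% metadata fraction is an assumption; 5% reserved is the mke2fs default.

def ext2_df_view(partition_blocks, metadata_fraction=0.02, reserved_fraction=0.05):
    """Approximate the df view of an ext2 filesystem.

    partition_blocks  -- raw partition size in 1 KiB blocks
    Returns (total blocks df would list, blocks hidden from 'Available').
    """
    # Inode tables, block/inode bitmaps and superblock copies are carved
    # out before df ever sees them, so df's total is already smaller.
    fs_blocks = int(partition_blocks * (1 - metadata_fraction))
    # Reserved-for-root blocks still count in the total but are
    # subtracted from the 'Available' column.
    reserved = int(fs_blocks * reserved_fraction)
    return fs_blocks, reserved

# /dev/hdc5 from the report: raw size 3695863 blocks, df listed 3637796,
# i.e. roughly the 1-2% overhead the reporter noticed.
fs_blocks, reserved = ext2_df_view(3695863)
print(fs_blocks, reserved)
```

With the assumed 2% the model lands near, but not exactly on, the reported 3637796; the real overhead depends on the inode count and block group layout that mke2fs chose.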
You have hda5 mounted, so the kernel has already been forced into using that 4k blocksize for all IO access to that partition. You have not given "badblocks" a block size argument, so it has assumed the smallest possible, 1k, to give the greatest possible coverage of the device.

badblocks has then tried to access the 1k blocks beyond the last complete 4k block in the partition (because the partition has an odd size), and because the kernel is already using a 4k blocksize, it tries to pad the 1k read out to a complete 4k block and fails, because that extends beyond the end of the device.

Solution: use the "-b 4096" option to badblocks to tell it what the blocksize really is.

This is not strictly a bug, more a restriction on the kernel's ability to deal with multiple blocksizes at once on a device. Please reopen this bug report if "-b 4096" doesn't fix things for you.
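The arithmetic in the explanation above can be checked directly: with the kernel locked to a 4 KiB blocksize, only complete 4 KiB blocks are readable, so every default 1 KiB badblocks block past the last complete 4 KiB block fails. A small sketch (the helper name `phantom_bad_blocks` is invented for illustration) reproduces every "bad block" count reported in this bug:

```python
# Sketch: which 1 KiB blocks lie beyond the last complete 4 KiB block
# and therefore fail when the kernel is already using a 4 KiB blocksize.

def phantom_bad_blocks(partition_1k_blocks, kernel_blocksize_kib=4):
    """Return the 1 KiB block numbers badblocks would falsely report as bad."""
    # Round the partition size down to a whole number of 4 KiB blocks.
    last_complete = (partition_1k_blocks // kernel_blocksize_kib) * kernel_blocksize_kib
    # Everything from there to the end cannot be padded to a full 4 KiB read.
    return list(range(last_complete, partition_1k_blocks))

# /dev/hdc5, 3695863 blocks -> the report's 3 bad blocks:
print(phantom_bad_blocks(3695863))  # [3695860, 3695861, 3695862]
# /dev/hda5, 8193118 blocks -> the report's 2 bad blocks:
print(phantom_bad_blocks(8193118))  # [8193116, 8193117]
# /dev/hda6, 5919921 blocks -> the report's 1 bad block:
print(phantom_bad_blocks(5919921))  # [5919920]
```

A partition whose size is an exact multiple of 4 KiB produces no phantoms, which is also why /boot partitions small enough to get a 1k-blocksize filesystem came back clean.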