Red Hat Bugzilla – Bug 476135
df shows incorrect 1k-blocks size for mounted raid filesystem
Last modified: 2009-01-04 14:03:00 EST
Created attachment 326703 [details]
Output from various commands -- see especially md2 & md3 size
Description of problem:
For a local fix to bug 470174 I installed a SATA+RAID adapter and two HDs as part of replacing the SCSI adapter and 1 HD. For the transition period they need to co-exist. My intent is to configure software raid for the new HDs on the existing build of F10.
The basic design places five software RAID filesystems on the two SATA HDs: RAID-1 for /home and /db [a filesystem containing http, mysql, ftp, etc. server data structures normally found in /var], and RAID-0 for the swap, /, and /retired-os filesystems. The RAID-0 arrays sit on LOGICAL partitions, the RAID-1 arrays on PRIMARY partitions. The size errors are with the LOGICAL filesystems only(?). After moving data around, the SCSI HD and one IDE HD are to be removed.
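For reference, a minimal sketch of how such a layout might be created with mdadm -- the device names, partition numbers, and md numbers below are illustrative assumptions, not taken from the attachment:

```shell
# Hypothetical layout sketch; adjust device/partition names to match reality.

# RAID-1 (mirrored) arrays on PRIMARY partitions: /home and /db
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd2 /dev/sde2

# RAID-0 (striped) arrays on LOGICAL partitions: swap, /, /retired-os
mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sdd5 /dev/sde5
mdadm --create /dev/md5 --level=0 --raid-devices=2 /dev/sdd6 /dev/sde6
mdadm --create /dev/md6 --level=0 --raid-devices=2 /dev/sdd7 /dev/sde7
```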
First problem: The drives do not boot in the same sequence each time. I see both:
IDE IDE SCSI SATA SATA
IDE IDE SATA SATA SCSI
Even though the SATA init precedes the SCSI init every time during BIOS init. I can work around this and include it only for information.
Second problem: After mounting four mdX devices, I find some filesystems show the wrong size as reported by df, plus various other inconsistencies between mdadm, fdisk, cfdisk, parted, and /proc/mdstat. This is a show-stopper, even though the filesystems will accept data transfers in and out.
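For cross-checking, these are the read-only views that ought to agree on each array's size (the md device and mount point below are assumptions for illustration):

```shell
cat /proc/mdstat               # kernel's block count per md device
mdadm --detail /dev/md2        # array size according to mdadm
blockdev --getsize64 /dev/md2  # block-device size in bytes
df -k /home                    # 1k-blocks as the mounted filesystem sees it
```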
Third problem: The OS shows random lock-ups, seen as very slow response, without leaving errors in /var/log. The delays affect all sessions and can last for many seconds; one lasted perhaps two minutes while a stalled "...# rm file" cleared itself on one of the RAID filesystems. This is troubling -- I'm expecting smoke from somewhere...
Fourth problem: One file managed to persist on one RAID filesystem even after the build process ran "...# mkfs.ext3 ...". My expectation is that a newly made filesystem would be empty. I can work around this as needed.
Version-Release number of selected component (if applicable):
I built F10-Preview and apply updates every couple of days -- if mdadm is the root problem, it is now 184.108.40.206-1.fc10.
I have had similar results across several builds -- I've scripted the build to try various adjustments.
I don't find other discussion on these issues, so this may be non-fatal.
BUT before I unplug the old hardware it seems that various views of the same devices should be more consistent?
I am able to copy 24GB of data into the RAID-0 arrays md5 and md6 without error; comparing the copies afterward, I find them identical to the input. From this test it seems that RAID0 is in fact not striping the data into the sdd & sde partitions, which have merely 20GB capacity -- clearly I don't know where the data is being placed. It does seem that the df measurement of filesystem size is correct and that mdadm's creation of the raid0 arrays is at fault / perhaps through my mis-use...?
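For what it's worth, the raw capacity arithmetic is consistent with striping working correctly. The sketch below assumes two 20GB component partitions per array (the figure from the test above); md superblock and ext3 overhead are ignored, so these are upper bounds:

```python
# Rough capacity arithmetic for two-device md arrays.
component_gb = 20   # each SATA partition, per the 24GB copy test above
n_devices = 2

raid0_capacity = component_gb * n_devices  # RAID-0 striping sums capacity
raid1_capacity = component_gb              # RAID-1 mirroring keeps one copy

print(raid0_capacity)  # 40 -- a 24GB copy fits only if both members are used
print(raid1_capacity)  # 20 -- 24GB could never fit on a single 20GB member
```

So the 24GB copy succeeding is itself evidence that the data is being spread across both members, whatever the size-reporting tools claim.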
My error -- the df block count is [probably] not flawed.
Several other issues above remain, but are so minor they don't(?) deserve a bug report // this report deserves to be closed [notabug] in my opinion.