[build@sharpie ~]$ sudo mock -r fedora-20-ppc64 --shell
...
<mock-chroot>[root@sharpie ~]# wget http://kojipkgs.fedoraproject.org//packages/pyparted/3.9.3/1.fc21/src/pyparted-3.9.3-1.fc21.src.rpm
<mock-chroot>[root@sharpie ~]# yum-builddep pyparted-3.9.3-1.fc21.src.rpm
<mock-chroot>[root@sharpie ~]# rpm -i pyparted-3.9.3-1.fc21.src.rpm
<mock-chroot>[root@sharpie ~]# (cd ~/build/; rpmbuild -ba SPECS/pyparted.spec 2>&1 | tee errors.pyparted)
...

Description of problem:

runTest (tests.test__ped_disk.DiskGetMaxPartitionGeoemtryTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskGetMaxPrimaryPartitionCountTestCase) ... ok
runTest (tests.test__ped_disk.DiskGetMaxSupportedPartitionCountTestCase) ... ok
runTest (tests.test__ped_disk.DiskGetPartitionAlignmentTestCase) ... ok
runTest (tests.test__ped_disk.DiskGetPartitionBySectorTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskGetPartitionTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskGetPrimaryPartitionCountTestCase) ... ok
runTest (tests.test__ped_disk.DiskGetSetTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskIsFlagAvailableTestCase) ... ok
runTest (tests.test__ped_disk.DiskMaxPartitionLengthTestCase) ... ok
runTest (tests.test__ped_disk.DiskMaxPartitionStartSectorTestCase) ... ok
runTest (tests.test__ped_disk.DiskMaxmimzePartitionTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskMinimizeExtendedPartitionTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskNewLabeledTestCase) ... ok
runTest (tests.test__ped_disk.DiskNewUnlabeledTestCase) ... ok
runTest (tests.test__ped_disk.DiskNextPartitionTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskPrintTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskRemovePartitionTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskSetFlagTestCase) ... ok
runTest (tests.test__ped_disk.DiskSetPartitionGeomTestCase) ... skipped 'Unimplemented test case.'
runTest (tests.test__ped_disk.DiskStrTestCase) ... ok

======================================================================
FAIL: runTest (tests.test__ped_ped.FileSystemProbeSpecificTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/pyparted-3.9.3/tests/test__ped_ped.py", line 282, in runTest
    self.assertEquals(result.end, self._geometry.end)
AssertionError: 255L != 271L

----------------------------------------------------------------------
Ran 272 tests in 2.496s

FAILED (failures=1, skipped=131)
make: *** [test] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.vnM6QU (%check)

RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.vnM6QU (%check)

Version-Release number of selected component (if applicable):

How reproducible:
Always
The same failure occurs on AArch64, so it is not specific to ppc64:

http://arm.koji.fedoraproject.org/koji/taskinfo?taskID=2354502

FAIL: runTest (tests.test__ped_ped.FileSystemProbeSpecificTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/builddir/build/BUILD/pyparted-3.9.3/tests/test__ped_ped.py", line 282, in runTest
    self.assertEquals(result.end, self._geometry.end)
AssertionError: 255L != 271L
The difference is in how mke2fs creates the filesystem. On aarch64 and ppc64 it uses 8 fewer 1k blocks. If this is expected, then we need to change pyparted's test; if not, then e2fsprogs needs to be fixed.

To duplicate:

fallocate -l 140000 temp-device
/sbin/mke2fs -F -q temp-device
tune2fs -l temp-device

Note that on x86_64 the Block count is 136 and on aarch64 it is 128. On rawhide aarch64 I used e2fsprogs-1.42.10-2.fc21.aarch64 and on rawhide x86_64 I used e2fsprogs-1.42.10-2.fc21.x86_64.

I'll attach the raw fs images for each since they're small.
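For reference, the x86_64 block count of 136 falls straight out of the device size; a quick sketch of that arithmetic (assuming mke2fs picks its default 1 KiB block size for a device this small):

```python
# Block count before any rounding: the 140000-byte file created by
# fallocate, divided into 1 KiB filesystem blocks (truncated).
dev_size_bytes = 140000
block_size = 1024
blocks = dev_size_bytes // block_size
print(blocks)  # 136, matching the tune2fs "Block count" on x86_64
```

The aarch64 value of 128 comes from an extra rounding step discussed below in the thread.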
Created attachment 900462 [details] ext2 fs created on aarch64
Created attachment 900464 [details] ext2 fs created on x86_64
I don't know that it's intentional, but why does parted care? ;) And 137k filesystems? Really? I'll look into it, but I must admit, I've never even realized that a 137k filesystem could exist.
It's just a test of the filesystem probe code, so the size isn't critical. It's just that it has always matched the device size so something has changed someplace. If it is expected that's fine, I just want to know for sure so that we don't paper over a real bug.
Ok, it has to do w/ the page size difference (64k on the aarch64 box I tested on):

fs_blocks_count = dev_size;
if (sys_page_size > EXT2_BLOCK_SIZE(&fs_param))
        fs_blocks_count &= ~((blk64_t) ((sys_page_size / EXT2_BLOCK_SIZE(&fs_param)) - 1));

so we round the block count down so that the filesystem ends exactly on a page boundary. Honestly, I have no idea why we do that; the filesystem can be moved from one page-sized host to another, so I might dig into that a bit more to satisfy my own curiosity. But I don't think parted can count on mkfs always fully using any device it's handed; it's free to do rounding like this to optimize for any reason, if needed.

Thanks,
-Eric
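The C fragment quoted above can be mirrored in a few lines of Python to show how the same 136-block device comes out differently on the two page sizes (plain ints here in place of blk64_t):

```python
def rounded_block_count(dev_blocks, page_size, block_size=1024):
    """Round dev_blocks down to a whole number of pages, as in the
    mke2fs snippet quoted above (a sketch, not e2fsprogs code)."""
    if page_size > block_size:
        # Mask off the low bits so the count is a multiple of
        # (page_size / block_size) filesystem blocks.
        dev_blocks &= ~((page_size // block_size) - 1)
    return dev_blocks

print(rounded_block_count(136, 4096))   # x86_64, 4 KiB pages  -> 136
print(rounded_block_count(136, 65536))  # aarch64/ppc64, 64 KiB -> 128
```

With 4 KiB pages the mask is ~3 and 136 is already a multiple of 4, so nothing changes; with 64 KiB pages the mask is ~63 and 136 rounds down to 128, which is exactly the "8 fewer 1k blocks" seen in tune2fs.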
commit a7ccdff8e128c24564011d760a996496b0a981b3
Author: Theodore Ts'o <tytso>
Date:   Tue Jul 8 18:03:48 2003 -0400

    In mke2fs and resize2fs, round the default size of the filesystem
    to be an even multiple of the pagesize to work around a potential
    Linux kernel bug.

    Use the testio manager in mke2fs if CONFIG_TESTIO_DEBUG is set.

Heh, sure! Probably doesn't matter anymore, but it's there, and it's not technically _wrong_.

-Eric
I'll close this; if you want to re-open it & put it on pyparted, feel free.
Thanks for looking into this, I'll fix the test in pyparted.
Created attachment 900522 [details] patch for test
This will be in the next pyparted build.
This fails because parted.getLabels() is failing to get a list of valid device nodes. It looks like Python's idea of the machine type is slightly different. Investigating.
platform.machine() in Python returns "armv7l" on our builders, so we need to account for that.
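A minimal sketch of accounting for that; the helper name and prefix check are illustrative only, not pyparted's actual fix:

```python
import platform

def is_arm(machine=None):
    """Hypothetical helper: treat any "arm*" machine string as ARM,
    since platform.machine() reports "armv7l" on the Fedora ARM
    builders rather than a bare "arm"."""
    if machine is None:
        machine = platform.machine()
    return machine.startswith("arm")

print(is_arm("armv7l"))  # True
print(is_arm("x86_64"))  # False
```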
Fixed by commit 9086315eed6