Description of problem:

btrfs-convert fails with a core dump when converting a filesystem. The
filesystem was checked with "e2fsck -y -f" before the conversion attempt. The
system has 32 GB of RAM installed and was otherwise idle.

# btrfs-convert -L /dev/mapper/vg_backup-lv_backup
btrfs-convert from btrfs-progs v5.18

Source filesystem:
  Type:           ext2
  Label:          backup
  Blocksize:      4096
  UUID:           8407aaf9-3225-4b7d-b6fd-af75e97822c7
Target filesystem:
  Label:
  Blocksize:      4096
  Nodesize:       16384
  UUID:           b0054b66-d06b-458d-b6dd-74dd4d2d5e78
  Checksum:       crc32c
  Features:       extref, skinny-metadata, no-holes (default)
    Data csum:    yes
    Inline data:  yes
    Copy xattr:   yes
Reported stats:
  Total space:      12412111552512
  Free space:       16709885534208 (134.63%)
  Inode count:          915660800
  Free inodes:          903394212
  Block count:         3030300672
Create initial btrfs filesystem
Create ext2 image file
Create btrfs metadata
convert/source-fs.c:277: record_file_blocks: BUG_ON `cur_off - key.offset >= extent_num_bytes` triggered, value 1
btrfs-convert(record_file_blocks+0x3ff)[0x5596fd4115df]
btrfs-convert(block_iterate_proc+0xc2)[0x5596fd4116e2]
btrfs-convert(+0x158b8)[0x5596fd4118b8]
/lib64/libext2fs.so.2(+0x13c8f)[0x7fba3169ac8f]
/lib64/libext2fs.so.2(+0x1f743)[0x7fba316a6743]
/lib64/libext2fs.so.2(ext2fs_block_iterate2+0x30)[0x7fba316a6b70]
btrfs-convert(+0x1595b)[0x5596fd41195b]
btrfs-convert(+0x16877)[0x5596fd412877]
btrfs-convert(+0x181ae)[0x5596fd4141ae]
btrfs-convert(main+0x477)[0x5596fd40a937]
/lib64/libc.so.6(+0x29550)[0x7fba31439550]
/lib64/libc.so.6(__libc_start_main+0x89)[0x7fba31439609]
btrfs-convert(_start+0x25)[0x5596fd40aa65]
Aborted (core dumped)

# uname -a
Linux magneto.discordia 5.18.5-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jun 16 14:51:11 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Did you run `e2fsck -fvy` before attempting the conversion? Could you run it now and provide the output? Please also attach the file created by `debugfs -R show_super_stats /dev/mapper/vg_backup-lv_backup > debugfsshowsuper.txt`
I ran "e2fsck -f -y /dev/mapper/vg_backup-lv_backup" before the conversion
attempt, and it didn't report any errors. Below is a run of
"e2fsck -f -v -y /dev/mapper/vg_backup-lv_backup" after the attempt.

# e2fsck -f -v -y /dev/mapper/vg_backup-lv_backup
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

    12266588 inodes used (1.34%, out of 915660800)
       37031 non-contiguous files (0.3%)
       17584 non-contiguous directories (0.1%)
             # of inodes with ind/dind/tind blocks: 0/0/0
             Extent depth histogram: 12083505/16214/69
  3107368091 blocks used (42.42%, out of 7325267968)
           0 bad blocks
        1391 large files

     4370504 regular files
     7679524 directories
          13 character device files
           0 block device files
          62 fifos
    69886733 links
      216349 symbolic links (166590 fast symbolic links)
         127 sockets
------------
    82153312 files
Created attachment 1892740 [details] Output of `debugfs -R show_super_stats /dev/mapper/vg_backup-lv_backup > debugfsshowsuper.txt`
> Reported stats:
>   Total space:      12412111552512
>   Free space:       16709885534208 (134.63%)

What is this? There's more free than total? That's not right. Total is 12T
according to this, but then from the super:

> Block count:              7325267968

27T? I'm confused, which is it? What's the size of the LV?

I'm seeing feature flag meta_bg on this ext4, but not with default mkfs.ext4 on
a 28T loop device. And RAID stride and stripe width are set in the super,
consistent with mkfs.ext4 detecting an underlying RAID. Not immediately seeing
how this is getting tripped up, but it's definitely confused somehow; maybe Qu
will have an idea tomorrow.
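A side note on the mismatched numbers: they are exactly what a 32-bit truncation of the 64-bit ext4 block count would produce. This is plain arithmetic on the figures quoted above, not btrfs-progs code:

```python
# Check whether btrfs-convert's odd "Reported stats" match a 32-bit
# truncation of the real 64-bit ext4 block count.

real_blocks = 7325267968            # block count from the ext4 superblock
blocksize = 4096

truncated = real_blocks & 0xFFFFFFFF   # what a u32 field would hold
assert truncated == 3030300672         # matches "Block count" in the report

total = truncated * blocksize
assert total == 12412111552512         # matches "Total space" in the report

free = 16709885534208                  # reported "Free space" (untruncated)
print(f"free/total = {free / total:.2%}")   # ~134.63%: free > truncated total
```

So the "Free space" figure is computed from the real (64-bit) free block count, while "Total space" comes from a truncated 32-bit count, which is how free ends up larger than total.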
I get a similar result on a loop-mounted 28T ext4 using default mkfs options.

> Total space:      12412111552512
> Free space:       29531792404480 (237.93%)

It does complete without error; however, it's also an empty file system.
Some more info on the filesystem.

[root@magneto ~]# df -H /backup
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_backup-lv_backup   30T   13T   16T  44% /backup

[root@magneto ~]# df /backup
Filesystem                        1K-blocks        Used   Available Use% Mounted on
/dev/mapper/vg_backup-lv_backup 29069097456 12197497948 15582365232  44% /backup

[root@magneto ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_backup/lv_backup
  LV Name                lv_backup
  VG Name                vg_backup
  LV UUID                dMsxzi-m2eH-0Apt-gWga-BXOk-JWYz-F4Rs60
  LV Write Access        read/write
  LV Creation host, time magneto.discordia, 2019-06-21 15:30:10 -0400
  LV Status              available
  # open                 1
  LV Size                <27.29 TiB
  Current LE             7153582
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     12288
  Block device           253:0

[root@magneto ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               vg_backup
  PV Size               21.83 TiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              5722974
  Free PE               0
  Allocated PE          5722974
  PV UUID               Nkh8ZO-djsP-VKJ1-VuS8-KIVO-plVf-ZIwoKU

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vg_backup
  PV Size               <5.46 TiB / not usable 5.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1430608
  Free PE               0
  Allocated PE          1430608
  PV UUID               fSr655-OeFV-ORJZ-5fmd-Jbh5-2LQb-zHlR7f

[root@magneto ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid6 sdi1[1] sdp1[7] sdo1[6] sdn1[3] sdj1[0] sdk1[2] sdl1[4] sdm1[5]
      5859775488 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid6 sdg1[14] sdh1[15] sdf1[13] sde1[12] sdd1[11] sdc1[10] sdb1[9] sda1[8]
      23441307648 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [=================>...]  check = 87.4% (3417230460/3906884608) finish=131.7min speed=61945K/sec
      bitmap: 0/8 pages [0KB], 262144KB chunk

unused devices: <none>
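The LV geometry above is internally consistent with the 7325267968-block count from the ext4 superblock; only btrfs-convert's reported total disagrees. A quick cross-check of the arithmetic (using the 4 MiB extent size and 4 KiB block size shown above):

```python
# Cross-check the LV size against the ext4 superblock block count.

le_count = 7153582                # "Current LE" from lvdisplay
pe_size = 4 * 2**20               # 4.00 MiB physical extents
lv_bytes = le_count * pe_size

fs_blocks = 7325267968            # block count from e2fsck / the super
fs_bytes = fs_blocks * 4096      # 4 KiB filesystem blocks

assert lv_bytes == fs_bytes == 30004297596928
print(f"{lv_bytes / 2**40:.2f} TiB")   # ~27.29 TiB, matching lvdisplay
```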
https://github.com/kdave/btrfs-progs/issues/487
With this change....

diff --git a/convert/source-ext2.c b/convert/source-ext2.c
index 9fad4c50..01b630ba 100644
--- a/convert/source-ext2.c
+++ b/convert/source-ext2.c
@@ -92,8 +92,8 @@ static int ext2_open_fs(struct btrfs_convert_context *cctx, const char *name)
 	cctx->fs_data = ext2_fs;
 	cctx->blocksize = ext2_fs->blocksize;
-	cctx->block_count = ext2_fs->super->s_blocks_count;
-	cctx->total_bytes = (u64)ext2_fs->super->s_blocks_count * ext2_fs->blocksize;
+	cctx->block_count = ext2fs_blocks_count(ext2_fs->super);
+	cctx->total_bytes = cctx->block_count * ext2_fs->blocksize;
 	cctx->label = strndup((char *)ext2_fs->super->s_volume_name, 16);
 	cctx->first_data_block = ext2_fs->super->s_first_data_block;
 	cctx->inodes_count = ext2_fs->super->s_inodes_count;

.... the total space and block count are reported correctly. But the conversion still fails.

[root@magneto ~]# ./btrfs-convert /dev/mapper/vg_backup-lv_backup
btrfs-convert from btrfs-progs v5.18.1

Source filesystem:
  Type:           ext2
  Label:          backup
  Blocksize:      4096
  UUID:           8407aaf9-3225-4b7d-b6fd-af75e97822c7
Target filesystem:
  Label:
  Blocksize:      4096
  Nodesize:       16384
  UUID:           1b2340e7-3b46-4e4c-a702-a36d9abdb909
  Checksum:       crc32c
  Features:       extref, skinny-metadata, no-holes (default)
    Data csum:    yes
    Inline data:  yes
    Copy xattr:   yes
Reported stats:
  Total space:      30004297596928
  Free space:       16711655530496 (55.70%)
  Inode count:          915660800
  Free inodes:          903394212
  Block count:         7325267968
Create initial btrfs filesystem
Create ext2 image file
Create btrfs metadata
convert/source-fs.c:277: record_file_blocks: BUG_ON `cur_off - key.offset >= extent_num_bytes` triggered, value 1
./btrfs-convert[0x414d36]
./btrfs-convert[0x414dba]
./btrfs-convert(record_file_blocks+0x2d6)[0x41598a]
./btrfs-convert(block_iterate_proc+0xfe)[0x4152d4]
./btrfs-convert[0x4166aa]
/lib64/libext2fs.so.2(+0x13c8f)[0x7fe937dddc8f]
/lib64/libext2fs.so.2(+0x1f743)[0x7fe937de9743]
/lib64/libext2fs.so.2(ext2fs_block_iterate2+0x30)[0x7fe937de9b70]
./btrfs-convert[0x41679c]
./btrfs-convert[0x417e2c]
./btrfs-convert[0x4180b0]
./btrfs-convert[0x40d33f]
./btrfs-convert[0x410087]
./btrfs-convert(main+0x78c)[0x41199e]
/lib64/libc.so.6(+0x29550)[0x7fe937b7c550]
/lib64/libc.so.6(__libc_start_main+0x89)[0x7fe937b7c609]
Aborted (core dumped)
Also, is there any ETA on someone taking a look at this? I ask because I can only hold onto the filesystem in its current state for so long. At some point I'll need to put the disk array back in service, and I don't have enough hardware to keep a copy of the filesystem, so I'll lose the ability to reproduce the bug.
I suggest using `e2image -Q` to create a qcow2 image of the ext4 file system's metadata (it excludes file data), and saving it somewhere on a file-sharing service until a dev can get around to looking at it. The image will contain directory and file names. If you need those scrubbed you can add the -s option, but that might make it less useful to Btrfs developers (not sure).
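A minimal sketch of the workflow (assumes e2fsprogs is installed; the /tmp paths are placeholders — it builds a small throwaway ext4 image rather than touching the real LV):

```shell
# Create a small scratch ext4 filesystem in a regular file.
truncate -s 64M /tmp/test-ext4.img
mkfs.ext4 -F -q /tmp/test-ext4.img

# Capture only the filesystem metadata as a qcow2 image.
# Add -s to scramble directory entries / file names if needed.
e2image -Q /tmp/test-ext4.img /tmp/test-ext4.qcow2
ls -l /tmp/test-ext4.qcow2
```

On the real volume the equivalent would be `e2image -Q /dev/mapper/vg_backup-lv_backup backup-meta.qcow2`; because it stores metadata only, the result is tiny compared to the 27T filesystem and is usually very compressible.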
This message is a reminder that Fedora Linux 36 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 36 on
2023-05-16. It is Fedora's policy to close all bug reports from releases that
are no longer maintained. At that time this bug will be closed as EOL if it
remains open with a 'version' of '36'.

Package Maintainer: If you wish for this bug to remain open because you plan to
fix it in a currently maintained version, change the 'version' to a later
Fedora Linux version. Note that the version field may be hidden. Click the
"Show advanced fields" button if you do not see it.

Thank you for reporting this issue and we are sorry that we were not able to
fix it before Fedora Linux 36 is end of life. If you would still like to see
this bug fixed and are able to reproduce it against a later version of Fedora
Linux, you are encouraged to change the 'version' to a later version prior to
this bug being closed.
Fedora Linux 36 entered end-of-life (EOL) status on 2023-05-16.

Fedora Linux 36 is no longer maintained, which means that it will not receive
any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora
Linux please feel free to reopen this bug against that version. Note that the
version field may be hidden. Click the "Show advanced fields" button if you do
not see the version field. If you are unable to reopen this bug, please file a
new report against an active release.

Thank you for reporting this bug and we are sorry it could not be fixed.