Description of problem:
Customer has a host with a 3.3 TiB hardware RAID6 array, configured with a GPT partition table and a single btrfs partition. Recently the btrfs filesystem reported being full. During the investigation they found that the btrfs fi df command showed an incorrect total size. btrfs fi show and gdisk both correctly report the 3.3 TiB size. They believe this to be the cause of btrfs reporting that it was out of disk space, which led to having to add a temporary loopback device.

Jul 13 9:03:30 EDT root@hostname#> gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7029129216 sectors, 3.3 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 335A16FD-3178-48D1-9E62-BE8C5376AD69
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7029129182
Partitions will be aligned on 8-sector boundaries
Total free space is 888 sectors (444.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34      7029128294   3.3 TiB    0700

Jul 13 9:04:59 EDT root@hostname#> btrfs filesystem show /data/replication
Label: 'mariadb logs'  uuid: c9dfd858-7af7-434d-b0ea-18a16a47ab92
        Total devices 2 FS bytes used 122.18GiB
        devid    1 size 3.27TiB used 185.01GiB path /dev/sdc1
        devid    2 size 10.00GiB used 9.97GiB path /dev/loop0

Btrfs v3.16.2

btrfs filesystem df /data/replication
Data, single: total=122.01GiB, used=121.64GiB
System, single: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=9.97GiB, used=13.17MiB
Metadata, DUP: total=4.50GiB, used=532.75MiB
Metadata, single: total=44.00GiB, used=0.00
GlobalReserve, single: total=192.00MiB, used=0.00

Version-Release number of selected component (if applicable):

How reproducible:
Unclear

Actual results:
Incorrect usage reported.

Expected results:
Correct usage reported.

Additional info:
Customer provided an sosreport.
> Recently the btrfs partition reported it full.

I don't know what this means, can you provide user and kernel messages?

> btrfs fi df command showed the incorrect total size

No, but it's understandable to think this is true. From your example:

> Data, single: total=122.01GiB, used=121.64GiB

Translated, this is saying "The total amount allocated to single profile data block groups is 122.01GiB, and of that 121.64GiB is used." It has nothing to do with the total device size. Likewise:

> Metadata, RAID1: total=9.97GiB, used=13.17MiB
> Metadata, DUP: total=4.50GiB, used=532.75MiB
> Metadata, single: total=44.00GiB, used=0.00

There are metadata block groups with three different profiles: raid1, dup, and single. This isn't a normal situation, and it will prevent removing the loop device until the metadata is consolidated, because parts of the single and DUP metadata could be on the loop device. btrfs-progs 3.16 is old and I'm pretty sure it won't permit consolidating to the DUP profile while multiple devices are attached, so the sequence would be:

btrfs balance start -mconvert=single -f <mp>
btrfs device delete /dev/loop0 <mp>
btrfs balance start -mconvert=dup <mp>

Anyway, no doubt the problem is already resolved seeing as it was filed a year ago. This bug report is probably NOTABUG; it's just an (understandable) misunderstanding of the terminology used in btrfs fi df output.

Long term it's better to get btrfs-progs up to at least v4.3, where the 'filesystem usage' command exists and has saner output, and have users report that instead of fi show and fi df.
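For illustration, a minimal sketch of that consolidation sequence, assuming the filesystem is still mounted at /data/replication as shown in the report; the mount point and devices should be confirmed before running anything.

# Sketch only; assumes the mount point /data/replication from the report.

# Convert all metadata block groups to the single profile. -f is required
# because single is less redundant than the existing RAID1/DUP metadata.
btrfs balance start -mconvert=single -f /data/replication

# Remove the temporary loop device now that no metadata chunks have to
# remain on it.
btrfs device delete /dev/loop0 /data/replication

# Convert metadata back to DUP now that only one device remains.
btrfs balance start -mconvert=dup /data/replication

# With btrfs-progs >= 4.3, check allocation against device size with:
btrfs filesystem usage /data/replication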
Closing with INSUFFICIENT_DATA:
- one case is CLOSED
- may be fixed in the latest version
- lack of confirmation that the problem was fixed