Bug 1243986

Summary: df reports wrong total size for btrfs
Product: Red Hat Enterprise Linux 7
Reporter: Ryan Crews <rcrews>
Component: btrfs-progs
Assignee: fs-maint
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Eryu Guan <eguan>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 7.1
CC: bugzilla, dwysocha, eguan, rcrews
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-29 19:47:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1298243

Description Ryan Crews 2015-07-16 18:54:42 UTC
Description of problem:
Customer has a host with a 3.3 TiB hardware RAID6 array, configured with a GPT partition table and a single btrfs partition. Recently the btrfs filesystem reported that it was full.

During the investigation they found that the btrfs fi df command showed an incorrect total size. btrfs fi show and gdisk both correctly report the 3.3 TiB size.

They believe this to be the cause of btrfs reporting that it was out of disk space, which forced them to add a temporary loopback device as extra capacity (roughly as sketched below).
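For context, attaching a temporary loopback device to a full btrfs filesystem is typically done along the following lines. This is a reconstruction, not the customer's actual commands; the backing-file path is hypothetical, and the 10G size is taken from the loop0 entry in the fi show output below.

truncate -s 10G /var/tmp/btrfs-spill.img           # create a sparse backing file (hypothetical path)
losetup --find --show /var/tmp/btrfs-spill.img     # attach it; prints the loop device, e.g. /dev/loop0
btrfs device add /dev/loop0 /data/replication      # add the loop device to the full filesystem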

Jul 13 9:03:30 EDT root@hostname#> gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7029129216 sectors, 3.3 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 335A16FD-3178-48D1-9E62-BE8C5376AD69
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7029129182
Partitions will be aligned on 8-sector boundaries
Total free space is 888 sectors (444.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34      7029128294   3.3 TiB     0700  

Jul 13 9:04:59 EDT root@hostname#> btrfs filesystem show /data/replication
Label: 'mariadb logs'  uuid: c9dfd858-7af7-434d-b0ea-18a16a47ab92
        Total devices 2 FS bytes used 122.18GiB
        devid    1 size 3.27TiB used 185.01GiB path /dev/sdc1
        devid    2 size 10.00GiB used 9.97GiB path /dev/loop0

Btrfs v3.16.2


btrfs filesystem df /data/replication
Data, single: total=122.01GiB, used=121.64GiB
System, single: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=9.97GiB, used=13.17MiB
Metadata, DUP: total=4.50GiB, used=532.75MiB
Metadata, single: total=44.00GiB, used=0.00
GlobalReserve, single: total=192.00MiB, used=0.00

Version-Release number of selected component (if applicable):

How reproducible: Unclear

Actual results: btrfs fi df reports totals that do not match the 3.3 TiB device size

Expected results: reported totals consistent with the device size

Additional info: customer provided an sosreport

Comment 6 Chris Murphy 2016-08-14 18:51:42 UTC
>Recently the btrfs partition reported it full.

I don't know what this means; can you provide the user-space and kernel messages?

>btrfs fi df command showed the incorrect total size

No, but it's understandable to think this is true. From your example:

>Data, single: total=122.01GiB, used=121.64GiB

Translated, this is saying "The total amount allocated to single-profile data block groups is 122.01GiB, and of that 121.64GiB is used." It has nothing to do with the total device size.

Likewise:
>Metadata, RAID1: total=9.97GiB, used=13.17MiB
>Metadata, DUP: total=4.50GiB, used=532.75MiB
>Metadata, single: total=44.00GiB, used=0.00
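
As a cross-check, the block group totals add up to the allocated space reported by fi show, remembering that RAID1 and DUP allocations occupy twice their nominal size on disk (arithmetic added here for clarity):

  Data, single           122.01 GiB
  System, single           0.03 GiB
  Metadata, RAID1      2 x 9.97 GiB = 19.94 GiB
  Metadata, DUP        2 x 4.50 GiB =  9.00 GiB
  Metadata, single        44.00 GiB
  ----------------------------------------------
  Total allocated        194.98 GiB

That matches fi show exactly: 185.01 GiB used on /dev/sdc1 plus 9.97 GiB used on /dev/loop0 is 194.98 GiB. (GlobalReserve is carved out of metadata, so it is not added.) These are allocation figures, not device capacity.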

There are metadata block groups with three different profiles: RAID1, DUP, and single. This isn't a normal situation, and it will prevent removing the loop device until the metadata is consolidated, because parts of the single and DUP metadata could be on the loop device. btrfs-progs 3.16 is old and I'm pretty sure it won't permit converting to the DUP profile while multiple devices are attached, so the sequence would be:

btrfs balance start -mconvert=single -f <mp>
btrfs device delete /dev/loop0 <mp>
btrfs balance start -mconvert=dup <mp>
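
(To spell out the commands above: <mp> is the mount point, /data/replication in this report, and -f is needed on the first balance because, as I understand the tool, it refuses to convert metadata to a less redundant profile without force.)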

Anyway, no doubt the problem is already resolved, seeing as it was filed a year ago. But this bug report is probably notabug; it's just an (understandable) misunderstanding of the lingo used by btrfs fi df output. Long term it's better to get btrfs-progs up to at least v4.3, where the 'filesystem usage' command exists and has saner output, and have users report that instead of fi show and fi df.
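
For reference, the newer command would be run as follows (a sketch, assuming the same mount point as above):

btrfs filesystem usage /data/replication

Its overall section reports device size, allocated space, and estimated free space on separate lines, so total capacity and block group allocation are no longer conflated.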

Comment 7 Dave Wysochanski 2016-11-29 19:47:15 UTC
Closing as INSUFFICIENT_DATA:
- the one associated case is CLOSED
- the issue may be fixed in the latest version
- there is no confirmation that the problem is fixed

Comment 8 Red Hat Bugzilla 2023-09-14 03:02:10 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days