Bug 1243986 - df reports wrong total size for btrfs
Summary: df reports wrong total size for btrfs
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: btrfs-progs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: fs-maint
QA Contact: Eryu Guan
URL:
Whiteboard:
Depends On:
Blocks: 1298243
 
Reported: 2015-07-16 18:54 UTC by Ryan Crews
Modified: 2023-09-14 03:02 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-29 19:47:15 UTC
Target Upstream Version:
Embargoed:



Description Ryan Crews 2015-07-16 18:54:42 UTC
Description of problem:
The customer has a host with a 3.3 TiB hardware RAID 6 array, configured with a GPT partition table and a single btrfs partition. Recently the btrfs filesystem reported itself full.

During the investigation they found that the btrfs fi df command showed an incorrect total size, while btrfs fi show and gdisk both correctly report the 3.3 TiB size.

They believe this to be the cause of btrfs reporting that it was out of disk space, which led to them having to add a temporary loopback device.

Jul 13 9:03:30 EDT root@hostname#> gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7029129216 sectors, 3.3 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 335A16FD-3178-48D1-9E62-BE8C5376AD69
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7029129182
Partitions will be aligned on 8-sector boundaries
Total free space is 888 sectors (444.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34      7029128294   3.3 TiB     0700  

Jul 13 9:04:59 EDT root@hostname#> btrfs filesystem show /data/replication
Label: 'mariadb logs'  uuid: c9dfd858-7af7-434d-b0ea-18a16a47ab92
        Total devices 2 FS bytes used 122.18GiB
        devid    1 size 3.27TiB used 185.01GiB path /dev/sdc1
        devid    2 size 10.00GiB used 9.97GiB path /dev/loop0

Btrfs v3.16.2


btrfs filesystem df /data/replication
Data, single: total=122.01GiB, used=121.64GiB
System, single: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=9.97GiB, used=13.17MiB
Metadata, DUP: total=4.50GiB, used=532.75MiB
Metadata, single: total=44.00GiB, used=0.00
GlobalReserve, single: total=192.00MiB, used=0.00

Version-Release number of selected component (if applicable):
btrfs-progs v3.16.2 (per the "Btrfs v3.16.2" line in the fi show output above)

How reproducible: Unclear

Actual results: btrfs fi df reports a total size that does not match the device size

Expected results: reported total size consistent with the 3.3 TiB device

Additional info: customer provided an sosreport

Comment 6 Chris Murphy 2016-08-14 18:51:42 UTC
>Recently the btrfs partition reported it full.

I don't know what this means, can you provide user and kernel messages?
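
(For example, assuming the default RHEL 7 logging setup, something like

dmesg | grep -i btrfs
grep -i btrfs /var/log/messages

should turn up the kernel-side messages from around the time the filesystem reported full; the grep pattern is just a starting point.)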

>btrfs fi df command showed the incorrect total size

No, but it's understandable to think this is true. From your example:

>Data, single: total=122.01GiB, used=121.64GiB

Translated, this is saying "the total amount allocated to single-profile data block groups is 122.01GiB, and of that 121.64GiB is used." It has nothing to do with the total device size.

Likewise:
>Metadata, RAID1: total=9.97GiB, used=13.17MiB
>Metadata, DUP: total=4.50GiB, used=532.75MiB
>Metadata, single: total=44.00GiB, used=0.00

There are metadata block groups with three different profiles: RAID1, DUP, and single. This isn't a normal situation, but it will prevent removing the loop device until the metadata is consolidated, because parts of the single and DUP metadata could be on the loop device. btrfs-progs 3.16 is old and I'm pretty sure it won't permit consolidating to the DUP profile with multiple devices attached, so:

btrfs balance start -f -mconvert=single <mp>   # -f is required when reducing metadata redundancy
btrfs device delete /dev/loop0 <mp>
btrfs balance start -mconvert=dup <mp>         # back to DUP once the loop device is gone
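
As a back-of-the-envelope cross-check using only the numbers quoted above (counting RAID1 and DUP twice, since each stores two copies):

122.01 (data) + 0.03 (system) + 2x9.97 (RAID1 meta) + 2x4.50 (DUP meta) + 44.00 (single meta) = ~194.98 GiB

which matches the 185.01 GiB + 9.97 GiB = 194.98 GiB that fi show reports as 'used' across the two devices. So the 'total' figures really are allocation, not device size.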

Anyway, no doubt the problem is already resolved, seeing as it was filed a year ago. But this bug report is probably notabug; it's just an (understandable) misunderstanding of the lingo used by btrfs fi df output. Long term it's better to get btrfs-progs up to at least v4.3, where the 'filesystem usage' command exists and has saner output, and have users report that instead of fi show and fi df.
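
For reference, a minimal sketch of that newer check, using the mountpoint from this report (needs btrfs-progs v4.3 or later):

btrfs filesystem usage /data/replication

It reports device size, allocated, and unallocated space together, so there's no ambiguous 'total' to misread.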

Comment 7 Dave Wysochanski 2016-11-29 19:47:15 UTC
Closing with INSUFFICIENT_DATA:
- the one associated case is CLOSED
- may be fixed in the latest version
- no confirmation that the problem was fixed

Comment 8 Red Hat Bugzilla 2023-09-14 03:02:10 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

