Bug 830412 - New ext4 file system initially uses up a lot of disk space, a new ext3 fs converted to ext4 doesn't
Product: Fedora
Classification: Fedora
Component: kernel
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Assigned To: Eric Sandeen
Fedora Extras Quality Assurance
Reported: 2012-06-09 06:17 EDT by markzzzsmith
Modified: 2012-07-31 13:02 EDT (History)
CC: 11 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2012-07-31 13:02:21 EDT

Attachments (Terms of Use)
mkfs.ext4 etc. commands and command output (3.56 KB, application/octet-stream)
2012-06-09 06:17 EDT, markzzzsmith

Description markzzzsmith 2012-06-09 06:17:38 EDT
Created attachment 590583 [details]
mkfs.ext4 etc. commands and command output

Description of problem:

After creating a 1.6TB ext4 file system for /home, I found that 28GB of disk space was used on the empty file system. I assumed that may have been because of the 5% reserved blocks default, however I have set that to zero via tune2fs -m 0, and the file system was still showing 10s of GBs of space in use. I find it very surprising that an empty file system would start out with so much used space.

Experimenting today with a spare 250GB disk, I've found that a new ext4 file system, with 0% reserved space, shows 3.6GB used. I've found that a new ext3 file system, with 0% reserved space, on the same disk only shows 158MB used. Interestingly, if I convert the ext3 file system to an ext4 one, using the instructions at:


the now-ext4 file system also shows only 158MB in use. So it seems there is some sort of issue with mkfs.ext4.
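(For reference, the usual offline ext3-to-ext4 conversion looks roughly like the sketch below — here run against a loopback image file rather than a real partition, so no disk is touched; the feature list is the standard one from the ext4 documentation. Note that enabling `extents` via tune2fs only affects files created afterwards.)

```shell
# Sketch: convert an ext3 filesystem to ext4 in place, on a loopback image
truncate -s 256M /tmp/ext3.img
mkfs.ext3 -q -m 0 -F /tmp/ext3.img
# Enable the ext4 on-disk features on the existing filesystem
tune2fs -O extents,uninit_bg,dir_index /tmp/ext3.img
# An fsck pass is mandatory after changing features;
# exit codes 1/2 just mean "errors were corrected"
e2fsck -fp /tmp/ext3.img >/dev/null || [ $? -le 2 ]
# The feature list should now include "extent"
dumpe2fs -h /tmp/ext3.img 2>/dev/null | grep -i features
```

On a real partition the filesystem must be unmounted first, and the converted filesystem is then mounted with `-t ext4`.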

Version-Release number of selected component (if applicable):


How reproducible:

I'd assume all new ext4 file systems are suffering from this. As ext4 seems to be the default since Fedora 14, all systems since then may be impacted.

Steps to Reproduce:
1. create new ext4 file system, with 0% reserved blocks - mkfs.ext4 -m 0 /dev/blah
2. mount new file system, and observe disk space utilisation via df -h
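(The same effect can be reproduced without risking a real disk by using a loopback image — a sketch; absolute block counts will differ with image size:)

```shell
# Reproduce on a loopback image instead of a spare disk
truncate -s 1G /tmp/ext4-test.img
mkfs.ext4 -q -m 0 -F /tmp/ext4-test.img     # 0% reserved blocks
# Compare total vs free blocks without even mounting:
dumpe2fs -h /tmp/ext4-test.img 2>/dev/null | grep -E '^(Block count|Free blocks)'
```

Mounting the image with `mount -o loop` and running `df -h` on the mount point shows the corresponding "Used" figure.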
Actual results:

For a 250GB disk, 3.6GB is shown as used:

$ df -h /dev/sdd1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       233G  3.6G  229G   2% /run/media/mark/1d27e4f6-7ce9-44ba-8720-93f9f9760a81

Expected results:

Significantly less used space for a brand new file system. For a new ext3 file system converted to an ext4 file system, the used space is only 158MB:

[mark@opy ~]$ df -h /dev/sdd1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       230G  158M  229G   1% /run/media/mark/ce6924c2-1858-4db5-8d61-4f5d36bdb46b
[mark@opy ~]$

Additional info:

I've attached a log of the commands and command output for creating an ext4 file system, viewing the used space, and creating an ext3 file system and then converting it to ext4, and then viewing the used space.
Comment 1 Eric Sandeen 2012-06-09 17:38:16 EDT
Hm, TBH the ext4 count sounds more right - ext* actually does use a lot of available space for inode tables and inode/block bitmaps.

And looking at an ext3 filesystem of this size:

[root@inode mkfs-test]# dumpe2fs -h fsfile-ext3.img | grep -i block
dumpe2fs 1.41.12 (17-May-2010)
Block count:              429391872
Reserved block count:     0
Free blocks:              422604134

429391872-422604134 is 6787738 used 4k blocks, or 25G - not 333M despite what df says... let me see what's going on.
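(That subtraction, checked with shell arithmetic — the two counts are taken from the dumpe2fs output above:)

```shell
total=429391872        # Block count
free=422604134         # Free blocks
used=$(( total - free ))                       # 4k blocks accounted as used
echo "$used blocks = $(( used * 4096 / 1024 / 1024 / 1024 )) GiB"
# prints: 6787738 blocks = 25 GiB
```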
Comment 2 markzzzsmith 2012-06-09 22:27:43 EDT

FWIW, I've redone my /home file system by converting from a new ext3 fs to ext4. The 'df' result of that was:

Filesystem      Size  Used Avail Use% Mounted on
/dev/md5        1.8T  354M  1.8T   1% /home

While I don't know the mechanics of ext file systems, at face value, 354M generally seems to be a more reasonable overhead. While 25GB out of 1.6 or so TB is a low percentage, it is still a very significant amount of usable storage (e.g. I'm ripping my CDs into flacs, and 25GB is approximately equivalent to around 45 CDs), which is why I thought there might be an issue somewhere.

Before I lodged this bug I did some google searches, and didn't come up with anything that suggested that 25GB was reasonable for a 1.6TB file system. Is the formula you've used one that can be used to predict how much overhead ext4 has? Are there any caveats to that? I might look to add some information about what to expect to the ext4 wiki on kernel.org.
Comment 3 Eric Sandeen 2012-06-11 11:04:46 EDT
Reasonable or not, it's just a fact that all of extN's mkfs-created static metadata takes up a fair bit of room.

In particular, 107347968 inodes x 256 bytes/inode = 25G right there.
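(Spelled out — the inode count is the one reported for this filesystem, and 256 bytes is the default ext4 inode size:)

```shell
inodes=107347968
inode_size=256      # bytes per inode (ext4 default)
bytes=$(( inodes * inode_size ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB consumed by inode tables alone"
# prints: 25 GiB consumed by inode tables alone
```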

At some point the df reporting changed, I'll get it figured out, but I think it is only a reporting issue.
Comment 4 Eric Sandeen 2012-06-13 14:24:35 EDT
An upstream bug report on the ext4 list may implicate:

commit f975d6bcc7a698a10cc755115e27d3612dcfe322
Author: Theodore Ts'o <tytso@mit.edu>
Date:   Fri Sep 9 19:00:51 2011 -0400

    ext4: teach ext4_statfs() to deal with clusters if bigalloc is enabled
    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Comment 5 Eric Sandeen 2012-06-14 14:55:45 EDT
So by default, ext3 uses "bsddf" accounting, which tries to exclude the "overhead" of metadata in the filesystem, and return only the amount of space available for data blocks.

       bsddf / minixdf
              Set the behaviour for the statfs system call. The minixdf
              behaviour is to return in the f_blocks field the total number
              of blocks of the file system, while the bsddf behaviour
              (which is the default) is to subtract the overhead blocks
              used by the ext2 file system and not available for file
              storage.
The commit in comment #4 changed how this overhead accounting was done.  Strangely, for an empty filesystem, one would expect that in statfs output, total blocks == free blocks for this case; if we exclude all metadata, an empty filesystem should show all blocks free.  This isn't the case either before or after the above commit.  :(

If you want to really know how many blocks out of the total on the block device are available, you can mount -o minixdf.  For a 1.6T ext3 fs, it really is 26G used right out of the gate:

Filesystem            Size  Used Avail Use% Mounted on
                      1.6T   26G  1.6T   2% /mnt/test2/mkfs-test/mnt

Comment 6 Eric Sandeen 2012-07-24 16:31:15 EDT
So to be clear, this is indeed just a reporting issue; the filesystems have the same amount of actual space used by the metadata, so you've not actually lost anything...
Comment 7 markzzzsmith 2012-07-24 17:08:14 EDT
Hi Eric,

Yes, I'd realised that.

Perhaps one of the contributing issues was that my expectations were incorrect. I'd been struggling along with around 10GB of free space on the 250GB drives I replaced for 3 to 6 months, so when 26GB was consumed on the new drives without any actual data, it looked excessive, and when things look excessive, they can be caused by bugs. I've been wondering if there is somewhere those expectations could be set correctly, e.g. the Fedora installer could indicate how much disk space will be taken up by metadata.

Thanks for your time.
Comment 8 zmark 2012-07-24 17:11:30 EDT
This patch series that Ted Tso posted on the linux-ext4 mailing list fixed the problem when I applied it on top of kernel 3.5:  http://www.spinics.net/lists/linux-ext4/msg32586.html  Ted mentioned that they should be part of his pull request during this merge window, so hopefully they'll make their way back to stable via backporting within a few weeks.
Comment 9 Eric Sandeen 2012-07-31 13:02:21 EDT
Ah that's right, I had forgotten that Ted had sent some patches, thanks.

I'm just going to close this one UPSTREAM, I don't think it warrants the exception activity for an explicit backport.  If you disagree, feel free to reopen and I'll try to get to it ...

