Bug 455572

Summary: GFS2: [RFE] fallocate support for GFS2
Product: [Fedora] Fedora
Reporter: Steve Whitehouse <swhiteho>
Component: GFS-kernel
Assignee: Ben Marzinski <bmarzins>
Status: CLOSED UPSTREAM
Severity: low
Priority: low
Version: rawhide
CC: adas, bmarzins, rpeterso, swhiteho
Keywords: FutureFeature
Hardware: All
OS: Linux
Doc Type: Enhancement
Last Closed: 2010-10-27 13:25:33 UTC
Bug Blocks: 626561, 626585 (view as bug list)
Attachments:
- First draft of the fallocate patch
- A revised fallocate patch
- Latest fallocate patch, posted to cluster-devel

Description Steve Whitehouse 2008-07-16 10:57:15 UTC
We should support fallocate even though it's just as quick to do streaming writes,
since the FALLOC_FL_KEEP_SIZE flag has slightly different behaviour from extending
the file via streaming writes.

Comment 1 Steve Whitehouse 2010-05-14 09:34:27 UTC
We need to check that fsck.gfs2 can cope with files in which the size is less than the number of allocated blocks.

Comment 2 Robert Peterson 2010-05-21 13:17:12 UTC
I checked the fsck.gfs2 code and didn't see any places where
it cared whether the di_size matches the di_blocks count.
Just to be a little more secure in the knowledge, I created
a 1MB file in gfs2, unmounted it and patched the di_size
to 0x100 bytes.  So the size was 0x100, but the allocated
blocks di_blocks was 0x102.  The fsck.gfs2 ran just fine with
no complaints about this situation.

Comment 3 Ben Marzinski 2010-07-20 06:33:40 UTC
Created attachment 433077 [details]
First draft of the fallocate patch

This is still a work-in-progress, but this patch works (at least on some quick single-machine tests).  However, it's not quite right.  Instead of rounding the requested allocation to the nearest fs-block, it rounds it to the nearest page.

Comment 4 Ben Marzinski 2010-07-22 22:19:11 UTC
So this definitely doesn't work as is.  For starters, GFS2 isn't able to handle
an arbitrarily large allocation during one transaction.

Comment 5 Steve Whitehouse 2010-07-30 11:52:06 UTC
Yes, that's true. You need to be especially careful of this with journaled data files. You should be able to create transactions large enough that the performance is still substantially better than the single-page approach, though.

Also, I think you should avoid using block_prepare_write() and instead call gfs2_block_map directly to map (up to) a whole extent at once. That can then be written with zeros before attempting the next allocation. That should make things much faster, overall.

The other thing we need to look at is error handling. If there is an error, should we give up after having allocated as much as possible, or should we trim off the blocks? If the latter, then I have a useful helper function which is part of my new truncate code that is waiting for the next merge window to be completed.

Other than that, it looks pretty good, and smaller than I'd expected too.

Comment 6 Ben Marzinski 2010-08-06 23:16:38 UTC
Created attachment 437280 [details]
A revised fallocate patch

This is a revised version of the fallocate patch that can handle fallocate requests that are larger than a single resource group.  It starts by looking for resource groups that are more than half empty, and each time it is unable to satisfy its target size, it cuts the goal in half.  Regardless of what size it asked for, when it finds a resource group, it reserves as many blocks as it can from that resource group.  With this patch, fallocates usually are around five times as fast as dd'ing zeros to the file.

Comment 7 Ben Marzinski 2010-08-06 23:21:27 UTC
I looked at doing away with block_prepare_write(), but it looks like I need to do most of what it does. For the rare cases where I might need to call it multiple times for one page there are probably some savings to be had, but that can only happen when using fallocate to fill in a holey file whose blocksize is less than the page size.

Comment 8 Steve Whitehouse 2010-08-09 13:06:50 UTC
This looks really good. Just a few (minor) niggles though:

gfs2_page_add_databufs and gfs2_write_alloc_required should be able to retain their "unsigned int" sized size arguments, since the max allocation cannot be larger than one rgrp, which is at most 2^32 - (sizeof rgrp header blocks) long. Maybe that was done to try and lose some casts along the way?

At the top of gfs2_fallocate() there are a few shift and mask operations that it would be good to have meaningfully named macros or inlined functions for.

Beyond that it looks really good to me. I realised that there will be a merge order issue wrt the new truncate code, since i_disksize is going to go away once that has been merged. I'm currently waiting for the merge of the vfs tree in the current merge window, and once that has been done, I'll be able to rebase the new truncate code.

The only difference that it is likely to make is that you won't need to update i_disksize separately from i_size. One further thought occurs: if we allow the inode to grow beyond its file size with fallocate, then should a truncate which is set above the current file size still remove any allocated blocks beyond the requested truncate point? Hmm. I wonder how other filesystems handle that. Anyway, we can figure that out as we go along.

The fact that you've managed a 5x speed-up with this means that we now have a target for Dave Chinner's multipage write work, too.

Comment 9 Ben Marzinski 2010-08-23 15:57:20 UTC
Posted.

Comment 10 Ben Marzinski 2010-08-23 19:52:54 UTC
Created attachment 440470 [details]
Latest fallocate patch, posted to cluster-devel

This is the version of the fallocate patch that I posted, and Steve accepted.

Comment 11 Steve Whitehouse 2010-10-27 13:25:33 UTC
Now in Linus' kernel.