Bug 455572
Summary: GFS2: [RFE] fallocate support for GFS2
Product: Fedora
Component: GFS-kernel
Version: rawhide
Hardware: All
OS: Linux
Status: CLOSED UPSTREAM
Severity: low
Priority: low
Reporter: Steve Whitehouse <swhiteho>
Assignee: Ben Marzinski <bmarzins>
CC: adas, bmarzins, rpeterso, swhiteho
Keywords: FutureFeature
Doc Type: Enhancement
Target Milestone: ---
Target Release: ---
Bug Blocks: 626561, 626585 (view as bug list)
Last Closed: 2010-10-27 13:25:33 UTC
Description
Steve Whitehouse
2008-07-16 10:57:15 UTC
We need to check that fsck.gfs2 can cope with files in which the size is less than the number of allocated blocks.

I checked the fsck.gfs2 code and didn't see any places where it cares whether the di_size matches the di_blocks count. Just to be a little more secure in that knowledge, I created a 1MB file on gfs2, unmounted it and patched the di_size down to 0x100 bytes. So the size was 0x100, but the allocated block count di_blocks was 0x102. fsck.gfs2 ran just fine, with no complaints about this situation.

Created attachment 433077 [details]
First draft of the fallocate patch

This is still a work in progress, but the patch works (at least in some quick single-machine tests). However, it's not quite right: instead of rounding the requested allocation to the nearest fs block, it rounds it to the nearest page.
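The rounding problem can be illustrated with a small sketch (Python; the helper name and the example block/page sizes are assumptions for illustration, not GFS2 code). Rounding the request up to a page over-allocates relative to rounding up to the fs block size whenever the block size is smaller than a page:

```python
def round_up(nbytes, granularity):
    """Round nbytes up to a multiple of granularity (a power of two)."""
    mask = granularity - 1
    return (nbytes + mask) & ~mask

# Hypothetical sizes: a 1 KiB fs block vs. a 4 KiB page.
FS_BLOCK = 1024
PAGE = 4096

request = 5000  # bytes asked of fallocate
print(round_up(request, FS_BLOCK))  # 5120: what the filesystem actually needs
print(round_up(request, PAGE))      # 8192: what rounding to a page allocates
```

With a 4 KiB block size the two agree, but GFS2 supports block sizes smaller than the page size, which is where the patch's page-granularity rounding over-allocates.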
So this definitely doesn't work as is. For starters, GFS2 isn't able to handle an arbitrarily large allocation in one transaction.

Yes, that's true. You need to be especially careful of this with journaled data files. You should still be able to create large enough transactions that the performance is substantially better than going a single page at a time, though. Also, I think you should avoid using block_prepare_write() and instead call gfs2_block_map directly to map (up to) a whole extent at once. That extent can then be written with zeros before attempting the next allocation, which should make things much faster overall. The other thing we need to look at is error handling: if there is an error, should we give up after having allocated as much as possible, or should we trim off the blocks? If the latter, I have a useful helper function which is part of my new truncate code that is waiting for the next merge window to be completed. Other than that, it looks pretty good, and smaller than I'd expected too.

Created attachment 437280 [details]
A revised fallocate patch
This is a revised version of the fallocate patch that can handle fallocate requests larger than a single resource group. It starts by looking for resource groups that are more than half empty, and each time it is unable to satisfy its target size, it cuts the goal in half. Regardless of what size it asked for, when it finds a resource group it reserves as many blocks as it can from that resource group. With this patch, fallocate is usually around five times as fast as dd'ing zeros to the file.
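The search strategy described above can be sketched roughly as follows (Python; the rgrp representation, the function name, and the exact half-empty test are assumptions for illustration, not the actual patch):

```python
def reserve_blocks(rgrps, target):
    """Reserve up to `target` blocks across resource groups.

    Prefer rgrps that are more than half empty; whenever no rgrp can
    satisfy the current goal, cut the goal in half.  Once an rgrp is
    chosen, take as many blocks from it as are still needed, regardless
    of the goal that found it.
    """
    reservations = []   # list of (rgrp index, blocks taken)
    remaining = target
    goal = target
    while remaining > 0 and goal > 0:
        candidate = None
        for i, rg in enumerate(rgrps):
            # "more than half empty" and big enough for the current goal
            if rg["free"] >= goal and rg["free"] * 2 > rg["total"]:
                candidate = i
                break
        if candidate is None:
            goal //= 2          # couldn't satisfy the goal; halve it
            continue
        take = min(rgrps[candidate]["free"], remaining)
        rgrps[candidate]["free"] -= take
        reservations.append((candidate, take))
        remaining -= take
    return reservations
```

For example, a 1000-block request against rgrps with 800 and 600 free blocks first halves its goal to 500, drains 800 from the first rgrp, then takes the remaining 200 from the second. This is only a model of the halving heuristic, not the kernel code's locking or bitmap handling.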
I looked at doing away with block_prepare_write(), but it looks like I need to do most of what it does. For the rare times when I might need to call it multiple times for one page there are probably some savings to be had, but that can only happen when using fallocate to fill in a holey file whose block size is less than the page size.

This looks really good. Just a few (minor) niggles, though: gfs2_page_add_databufs and gfs2_write_alloc_required should be able to retain their "unsigned int" sized size arguments, since the max allocation cannot be larger than one rgrp, which is at most 2^32 - (sizeof rgrp header blocks) long. Maybe that was done to try to lose some casts along the way? At the top of gfs2_fallocate() there are a few shift and mask operations that it would be good to have meaningfully named macros or inlined functions for. Beyond that it looks really good to me.

I realised that there will be a merge-order issue wrt the new truncate code, since i_disksize is going to go away once that has been merged. I'm currently waiting for the merge of the vfs tree in the current merge window, and once that has been done, I'll be able to rebase the new truncate code. The only difference it is likely to make is that you won't need to update i_disksize separately from i_size.

One further thought occurs: if we allow the inode to grow beyond its file size with fallocate, should a truncate to a point above the current file size still remove any allocated blocks beyond the requested truncate point? Hmm. I wonder how other filesystems handle that. Anyway, we can figure that out as we go along. The fact that you've managed a 5x speed-up with this means that we now have a target for Dave Chinner's multipage write work, too.

Posted.

Created attachment 440470 [details]
Latest fallocate patch, posted to cluster-devel
This is the version of the fallocate patch that I posted, and Steve accepted.
Now in Linus' kernel.
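With the feature merged, userspace can preallocate through the standard interface. A minimal sketch using posix_fallocate(2) via Python's os module (works on any filesystem with fallocate support, which now includes GFS2; on filesystems without it, glibc falls back to writing zeros):

```python
import os
import tempfile

# Preallocate 1 MiB in one call; on a supporting filesystem the blocks
# are allocated directly, without writing zeros through the page cache,
# which is what makes this faster than dd'ing zeros to the file.
with tempfile.NamedTemporaryFile() as f:
    os.posix_fallocate(f.fileno(), 0, 1 << 20)
    size = os.fstat(f.fileno()).st_size
    print(size)  # 1048576
```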