Bug 485451
Summary: gfs_grow very slow with 1k file system block size
Product: Red Hat Enterprise Linux 5
Component: gfs-utils
Version: 5.3
Status: CLOSED ERRATA
Severity: medium
Priority: low
Hardware: All
OS: Linux
Target Milestone: rc
Reporter: Nate Straz <nstraz>
Assignee: Robert Peterson <rpeterso>
QA Contact: Cluster QE <mspqa-list>
CC: edamato, jkortus, swhiteho, syeghiay
Doc Type: Bug Fix
Bug Blocks: 494372, 494731 (view as bug list)
Last Closed: 2009-09-02 11:01:27 UTC
Description
Nate Straz
2009-02-13 16:21:37 UTC
Created attachment 331865 [details]
Output log from repetitive gfs_grow test

I ran a test which started with a 500GB file system, then grew the file system by 500GB five times. I got the following runtimes in seconds:

GFS 1k:  145 255 368 483 600
GFS 4k:  4 5 5 5 5 4
GFS2 1k: 70 69 153 241 306
GFS2 4k: 17 3 4 6 7

This shows that as the file system size grows, gfs_grow and gfs2_grow take longer to complete. I should note that both runs were running with a load on the file system while the gfs_grow was running.

Attached is the full output log from my test run.
gfs2_grow also takes up a ton of memory while growing a 1k block size file system. Here's top while growing a file system by 500G:

  PID  USER  PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+    COMMAND
21510  root  18   0  1024m  722m  496  R   1.0  72.1  1:11.50  gfs2_grow

Test output:

lvextend -l +100%FREE growfs/grow1 on dash-03
  Extending logical volume grow1 to 3.87 TB
  Logical volume grow1 successfully resized
growing grow1 on dash-02
FS: Mount Point:  /mnt/grow1
FS: Device:       /dev/mapper/growfs-grow1
FS: Size:         3638531454 (0xd8df957e)
FS: RG size:      523982 (0x7fece)
DEV: Size:        4158324736 (0xf7db0000)
The file system grew by 507610MB.

Created attachment 332967 [details]
Try 1 patch
This is still a preliminary (proof-of-concept) patch.
In my tests with a load running on a single node (the same node doing
gfs_grow) I found that writing the new RGs was very slow; even slower
than writing the new rindex entries. The times looked like this:
gather_info: 220 ms.
create_rgrps: 14 ms.
write rgrps: 68537 ms.
write rindex: 48621 ms.
With this patch, the times are now:
write rgrps: 1011 ms.
write rindex: 21334 ms.
FWIW, this was extending a gfs file system from 5G to 305G.
The speedup in rgrps is simple: Instead of doing:
(1) open, (2) lseek, (3) write, (4) sync, (5) close for every block
it now does one open, one lseek per RG, several consecutive
writes, then one sync and one close. This should be pretty safe
because if the gfs_grow is interrupted or loses power during that
write, the next gfs_grow will recalculate the same starting block
number and rewrite the RGs in their entirety, and since the rindex
hasn't been written at that point, the new rindex should be accurate.
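The batched write pattern described above can be sketched as follows. This is only an illustration, not the actual gfs_grow code: the function name, constants, and zero-filled RG contents are made up for demonstration. The point is the syscall pattern: one open, one lseek per RG, consecutive writes, then a single sync and close.

```c
/* Sketch of the batched RG write pattern (illustrative, not gfs_grow's
 * real code).  Instead of open/lseek/write/sync/close per block, do one
 * open, one lseek per RG, consecutive writes, one fsync, one close. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 1024        /* hypothetical fs block size */
#define BLOCKS_PER_RG 4        /* hypothetical blocks per resource group */

/* Write num_rgs zero-filled resource groups starting at the offsets in
 * rg_offsets[].  Returns 0 on success, -1 on error. */
static int write_rgs_batched(const char *dev, const off_t *rg_offsets,
                             int num_rgs)
{
    char buf[BLOCK_SIZE];
    int fd, rg, blk;

    memset(buf, 0, sizeof(buf));
    fd = open(dev, O_WRONLY | O_CREAT, 0600);      /* one open */
    if (fd < 0)
        return -1;
    for (rg = 0; rg < num_rgs; rg++) {
        if (lseek(fd, rg_offsets[rg], SEEK_SET) < 0)  /* one seek per RG */
            goto fail;
        for (blk = 0; blk < BLOCKS_PER_RG; blk++)     /* consecutive writes */
            if (write(fd, buf, BLOCK_SIZE) != BLOCK_SIZE)
                goto fail;
    }
    if (fsync(fd) < 0)                                /* one sync */
        goto fail;
    return close(fd);                                 /* one close */
fail:
    close(fd);
    return -1;
}
```

As the comment above notes, this is safe to replay: an interrupted run leaves the rindex unwritten, so the next gfs_grow recomputes the same starting block and rewrites the RGs in full.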
The speedup in rindex writing was achieved by writing ten rindex
entries at a shot rather than one per write. We could easily change
it to more than ten, since I defined it as a #define. In theory,
we could get fancy and try to write the number of entries that best
fits onto our block size, after calculating where in the block we
started on that first rindex append. I'm not sure it's worth it:
Aside from our QA, how often are customers executing this code?
This isn't guaranteed to be quite as safe. Still, at this point, the
RGs have been written out and synced to disk. If the rindex write is
interrupted, you're probably just as likely to run into the same
problem with a single 96-byte write as you are with a 960-byte write.
Perhaps Steve or Abhi can look this over and see if they can punch
holes in my theory.
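The ten-at-a-shot rindex append can be sketched like this. Again this is illustrative only: the function name is hypothetical and the entries are treated as opaque 96-byte records (the on-disk gfs rindex entry size mentioned later in this bug); the real code writes through the rindex inode, not a plain fd.

```c
/* Sketch of writing rindex entries in chunks rather than one per write
 * (illustrative; the real code goes through the gfs rindex inode). */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define RINDEX_ENTRY_SIZE 96   /* bytes per on-disk rindex entry */
#define ENTRIES_PER_WRITE 10   /* the #define mentioned above */

/* Append count packed entries to fd, ENTRIES_PER_WRITE at a time.
 * Returns the number of write() calls issued, or -1 on error. */
static int append_rindex_chunked(int fd, const char *entries, int count)
{
    int done = 0, writes = 0;

    while (done < count) {
        int n = count - done;
        size_t bytes;

        if (n > ENTRIES_PER_WRITE)
            n = ENTRIES_PER_WRITE;
        bytes = (size_t)n * RINDEX_ENTRY_SIZE;
        if (write(fd, entries + (size_t)done * RINDEX_ENTRY_SIZE,
                  bytes) != (ssize_t)bytes)
            return -1;
        done += n;
        writes++;
    }
    return writes;
}
```

Appending 25 entries this way issues three writes (10 + 10 + 5) instead of 25 single-entry writes.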
I've been testing a gfs_grow which writes 40 rindex entries at a time and the speed still has not improved enough. strace shows that the ioctls are taking a long time:

[root@dash-01 ~]# strace -T -p 24674
Process 24674 attached - interrupt to quit
ioctl(3, 0x472d, 0x7fffbda5a600) = 3840 <38.554636>
ioctl(3, 0x472d, 0x7fffbda5a600) = 3840 <249.566196>
ioctl(3, 0x472d, 0x7fffbda5a600) = 3840 <33.458248>
ioctl(3, 0x472d, 0x7fffbda5a600) = 3840 <418.865492>
ioctl(3, 0x472d

There is a lot of lock traffic going on. I'm trying to analyze the DLM network traffic with wireshark. I captured the extra section of the DLM packets with the following command:

tshark -R dlm3.m.extra -z proto,colinfo,dlm3.m.extra,dlm3.m.extra

then piped it through a perl script to decode the extra data into something more readable.

I ran the capture for 26:24 on dash-01, where a gfs_grow was in progress.
571965 locks captured, 361 lock ops per second.
  138 inode, 571821 RG, 7 quota
59357 unique locks found
  4 inode, 59351 RG, 2 quota

Created attachment 337417 [details]
Try 2 patch
I verified that the holdup here is in doing this:
error = gfs_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE,
LM_FLAG_PRIORITY | GL_SYNC | GL_NOCANCEL_OTHER,
&i_gh);
This often takes 1-2 seconds--yes seconds--to acquire the rindex glock.
Part of the slowdown is that the GL_SYNC forces it to sync to disk,
including the journal entry, and the inplace_reserve forces it to
re-read the rindex file, possibly on all nodes, because the version
number of the inode glock is changed.
FYI, the glock_purge "feature" doesn't help the speed at all.
This patch is much like the previous one. The biggest difference is
that it tries to write 43 rindex entries per write rather than 1 in
the original and 10 or 40 in previous incarnations. The magic number
of 43 was calculated from (the 4K page size / 96 bytes per rindex entry).
The goal here is to allocate a full page of data if we can and write
that to the rindex file so the reads and journals don't have to fight
with each other quite as much. This should probably be tweaked so
that it retrieves the current rindex size and calculates exactly how
best to do it to minimize disk IO. Perhaps tomorrow I'll have another
go at that concept. For now, this patch allows the grow test to run
in approximately 20 minutes per iteration, which is considerably
better than before.
Beyond the tweaks mentioned (which I'll try tomorrow) there's really
nothing more I can improve upon here unless I change the gfs kernel
code to introduce specialized code for gfs_grow, such as a new
performance-tweaked ioctl to use instead of hfile_write.
*** Bug 486408 has been marked as a duplicate of this bug. ***

Created attachment 337968 [details]
Try 3 patch
This patch seems to work just as well but has an added safety
measure for near-full file systems. It also has a big comment
explaining why I'm doing what I'm doing.
Created attachment 338126 [details]
Try 4 patch
This patch is similar to its predecessor, but the chunk size is
calculated dynamically based on the block size. That way, performance
should be improved as well for systems with bigger block sizes, such as
ia64.
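The dynamic chunk-size calculation can be sketched as below. This is a guess at the arithmetic, not the actual patch: rounding the quotient up reproduces the 43 quoted earlier for 4K blocks (4096 / 96 entries, rounded up), though the real code may round differently, and the function name is made up.

```c
/* Sketch of computing the rindex write chunk size from the fs block
 * size (illustrative; rounding choice is an assumption, and the name
 * is not the real gfs_grow identifier). */
#define RINDEX_ENTRY_SIZE 96   /* bytes per on-disk rindex entry */

static unsigned int rindex_chunk_entries(unsigned int block_size)
{
    /* ceiling division: entries needed to cover one block */
    unsigned int n = (block_size + RINDEX_ENTRY_SIZE - 1)
                     / RINDEX_ENTRY_SIZE;

    return n ? n : 1;          /* always write at least one entry */
}
```

With this formula a 4K block gives 43 entries per write and a 1K block gives 11, so bigger-block systems automatically get bigger chunks.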
Created attachment 338614 [details]
Final patch
This is the final patch. Overall, it seemed to give the best
performance.
The final patch was tested by QE with their growfs.pl test. It has been pushed to the master branch of the gfs1-utils git tree, and to the STABLE3, STABLE2 and RHEL5 branches of the cluster git tree for inclusion in RHEL5.4. Bug #494731 has been opened to cross-write the fix to RHEL4.8. Changing status to Modified.

Verified with gfs-utils-0.1.19-3.el5 that growfs tests are running in a reasonable amount of time with 1k file system block sizes.

Verified with gfs-utils-0.1.20-1.el5

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1336.html