Bug 1670718
| Field | Value |
|---|---|
| Summary | md-cache should be loaded at a position in graph where it sees stats in write cbk |
| Product | [Community] GlusterFS |
| Component | glusterd |
| Status | CLOSED WONTFIX |
| Severity | low |
| Priority | low |
| Version | mainline |
| Reporter | Raghavendra G <rgowdapp> |
| Assignee | bugs <bugs> |
| CC | amarts, amukherj, bugs, guillaume.pavese, jahernan, moagrawa, pasik, pkarampu, srakonde, ykaul |
| Keywords | EasyFix, Performance, Triaged |
| Hardware | Unspecified |
| OS | Unspecified |
| Type | Bug |
| Clones | 1670719 |
| Bug Blocks | 1670719, 1672818, 1732875 |
| Last Closed | 2020-01-13 10:19:38 UTC |
Description (Raghavendra G, 2019-01-30 09:02:05 UTC)
REVIEW: https://review.gluster.org/22124 (performance/md-cache: load as a child of write-behind) posted (#1) for review on master by Raghavendra G

---

Is this a blocker to release-6? Can we please re-evaluate?

---

Do we want this in or shall we close it?

---

(In reply to Yaniv Kaul from comment #3)
> Do we want this in or shall we close it?

I'm not sure about the importance of this issue. I think we can close it.

---

(In reply to Sanju from comment #4)
> (In reply to Yaniv Kaul from comment #3)
> > Do we want this in or shall we close it?
>
> I'm not sure about the importance of this issue. I think we can close it.

From the patch description: "This benefits write workload as md-cache can absorb fstats calls from kernel."

I'd like to understand better whether there is a real benefit here. I'm re-running the regression tests to see, at least, whether it's stable first.

---

The main issue here is that write-behind returns a NULL post iatt after a cached write. When md-cache sees a NULL iatt, it invalidates its cache. This happens because write-behind does not cache metadata, so it cannot provide a meaningful iatt while the write has not yet been processed by the bricks.

On the other side, placing md-cache after write-behind requires that write-behind serialize all operations that return an iatt (virtually all of them), so that by the time they reach md-cache (at least those that the user sent sequentially and write-behind answered directly), the previous operations have completed and updated the cached metadata. I'm not 100% sure write-behind does this in all cases, but even doing so is inefficient. For example, if the user sends a write request followed by an fstat, write-behind will absorb the write; but when it sees the fstat, the cached write must be flushed and the fstat delayed until the write finishes, so that the fstat returns the correct answer.
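The interaction described above can be sketched with a toy model. This is illustrative Python, not GlusterFS source (which is C); all class and method names here are invented for the sketch. It shows the two behaviors discussed: a cached write acked by write-behind carries no post iatt, which forces md-cache (stacked above it) to invalidate, and the next fstat then has to flush the cached write before it can be answered correctly.

```python
# Toy model (NOT GlusterFS code) of md-cache stacked above write-behind.
# Names and structure are invented for illustration only.

class MdCache:
    """Illustrative metadata cache: caches an iatt, invalidates on None."""
    def __init__(self):
        self.cached_iatt = None

    def write_cbk(self, post_iatt):
        if post_iatt is None:
            # write-behind answered from its cache: there is no
            # authoritative stat, so the cached metadata must be dropped
            self.cached_iatt = None
        else:
            self.cached_iatt = post_iatt

    def fstat(self, backend_fstat):
        if self.cached_iatt is not None:
            return self.cached_iatt          # served from cache
        self.cached_iatt = backend_fstat()   # cache miss: go down the stack
        return self.cached_iatt


class WriteBehind:
    """Illustrative write-behind: acks writes immediately, flushes lazily."""
    def __init__(self):
        self.pending = []
        self.size = 0

    def write(self, data):
        self.pending.append(data)
        return None                  # acked from cache: no post iatt to report

    def flush(self):
        for data in self.pending:
            self.size += len(data)   # the "brick" applies the write
        self.pending = []

    def fstat(self):
        self.flush()                 # must flush so the stat is correct
        return {"size": self.size}


# md-cache above write-behind: every cached write invalidates md-cache,
# so the very next fstat forces a flush and a full trip down the stack.
wb = WriteBehind()
mdc = MdCache()
mdc.write_cbk(wb.write(b"hello"))    # post iatt is None -> cache invalidated
st = mdc.fstat(wb.fstat)             # miss: flush + stat
print(st)                            # {'size': 5}
```

Loading md-cache as a child of write-behind (the posted patch) would let it see real post iatts in write callbacks, at the cost of the serialization requirement described above.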
I think the right approach here would be to unify the caching layers and provide an authoritative cache with consistency guarantees. There's a GitHub issue [1] for this, but it's probably too ambitious an approach for a first implementation. We might get most of the benefits with a lock-based cache that integrates metadata and data. With that approach, an fstat after a write could be served without flushing the cached write, so both requests could be answered directly from the client cache with no network or brick activity.

If that's the way to go, I would close this bug and create a new one (or a feature request on GitHub) to implement that caching. What do you think?

[1] https://github.com/gluster/glusterfs/issues/218

---

I'd close this and leave the feature request on GitHub. This certainly looks like a sizable project, more than we should commit to right now.

---

+1 on moving the discussion to GitHub with the enhancement tag. Let's get the discussion going there, and see when and who can pick this up.

---

Based on the latest comments I'm closing this bug, and I've updated the GitHub issue [1] to work along these lines.

[1] https://github.com/gluster/glusterfs/issues/218

---

The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.
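The lock-based unified cache proposed in the discussion can also be sketched as a toy model. Again, this is illustrative Python, not GlusterFS code, and every name in it is invented: the point is only that when one client-side cache owns both data and metadata (under some cache-authority lock, reduced here to a boolean), a write updates the cached iatt in place, so a following fstat is served locally without flushing the cached write.

```python
# Toy model (NOT GlusterFS code) of a unified data+metadata client cache.
# The lock granting cache authority is reduced to a boolean for the sketch.

class UnifiedCache:
    def __init__(self):
        self.have_lock = True    # assume this client holds the cache lock
        self.pending = []        # cached writes, not yet on the bricks
        self.size = 0            # authoritative size while the lock is held
        self.flushes = 0

    def write(self, data):
        assert self.have_lock
        self.pending.append(data)
        self.size += len(data)   # metadata updated together with the data

    def fstat(self):
        # no flush needed: cached metadata is authoritative under the lock
        return {"size": self.size}

    def flush(self):
        self.flushes += 1        # bricks receive the writes eventually
        self.pending = []


uc = UnifiedCache()
uc.write(b"hello")
st = uc.fstat()                  # served from cache, no network round trip
print(st, uc.flushes)            # {'size': 5} 0
```

Contrast with the earlier arrangement: here the write and the fstat both complete with zero flushes and zero brick activity, which is the benefit claimed for the unified-cache design in [1].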