| Summary: | enhancements to quick read | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Raghavendra G <raghavendra> |
| Component: | quick-read | Assignee: | Raghavendra G <raghavendra> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | mainline | CC: | amarts, chida, dushyanth.h, ian.rogers, krishna, lakshmipathi, vijay, vikas |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
|
Description

Raghavendra G 2010-03-15 09:03:16 UTC

The filter io-cache seems more advanced than quick-read. Rather than adding memory pruning to quick-read, how about adding an option quickread-smaller-than to io-cache? Its value would be 0 (the default) for the current behaviour, or a multiple of io-cache's internal page size (automatically rounded) to enable quick-read behaviour. I know this breaks the "stacked small filters" philosophy of Gluster, but it would be much more efficient in CPU and memory, and probably simpler to implement. Ian

*** Bug 727 has been marked as a duplicate of this bug. ***

PATCH: http://patches.gluster.com/patch/3087 in master (performance/quick-read: read directly from backend for fds opened with O_DIRECT flag.)

PATCH: http://patches.gluster.com/patch/3111 in release-2.0 (performance/quick-read: read directly from backend for fds opened with O_DIRECT flag.)

PATCH: http://patches.gluster.com/patch/3088 in release-3.0 (performance/quick-read: read directly from backend for fds opened with O_DIRECT flag.)

The quick-read memory hog is a critical issue; a lot of customers are coming back with this memory problem. It needs to be fixed ASAP.

When is this scheduled to be fixed?

Patch is sent to review, and once committed, we will make a release.

(In reply to comment #7)
> Patch is sent to review, and once committed, we will make a release.

Hey guys, what is the status of the patch that limits the QR mem usage?

PATCH: http://patches.gluster.com/patch/3279 in master (performance/quick-read: implement an upper size limit for the cache.)

PATCH: http://patches.gluster.com/patch/3312 in release-3.0 (performance/quick-read: implement an upper size limit for the cache.)

PATCH: http://patches.gluster.com/patch/3362 in master (performance/quick-read: set default cache-size value to 128MB.)

PATCH: http://patches.gluster.com/patch/3361 in release-3.0 (performance/quick-read: set default cache-size value to 128MB.)
PATCH: http://patches.gluster.com/patch/3391 in master (quick-read: fix size parameter to GF_CALLOC of priv to fix mem corruption)

*** Bug 954 has been marked as a duplicate of this bug. ***

The following quick-read features were tested/verified:
- LRU behaviour
- Concurrent read/write access to a file from the same client and from different clients
- Cache info: hits/misses
- With a very large cache, read() succeeds even when the server is disconnected
- Write once and make sure the cache is freed
- Kernel compile with the quick-read default total cache size of 128MB
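For administrators hitting the memory-hog behaviour discussed above, the cache limit introduced by these patches is tunable per volume. A sketch of the relevant commands (the volume name `testvol` is a placeholder; option names follow the standard GlusterFS volume-set interface):

```shell
# Cap the performance cache at the patched default of 128MB
# for the volume "testvol".
gluster volume set testvol performance.cache-size 128MB

# Alternatively, disable the quick-read translator entirely
# if client memory is tight.
gluster volume set testvol performance.quick-read off
```

Either setting takes effect on the volume graph without reformatting the bricks; the first bounds quick-read's memory use, the second removes it from the client stack.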