Bug 1568758
| Field | Value |
| --- | --- |
| Summary | Block delete times out for blocks created of very large size |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | sharding |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Version | rhgs-3.4 |
| Reporter | Sweta Anandpara <sanandpa> |
| Assignee | Krutika Dhananjay <kdhananj> |
| QA Contact | Sweta Anandpara <sanandpa> |
| CC | amukherj, kdhananj, pkarampu, prasanna.kalever, rhinduja, rhs-bugs, sanandpa, sasundar, sheggodu, storage-qa-internal, vdas, xiubli |
| Keywords | Rebase |
| Target Milestone | --- |
| Target Release | RHGS 3.5.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-6.0-2 |
| Doc Type | Bug Fix |
| Story Points | --- |
| Last Closed | 2019-10-30 12:19:38 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | --- |
| Cloudforms Team | --- |
| Bug Depends On | 1520882 |
| Bug Blocks | 1503143, 1696807 |
| Attachments | Verification logs on rhgs3.5.0 (attachment 1586539) |

Doc Text:

Deleting a file with a large number of shards timed out because unlink operations occurred on all shards in parallel, which led to contention on the .shard directory. Timeouts resulted in failed deletions and stale shards remaining in the .shard directory. Shard deletion is now a background process that deletes one batch of shards at a time, to control contention on the .shard directory and prevent timeouts. The size of shard deletion batches is controlled with the features.shard-deletion-rate option, which is set to 100 by default.
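The doc text above names the features.shard-deletion-rate volume option as the knob for the background deletion batch size. A minimal sketch of how an administrator might inspect and tune it; the volume name blockhost is a placeholder and the value 250 is only an illustration:

```
# Show the current shard-deletion batch size (100 shards per batch by default, per the doc text).
gluster volume get blockhost features.shard-deletion-rate

# Raise the batch size so more shards are unlinked per background pass.
# Larger values finish deletions sooner but increase load on the .shard directory.
gluster volume set blockhost features.shard-deletion-rate 250
```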
Description
Sweta Anandpara, 2018-04-18 08:59:53 UTC
Comment 3 (Pranith Kumar K):

Sweta, could you disable sharding and redo this test? I suspect this has to do with the sharding xlator taking a lot of time to delete the individual shards. Krutika is working on doing unlinks in the background as part of https://bugzilla.redhat.com/show_bug.cgi?id=1520882 for 3.4.0.

Comment (in reply to Pranith Kumar K from comment #3):

Please note that you need to both create and delete the block volume while sharding is disabled for us to confirm that the delay was introduced because of sharding.

Comment:

Note: the fixes for this issue have been merged upstream: https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:ref-1568521

Moving this bug to POST state.

Comment 18 (SATHEESARAN):

The fix for this issue is already merged, and the other bug, BZ 1520882, is ON_QA. It would be more relevant to have this bug on ON_QA as well, since the fix addresses this issue too. Why has this bug not been moved to ON_QA?

Comment (Krutika, in reply to SATHEESARAN from comment #18):

Ok. I don't completely understand the process, but shouldn't this be done only when all 3 acks are in place? Let me know if that is not the case.

-Krutika

Comment:

Created attachment 1586539 [details]: Verification logs on rhgs3.5.0
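For the re-test suggested earlier in the thread (create and delete the block while sharding is disabled on a test setup, so the shard xlator can be ruled in or out), a hedged sketch follows. The volume name blockhost, block name block1, host address, and size are placeholders, and exact gluster-block arguments can differ between versions:

```
# Turn sharding off on the block-hosting volume before creating the block;
# both the create and the delete must happen with sharding disabled.
gluster volume set blockhost features.shard disable

# Create a large block device and time its deletion.
gluster-block create blockhost/block1 ha 1 192.0.2.10 500GiB
time gluster-block delete blockhost/block1

# Re-enable sharding once the comparison run is done.
gluster volume set blockhost features.shard enable
```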
Comment:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249