Bug 1520882
Summary: | [GSS] shard files present even after deleting vm from the rhev | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Abhishek Kumar <abhishku>
Component: | sharding | Assignee: | Krutika Dhananjay <kdhananj>
Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.2 | CC: | akrishna, amukherj, apaladug, asriram, kdhananj, rhinduja, rhs-bugs, sabose, sanandpa, sasundar, sheggodu, storage-qa-internal
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 3.4.z Batch Update 2 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.12.2-27 | Doc Type: | Bug Fix
Doc Text: | Previously, when a file with a large number of shards was deleted, the shard translator synchronously sent unlink operations on all the shards at once. Each unlink caused the replicate translator to acquire locks on the .shard directory, and as a huge number of locks accumulated in the locks translator, the search for a possible matching lock became slower. This caused timeouts, which led to disconnects and the subsequent failure of the file deletion, leaving stale shards behind in the .shard directory. With this fix, the cleanup of shards is moved to the background: irrespective of how big a sharded file is, its deletion operation returns immediately, and the space consumed by the associated shards is reclaimed eventually. | |
Story Points: | --- | |
Clone Of: | | |
: | 1522624 1568521 (view as bug list) | Environment: |
Last Closed: | 2018-12-17 17:07:02 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1568521 | |
Bug Blocks: | 1503143, 1522624, 1568758 | |
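
For context on the layout the Doc Text refers to, here is a minimal sketch of how sharding is configured and how shards appear on a brick. The volume name `data`, brick path `/bricks/data/brick1`, and shard size shown are illustrative only and are not taken from this bug.

```sh
# Illustrative names: volume "data", brick /bricks/data/brick1.
# Sharding is enabled per volume; RHV/RHHI setups commonly use 64 MB shards,
# matching the 64 MB shard size used in the verification below.
gluster volume set data features.shard on
gluster volume set data features.shard-block-size 64MB

# On each brick, every shard after the first lives under the hidden .shard
# directory and is named <GFID-of-base-file>.<shard-number>. A 2 TB image
# split into 64 MB shards yields roughly 32,000 such entries, which is why
# unlinking them all synchronously overloaded the locks translator.
ls /bricks/data/brick1/.shard | head
```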
Description
Abhishek Kumar
2017-12-05 11:46:00 UTC
https://review.gluster.org/c/glusterfs/+/20623
https://review.gluster.org/q/topic:%22ref-1568521%22+(status:open%20OR%20status:merged)

Updated Doctext field. Kindly review for technical accuracy.

(In reply to Anjana from comment #23)
> Updated Doctext field. Kindly review for technical accuracy.

Hi,

The doc itself looks good. But I just wanted to highlight that this won't be a known issue once RHGS 3.4 Batch Update 2 is rolled out, which is where it is being fixed. My understanding is that if this doc text is going to make it into the "known issues" section of Batch Update 2, then it won't be necessary, as the issue is fixed there.

-Krutika

Changed the doc type. Changed the doc text to CCFR format.

-Krutika

Verified with glusterfs-3.12.2-31.el7rhgs and RHV 4.2.7-1:

1. Created a 2 TB disk on the gluster storage domain backed by a gluster volume with 64 MB shards.
2. Created a filesystem on the disk and populated it with almost 2 TB of data.
3. Deleted the VM image from the RHV storage domain.

Observed no hangs and no issues with the storage domain or hosts, and all the shards were deleted. No ghost shards were left behind.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3827
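
A rough way to check for ghost shards after such a deletion, sketched below; this is not part of the recorded verification, and the volume, brick, and image paths are placeholders.

```sh
# Placeholder paths; adjust to your storage domain layout.
IMG=/mnt/data/<SD_UUID>/images/<IMG_UUID>/<VOL_UUID>

# Record the base file's GFID before removing the disk from RHV; on-brick
# shards are named after this GFID. glusterfs.gfid.string is a virtual
# xattr exposed on FUSE mounts.
GFID=$(getfattr --only-values -n glusterfs.gfid.string "$IMG")

# ... delete the VM disk from the RHV storage domain ...

# Once the background cleanup has finished, no shard for that GFID should
# remain on any brick; any output here would indicate stale (ghost) shards.
ls /bricks/data/brick1/.shard/ | grep "^${GFID}\." || echo "no stale shards for $GFID"
```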