Bug 1468483 - Sharding sends all application sent fsyncs to the main shard file
Summary: Sharding sends all application sent fsyncs to the main shard file
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: sharding
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1493085 1583462
 
Reported: 2017-07-07 08:33 UTC by Nithya Balachandran
Modified: 2018-06-20 17:57 UTC
CC List: 2 users

Fixed In Version: glusterfs-v4.1.0
Clone Of:
: 1493085 (view as bug list)
Environment:
Last Closed: 2018-06-20 17:57:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nithya Balachandran 2017-07-07 08:33:40 UTC
Description of problem:

While testing the VM use case with sharding enabled (4 MB shards), we added additional dht logging to track the fops being sent on each fd. After the test, the logs indicate that most fsyncs issued by the application are being sent on the main shard file instead of on the shards to which the writes were actually sent.
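
For context, a rough sketch of how writes map to shards (illustrative only, not the shard translator's actual code; the 4 MB block size matches the test setup above): with shard-block-size set to 4 MB, a write at a given offset lands in block offset / 4 MB, where block 0 is the main file and higher blocks live under the hidden /.shard directory as <gfid>.<block-num>. An fsync from the application should therefore follow the shards that actually received writes, not just block 0.

/* Illustrative sketch only, not GlusterFS code: map a write offset to
 * the shard block it lands in. Block 0 is the main (base) file; blocks
 * >= 1 are the <gfid>.<block-num> files under /.shard. */
#include <stdint.h>
#include <stdio.h>

#define SHARD_BLOCK_SIZE (4ULL * 1024 * 1024)  /* 4 MB shards as in the test */

static uint64_t
shard_block_for_offset(uint64_t offset)
{
    return offset / SHARD_BLOCK_SIZE;
}

int
main(void)
{
    uint64_t offsets[] = {0, 5ULL * 1024 * 1024, 123ULL * 1024 * 1024};

    for (int i = 0; i < 3; i++)
        printf("write at offset %llu -> shard block %llu\n",
               (unsigned long long)offsets[i],
               (unsigned long long)shard_block_for_offset(offsets[i]));
    return 0;
}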

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Worker Ant 2018-02-14 07:17:47 UTC
REVIEW: https://review.gluster.org/19566 (features/shard: Upon FSYNC from upper layers, wind fsync on all changed shards) posted (#1) for review on master by Krutika Dhananjay

Comment 2 Worker Ant 2018-02-21 10:40:39 UTC
REVIEW: https://review.gluster.org/19608 (features/shard: Fix shard inode refcount when it's part of priv->lru_list) posted (#1) for review on master by Krutika Dhananjay

Comment 3 Worker Ant 2018-02-26 09:53:39 UTC
REVIEW: https://review.gluster.org/19630 (features/shard: Pass the correct block-num to store in inode ctx) posted (#1) for review on master by Krutika Dhananjay

Comment 4 Worker Ant 2018-02-26 10:32:57 UTC
REVIEW: https://review.gluster.org/19633 (features/shard: Leverage block_num info in inode-ctx in read callback) posted (#1) for review on master by Krutika Dhananjay

Comment 5 Worker Ant 2018-02-27 01:53:48 UTC
COMMIT: https://review.gluster.org/19630 committed in master by "Krutika Dhananjay" <kdhananj> with a commit message- features/shard: Pass the correct block-num to store in inode ctx

Change-Id: Icf3a5d0598a081adb7d234a60bd15250a5ce1532
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj>

Comment 6 Worker Ant 2018-02-27 03:26:12 UTC
COMMIT: https://review.gluster.org/19633 committed in master by "Krutika Dhananjay" <kdhananj> with a commit message- features/shard: Leverage block_num info in inode-ctx in read callback

... instead of adding this information to fd_ctx in the call path and
retrieving it again in the callback.

Change-Id: Ibbddbbe85baadb7e24aacf5ec8a1250d493d7800
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj>
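
A hypothetical illustration of that change (the names below are invented, not the actual shard xlator structures): the shard's block number is recorded once in a per-inode context, so the read callback can take it from the inode it already holds instead of stashing it in an fd context on the call path and fetching it back in the callback.

/* Toy model only, not GlusterFS code: block_num lives in the inode
 * context and is read directly in the read callback. */
#include <stdio.h>

struct toy_inode_ctx {
    int block_num;              /* which shard this inode represents */
};

struct toy_inode {
    struct toy_inode_ctx ctx;
};

/* set once, e.g. when the shard inode is first resolved */
static void
set_block_num(struct toy_inode *inode, int block_num)
{
    inode->ctx.block_num = block_num;
}

/* read callback: the block number comes straight from the inode ctx */
static void
read_cbk(struct toy_inode *inode, const char *data)
{
    printf("read completed on shard block %d: %s\n",
           inode->ctx.block_num, data);
}

int
main(void)
{
    struct toy_inode shard = {{0}};

    set_block_num(&shard, 5);
    read_cbk(&shard, "payload");
    return 0;
}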

Comment 7 Worker Ant 2018-03-02 05:26:28 UTC
COMMIT: https://review.gluster.org/19608 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- features/shard: Fix shard inode refcount when it's part of priv->lru_list.

For as long as a shard's inode is in priv->lru_list, it should have a non-zero
ref-count. This patch achieves that by taking a ref on the inode when it
is added to the lru list. When it's time for the inode to be evicted
from the lru list, a corresponding unref is done.

Change-Id: I289ffb41e7be5df7489c989bc1bbf53377433c86
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj>
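
A minimal model of the refcounting rule described above (toy code under simplifying assumptions, not GlusterFS's inode or lru_list implementation): the inode carries one extra reference for as long as it sits on the lru list, and that reference is dropped only when the inode is evicted, so its refcount cannot reach zero while the list still points at it.

/* Toy model, not GlusterFS code: one extra ref while on the LRU list. */
#include <assert.h>
#include <stdio.h>

struct toy_inode {
    int ref;                    /* simplified refcount */
    int on_lru;                 /* 1 while the inode is on the LRU list */
};

static void
lru_add(struct toy_inode *inode)
{
    inode->ref++;               /* ref taken when added to the lru list */
    inode->on_lru = 1;
}

static void
lru_evict(struct toy_inode *inode)
{
    inode->on_lru = 0;
    inode->ref--;               /* corresponding unref on eviction */
}

int
main(void)
{
    struct toy_inode shard = {.ref = 1, .on_lru = 0};

    lru_add(&shard);
    assert(shard.ref == 2);     /* non-zero ref guaranteed while on LRU */
    lru_evict(&shard);
    assert(shard.ref == 1);
    printf("refcount balanced at %d\n", shard.ref);
    return 0;
}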

Comment 8 Worker Ant 2018-03-05 08:39:28 UTC
COMMIT: https://review.gluster.org/19566 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- features/shard: Upon FSYNC from upper layers, wind fsync on all changed shards

Change-Id: Ib74354f57a18569762ad45a51f182822a2537421
BUG: 1468483
Signed-off-by: Krutika Dhananjay <kdhananj>
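
A minimal sketch of the idea behind this fix (illustrative only; the structures and names are invented, not the shard translator's API): track which shard blocks have been written to since the last sync and, when the application issues an fsync, wind one fsync per changed shard instead of syncing only the main (block 0) file.

/* Toy model, not GlusterFS code: fan an application fsync out to every
 * shard block that has been written since the last sync. */
#include <stdio.h>

#define MAX_BLOCKS 64

struct shard_sync_state {
    int dirty[MAX_BLOCKS];      /* dirty[b] != 0 => shard block b changed */
};

static void
note_write(struct shard_sync_state *s, int block)
{
    if (block >= 0 && block < MAX_BLOCKS)
        s->dirty[block] = 1;
}

static void
handle_fsync(struct shard_sync_state *s)
{
    /* Before the fix, only block 0 (the main shard file) was synced;
     * the fix winds fsync on every changed shard. */
    for (int b = 0; b < MAX_BLOCKS; b++) {
        if (s->dirty[b]) {
            printf("fsync wound on shard block %d\n", b);
            s->dirty[b] = 0;    /* clean once the sync has been sent */
        }
    }
}

int
main(void)
{
    struct shard_sync_state state = {{0}};

    note_write(&state, 3);
    note_write(&state, 7);
    handle_fsync(&state);       /* syncs shards 3 and 7, not just block 0 */
    return 0;
}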

Comment 9 Shyamsundar 2018-06-20 17:57:11 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

