Bug 1585104 - Sharding sends all application sent fsyncs to the main shard file
Summary: Sharding sends all application sent fsyncs to the main shard file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.5
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1583462
Blocks: 1520836
 
Reported: 2018-06-01 09:59 UTC by SATHEESARAN
Modified: 2018-11-08 05:40 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.8.4-54.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1583462
Environment:
Last Closed: 2018-11-08 05:39:29 UTC
Embargoed:


Attachments: None


Links
System:       Red Hat Product Errata
ID:           RHEA-2018:3523
Private:      0
Priority:     None
Status:       None
Summary:      None
Last Updated: 2018-11-08 05:40:35 UTC

Description SATHEESARAN 2018-06-01 09:59:06 UTC
+++ This bug was initially created as a clone of Bug #1583462 +++


*******
NOTE: This bug is especially relevant in use cases where we want to use the RHV-RHGS hyperconverged (HC) configuration on single-brick plain distribute gluster volumes.
*******

Description of problem:

While testing the VM use case with sharding enabled (4 MB shards), we added additional dht logs to track the fops being sent on fds. After the test, the logs indicate that most fsyncs from the application are being sent to the main shard file instead of to the shards to which the writes were actually sent.
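
The fop pattern at issue is a write landing in a shard other than the base file, followed by an fsync on the same fd; per the summary, the fsync gets routed to the main shard file rather than the shard that received the write. A minimal reproducer sketch in C is below, assuming a FUSE mount of the sharded volume; the mount path and file name are hypothetical, and the 4 MB shard size matches the test setup above:

/* Minimal sketch (hypothetical reproducer): write into a non-first
 * 4 MB shard of a file on a sharded gluster volume, then fsync the
 * same fd. With the bug, the fsync goes to the main shard file
 * rather than the shard that received the write. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SHARD_SIZE (4UL << 20)   /* 4 MB, i.e. features.shard-block-size */

int main(void)
{
    /* Path on a FUSE mount of the sharded volume; name is illustrative. */
    int fd = open("/mnt/glustervol/vmdisk.img", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    memset(buf, 'a', sizeof(buf));

    /* Write into the third shard (offset 2 * 4 MB), then fsync.
     * The fsync should follow the write to that shard. */
    if (pwrite(fd, buf, sizeof(buf), 2 * SHARD_SIZE) < 0) { perror("pwrite"); return 1; }
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}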

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 SATHEESARAN 2018-06-27 02:36:33 UTC
The dependent bug is ON_QA, so moving this bug to ON_QA as well.
@Sahina, could you provide devel_ack on this bug?

Comment 5 SATHEESARAN 2018-07-05 13:19:35 UTC
Tested with glusterfs-3.8.4-54.13.el7rhgs with the following steps:

1. Created a few VMs with their images on the distribute volume
2. Started all the VMs and installed an OS on them
3. Triggered a lot of fsyncs inside the VMs (see the sketch below)
4. Ran I/O inside them for a few hours

There were no issues seen.
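
For reference, a minimal sketch of the kind of fsync loop used in step 3 is below, run from inside a guest; the file path and iteration count are hypothetical:

/* Minimal sketch (hypothetical): generate a steady stream of fsyncs
 * from inside the guest, which the hypervisor propagates as flushes
 * on the sharded backing image. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* File on the guest's filesystem; the path is illustrative. */
    int fd = open("/var/tmp/fsync-load.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[8192];
    memset(buf, 'x', sizeof(buf));

    /* Append a block and fsync, repeatedly. */
    for (int i = 0; i < 10000; i++) {
        if (write(fd, buf, sizeof(buf)) < 0) { perror("write"); return 1; }
        if (fsync(fd) < 0) { perror("fsync"); return 1; }
    }

    close(fd);
    return 0;
}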

Comment 7 errata-xmlrpc 2018-11-08 05:39:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3523

