Bug 1493085 - Sharding sends all application sent fsyncs to the main shard file
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: sharding
Version: 3.3
Hardware/OS: x86_64 Linux
Priority: medium
Severity: high
Target Release: RHGS 3.4.0
Assigned To: Krutika Dhananjay
QA Contact: SATHEESARAN
Keywords: Triaged
Depends On: 1468483
Blocks: 1523608, 1503134, 1583462
Reported: 2017-09-19 06:57 EDT by Krutika Dhananjay
Modified: 2018-09-04 02:38 EDT

Fixed In Version: glusterfs-3.12.2-13
Clone Of: 1468483
Cloned As: 1583462
Last Closed: 2018-09-04 02:36:24 EDT
Type: Bug


External Trackers:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 02:38 EDT)
Description Krutika Dhananjay 2017-09-19 06:57:02 EDT
+++ This bug was initially created as a clone of Bug #1468483 +++


NOTE: This bug is especially relevant in use cases where the RHV-RHGS hyperconverged (HC) configuration runs on single-brick plain distribute Gluster volumes.

Description of problem:

While testing the VM use case with sharding enabled (4 MB shards), we added extra DHT logs to track the fops being sent on each fd. After the test, the logs indicate that most fsyncs issued by the application are being sent to the main (base) shard file rather than to the shards that actually received the writes.
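For context, this is roughly how the test setup above would be configured. A minimal sketch, assuming an existing volume; "vmstore" is a hypothetical volume name, not one taken from this report:

```shell
# Enable the shard translator and use 4 MB shards (the size used in this test;
# the gluster default is larger).
gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 4MB
```

With sharding on, a file larger than the shard block size is stored as a base file plus numbered shard files under the hidden .shard directory on each brick; the bug is that application fsyncs are routed to the base file instead of the shards that were actually written.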

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 4 Krutika Dhananjay 2018-02-14 02:19:02 EST
https://review.gluster.org/#/c/19566/1 <--- upstream patch.

Still a lot of testing pending. It's a reasonably big change. -2 until I'm done.
Comment 5 Krutika Dhananjay 2018-03-05 04:05:06 EST
All 4 patches concerning this bz are merged upstream - https://review.gluster.org/#/q/topic:bug-1468483+(status:open+OR+status:merged)
Comment 11 SATHEESARAN 2018-08-23 15:22:16 EDT
Tested with the RHGS 3.4.0 nightly build (glusterfs-3.12.2-16.el7rhgs) in an RHHI environment (RHV 4.2):

1. Created a single-brick distribute volume.
2. Created a new storage domain backed by the above volume.
3. Created a few VMs with their boot disks on this domain.
4. Installed operating systems on the VMs.
5. Created more disks for the VMs.
6. Ran some workloads.

No problems found.
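Step 1 above can be sketched as follows. This is an illustrative setup, not the exact commands used in verification; the host name and brick path are assumptions:

```shell
# Create a single-brick plain distribute volume for VM images.
gluster volume create vmstore server1:/bricks/brick1/vmstore

# Apply the RHGS virt profile, which tunes the volume for VM workloads
# and enables sharding.
gluster volume set vmstore group virt

gluster volume start vmstore
gluster volume info vmstore
```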
Comment 13 errata-xmlrpc 2018-09-04 02:36:24 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
