+++ This bug was initially created as a clone of Bug #1468483 +++

******* This bug is especially relevant in use cases where we want to use the RHV-RHGS HC configuration on single-brick plain distribute gluster volumes. *******

Description of problem:
While testing the VM use case with sharding (4 MB shards) enabled, we added additional dht logs to track the fops being sent on fds. Post the test, the logs indicate that most fsyncs from the application are being sent on the main shard file instead of the shards to which the writes were actually sent.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
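For reference, a rough way to exercise the I/O pattern in question on a test volume is sketched below; the volume name, server and mount point are examples only and are not taken from this report. The idea is to enable sharding with a 4 MB shard block size and then issue writes followed by an fsync on the same fd, which is the fop sequence the added dht logs were tracking.

# Sketch only -- volume name, host and mount point are hypothetical:
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 4MB
mount -t glusterfs server1:/testvol /mnt/testvol

# An 8 MB write creates the base file plus one shard under the hidden
# .shard directory; conv=fsync makes dd call fsync on the fd once the
# writes complete. With this bug, that fsync fop shows up on the main
# shard file rather than on the shard that actually received the writes.
dd if=/dev/zero of=/mnt/testvol/vmdisk.img bs=1M count=8 conv=fsync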
https://review.gluster.org/#/c/19566/1 <--- upstream patch. Still a lot of testing pending. It's a reasonably big change. -2 until I'm done.
All 4 patches concerning this bz are merged upstream - https://review.gluster.org/#/q/topic:bug-1468483+(status:open+OR+status:merged)
Tested with the RHGS 3.4.0 nightly build (glusterfs-3.12.2-16.el7rhgs) in an RHHI environment (RHV 4.2):
1. Created a single-brick distribute volume (a CLI sketch of this setup follows below)
2. Created a new storage domain on the above volume
3. Created a few VMs with their boot disks on this domain
4. Installed operating systems on the VMs
5. Created more disks for the VMs
6. Ran some workload

No problems found.
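For completeness, a minimal sketch of how the volume in steps 1-2 could be created from the CLI; the hostname, brick path and volume name are examples, and an actual RHHI deployment would normally set this up through its own deployment tooling rather than by hand.

gluster volume create vmstore rhgs-node1.example.com:/rhgs/brick1/vmstore
gluster volume set vmstore group virt    # applies the virt option group used for VM stores, which includes sharding
gluster volume start vmstore
# The volume is then added in RHV as a new GlusterFS storage domain (step 2 above).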
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2607