+++ This bug was initially created as a clone of Bug #1583462 +++

******* This bug is especially relevant in use cases where we want to use RHV-RHGS HC configuration on single-brick plain distribute gluster volumes. ********

Description of problem:
While testing the VM use case with sharding (4 MB shards) enabled, we added additional dht logs to track the fops being sent on fds. Post the test, the logs indicate that most fsyncs from the application are being sent on the main shard file instead of the shards to which the writes were actually sent.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
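To make the reported misbehaviour concrete, here is a minimal sketch (not GlusterFS source) of what is expected when a sharded file is written: the fsync should be flushed on the shard that actually received the write, not only on the base (main) file. The file paths, fd names, and layout below are illustrative assumptions; only the 4 MB shard size comes from the report.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define SHARD_SIZE (4ULL * 1024 * 1024)   /* 4 MB shards, as in the test */

/* Map a file offset to its shard index. */
static uint64_t shard_index(uint64_t offset)
{
    return offset / SHARD_SIZE;
}

int main(void)
{
    /* Hypothetical shard files backing one VM image. */
    int base_fd   = open("/bricks/b1/vmimage.img",          O_RDWR);
    int shard3_fd = open("/bricks/b1/.shard/vmimage.img.3", O_RDWR);
    if (base_fd < 0 || shard3_fd < 0)
        return 1;

    uint64_t off = 3 * SHARD_SIZE + 8192;   /* lands in shard 3 */
    char buf[4096] = {0};
    pwrite(shard3_fd, buf, sizeof(buf), off % SHARD_SIZE);

    /* Expected: flush the shard that actually received the write.
     * The bug described above is that the fsync was observed on the
     * base file instead, leaving the written shard's data unsynced. */
    fsync(shard3_fd);

    printf("offset %llu belongs to shard %llu\n",
           (unsigned long long)off,
           (unsigned long long)shard_index(off));

    close(shard3_fd);
    close(base_fd);
    return 0;
}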
The dependent bug is ON_QA, so moving this bug to ON_QA as well. @Sahina, could you provide devel_ack on this bug?
Tested with glusterfs-3.8.4-54.13.el7rhgs with the following steps:
1. Created a few VMs with their images on the distribute volume
2. Started all the VMs and installed the OS on them
3. Triggered a lot of fsyncs inside the VMs (see the sketch below)
4. Ran I/O inside them for a few hours

No issues were seen.
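The exact tool used to generate fsyncs in step 3 is not stated in the comment; a simple way to produce a steady stream of them inside the guest is a small loop that writes and syncs a scratch file. The path and iteration count below are arbitrary assumptions.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/var/tmp/fsync-test.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0)
        return 1;

    char buf[4096];
    memset(buf, 'x', sizeof(buf));

    /* Each iteration appends a block and forces it to stable storage,
     * producing one fsync per write as seen by the virtual disk. */
    for (int i = 0; i < 100000; i++) {
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
            break;
        fsync(fd);
    }

    close(fd);
    return 0;
}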
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:3523