Description of problem:
While testing the interoperability of data tiering and nfs-ganesha, I started fs-sanity over vers=3, and it is taking a long time to execute. fs-sanity usually finishes in 6 hours.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-6.el7rhgs.x86_64
nfs-ganesha-2.2.0-10.el7rhgs.x86_64

How reproducible:
Executed on one setup; the performance lag is seen there.

Steps to Reproduce:
1. Create a data-tiered volume.
2. Set up nfs-ganesha.
3. Trigger fs-sanity from the NFS mount, mounted with vers=3.

Actual results:
fs-sanity is taking much longer than usual to finish.

Expected results:
The performance should be better.

Additional info:
The glusterfs cluster is based on VMs. The test was not about performance; rather, it was to make sure that file system operations work properly.
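For reference, the reproduction steps above can be sketched roughly as follows. The volume name (tiervol), host names, brick paths, and mount point are all placeholders, and the attach-tier syntax shown is the glusterfs 3.7.x form (later releases use "gluster volume tier <vol> attach ..."):

```shell
# 1. Create a regular (cold) volume, then attach a hot tier to it.
#    Host names and brick paths below are hypothetical.
gluster volume create tiervol replica 2 \
    server1:/bricks/cold1 server2:/bricks/cold2
gluster volume start tiervol

# Attach the hot tier, turning tiervol into a data-tiered volume.
gluster volume attach-tier tiervol replica 2 \
    server1:/bricks/hot1 server2:/bricks/hot2

# 2. Enable nfs-ganesha for the cluster and export the volume.
gluster nfs-ganesha enable
gluster volume set tiervol ganesha.enable on

# 3. On the client, mount over NFSv3 and run fs-sanity from the mount.
mount -t nfs -o vers=3 server1:/tiervol /mnt/tiervol
```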
This does not seem to be an NFS-related issue, as fs-sanity completed successfully. At first glance, it looks more like the delay has something to do with the additional operations being done as part of tiering.
Requesting QE to re-run the test on the latest build, glusterfs-3.7.5-7, as development believes that the fix for BZ 1273348 may resolve this issue as well.
QE has reported that the re-run of the test on the latest build, glusterfs-3.7.5-7, has shown positive results.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html