+++ This bug was initially created as a clone of Bug #1322214 +++
+++ This bug was initially created as a clone of Bug #1314421 +++

Description of problem:

In an oVirt-Gluster hyperconverged environment, adding a disk to a VM from a GlusterFS storage pool fails when GlusterFS is running in posix/directIO mode.

The Gluster volume is configured to run in directIO mode by adding the option "o-direct on" in the /var/lib/glusterd/vols/gl_01/*.vol files. Example below:

    volume gl_01-posix
        type storage/posix
        option o-direct on
        option brick-gid 36
        option brick-uid 36
        option volume-id c131155a-d40c-4d9e-b056-26c61b924c26
        option directory /bricks/b01/g
    end-volume

When the option is removed and the volume is restarted, disks can be added to the VM from the GlusterFS pool.

Version-Release number of selected component (if applicable):

RHEV version is RHEV 3.6
glusterfs-client-xlators-3.7.5-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-11.el7rhgs.x86_64
glusterfs-libs-3.7.5-11.el7rhgs.x86_64
glusterfs-3.7.5-11.el7rhgs.x86_64
glusterfs-api-3.7.5-11.el7rhgs.x86_64
glusterfs-fuse-3.7.5-11.el7rhgs.x86_64
glusterfs-server-3.7.5-11.el7rhgs.x86_64

How reproducible:

Easily reproducible

Steps to Reproduce:
1. Create a GlusterFS storage pool in an oVirt environment.
2. Configure GlusterFS in posix/directIO mode.
3. Create a new VM or add a disk to an existing VM. The add-disk step fails.

Actual results:

Expected results:

Additional info:

--- Additional comment from Krutika Dhananjay on 2016-03-17 08:11:14 EDT ---

Hi Sanjay,

In light of the recent discussion we had wrt direct-io behavior on a mail thread, I have the following question:

Assuming the 'cache=none' command line option implies that the VM image files will all be opened with the O_DIRECT flag (which means that the write buffers will already be aligned with the "sector size of the underlying block device"), the only layer in the combined client-server stack that could prevent us from achieving o-direct-like behavior because of caching would be the write-behind translator.

Therefore, I am wondering if it is sufficient to enable 'performance.strict-o-direct' to achieve the behavior you expect to see with o-direct?

-Krutika

--- Additional comment from Sanjay Rao on 2016-03-17 08:20:02 EDT ---

I have tested with different options. The only option that enabled true directIO on the GlusterFS server was the posix setting. I can verify again with performance.strict-o-direct and the recent GlusterFS version (glusterfs-server-3.7.5-18.33) installed on my system, just to be sure.
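[Editorial note: a minimal sketch of the verification Sanjay describes, using standard volume options rather than hand-edited volfiles. The volume name gl_01 is taken from the volfile above; manual edits under /var/lib/glusterd/vols/ can be regenerated by glusterd and lost on the next volume operation, so the CLI route is generally safer.]

    # Enable strict O_DIRECT handling in the write-behind translator:
    gluster volume set gl_01 performance.strict-o-direct on

    # Confirm the option appears under "Options Reconfigured":
    gluster volume info gl_01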
--- Additional comment from Vijay Bellur on 2016-03-29 23:27:59 EDT ---

REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-04-06 05:50:14 EDT ---

REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#2) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-04-11 02:01:27 EDT ---

REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#3) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-04-11 05:07:14 EDT ---

REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#4) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-04-11 15:49:00 EDT ---

COMMIT: http://review.gluster.org/13846 committed in master by Jeff Darcy (jdarcy)
------
commit c272c71391cea9db817f4e7e38cfc25a7cff8bd5
Author: Krutika Dhananjay <kdhananj>
Date:   Tue Mar 29 18:36:08 2016 +0530

    features/shard: Make o-direct writes work with sharding

    With files opened with o-direct, the expectation is that the IO
    performed on the fds is byte aligned wrt the sector size of the
    underlying device. With files getting sharded, a single write from
    the application could be broken into more than one write falling on
    different shards which _might_ cause the original byte alignment
    property to be lost. To get around this, shard translator will send
    fsync on odirect writes to emulate o-direct-like behavior in the
    backend.

    Change-Id: Ie8a6c004df215df78deff5cf4bcc698b4e17a7ae
    BUG: 1322214
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/13846
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
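[Editorial note: to illustrate the alignment property this commit message refers to, O_DIRECT I/O is expected to be aligned to the underlying device's sector size, which a quick dd test can show. This is a hedged sketch: /mnt/gl_01 is a hypothetical FUSE mount point, and the exact failure mode varies with kernel, device, and FUSE versions.]

    # A sector-aligned direct write normally succeeds:
    dd if=/dev/zero of=/mnt/gl_01/probe bs=4096 count=1 oflag=direct

    # An unaligned request size is typically rejected with EINVAL;
    # this is the property a shard boundary could otherwise break:
    dd if=/dev/zero of=/mnt/gl_01/probe bs=1000 count=1 oflag=direct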
--- Additional comment from Vijay Bellur on 2016-05-03 09:30:23 EDT ---

REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-05-04 11:35:34 EDT ---

REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#2) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-05-04 20:26:54 EDT ---

REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#3) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-05-04 22:30:44 EDT ---

REVIEW: http://review.gluster.org/14215 (protocol/client: Filter o-direct in readv/writev) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Vijay Bellur on 2016-05-05 03:58:04 EDT ---

REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#4) for review on master by Krutika Dhananjay (kdhananj)

--- Additional comment from Vijay Bellur on 2016-05-05 23:37:30 EDT ---

COMMIT: http://review.gluster.org/14215 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 74837896c38bafdd862f164d147b75fcbb619e8f
Author: Pranith Kumar K <pkarampu>
Date:   Thu May 5 07:59:03 2016 +0530

    protocol/client: Filter o-direct in readv/writev

    Change-Id: I519c666b3a7c0db46d47e08a6a7e2dbecc05edf2
    BUG: 1322214
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14215
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Krutika Dhananjay <kdhananj>
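[Editorial note: the client-side O_DIRECT filtering this commit extends to readv/writev is, as an assumption here (the commit message does not name the option), the behavior tied to the network.remote-dio volume option. A sketch, reusing the hypothetical gl_01 volume:]

    # With remote-dio enabled, the client protocol strips O_DIRECT from
    # the operations it sends, so the bricks can keep caching even though
    # the application opened the file with O_DIRECT:
    gluster volume set gl_01 network.remote-dio enable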
--- Additional comment from Vijay Bellur on 2016-05-09 07:37:27 EDT ---

REVIEW: http://review.gluster.org/14271 (storage/posix: Print offset too when readv fails) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

REVIEW: http://review.gluster.org/14299 (protocol/client: Filter o-direct in readv/writev) posted (#1) for review on release-3.8 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/14299 committed in release-3.8 by Niels de Vos (ndevos)
------
commit 33aad915e4187c9ca5fdff593c08c361cfa4b2f6
Author: Pranith Kumar K <pkarampu>
Date:   Thu May 5 07:59:03 2016 +0530

    protocol/client: Filter o-direct in readv/writev

    >Change-Id: I519c666b3a7c0db46d47e08a6a7e2dbecc05edf2
    >BUG: 1322214
    >Signed-off-by: Pranith Kumar K <pkarampu>
    >Reviewed-on: http://review.gluster.org/14215
    >Smoke: Gluster Build System <jenkins.com>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.com>
    >Reviewed-by: Krutika Dhananjay <kdhananj>

    BUG: 1335284
    Change-Id: I119a5f1eebf657b01d8d924ff1f59a49eb472667
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14299
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user