Bug 1322214 - [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1314421
Blocks: 1325843 1335284
 
Reported: 2016-03-30 03:26 UTC by Krutika Dhananjay
Modified: 2016-06-16 14:02 UTC (History)
8 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1314421
: 1325843 1335284 (view as bug list)
Environment:
Last Closed: 2016-06-16 14:02:08 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Krutika Dhananjay 2016-03-30 03:26:49 UTC
+++ This bug was initially created as a clone of Bug #1314421 +++

Description of problem:
In an oVirt-Gluster hyperconverged environment, adding a disk to a VM from a glusterfs storage pool fails when glusterfs is running in posix/directIO mode.

The gluster volume is configured to run in directIO mode by adding

option o-direct on

to the /var/lib/glusterd/vols/gl_01/*.vol files. Example below:

volume gl_01-posix
    type storage/posix
    option o-direct on
    option brick-gid 36
    option brick-uid 36
    option volume-id c131155a-d40c-4d9e-b056-26c61b924c26
    option directory /bricks/b01/g
end-volume

When the option is removed and the volume is restarted, disks can be added to the VM from the glusterfs pool.


Version-Release number of selected component (if applicable):

RHEV version is RHEV 3.6

glusterfs-client-xlators-3.7.5-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-11.el7rhgs.x86_64
glusterfs-libs-3.7.5-11.el7rhgs.x86_64
glusterfs-3.7.5-11.el7rhgs.x86_64
glusterfs-api-3.7.5-11.el7rhgs.x86_64
glusterfs-fuse-3.7.5-11.el7rhgs.x86_64
glusterfs-server-3.7.5-11.el7rhgs.x86_64



How reproducible:
Easily reproducible

Steps to Reproduce:
1. Create a GlusterFS storage pool in an oVirt environment 
2. Configure GlusterFS in a posix/directIO mode
3. Create a new VM or add a disk to an existing VM. The add-disk step fails

Actual results:
Adding a disk to the VM from the glusterfs pool fails while o-direct is enabled.

Expected results:
Adding a disk succeeds whether or not o-direct is enabled.

Additional info:

--- Additional comment from Krutika Dhananjay on 2016-03-17 08:11:14 EDT ---

Hi Sanjay,

In light of the recent discussion we had on a mail thread wrt direct-io behavior, I have the following question:

Assuming the 'cache=none' command-line option implies that the VM image files will all be opened with the O_DIRECT flag (which means the write buffers will already be aligned with the sector size of the underlying block device), the only layer in the combined client-server stack that could prevent us from achieving o-direct-like behavior because of caching would be the write-behind translator.

Therefore, I am wondering if it is sufficient to enable 'performance.strict-o-direct' to achieve the behavior you expect to see with o-direct?

-Krutika
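
[Editor's note: the option discussed above can be set from the CLI. A hedged sketch using the volume name gl_01 from the report; network.remote-dio is mentioned here as a commonly paired option, not something taken from this bug:]

```shell
# Sketch only: enable strict O_DIRECT semantics on the volume from the report.
# performance.strict-o-direct makes the write-behind translator honor O_DIRECT
# instead of caching writes.
gluster volume set gl_01 performance.strict-o-direct on

# network.remote-dio, when enabled, filters O_DIRECT on the client side;
# disabling it lets the flag actually reach the bricks.
gluster volume set gl_01 network.remote-dio disable
```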

--- Additional comment from Sanjay Rao on 2016-03-17 08:20:02 EDT ---

I have tested with different options. The only option that enabled true directIO on the glusterfs server was the posix setting.

I can verify again with performance.strict-o-direct on the recent glusterfs version (glusterfs-server-3.7.5-18.33) installed on my system, just to be sure.

Comment 1 Vijay Bellur 2016-03-30 03:27:59 UTC
REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 2 Vijay Bellur 2016-04-06 09:50:14 UTC
REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#2) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 3 Vijay Bellur 2016-04-11 06:01:27 UTC
REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#3) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 4 Vijay Bellur 2016-04-11 09:07:14 UTC
REVIEW: http://review.gluster.org/13846 (features/shard: Make o-direct writes work with sharding) posted (#4) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 5 Vijay Bellur 2016-04-11 19:49:00 UTC
COMMIT: http://review.gluster.org/13846 committed in master by Jeff Darcy (jdarcy@redhat.com) 
------
commit c272c71391cea9db817f4e7e38cfc25a7cff8bd5
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Tue Mar 29 18:36:08 2016 +0530

    features/shard: Make o-direct writes work with sharding
    
    With files opened with o-direct, the expectation is that
    the IO performed on the fds is byte aligned wrt the sector size
    of the underlying device. With files getting sharded, a single
    write from the application could be broken into more than one write
    falling on different shards which _might_ cause the original byte alignment
    property to be lost. To get around this, shard translator will send fsync
    on odirect writes to emulate o-direct-like behavior in the backend.
    
    Change-Id: Ie8a6c004df215df78deff5cf4bcc698b4e17a7ae
    BUG: 1322214
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/13846
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
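
[Editor's note: the splitting described in the commit message can be sketched with a small, hypothetical model. The numbers are illustrative; 64 MiB matches the default features.shard-block-size, but nothing below is taken from the actual shard translator code:]

```python
# Hypothetical sketch of how one application write is broken across shards.
# Once split, the pieces land in different shard files, which is why the
# commit above emulates o-direct behavior by fsyncing o-direct writes.

SHARD = 64 * 1024 * 1024  # 64 MiB, the default features.shard-block-size


def split_write(offset, length, shard=SHARD):
    """Map one (offset, length) write to (shard_index, local_offset, size) pieces."""
    pieces = []
    end = offset + length
    while offset < end:
        idx = offset // shard                     # which shard this range starts in
        local = offset % shard                    # offset within that shard file
        size = min(end - offset, shard - local)   # stop at the shard boundary
        pieces.append((idx, local, size))
        offset += size
    return pieces


# A 1 MiB write that starts 4 KiB before the first shard boundary is split in two:
print(split_write(SHARD - 4096, 1024 * 1024))
# → [(0, 67104768, 4096), (1, 0, 1044480)]
```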

Comment 6 Vijay Bellur 2016-05-03 13:30:23 UTC
REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 7 Vijay Bellur 2016-05-04 15:35:34 UTC
REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#2) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 8 Vijay Bellur 2016-05-05 00:26:54 UTC
REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#3) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 9 Vijay Bellur 2016-05-05 02:30:44 UTC
REVIEW: http://review.gluster.org/14215 (protocol/client: Filter o-direct in readv/writev) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 10 Vijay Bellur 2016-05-05 07:58:04 UTC
REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#4) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 11 Vijay Bellur 2016-05-06 03:37:30 UTC
COMMIT: http://review.gluster.org/14215 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 74837896c38bafdd862f164d147b75fcbb619e8f
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Thu May 5 07:59:03 2016 +0530

    protocol/client: Filter o-direct in readv/writev
    
    Change-Id: I519c666b3a7c0db46d47e08a6a7e2dbecc05edf2
    BUG: 1322214
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/14215
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
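
[Editor's note: the filtering named in this commit title can be illustrated abstractly. This is a generic bitmask sketch, not the actual protocol/client code, which operates on GlusterFS fop flags:]

```python
import os

# Generic sketch: clear O_DIRECT from per-call flags before a readv/writev
# is sent over the wire, as the commit title describes.
O_DIRECT = getattr(os, "O_DIRECT", 0o40000)  # 0o40000 on Linux/x86


def filter_o_direct(flags):
    """Return flags with the O_DIRECT bit cleared."""
    return flags & ~O_DIRECT
```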

Comment 12 Vijay Bellur 2016-05-09 11:37:27 UTC
REVIEW: http://review.gluster.org/14271 (storage/posix: Print offset too when readv fails) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 13 Vijay Bellur 2016-05-19 19:45:13 UTC
REVIEW: http://review.gluster.org/14441 (protocol/client:  Reflect readv/writev changes in filter-O_DIRECT description) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 14 Vijay Bellur 2016-05-31 05:20:58 UTC
REVIEW: http://review.gluster.org/14271 (storage/posix: Print offset,size and gfid too when readv fails) posted (#2) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 15 Vijay Bellur 2016-05-31 19:01:02 UTC
COMMIT: http://review.gluster.org/14441 committed in master by Jeff Darcy (jdarcy@redhat.com) 
------
commit e341d2827800e32997f888668597785178a40626
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Fri May 20 01:09:40 2016 +0530

    protocol/client:  Reflect readv/writev changes in filter-O_DIRECT description
    
    Commit 74837896c38bafdd862f164d147b75fcbb619e8f introduced filtering
    of O_DIRECT option even for readv/writev but the option description is not
    updated.
    
    Change-Id: I7c2b69fdb496ca27d1b06a458f2f3eab0d16d417
    BUG: 1322214
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/14441
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

Comment 16 Vijay Bellur 2016-06-02 13:36:03 UTC
REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#5) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 17 Vijay Bellur 2016-06-02 14:49:39 UTC
REVIEW: http://review.gluster.org/14191 (core, shard: Make shards inherit main file's O_DIRECT flag if present) posted (#6) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

Comment 18 Niels de Vos 2016-06-16 14:02:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

