Bug 1322014 - DirectIO enabling
Summary: DirectIO enabling
Keywords:
Status: CLOSED DUPLICATE of bug 1314421
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Anoop
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-29 14:51 UTC by Sanjay Rao
Modified: 2016-09-17 14:43 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-28 09:44:31 UTC
Embargoed:



Description Sanjay Rao 2016-03-29 14:51:25 UTC
Description of problem:
For testing in a hyperconverged environment, GlusterFS needs to be tested with direct I/O. To do so, the performance.strict-o-direct option was turned on, but data is still being cached on the gluster servers.
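For reference, a minimal sketch of how the option would typically be enabled from the CLI (the exact commands used are not recorded in this report; gl_01 is the volume shown below):

gluster volume set gl_01 performance.strict-o-direct on   # enable strict handling of O_DIRECT I/O
gluster volume info gl_01                                 # confirm the option appears under "Options Reconfigured"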

The options configured on the gluster volume in this setup are shown below:

[root@gprfs029 ~]# gluster v info
 
Volume Name: gl_01
Type: Replicate
Volume ID: 047bc55d-283d-4f40-8e6f-a656762fd281
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gprfs029-10ge:/bricks/b01/g
Brick2: gprfs030-10ge:/bricks/b01/g
Brick3: gprfs031-10ge:/bricks/b01/g
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: on
cluster.quorum-type: auto
cluster.server-quorum-type: server
network.ping-timeout: 10
server.allow-insecure: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.strict-o-direct: on

vmstat output from one of the gluster servers before starting I/O activity:

 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 117712 130373296      0 188556    0    0    16   178    0    0  1  3 96  1  0
 0  0 117712 130373152      0 188524    0    0     0    47  298 1087  0  0 100  0  0
 0  0 117712 130373216      0 188624    0    0    35     0  408 1252  0  0 100  0  0
 0  0 117712 130372416      0 188684    0    0     0    48  245  903  0  0 100  0  0

vmstat output after I/O activity starts:
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 117712 125605504      0 4954520    0    0    16   178    0    0  1  3 96  1  0
 0  0 117712 125391520      0 5170204    0    0  2179    75 5147 9463  4  1 93  2  0
 3  0 117712 125353224      0 5206700    0    0   372    44 1242 2745  1  0 98  0  0
 0  0 117712 125117576      0 5445648    0    0  2417 109503 6934 14784  5  1 92  2  0
 2  1 117712 124882792      0 5678012    0    0  2367     0 5331 10941  3  1 94  2  0



As the output shows, free memory drops and the page cache grows once I/O activity starts on the gluster hosts.

When direct I/O is instead configured through the posix translator option, the free memory on the hosts stays constant.
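For comparison, a hedged sketch of the posix-level configuration referred to above, assuming the posix translator's o-direct option is exposed as storage.o-direct on this build (the option name is an assumption, not taken from this report):

gluster volume set gl_01 storage.o-direct on   # assumed option name; opens brick files with O_DIRECT in the posix translator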


Version-Release number of selected component (if applicable):

Kernel - 3.10.0-327.el7.x86_64

glusterfs-server-3.7.5-18.33.git18535c9.el7rhgs.x86_64
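A sketch of the commands typically used to collect these versions (how they were gathered is not shown in the report):

uname -r                  # kernel release
rpm -q glusterfs-server   # installed gluster server package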

How reproducible:
Readily reproducible

Steps to Reproduce:
1. Configure a gluster volume with performance.strict-o-direct enabled.
2. Run I/O against a mount of the volume (see the sketch after this list).
3. Check the cache growth on the gluster servers using a tool such as vmstat.
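A minimal reproduction sketch, assuming the volume is FUSE-mounted at /mnt/gl_01 (the mount point and file name are placeholders, not taken from this report):

mkdir -p /mnt/gl_01
mount -t glusterfs gprfs029-10ge:/gl_01 /mnt/gl_01                      # FUSE mount of the replicated volume
dd if=/dev/zero of=/mnt/gl_01/testfile bs=1M count=4096 oflag=direct    # O_DIRECT writes from the client
vmstat 5                                                                # watch the 'cache' column on the bricks while dd runs
grep Cached /proc/meminfo                                               # alternative view of the server page cache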

Actual results:
Free memory drops and the kernel page cache grows on the gluster servers during I/O, even with performance.strict-o-direct enabled.

Expected results:
With direct I/O enabled, the page cache on the gluster servers should not grow during I/O and free memory should stay roughly constant.

Additional info:

Comment 2 Sahina Bose 2016-04-28 09:44:31 UTC

*** This bug has been marked as a duplicate of bug 1314421 ***

