Bug 856467 - linux-aio support in storage/posix
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Anand Avati
QA Contact: spandura
Keywords: FutureFeature
Duplicates: 880165 880889 880901 880911
Depends On: 837495
Blocks: 852277 906181
 
Reported: 2012-09-12 01:43 EDT by Vidya Sakar
Modified: 2015-09-01 19:06 EDT
CC: 11 users

See Also:
Fixed In Version: glusterfs-3.3.0rhsvirt1-7.el6rhs
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 837495
Clones: 906181
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vidya Sakar 2012-09-12 01:43:27 EDT
+++ This bug was initially created as a clone of Bug #837495 +++

Support for native Linux AIO in storage/posix translator

--- Additional comment from vbellur@redhat.com on 2012-07-14 21:09:16 EDT ---

CHANGE: http://review.gluster.com/3627 (storage/posix: implement native linux AIO support) merged in master by Anand Avati (avati@redhat.com)

--- Additional comment from vbellur@redhat.com on 2012-08-27 11:08:54 EDT ---

CHANGE: http://review.gluster.org/3849 (storage/posix: implement native linux AIO support) merged in release-3.3 by Vijay Bellur (vbellur@redhat.com)
Comment 2 Amar Tumballi 2012-09-17 15:03:45 EDT
Patches are merged upstream; this should be available for 2.1 testing with the first brew build.
Comment 4 Amar Tumballi 2012-11-27 05:44:59 EST
Set the below option on any of the volumes:

# gluster volume set <VOLNAME> storage.linux-aio enable

Then make sure that filesystem operations (like fs sanity) run fine.

We can confirm the option is actually set in the process by checking 'gluster volume info', and also by grepping the brick process log file for the 'linux-aio' string to see that it is set in the volume file dump.
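The steps above can be sketched as a short shell session (the volume name "myvol" and the brick log path are placeholders; adjust them for your deployment):

```shell
# Enable native Linux AIO on the volume (placeholder name "myvol").
gluster volume set myvol storage.linux-aio enable

# 1) Confirm the option appears under "Options Reconfigured".
gluster volume info myvol | grep linux-aio

# 2) Confirm the brick process picked it up: the option is echoed into
#    the volume-file dump at the top of the brick log.
grep "linux-aio" /var/log/glusterfs/bricks/*.log
```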
Comment 5 spandura 2012-11-28 02:28:16 EST
fs_sanity failed when run with "linux-aio" set to "enable":

glusterfs version:
==================
[11/28/12 - 12:50:31 root@rhs-gp-srv12 system_light]# glusterfs --version
glusterfs 3.3.0rhsvirt1 built on Nov  7 2012 10:11:13

[11/28/12 - 12:50:37 root@rhs-gp-srv12 system_light]# rpm -qa | grep gluster
glusterfs-fuse-3.3.0rhsvirt1-8.el6.x86_64
glusterfs-3.3.0rhsvirt1-8.el6.x86_64


Refer to Bugs : 880165, 880889, 880901, 880911

Volume info:
====================
[11/28/12 - 12:47:24 root@rhs-client16 ~]# gluster v info replicate
 
Volume Name: replicate
Type: Replicate
Volume ID: d93217ad-aa06-49df-80bf-b0539e5eba72
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-client1:/disk1
Brick2: rhs-client16:/disk1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.eager-lock: enable
storage.linux-aio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

Storage_Node1 brick log file:
==================================
[11/28/12 - 12:56:20 root@rhs-client1 bricks]# pwd
/var/log/glusterfs/bricks

[11/28/12 - 12:56:21 root@rhs-client1 bricks]# grep "linux-aio" disk1.*
disk1.log-20121125:  5:     option linux-aio enable


Storage_Node2 brick log file:
===================================
[11/28/12 - 12:56:20 root@rhs-client16 bricks]# pwd
/var/log/glusterfs/bricks

[11/28/12 - 12:56:21 root@rhs-client16 bricks]# grep "linux-aio" disk1.*
disk1.log-20121125:  5:     option linux-aio enable
Comment 6 Vijay Bellur 2012-12-11 01:44:47 EST
*** Bug 880165 has been marked as a duplicate of this bug. ***
Comment 7 Vijay Bellur 2012-12-11 01:45:26 EST
*** Bug 880901 has been marked as a duplicate of this bug. ***
Comment 8 Vijay Bellur 2012-12-11 01:45:29 EST
*** Bug 880911 has been marked as a duplicate of this bug. ***
Comment 9 Vijay Bellur 2012-12-11 01:45:35 EST
*** Bug 880889 has been marked as a duplicate of this bug. ***
Comment 10 Amar Tumballi 2013-01-23 06:26:15 EST
Tried to reproduce all four bugs marked as duplicates of this bug, and all of them work fine now on the downstream repo. Will mark it ON_QA again when the new build is available.
Comment 11 Vijay Bellur 2013-01-24 04:36:20 EST
Lowering priority as the issue is no longer reproducible.
Comment 12 Amar Tumballi 2013-02-12 23:33:29 EST
linux-aio was removed from the virt profile and also from 'volume set' help. Hence moving this to MODIFIED.
Comment 15 spandura 2014-01-07 00:32:55 EST
Ran fs_sanity on the build "glusterfs 3.4.0.54rhs built on Jan  5 2014 06:26:17" with "linux-aio" enabled. Bonnie fails with "drastic I/O error (re-write read): Transport endpoint is not connected".

The tests pass when "linux-aio" is disabled on the volume.

Output from the fs_sanity execution with "linux-aio" enabled:
====================================================================
executing bonnie
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...Bonnie: drastic I/O error (re-write read): Transport endpoint is not connected
Can't read a full block, only got 8292 bytes.
Can't read a full block, only got 8209 bytes.
Can't read a full block, only got 8210 bytes.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't read a full block, only got 8209 bytes.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't read a full block, only got 8210 bytes.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't write block.: Transport endpoint is not connected
Can't sync file.

[root@dj:~] Jan-06-2014 22:10:36 $gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: 6136c5d9-9b40-4503-aa4a-11ff3da44e88
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dj:/rhs/brick1/b1
Brick2: fan:/rhs/brick1/b1-rep1
Brick3: mia:/rhs/brick1/b1-rep2
Brick4: dj:/rhs/brick1/b2
Brick5: fan:/rhs/brick1/b2-rep1
Brick6: mia:/rhs/brick1/b2-rep2
Options Reconfigured:
storage.linux-aio: enable


The bug is not fixed. Moving the bug to "ASSIGNED" state.
Comment 16 Vivek Agarwal 2014-01-07 00:48:58 EST
Per discussion on 1/6, removing from the Corbett list.
Comment 18 Vivek Agarwal 2015-03-23 03:39:50 EDT
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html
