+++ This bug was initially created as a clone of Bug #837495 +++

Support for native Linux AIO in storage/posix translator

--- Additional comment from vbellur on 2012-07-14 21:09:16 EDT ---

CHANGE: http://review.gluster.com/3627 (storage/posix: implement native linux AIO support) merged in master by Anand Avati (avati)

--- Additional comment from vbellur on 2012-08-27 11:08:54 EDT ---

CHANGE: http://review.gluster.org/3849 (storage/posix: implement native linux AIO support) merged in release-3.3 by Vijay Bellur (vbellur)
Patches are merged upstream. The change should be available for 2.1 testing with the first brew build.
Set the option below on any volume:

# gluster volume set <VOLNAME> storage.linux-aio enable

Then make sure that filesystem operations (e.g. fs sanity) run fine. To confirm the option is actually in effect in the brick process, check 'gluster volume info' and also grep the brick process log file for the string 'linux-aio' to verify that it is set in the volume file dump.
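The verification steps above can be sketched as a short shell sequence. The gluster commands require a running cluster, so they appear as comments; the log-grep check itself is demonstrated on a sample brick-log line (the file path and volume name are illustrative, not from this bug):

```shell
#!/bin/sh
# Sketch of the verification flow described above, under the assumption
# that a GlusterFS cluster is available for the first two steps.

# Step 1 (on a cluster): enable native Linux AIO on the volume.
#   gluster volume set <VOLNAME> storage.linux-aio enable

# Step 2 (on a cluster): confirm the option shows up under
# "Options Reconfigured":
#   gluster volume info <VOLNAME>

# Step 3: confirm the brick process picked the option up by grepping
# its log for the volfile dump. Simulated here with a sample log line
# matching the format seen in the brick logs in this bug:
printf '  5:     option linux-aio enable\n' > /tmp/sample-brick.log
grep -q "linux-aio" /tmp/sample-brick.log && echo "linux-aio set in volfile"
```

On a real node the grep in step 3 would target the brick log under /var/log/glusterfs/bricks/, as shown in the log excerpts later in this bug.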
The fs_sanity failed when run with "linux-aio" set to "enable".

glusterfs version:
==================
[11/28/12 - 12:50:31 root@rhs-gp-srv12 system_light]# glusterfs --version
glusterfs 3.3.0rhsvirt1 built on Nov 7 2012 10:11:13
[11/28/12 - 12:50:37 root@rhs-gp-srv12 system_light]# rpm -qa | grep gluster
glusterfs-fuse-3.3.0rhsvirt1-8.el6.x86_64
glusterfs-3.3.0rhsvirt1-8.el6.x86_64

Refer to bugs: 880165, 880889, 880901, 880911

Volume info:
====================
[11/28/12 - 12:47:24 root@rhs-client16 ~]# gluster v info replicate
Volume Name: replicate
Type: Replicate
Volume ID: d93217ad-aa06-49df-80bf-b0539e5eba72
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-client1:/disk1
Brick2: rhs-client16:/disk1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.eager-lock: enable
storage.linux-aio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

Storage_Node1 brick log file:
==================================
[11/28/12 - 12:56:20 root@rhs-client1 bricks]# pwd
/var/log/glusterfs/bricks
[11/28/12 - 12:56:21 root@rhs-client1 bricks]# grep "linux-aio" disk1.*
disk1.log-20121125:  5:     option linux-aio enable

Storage_Node2 brick log file:
===================================
[11/28/12 - 12:56:20 root@rhs-client16 bricks]# pwd
/var/log/glusterfs/bricks
[11/28/12 - 12:56:21 root@rhs-client16 bricks]# grep "linux-aio" disk1.*
disk1.log-20121125:  5:     option linux-aio enable
*** Bug 880165 has been marked as a duplicate of this bug. ***
*** Bug 880901 has been marked as a duplicate of this bug. ***
*** Bug 880911 has been marked as a duplicate of this bug. ***
*** Bug 880889 has been marked as a duplicate of this bug. ***
Tried to reproduce all four bugs marked as duplicates of this bug; all of them work fine now on the downstream repo. Will mark it ON_QA again once the new build is available.
Lowering priority as the issue is no longer reproducible.
linux-aio was removed from the virt profile and also from the 'volume set' help output. Hence moving this to MODIFIED.
Ran fs_sanity on the build "glusterfs 3.4.0.54rhs built on Jan 5 2014 06:26:17" with "linux-aio" enabled. Bonnie fails with "drastic I/O error (re-write read): Transport endpoint is not connected". The tests pass when "linux-aio" is disabled on the volume.

Output from the fs_sanity execution with "linux-aio" enabled:
====================================================================
executing bonnie
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...Bonnie: drastic I/O error (re-write read): Transport endpoint is not connected
Can't read a full block, only got 8292 bytes.
Can't read a full block, only got 8209 bytes.
Can't read a full block, only got 8210 bytes.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't read a full block, only got 8209 bytes.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't read a full block, only got 8210 bytes.
Can't write block.: Transport endpoint is not connected
Can't sync file.
Can't write block.: Transport endpoint is not connected
Can't sync file.

[root@dj:~] Jan-06-2014 22:10:36 $gluster v info
Volume Name: vol
Type: Distributed-Replicate
Volume ID: 6136c5d9-9b40-4503-aa4a-11ff3da44e88
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dj:/rhs/brick1/b1
Brick2: fan:/rhs/brick1/b1-rep1
Brick3: mia:/rhs/brick1/b1-rep2
Brick4: dj:/rhs/brick1/b2
Brick5: fan:/rhs/brick1/b2-rep1
Brick6: mia:/rhs/brick1/b2-rep2
Options Reconfigured:
storage.linux-aio: enable

The bug is not fixed. Moving the bug to ASSIGNED.
Per discussion on 1/6, removing from corbett list
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report against that version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html