Description of problem:
In any hyperconverged setup, the client and brick process may run on the same node; in such cases it is always better to use a Unix domain socket (UDS). Currently we use a TCP loopback connection for communication between the gluster client and the brick process.

Version-Release number of selected component (if applicable): mainline

Expected Result: Better performance
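For illustration only, the difference between the two connection styles can be sketched as below. This is a minimal standalone example, not GlusterFS transport code; the port and socket path are hypothetical placeholders, not the values used by the actual patch.

/* Minimal sketch (not GlusterFS RPC transport code) contrasting a TCP
 * loopback connection with an AF_UNIX (Unix domain socket) connection. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Current behaviour: connect to a brick listening on 127.0.0.1. */
static int connect_tcp_loopback(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Proposed behaviour: connect over a Unix domain socket instead, so
 * same-node I/O no longer traverses the TCP/IP loopback path. */
static int connect_unix(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* Example port and socket path only; not the patch's real values. */
    int tcp_fd = connect_tcp_loopback(24007);
    int uds_fd = connect_unix("/var/run/gluster/brick.sock");

    printf("tcp loopback fd: %d, unix socket fd: %d\n", tcp_fd, uds_fd);

    if (tcp_fd >= 0)
        close(tcp_fd);
    if (uds_fd >= 0)
        close(uds_fd);
    return 0;
}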
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#1) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 ([WIP]transport: introducing unix domain socket for I/O) posted (#2) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#3) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#4) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#5) for review on master by Vijay Bellur (vbellur)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#6) for review on master by Jeff Darcy (jdarcy)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#7) for review on master by Prasanna Kumar Kalever (prasanna.kalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#8) for review on master by Prasanna Kumar Kalever (prasanna.kalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#9) for review on master by Prasanna Kumar Kalever (prasanna.kalever)
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#10) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#11) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#12) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#13) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#14) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#15) for review on master by Prasanna Kumar Kalever (pkalever)
REVIEW: http://review.gluster.org/12709 (transport: introducing unix domain socket for I/O) posted (#16) for review on master by Prasanna Kumar Kalever (pkalever)
We will revisit this after a couple of releases; hence DEFERRED.
Created attachment 1698078 [details]
fio stats with loopback and uds connection mode on latest master

Considerations:
randwrite and randread: no_jobs: 16 (twice the number of CPUs), loops: 2
seqwrite and seqread: no_jobs: 1, loops: 2

Observations:
We see some improvement in both reads and writes, but only by a few MB, which is not a conclusive amount.

About network load, using nload:
In loopback connection mode we see an equal amount of traffic (in MBps) flowing across the loopback address. When we switch to UDS, the traffic across loopback is almost nil (some noise is found, which is expected).

List of commands:

randwrite:
# fio --name=randwrite --ioengine=sync --rw=randwrite --bs=4k --direct=0 --size=512M --nr_files=1 --numjobs=16 --fsync_on_close=1 --end_fsync=1 --fallocate=none --sync=1 --randrepeat=0 --overwrite=0 --directory=/mnt/lustre --loops=2

seq_write:
# fio --name=seqwrite --ioengine=sync --rw=write --bs=1m --direct=0 --size=10G --nr_files=1 --numjobs=1 --fsync_on_close=1 --end_fsync=1 --fallocate=none --sync=1 --randrepeat=0 --overwrite=0 --directory=/mnt/lustre --loops=2

randread:
# fio --name=randread --ioengine=sync --rw=randread --nr_files=1 --bs=4k --direct=0 --size=512m --numjobs=16 --fsync_on_close=1 --sync=1 --end_fsync=1 --fallocate=none --randrepeat=0 --invalidate=1 --directory=/mnt/lustre --loops=2

seq_read:
# fio --name=seqread --ioengine=sync --rw=read --bs=1m --direct=0 --size=50G --numjobs=1 --fsync_on_close=1 --end_fsync=1 --fallocate=none --randrepeat=0 --invalidate=1 --directory=/mnt/lustre --loops=2
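For reference, one simple way to reproduce the network-load observation above (assuming nload is installed) is to watch the loopback interface while the fio jobs run:

# nload lo

In loopback connection mode the device shows throughput comparable to the fio numbers; in UDS mode it stays close to zero apart from background noise.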