I guess a quick fix to work around this bug is to generate the volume files through volgen as required, i.e., place write-behind above read-ahead in the graph.
Write-behind queues all fops in a request queue. Requests other than writes are sent only one at a time, i.e., the next request in the queue is not STACK_WOUND until a reply to the previous one is received. This means that although read-ahead sends enough read requests to fill its cache (it STACK_WINDs all the reads without waiting for replies), write-behind effectively flow-controls the read requests to the underlying translators. The result is a longer completion time for the read requests, which in turn hurts read-ahead performance (ideally, read-ahead should have the data ready in its cache by the time the application issues the next read).
PATCH: http://patches.gluster.com/patch/3027 in release-3.0 (changed the order of write-behind - read-ahead in volgen.)
PATCH: http://patches.gluster.com/patch/3037 in master (performance/write-behind: Resume all the consecutive non-write operations in the request queue in a single go.)
PATCH: http://patches.gluster.com/patch/3036 in release-2.0 (performance/write-behind: Resume all the consecutive non-write operations in the request queue in a single go.)
PATCH: http://patches.gluster.com/patch/3035 in release-3.0 (performance/write-behind: Resume all the consecutive non-write operations in the request queue in a single go.)
Tried to reproduce/verify this bug on a local setup and then on the US machines over TCP and ib-verbs, but was unable to reproduce/verify it.