Description of problem:
-----------------------
A regression appears to have been introduced on plain distribute volumes for random write workloads over FUSE mounts.

3.1.3 : 518085 kB/sec
3.2   : 328057 kB/sec
Regression : -36%

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
3.8.4-15

How reproducible:
-----------------
Every time.

Actual results:
---------------
36% regression with io-threads on, on 3.2 bits.

Expected results:
-----------------
Regression threshold: +/-10%

Additional info:
----------------
Volume Name: testvol
Type: Distribute
Volume ID: 35b73a47-bdc7-48b2-81a1-9b66624ae57c
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
client.event-threads: 2
server.event-threads: 2
cluster.lookup-optimize: off
performance.client-io-threads: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: off
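For context, a reproduction of this setup would look roughly like the sketch below. The mount source and volume name are taken from the volume info above; the benchmark invocation is hypothetical, since the original tool and its parameters are not recorded in this report.

```shell
# Mount the distribute volume over FUSE (server and volume name from the report).
mount -t glusterfs gqas014.sbu.lab.eng.bos.redhat.com:/testvol /mnt/testvol

# Hypothetical random-write workload sketch; the actual benchmark tool and
# parameters used for the numbers above are not recorded in this report.
fio --name=randwrite --directory=/mnt/testvol --rw=randwrite \
    --bs=64k --size=4g --numjobs=4 --group_reporting
```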
3.1.3              : 518085 kB/sec
3.2 defaults       : 328057 kB/sec
3.2 io-threads off : 527510 kB/sec

Switching off io-threads recovers the lost performance.
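The io-threads comparison above can be toggled with the standard gluster CLI (volume name `testvol` taken from the volume info in this report):

```shell
# Disable the client-side io-threads translator on the test volume.
gluster volume set testvol performance.client-io-threads off

# Re-run the workload with io-threads off, then re-enable to compare.
gluster volume set testvol performance.client-io-threads on

# Confirm the currently effective value.
gluster volume get testvol performance.client-io-threads
```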
Hi Nithya,

There was no md-cache in 3.1.3. Also, io-threads were disabled for my tests. I'll attach the server profiles in a while.
Is this still an issue? If not, can we close this bug?
I don't think we will be working on fixing rhgs-3.2 any more. I'm taking the liberty of closing the BZ even though it's not a replicate-component issue. Please re-open if needed, or if perf issues are seen on the latest rhgs version.