The bug can be replicated by running the postmark benchmark configured for metadata-heavy operations: large numbers of files of around 1 KB each.

Use a standard server-side vol-spec file with io-threads set to 8, and the following client-side volfile:

# file: /etc/glusterfs/glusterfs-client.vol
volume s1
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.24.2.128
  option remote-subvolume brick
end-volume

volume s2
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.24.2.129
  option remote-subvolume brick
end-volume

volume dstr
  type cluster/distribute
  subvolumes s1 s2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 2KB
  option flush-behind on
  subvolumes dstr
end-volume
----------------------------

Then run postmark with the following pmrc file:

set size 1024 1048
set number 25000
set location /gmnt/n142   ## location of the glusterfs mounted directory
set read 8192
set write 8192
set report verbose
run /root/pmr142
quit
---------------------------------

If this is run against 2 servers with 4 clients running postmark (or 1 server with 2 clients), you see a throughput of ~700 creates/sec. But if the file count is reduced to 12500, the throughput roughly doubles to ~1400 creates/sec. Performance should not depend this critically on the number of files being created.

We tried to analyze the cause and found it to be a server-side artifact. Watching "top" on the servers while postmark ran with 50000 files showed very high CPU utilization; with half the number of files, CPU utilization dropped to negligible, which is reflected in the throughput numbers.
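The server-side vol-spec referenced above is not included in this report. For reference, a minimal sketch of what such a server volfile might look like; only the io-threads count of 8, the ib-verbs transport, and the exported subvolume name "brick" (referenced by the client volfile) come from this report, while the export directory, volume names, and auth option are assumptions for illustration:

# file: /etc/glusterfs/glusterfs-server.vol (sketch, not the actual file used)
volume posix
  type storage/posix
  option directory /export/glusterfs   # assumed export path
end-volume

volume brick
  type performance/io-threads
  option thread-count 8                # io-threads 8, per the report
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs       # matches the client transport
  option auth.addr.brick.allow *       # assumed open auth for the test setup
  subvolumes brick
end-volume
----------------------------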
Whoever looks at this, please investigate whether this is a resurfacing of Bz 16.
More detailed performance tests will be needed before we can pin down the performance of each xlator. Please open a new bug when we start the performance testing/metrics tasks.