This should be fixed in gluster-swift-1.7.4 rpms. Reopen if seen again.
This BZ has been verified using the catalyst workload on RHS 2.1. It appears to be fixed, as the new PDQ performance-related changes have been merged into RHS 2.1.

[root@dhcp207-9 ~]# rpm -qa | grep gluster
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
vdsm-gluster-4.10.2-22.7.el6rhs.noarch
gluster-swift-plugin-1.8.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-1.8.0-6.3.el6rhs.noarch
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-container-1.8.0-6.3.el6rhs.noarch

All performance-related tests (from the QE perspective) will be done using the catalyst workload (or ssbench, if required in the future). The workload comprises 15 runs of 10000 requests (PUT/GET/HEAD/DELETE) each, distributed among 10 threads. These comprehensive tests cover all file formats and varied sizes. The tests were executed on a machine with the following configuration:

RAM: 7500Gb
CPU: 1

Volume info: all bricks are created as logical volumes (on localhost) of 10G each, and each volume has 4 such bricks.

[root@dhcp207-9 ~]# gluster volume info

Volume Name: test
Type: Distribute
Volume ID: 440fdac0-a3bd-4ab1-a70c-f4c390d97100
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv1/lv1
Brick2: localhost:/mnt/lv2/lv2
Brick3: localhost:/mnt/lv3/lv3
Brick4: localhost:/mnt/lv4/lv4

Volume Name: test2
Type: Distribute
Volume ID: 6d922203-6657-4ed3-897a-069ef6c396bf
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv5/lv5
Brick2: localhost:/mnt/lv6/lv6
Brick3: localhost:/mnt/lv7/lv7
Brick4: localhost:/mnt/lv8/lv8

PS: Performance Engineering will be responsible for all large-scale tests, which will be done on the BAGL cluster.
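As a rough illustration of the workload shape described above (not the actual catalyst tool), one run of 10000 mixed requests spread across 10 worker threads could be sketched as follows. The `do_request` function is a hypothetical stub; a real harness would issue the Swift PUT/GET/HEAD/DELETE calls against the gluster-swift proxy there:

```python
import itertools
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

VERBS = ["PUT", "GET", "HEAD", "DELETE"]

def do_request(verb, i):
    # Stub: a real harness would perform the Swift API call for
    # object number `i` here (e.g. via python-swiftclient).
    return verb

def run_once(total=10000, threads=10):
    """One catalyst-style run: `total` requests, cycling through the
    four verbs, distributed among `threads` worker threads."""
    work = zip(itertools.cycle(VERBS), range(total))
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = pool.map(lambda args: do_request(*args), work)
        return Counter(results)

counts = run_once()
# With the verbs cycled evenly, 10000 requests yield 2500 per verb.
```

The full catalyst suite repeats such a run 15 times and varies object formats and sizes, which this sketch does not attempt to model.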
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html