Bug 858438 - gluster-object: messages with "async update later"
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-swift
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Vivek Agarwal
QA Contact: pushpesh sharma
Keywords: ZStream
Depends On: 846641
Blocks:
 
Reported: 2012-09-18 18:22 EDT by Scott Haines
Modified: 2016-02-17 19:02 EST
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 846641
Environment:
Last Closed: 2013-10-07 07:34:03 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 2 Junaid 2013-01-22 02:01:28 EST
This seems to be a performance issue. Can you verify the same with the gluster-swift-1.7.4 RPMs?
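
For context: in OpenStack Swift, an "async update later" message means the object server could not complete the container update inline and queued it as an async_pending entry for the object-updater to retry. A minimal way to gauge that backlog on a node, assuming the stock /srv/node devices path (a gluster-swift deployment may use a different devices directory; check the [DEFAULT] devices setting in /etc/swift/object-server.conf):

  # Count deferred container updates queued on this node.
  # The async_pending* glob also covers per-policy directories on newer Swift.
  find /srv/node/*/async_pending* -type f 2>/dev/null | wc -l

A steadily growing count means container updates are consistently falling behind, which is consistent with a performance issue rather than a functional one.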
Comment 3 pushpesh sharma 2013-07-17 06:36:08 EDT
This BZ has been verified using the catalyst workload on RHS 2.1. It appears to be fixed, as the new PDQ performance-related changes have been merged into RHS 2.1.

[root@dhcp207-9 ~]# rpm -qa|grep gluster
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
vdsm-gluster-4.10.2-22.7.el6rhs.noarch
gluster-swift-plugin-1.8.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-1.8.0-6.3.el6rhs.noarch
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-container-1.8.0-6.3.el6rhs.noarch


All performance-related tests (from the QE perspective) will be done using the catalyst workload (and, if required in the future, possibly ssbench). The workload consists of 15 runs of 10000 requests (PUT/GET/HEAD/DELETE) each, distributed among 10 threads; a sketch of this request pattern is shown after the volume info below. These comprehensive tests cover all file formats and varied sizes. The tests were executed on a machine with the following configuration:

RAM: 7500 MB
CPUs: 1
Volume info:

Each brick is a 10 GB logical volume (created on localhost), and each volume has 4 such bricks. A sketch of how such a layout can be created is shown below, followed by the actual volume info.
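
A minimal sketch of carving out the bricks and creating one such volume, assuming a pre-existing volume group named vg_bricks (the VG name is an assumption; the 10G size, mount points, and 4-brick distribute layout come from the volume info below):

  # Create one 10G logical volume per brick, format it, and mount it.
  # mkfs.xfs -i size=512 follows the usual GlusterFS brick recommendation.
  for i in 1 2 3 4; do
      lvcreate -L 10G -n lv$i vg_bricks
      mkfs.xfs -i size=512 /dev/vg_bricks/lv$i
      mkdir -p /mnt/lv$i
      mount /dev/vg_bricks/lv$i /mnt/lv$i
  done
  # A 4-brick distribute volume matching the "test" volume shown below.
  # (force may be required when using localhost as the brick host.)
  gluster volume create test \
      localhost:/mnt/lv1/lv1 localhost:/mnt/lv2/lv2 \
      localhost:/mnt/lv3/lv3 localhost:/mnt/lv4/lv4 force
  gluster volume start test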

[root@dhcp207-9 ~]# gluster volume info
 
Volume Name: test
Type: Distribute
Volume ID: 440fdac0-a3bd-4ab1-a70c-f4c390d97100
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv1/lv1
Brick2: localhost:/mnt/lv2/lv2
Brick3: localhost:/mnt/lv3/lv3
Brick4: localhost:/mnt/lv4/lv4
 
Volume Name: test2
Type: Distribute
Volume ID: 6d922203-6657-4ed3-897a-069ef6c396bf
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv5/lv5
Brick2: localhost:/mnt/lv6/lv6
Brick3: localhost:/mnt/lv7/lv7
Brick4: localhost:/mnt/lv8/lv8


PS: Performance Engineering will be responsible for all large-scale tests, which will be done on the BAGL cluster.
