Bug 858438

Summary: gluster-object: messages with "async update later"
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Scott Haines <shaines>
Component: gluster-swift
Assignee: Vivek Agarwal <vagarwal>
Status: CLOSED CURRENTRELEASE
QA Contact: pushpesh sharma <psharma>
Severity: unspecified
Docs Contact:
Priority: medium
Version: 2.0
CC: bbandari, gluster-bugs, lpabon, rhs-bugs, sankarshan, saujain, sdharane
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 2.1.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 846641
Environment:
Last Closed: 2013-10-07 11:34:03 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 846641
Bug Blocks:

Comment 2 Junaid 2013-01-22 07:01:28 UTC
This seems to be a performance issue. Can you verify the same with the gluster-swift-1.7.4 RPMs?

Comment 3 pushpesh sharma 2013-07-17 10:36:08 UTC
This BZ has been verified using the catalyst workload on RHS 2.1. It appears to be fixed, as the new PDQ performance-related changes have been merged into RHS 2.1.

[root@dhcp207-9 ~]# rpm -qa|grep gluster
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
vdsm-gluster-4.10.2-22.7.el6rhs.noarch
gluster-swift-plugin-1.8.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-1.8.0-6.3.el6rhs.noarch
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-container-1.8.0-6.3.el6rhs.noarch


All performance-related tests (from a QE perspective) will be done using the catalyst workload (and, if required in the future, possibly ssbench). The workload consists of 15 runs of 10000 requests (PUT/GET/HEAD/DELETE) each, distributed among 10 threads; these comprehensive tests cover all file formats and varied sizes. A minimal sketch of the request pattern appears after the volume details below. The tests were executed on a machine with the following configuration:

RAM: 7500Gb
CPU: 1
Volume info:

Each brick is a 10G logical volume (on localhost), and each volume has 4 such bricks; a sketch of how such a layout could be created is shown below, followed by the actual volume info.
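
A minimal sketch of how one such volume could have been laid out (the device name, volume group name, and filesystem options below are assumptions, not taken from this setup):

# Hypothetical recreation of one test volume: four 10G logical volumes,
# each mounted and used as a brick. /dev/sdb and vg_bricks are placeholders.
pvcreate /dev/sdb
vgcreate vg_bricks /dev/sdb
for n in 1 2 3 4; do
    lvcreate -L 10G -n lv$n vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/lv$n
    mkdir -p /mnt/lv$n
    mount /dev/vg_bricks/lv$n /mnt/lv$n
done
gluster volume create test \
    localhost:/mnt/lv1/lv1 localhost:/mnt/lv2/lv2 \
    localhost:/mnt/lv3/lv3 localhost:/mnt/lv4/lv4 force
gluster volume start test

The actual volume layout used for verification is shown below: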

[root@dhcp207-9 ~]# gluster volume info
 
Volume Name: test
Type: Distribute
Volume ID: 440fdac0-a3bd-4ab1-a70c-f4c390d97100
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv1/lv1
Brick2: localhost:/mnt/lv2/lv2
Brick3: localhost:/mnt/lv3/lv3
Brick4: localhost:/mnt/lv4/lv4
 
Volume Name: test2
Type: Distribute
Volume ID: 6d922203-6657-4ed3-897a-069ef6c396bf
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv5/lv5
Brick2: localhost:/mnt/lv6/lv6
Brick3: localhost:/mnt/lv7/lv7
Brick4: localhost:/mnt/lv8/lv8
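
For reference, a minimal sketch of the PUT/GET/HEAD/DELETE request pattern described above, assuming a Swift-compatible endpoint and a pre-obtained auth token (the endpoint URL, token, and payload are placeholders; the catalyst tool itself is not reproduced here):

#!/bin/bash
# Hypothetical sketch of the workload pattern: 10000 requests per run,
# distributed among 10 concurrent workers. Endpoint and token are placeholders.
ENDPOINT="http://localhost:8080/v1/AUTH_test/bench_container"
TOKEN="AUTH_tk_placeholder"
REQUESTS=10000
THREADS=10

worker() {
    local id=$1
    for ((i = 0; i < REQUESTS / THREADS; i++)); do
        obj="obj_${id}_${i}"
        # PUT, GET, HEAD, DELETE the same object in sequence
        curl -s -o /dev/null -X PUT    -H "X-Auth-Token: $TOKEN" \
             --data-binary @/etc/hosts "$ENDPOINT/$obj"
        curl -s -o /dev/null           -H "X-Auth-Token: $TOKEN" "$ENDPOINT/$obj"
        curl -s -o /dev/null -I        -H "X-Auth-Token: $TOKEN" "$ENDPOINT/$obj"
        curl -s -o /dev/null -X DELETE -H "X-Auth-Token: $TOKEN" "$ENDPOINT/$obj"
    done
}

for t in $(seq 1 $THREADS); do
    worker "$t" &
done
wait

Each worker issues its share of the requests in sequence; the real workload additionally varies file formats and object sizes across the 15 runs.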


PS: Performance Engineering will be responsible for all large-scale tests, which will be done on the BAGL cluster.