Bug 858439 - gluster-object: 400 Bad request syntax
Summary: gluster-object: 400 Bad request syntax
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-swift
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vivek Agarwal
QA Contact: pushpesh sharma
URL:
Whiteboard:
Depends On: 846657
Blocks:
 
Reported: 2012-09-18 22:22 UTC by Scott Haines
Modified: 2016-02-18 00:02 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 846657
Environment:
Last Closed: 2013-09-23 22:32:24 UTC
Embargoed:



Comment 2 Junaid 2013-01-22 06:15:35 UTC
This should be fixed in gluster-swift-1.7.4 rpms. Reopen if seen again.

Comment 3 pushpesh sharma 2013-07-17 10:37:14 UTC
This BZ has been verified using the catalyst workload on RHS 2.1. It appears to be fixed, as the new PDQ performance-related changes have been merged into RHS 2.1.
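
As a quick sanity check outside the full workload, the basic object verbs can be exercised against the gluster-swift endpoint with curl. A minimal sketch, assuming the default Swift tempauth port and credentials (the auth URL, user, password, and container name below are placeholders, not values from this report):

# Fetch a token and storage URL from tempauth (assumed endpoint/credentials)
RESP=$(curl -si -H 'X-Storage-User: test:tester' \
            -H 'X-Storage-Pass: testing' \
            http://localhost:8080/auth/v1.0)
TOKEN=$(printf '%s' "$RESP" | awk '/X-Auth-Token:/ {print $2}' | tr -d '\r')
URL=$(printf '%s' "$RESP"   | awk '/X-Storage-Url:/ {print $2}' | tr -d '\r')

# One round of PUT/HEAD/GET/DELETE; a 400 on any verb would reproduce the bug
curl -s -o /dev/null -w 'PUT container  %{http_code}\n' -X PUT    -H "X-Auth-Token: $TOKEN" "$URL/c1"
curl -s -o /dev/null -w 'PUT object     %{http_code}\n' -X PUT    -H "X-Auth-Token: $TOKEN" "$URL/c1/obj1" -d 'hello'
curl -s -o /dev/null -w 'HEAD object    %{http_code}\n' -I        -H "X-Auth-Token: $TOKEN" "$URL/c1/obj1"
curl -s -o /dev/null -w 'GET object     %{http_code}\n'           -H "X-Auth-Token: $TOKEN" "$URL/c1/obj1"
curl -s -o /dev/null -w 'DELETE object  %{http_code}\n' -X DELETE -H "X-Auth-Token: $TOKEN" "$URL/c1/obj1"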

[root@dhcp207-9 ~]# rpm -qa|grep gluster
gluster-swift-object-1.8.0-6.3.el6rhs.noarch
vdsm-gluster-4.10.2-22.7.el6rhs.noarch
gluster-swift-plugin-1.8.0-2.el6rhs.noarch
glusterfs-geo-replication-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-1.8.0-6.3.el6rhs.noarch
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-proxy-1.8.0-6.3.el6rhs.noarch
gluster-swift-account-1.8.0-6.3.el6rhs.noarch
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6rhs.x86_64
gluster-swift-container-1.8.0-6.3.el6rhs.noarch


All performance-related tests (from a QE perspective) will be done using the catalyst workload (if required in the future, possibly ssbench). The workload consists of 15 runs of 10000 requests (PUT/GET/HEAD/DELETE) each, distributed among 10 threads; a rough shell equivalent is sketched after the volume info below. These comprehensive tests cover all file formats and varied sizes. The tests were executed on a machine with the following configuration:

RAM: 7500 MB
CPU: 1
Volume Info:

Each brick is created as a 10 GB logical volume (on localhost), and each volume has 4 such bricks; a sketch of that setup follows.
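
A minimal sketch of assembling one such volume, assuming a pre-existing volume group named vg_bricks (the VG name and filesystem are assumptions; the mount points and brick paths match the volume info below):

# Carve four 10G logical volumes out of the VG and mount them as bricks
for n in 1 2 3 4; do
    lvcreate -L 10G -n lv$n vg_bricks
    mkfs.xfs /dev/vg_bricks/lv$n
    mkdir -p /mnt/lv$n
    mount /dev/vg_bricks/lv$n /mnt/lv$n
    mkdir -p /mnt/lv$n/lv$n            # brick directory inside the mount
done

# Assemble the 4-brick distribute volume shown below and start it
gluster volume create test \
    localhost:/mnt/lv1/lv1 localhost:/mnt/lv2/lv2 \
    localhost:/mnt/lv3/lv3 localhost:/mnt/lv4/lv4
gluster volume start test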

[root@dhcp207-9 ~]# gluster volume info
 
Volume Name: test
Type: Distribute
Volume ID: 440fdac0-a3bd-4ab1-a70c-f4c390d97100
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv1/lv1
Brick2: localhost:/mnt/lv2/lv2
Brick3: localhost:/mnt/lv3/lv3
Brick4: localhost:/mnt/lv4/lv4
 
Volume Name: test2
Type: Distribute
Volume ID: 6d922203-6657-4ed3-897a-069ef6c396bf
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: localhost:/mnt/lv5/lv5
Brick2: localhost:/mnt/lv6/lv6
Brick3: localhost:/mnt/lv7/lv7
Brick4: localhost:/mnt/lv8/lv8
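
For reference, a rough shell equivalent of the request pattern described above (15 runs of 10000 requests across 10 threads), standing in for catalyst itself, which is not shown here; TOKEN and URL are obtained as in the curl example earlier in this comment, and the container name and payload are placeholders:

# Export so the subshells spawned by xargs can see them
export TOKEN URL
curl -s -o /dev/null -X PUT -H "X-Auth-Token: $TOKEN" "$URL/bench"

for run in $(seq 1 15); do
    # 10000 requests per run, 10 in flight at a time via xargs -P;
    # only PUT is shown here, catalyst mixes PUT/GET/HEAD/DELETE
    seq 1 10000 | xargs -P 10 -I{} sh -c \
        'curl -s -o /dev/null -w "%{http_code}\n" -X PUT \
              -H "X-Auth-Token: $TOKEN" "$URL/bench/obj{}" -d payload' \
        >> codes.txt
done

# Count non-2xx responses; any 400 here would indicate a regression
grep -cv '^2' codes.txt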


PS: Performance Engineering will be responsible for all large-scale tests, which will be done on the BAGL cluster.

Comment 5 Scott Haines 2013-09-23 22:32:24 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

