Bug 821844 - Object storage doesn't utilize all the nodes in the Cluster
Summary: Object storage doesn't utilize all the nodes in the Cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: Documentation
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: Release Candidate
Target Release: RHGS 2.0.0
Assignee: Divya
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-05-15 14:56 UTC by Junaid
Modified: 2018-11-28 20:40 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-10 07:15:55 UTC
Embargoed:



Description Junaid 2012-05-15 14:56:24 UTC
Description of problem:
When Object Storage is deployed on two or more machines, only one machine is utilized, leaving the other nodes under-utilized or completely idle.

Version-Release number of selected component (if applicable):
3.3 beta

How reproducible:
Always

Comment 1 Junaid 2012-05-15 15:12:52 UTC
To utilize all the nodes in the cluster, we need a load balancer such as nginx, pound, etc. to distribute requests evenly across the storage nodes. In addition, we must configure the proxy servers on all the nodes to use a distributed memcached so that the authentication token is shared across all the storage nodes.
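For example, a minimal nginx sketch (assuming the Swift proxy servers listen on port 8080 on the IPs used in the example below; the port and IPs are illustrative, and pound or any other HTTP load balancer can be used the same way):

# Placed in a file included from the http context, e.g. /etc/nginx/conf.d/swift.conf
upstream swift_proxies {
    # One entry per proxy node; requests are distributed round-robin by default.
    server 192.168.1.20:8080;
    server 192.168.1.21:8080;
    server 192.168.1.22:8080;
}

server {
    listen 80;
    location / {
        # Forward every object storage request to one of the proxy nodes.
        proxy_pass http://swift_proxies;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}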

Edit the /etc/swift/proxy-server.conf file to add a "memcache_servers" line under the [filter:cache] section.
  syntax: memcache_servers = <ip1>:11211,<ip2>:11211,<ip3>:11211...

  11211 is the port on which the memcached server listens.

So the configuration file will look like

[app:proxy-server]
use = egg:swift#proxy
allow_account_management=true
account_autocreate=true
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin=admin .admin .reseller_admin
user_test_tester=testing .admin
user_test2_tester2=testing2 .admin
user_test_tester3=testing3
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211,192.168.1.22:11211

The same memcache_servers list, in the same order, should be used in the configuration file on every node.
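For reference, a sketch of the [pipeline:main] section that ties these filters together (assuming only the filters shown above are in use; the cache filter must come before tempauth so that authentication tokens are stored in memcached):

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server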

Comment 2 Ben England 2012-05-18 13:01:05 UTC
Is it possible to avoid having to enumerate the memcached servers in this file?  This is an example of O(N^2) management -- you have N configuration files, one on every host, and each one has to be updated whenever a memcached instance is added or removed on one of the N hosts.  There must be a better way, right?

At least put the memcached server list in a separate file so that it can be updated by just copying that file to all the servers and restarting OpenStack Swift, but ideally there would be a periodic check to see if the file contents had changed...
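One sketch along these lines, assuming the installed Swift release supports the optional /etc/swift/memcache.conf file, which the memcache middleware reads when memcache_servers is not set in proxy-server.conf:

# /etc/swift/memcache.conf -- push this one file to all nodes and restart the proxy servers
[memcache]
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211,192.168.1.22:11211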

Comment 3 Divya 2012-05-25 10:08:56 UTC
Added "caching memcached" information as Important at: http://documentation-stage.bne.redhat.com/docs/en-US/Red_Hat_Storage/2/html/User_Guide/ch13s04s04.html

Comment 4 Junaid 2012-05-26 10:50:49 UTC
(In reply to comment #2)
> Is it possible to avoid having to enumerate the memcached servers in this
> file?  This is an example of O(N^2) management -- you have N configuration
> files, one on every host, and each one has to be updated whenever a
> memcached instance is added or removed on one of the N hosts.  There must
> be a better way, right?
> 
> At least put the memcached server list in a separate file so that it can be
> updated by just copying that file to all the servers and restarting
> OpenStack Swift, but ideally there would be a periodic check to see if the
> file contents had changed...

There is another way to achieve the same result. Instead of modifying the proxy-server.conf file on all the nodes every time a new node is added, one server can be dedicated to memcached (it can be one of the server nodes itself), and only the IP of that dedicated memcached node is added to every proxy-server.conf. Then, when users add a new node, they do not have to modify proxy-server.conf on all the machines, only the proxy-server.conf of the newly added node.

The only problem I see with this is that if the dedicated machine is down, users will experience errors. To guard against that, two or three dedicated machines can be used to act as failover. The applications may need to re-authenticate if the server holding the authentication token is rebooted.
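As an illustrative sketch (assuming 192.168.1.20 and 192.168.1.21 are the dedicated memcached nodes), the [filter:cache] section would then be identical on every node:

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211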

Comment 6 Saurabh 2012-06-01 05:22:18 UTC
Configured the proxy-server files as described and checked the token returned: it was the same across nodes, and REST requests sent to different IPs with the same token succeeded.

Verified on version 3.3.0qa43
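A sketch of such a check with curl, assuming tempauth with the test:tester user from the example configuration and proxy servers listening on port 8080 on 192.168.1.20 and 192.168.1.21 (the port and IPs are illustrative):

# Get a token from the first proxy node.
curl -i -H "X-Auth-User: test:tester" -H "X-Auth-Key: testing" \
    http://192.168.1.20:8080/auth/v1.0

# Reuse the X-Auth-Token from the response against a different proxy node.
curl -i -H "X-Auth-Token: <token from the response above>" \
    http://192.168.1.21:8080/v1/AUTH_test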

