Bug 821844 - Object storage doesn't utilize all the nodes in the Cluster
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: Documentation
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: Release Candidate
Target Release: RHGS 2.0.0
Assigned To: Divya
QA Contact: Saurabh
Keywords: Reopened
Depends On:
Blocks: 817967
Reported: 2012-05-15 10:56 EDT by Junaid
Modified: 2016-01-19 01:10 EST
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-04-10 03:15:55 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Junaid 2012-05-15 10:56:24 EDT
Description of problem:
When Object Storage is deployed on two or more machines, only one machine handles requests, leaving the other nodes under-utilized or not utilized at all.

Version-Release number of selected component (if applicable):
3.3 beta

How reproducible:
Always
Comment 1 Junaid 2012-05-15 11:12:52 EDT
To utilize all the nodes in the cluster, we need a load balancer such as nginx or pound in front of the proxy servers to distribute requests evenly across the storage nodes. In addition, the proxy servers on all the nodes must be configured to use a shared (distributed) memcached so that the authentication token is available on all the storage nodes.

Edit the /etc/swift/proxy-server.conf file to add a "memcache_servers" line under the [filter:cache] section.
  syntax: memcache_servers = <ip1>:11211,<ip2>:11211,<ip3>:11211...

  11211 is the port on which the memcached server listens.

So the configuration file will look like this:

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:tempauth]
use = egg:swift#tempauth
# Format: user_<account>_<user> = <password> [<group> ...]
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
# List every memcached instance in the cluster so all proxy servers share tokens
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211,192.168.1.22:11211

The same memcache_servers list must be added to the proxy-server.conf file on every node.
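
For the load-balancer half mentioned above, a minimal nginx sketch could look like the following (the IPs, the proxy port 8080, and the upstream name are assumptions for illustration; pound or any other HTTP load balancer can be set up equivalently):

upstream swift_proxies {
    # One entry per storage node running a Swift proxy server (assumed port 8080)
    server 192.168.1.20:8080;
    server 192.168.1.21:8080;
    server 192.168.1.22:8080;
}

server {
    listen 80;
    location / {
        # Requests are distributed round-robin across the proxy servers
        proxy_pass http://swift_proxies;
        proxy_set_header Host $host;
    }
}

Clients would then point at the load balancer's address instead of at any single storage node.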
Comment 2 Ben England 2012-05-18 09:01:05 EDT
Is it possible to avoid having to enumerate the memcached servers in this file?  This is an example of O(N^2) management -- you have N configuration files, one on every host, and each one has to be updated whenever a memcached instance is added or removed on one of the N hosts.  There must be a better way, right?

At least put the memcached server list in a separate file so that it can be updated by just copying the file to all the servers and restarting Swift, but ideally there would be a periodic check to see if the file contents had changed...
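
(For what it's worth, the Swift memcache middleware can also read its server list from a separate /etc/swift/memcache.conf when memcache_servers is not set in proxy-server.conf -- whether that applies to the version shipped here would need checking, so treat this as a sketch:

[memcache]
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211,192.168.1.22:11211

That single file could then be pushed to all the servers and the proxy servers restarted, as suggested above.)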
Comment 3 Divya 2012-05-25 06:08:56 EDT
Added "caching memcached" information as Important at: http://documentation-stage.bne.redhat.com/docs/en-US/Red_Hat_Storage/2/html/User_Guide/ch13s04s04.html
Comment 4 Junaid 2012-05-26 06:50:49 EDT
(In reply to comment #2)
> Is it possible to avoid having to enumerate the memcached servers in this
> file?  This is an example of O(N^2) management -- you have N configuration
> files, one on every host, and each one has to be updated whenever a
> memcached instance is added or removed on one of the N hosts.  There must
> be a better way, right?
> 
> At least put the memcached server list in a separate file so that it can be
> updated by just copying the file to all the servers and restarting Swift,
> but ideally there would be a periodic check to see if the file contents had
> changed...

There is another way to achieve the same result. Instead of modifying the proxy-server.conf file on all the nodes whenever a new node is added, one server can be dedicated to memcached (it can be one of the storage nodes itself), and only the IP of that dedicated memcached node is listed in every proxy-server.conf. Then, when users add a new node, they only have to edit the proxy-server.conf of the newly added node rather than the proxy-server.conf on every machine.

The only problem I see with this is that if the dedicated machine goes down, users will experience errors. To mitigate this, two or three machines can be dedicated to memcached to act as a fallback. Applications may need to re-authenticate if the server holding the authentication token is rebooted.
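
As a sketch (the IPs are placeholders), the [filter:cache] section on every node would then list only the dedicated memcached host(s) instead of every storage node:

[filter:cache]
use = egg:swift#memcache
# Only the dedicated memcached node(s); this line stays unchanged when ordinary storage nodes are added
memcache_servers = 192.168.1.20:11211,192.168.1.21:11211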
Comment 6 Saurabh 2012-06-01 01:22:18 EDT
Configured the proxy-server files accordingly and checked the token returned: the token was the same on each node, and REST requests from different IPs were accepted with that same token.
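
A sketch of that kind of check, using the tempauth users from the configuration above (the proxy port 8080 and the node IPs are assumptions):

  # Obtain a token from the first proxy node
  curl -i -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' \
      http://192.168.1.20:8080/auth/v1.0

  # Reuse the returned X-Auth-Token against a different proxy node
  curl -i -H 'X-Auth-Token: <token from above>' \
      http://192.168.1.21:8080/v1/AUTH_test

With a shared memcached, the second request is accepted even though the token was issued by a different node.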

Verified on version 3.3.0qa43
