Bug 1272049 - [RFE] Integrating Multiple radosgw servers with HA Proxy Servers
Status: CLOSED CURRENTRELEASE
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 1.3.0
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: rc
Target Release: 1.3.2
Assigned To: John Wilkins
QA Contact: ceph-qe-bugs
Keywords: Documentation, FutureFeature, ZStream
Depends On:
Blocks:
 
Reported: 2015-10-15 06:58 EDT by Vikhyat Umrao
Modified: 2016-03-01 03:22 EST
CC List: 10 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-01 03:22:44 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vikhyat Umrao 2015-10-15 06:58:47 EDT
Description of problem:

Document multiple radosgw instances behind a load balancer such as HAProxy.

We need to document a QA-tested procedure for integrating HAProxy with Radosgw servers, including, if possible, use cases that also integrate the Keystone service with the HAProxy servers:

1. Keystone service running on public network 
2. Keystone service running on internal network 


Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3

Our customers are asking for downstream documentation that is verified by our QA and that they can follow to integrate HAProxy with Radosgw servers.
Comment 9 John Wilkins 2016-01-22 16:03:32 EST
I have an initial draft of a simple load balancer. I can add SSL termination as necessary. To do it with Keystone, I'd have to install OpenStack as well, correct?
Comment 11 shilpa 2016-02-03 09:03:17 EST
(In reply to John Wilkins from comment #10)
> Sorry. Forgot to add the repo URL. 
> 
> https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-
> Red_Hat_Ceph_Storage_1.3-Object_Gateway_for_Red_Hat_Enterprise_Linux/blob/v1.
> 3/ha-proxy.adoc

hi John,

It might be a good idea to add a link to how to configure multiple radosgw instances within the same zone. It is part of the prerequisites in the above ha-proxy doc.
This configuration is not documented in "https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-Red_Hat_Ceph_Storage_1.3-Object_Gateway_for_Red_Hat_Enterprise_Linux/blob/v1.3/object-gateway-guide-for-red-hat-enterprise-linux.adoc" but is part of the federated gateway configuration.
Comment 12 John Wilkins 2016-02-03 11:51:19 EST
Just add two instances without doing any zone or region configuration. They are in the same region and zone by default.
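
For illustration, a minimal sketch of what two such instances might look like in ceph.conf. The instance names, host names, and port are placeholders for this example, not values from a tested configuration:

[client.rgw.gateway1]
  host = gateway1
  rgw frontends = "civetweb port=80"

[client.rgw.gateway2]
  host = gateway2
  rgw frontends = "civetweb port=80"

With no region or zone settings, both instances serve the default region and zone, so they can sit behind the same HAProxy backend.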
Comment 13 Harish NV Rao 2016-02-03 11:56:43 EST
Hi John,

From comment 9 it appears that there is some more work left. Is the doc complete? Should QE wait for some more updates to the doc?

Regards,
Harish
Comment 15 shilpa 2016-02-04 09:01:13 EST
Trying out the configuration. Access via HTTP on port 80 worked. The doc needs to be changed for the HTTPS config:

This did NOT work for me:
frontend rgw-https
  bind <insert vip ipv4>:443 ssl crt /etc/ssl/private/example.com.pem
  default_backend rgw


This worked:
frontend rgw-https
  bind *:443 ssl crt /etc/ssl/private/example.com.pem
  default_backend rgw
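
For reference, a minimal sketch of how the working pieces could fit together in haproxy.cfg, assuming a standard defaults section with "mode http". The backend server names and IP addresses below are placeholders, not the tested values:

frontend rgw-http
  bind *:80
  default_backend rgw

frontend rgw-https
  bind *:443 ssl crt /etc/ssl/private/example.com.pem
  default_backend rgw

backend rgw
  balance roundrobin
  server rgw1 192.168.0.171:80 check
  server rgw2 192.168.0.172:80 check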


Also, is the doc complete? From C#1, it seems like we still need Keystone integration documented.
Comment 16 John Wilkins 2016-02-09 13:38:57 EST
The instruction "bind <insert vip ipv4>:443 ssl crt /etc/ssl/private/example.com.pem" was intended to direct you to replace it with the virtual IP address previously set in the configuration. I've changed it per your comment so that people can cut and paste.

https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-Red_Hat_Ceph_Storage_1.3-Object_Gateway_for_Red_Hat_Enterprise_Linux/commit/440d01124e2a1380a5049f553bfc95eca885f63b

We do not have the bandwidth for Keystone in v1.3.2; however, we will set up OpenStack during the 2.0 documentation and testing cycle. As documented, it should work as advertised, as there would be no obvious difference in the HAProxy/keepalived configuration. Please verify and re-open it for RHCS 2.0 if you really need Keystone tested too. There are a number of Keystone features coming in 2.0 that warrant setting up OpenStack too.
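
For context, the virtual IP referred to above would typically be the one defined earlier in the keepalived configuration; a minimal keepalived.conf sketch follows, where the interface name, router ID, priority, and address are placeholders rather than the tested values:

vrrp_instance RGW_VIP {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.0.100
    }
}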
Comment 17 shilpa 2016-02-10 01:45:29 EST
(In reply to John Wilkins from comment #16)
> The instruction "bind <insert vip ipv4>:443 ssl crt
> /etc/ssl/private/example.com.pem" was intended to direct you to replace it
> with the virtual IP address previously set in the configuration. I've
> changed it per your comment so that people can cut and paste.
> 
> https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-
> Red_Hat_Ceph_Storage_1.3-Object_Gateway_for_Red_Hat_Enterprise_Linux/commit/
> 440d01124e2a1380a5049f553bfc95eca885f63b
> 
> We do not have the bandwidth for Keystone in v1.3.2; however, we will set up
> OpenStack during the 2.0 documentation and testing cycle. As documented, it
> should work as advertised, as there would be no obvious difference in the
> HAProxy/keepalived configuration. Please verify and re-open it for RHCS 2.0
> if you really need Keystone tested too. There are a number of Keystone
> features coming in 2.0 that warrant setting up OpenStack too.

What I meant to say is that specifying the virtual IP address in the bind instruction did not work for me. Thanks for making the change. Verifying the bug for now.
