Bug 711431 - Load Balancer
Summary: Load Balancer
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Update Infrastructure for Cloud Providers
Classification: Red Hat
Component: Documentation
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Lana Brindley
QA Contact: wes hayutin
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-07 13:52 UTC by Jay Dobies
Modified: 2016-02-18 05:33 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-07-29 04:49:59 UTC
Target Upstream Version:



Description Jay Dobies 2011-06-07 13:52:57 UTC
This stuff is going to cross a few sections, so I'll provide all the info in this bug and let you decide where to best divide up the information. It also will mention a few changes to the existing docs.

= Previously (RHUI 1.2 and lower) =

The load balancer used to be a separate instance. It had a configuration RPM created for it from rhui-tools.

It was also less than ideal for a bunch of reasons. I won't get into them here since you don't really need to document a list of reasons our old versions stunk. But I will allude to certain things in the description of the new stuff that you may want to phrase in terms of "improvements".


= Server LB Concepts in 2.0 =

Instead of a separate instance, one of the CDS instances is used as a load balancer. In fact, all CDS instances will function in this capacity. This will come into play later when I get to how awesomely stable we are in terms of fail over, since if a client tries to contact a CDS load balancer and can't, it can just try the next one.

That means the user no longer needs to do the following steps that were necessary in 1.2 (your call how to handle mentioning this):
- No need to create a separate load balancer install
- No need to run install_lb.sh on that instance; in fact, that script no longer exists

The RHUA takes care of communicating CDS additions/removals made through RHUI Manager to all CDS members. In other words, there is no explicit load balancer management that needs to take place; it's all hooked into the normal CDS management and requires no extra steps.


= Client LB Concepts in 2.0 =

Now when creating a client configuration RPM, instead of indicating a load balancer, a "primary" CDS is selected (I'll go into more specifics later). That CDS is used by that client configuration RPM as the load balancer. If it cannot be accessed, the next CDS in the list will be used as the load balancer.
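For docs purposes, the ordering behavior can be sketched roughly like this. This is a minimal illustration, not actual RHUI code; the function name and signature are made up:

```python
def build_lb_order(cds_instances, primary):
    """Return the load balancer order baked into a client config RPM.

    The selected primary CDS goes first; the remaining CDS instances
    follow in their existing (registration) order and act as backups.
    """
    if primary not in cds_instances:
        raise ValueError("unknown CDS instance: %s" % primary)
    return [primary] + [cds for cds in cds_instances if cds != primary]
```

So with `cds-1` and `cds-2` registered and `cds-2` picked as primary, the client ends up with the order `cds-2, cds-1`, which matches the "Load Balancer Order" output shown later in this bug.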


= Flow =

Not sure how much of this you'll use, but here goes.

- Yum on the client looks to a local ordered listing of RHUI load balancers. That list is determined in RHUI Manager at config RPM creation time and delivered to the client in the config RPM.

- A RHUI yum plugin does a quick test to verify the load balancer is up. If it cannot be accessed, the next load balancer in the list is used, and so on until either a working load balancer is found or we run out of them. That's the "load balancer failover" functionality we can hype as a new feature in 2.0.

- Once a load balancer is found, the current list of CDS instances is returned to the yum plugin. The plugin resolves any differences between this returned list and its local listing of CDS members, updating the local list with any changes. This allows changes to the deployed CDS instances to take place without needing to explicitly update all of the client instances (*pauses for applause, that's really cool*).

- Yum then queries the load balancer for the mirror list to use for the actual retrieval. That mirror list round robins the order of CDS instances returned. This is the actual "load balancing" part, since actual content requests will be distributed out among all CDS instances.

- Yum uses this mirror list to do the actual content requests against the repo. This is basic yum mirror list functionality and thus will automatically fail over the request to the next entry in the mirror list if a CDS is unavailable, which means this also functions as our "repository fail over" feature as well.
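If it helps to visualize the first few steps, here is a rough sketch of the failover walk and the round-robin mirror list. Again, these names are invented for illustration and are not the actual yum plugin code:

```python
def pick_load_balancer(lb_order, is_reachable):
    """Walk the ordered load balancer list until a reachable CDS is found.

    lb_order comes from the client config RPM; is_reachable stands in for
    the plugin's quick up/down test against each candidate.
    """
    for cds in lb_order:
        if is_reachable(cds):
            return cds
    raise RuntimeError("no load balancer could be reached")


def round_robin_mirror_list(cds_list, offset):
    """Rotate the CDS list so successive requests start at different
    instances, spreading content requests across all of them."""
    i = offset % len(cds_list)
    return cds_list[i:] + cds_list[:i]
```

The key point for the docs is that both behaviors are ordering tricks on the same CDS list: failover walks it top to bottom, load balancing rotates it per request.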

Comment 1 Jay Dobies 2011-06-07 14:00:50 UTC
= CDS Repository Management =

Remember how there was that whole screen devoted to selecting which repositories are deployed to which CDS instances? Yeah... that's gone now.

We assume (actually, "enforce" is a better word here) all repositories are deployed to all CDS instances. This isn't technically a regression since this was all that was supported in 1.2.

It has to do with all of the load balancing and fail over awesomeness described above. We can't balance across a bunch of CDS instances without knowing that they all have the same repositories.

That said, it will be coming back in 2.1 (or beyond). Conceptually it will change such that you're not assigning repos to a CDS, but rather to a "CDS cluster" (which is just a group of CDS with similar attributes). That's going to involve additions to be able to manage those clusters, but I'm getting ahead of myself. I only mention it in the context that the work done about CDS-repo associations isn't wasted and will be coming back in some form.

So in terms of docs changes:

- Remove the section on this entirely (6.1. Manage Repositories Hosted on a CDS Instance)

- The CDS menu now doesn't have the option for it. Below is a screenshot of what it looks like now:

------------------------------------------------------------------------------
             -= Red Hat Update Infrastructure Management Tool =-


-= Content Delivery Server (CDS) Management =-

   l   list all CDS instances registered to the RHUI
   a   register (add) a new CDS instance
   d   unregister (delete) a CDS instance from the RHUI

                                                           Connected: atlantis
------------------------------------------------------------------------------

Comment 2 Jay Dobies 2011-06-07 14:15:28 UTC
= Client Configuration RPM =

As mentioned previously, the client config RPM creation has changed to not prompt for a load balancer hostname, but instead require the user to select one of the CDS instances to act as the primary. The rest of the CDS instances will be used as backup.

This is meant to be a balance between users having zero control over which CDS acts as the load balancer and a complicated UI where they order the entire list themselves.

Below is a screenshot of what it looks like now. Ignore the values I used for all of the questions up to the CDS point; it's just dummy data that would probably confuse users if included in the example:


------------------------------------------------------------------------------
rhui (client) => c

Local directory in which the client configuration files generated by this tool
should be stored (if this directory does not exist, it will be created):
/tmp/example

Name of the RPM:
example-client

Version of the configuration RPM [2.0]:


Full path to the entitlement certificate authorizing the client to access
specific channels:
/home/jdob/vault/code/data/rhui-installer/rhui-cds-1.crt

Full path to the private key for the above entitlement certificate:
/home/jdob/vault/code/data/rhui-installer/server.key

Full path to the CA certificate used to sign the CDS SSL certificate:
/home/jdob/vault/code/data/rhui-installer/ssl-ca.crt

Select the CDS instance that should be the primary load balancer for the
client. All other CDS instances will be listed as back up load balancers
in the client's mirror list:

  1  - cds-1.example.com
  2  - cds-2.example.com
Enter value (1-2) or 'b' to abort: 2

Load Balancer Order:
  cds-2.example.com
  cds-1.example.com

Successfully created client configuration RPM.
RPMs can be found at /tmp/example

------------------------------------------------------------------------------

Comment 3 Lana Brindley 2011-06-23 01:07:00 UTC
(In reply to comment #1)
> = CDS Repository Management =
> 
> Remember how there was that whole screen devoted to selecting which
> repositories are deployed to which CDS instances? Yeah... that's gone now.
> 
> We assume (actually, "enforce" is a better word here) all repositories are
> deployed to all CDS instances. This isn't technically a regression since this
> was all that was supported in 1.2.
> 
> It has to do with all of the load balancing and fail over awesomeness described
> above. We can't balance across a bunch of CDS instances without knowing that
> they all have the same repositories.
> 
> That said, it will be coming back in 2.1 (or beyond). Conceptually it will
> change such that you're not assigning repos to a CDS, but rather to a "CDS
> cluster" (which is just a group of CDS with similar attributes). That's going
> to involve additions to be able to manage those clusters, but I'm getting ahead
> of myself. I only mention it in the context that the work done about CDS-repo
> associations isn't wasted and will be coming back in some form.
> 
> So in terms of docs changes:
> 
> - Remove the section on this entirely (6.1. Manage Repositories Hosted on a CDS
> Instance)
> 

Done.

> - The CDS menu now doesn't have the option for it. Below is a screenshot of
> what it looks like now:
> 
> ------------------------------------------------------------------------------
>              -= Red Hat Update Infrastructure Management Tool =-
> 
> 
> -= Content Delivery Server (CDS) Management =-
> 
>    l   list all CDS instances registered to the RHUI
>    a   register (add) a new CDS instance
>    d   unregister (delete) a CDS instance from the RHUI
> 
>                                                            Connected: atlantis
> ------------------------------------------------------------------------------

Updated.

LKB

Comment 4 Lana Brindley 2011-06-23 01:15:48 UTC
(In reply to comment #2)
> = Client Configuration RPM =
> 
> As mentioned previously, the client config RPM creation has changed to not
> prompt for a load balancer hostname, but instead require the user to select one
> of the CDS instances to act as the primary. The rest of the CDS instances will
> be used as backup.
> 
> This is meant to be a balance between them having zero control of which is
> acting as the load balancer and having to do a complicated UI where they order
> the list completely.
> 
> Below is a screen shot of what it looks like now. Ignore the values I used for
> all of the questions up to the CDS point; it's just dummy data that will
> probably confuse users if that's included in the example:
> 
> 
> ------------------------------------------------------------------------------
> rhui (client) => c
> 
> Local directory in which the client configuration files generated by this tool
> should be stored (if this directory does not exist, it will be created):
> /tmp/example
> 
> Name of the RPM:
> example-client
> 
> Version of the configuration RPM [2.0]:
> 
> 
> Full path to the entitlement certificate authorizing the client to access
> specific channels:
> /home/jdob/vault/code/data/rhui-installer/rhui-cds-1.crt
> 
> Full path to the private key for the above entitlement certificate:
> /home/jdob/vault/code/data/rhui-installer/server.key
> 
> Full path to the CA certificate used to sign the CDS SSL certificate:
> /home/jdob/vault/code/data/rhui-installer/ssl-ca.crt
> 
> Select the CDS instance that should be the primary load balancer for the
> client. All other CDS instances will be listed as back up load balancers
> in the client's mirror list:
> 
>   1  - cds-1.example.com
>   2  - cds-2.example.com
> Enter value (1-2) or 'b' to abort: 2
> 
> Load Balancer Order:
>   cds-2.example.com
>   cds-1.example.com
> 
> Successfully created client configuration RPM.
> RPMs can be found at /tmp/example
> 
> ------------------------------------------------------------------------------

<step>
	 <para>
		 All CDS instances are able to function as load balancers. You will be required to nominate one CDS as a primary load balancer, however if that CDS becomes unavailable, or is unable to function as a load balancer, load balancing tasks will fall to the other available CDS instances. Select a CDS instance to be the primary load balancer for the client:
	</para>
			 
<screen>
Select the CDS instance that should be the primary load balancer for the
client. All other CDS instances will be listed as back up load balancers
in the client's mirror list:

  1  - cds-1.example.com
  2  - cds-2.example.com
Enter value (1-2) or 'b' to abort: 2
</screen>
	<para>
		A list of the CDS instances to be used for load balancing will be displayed, in priority order:
	</para>
<screen>
Load Balancer Order:
  cds-2.example.com
  cds-1.example.com
  </screen>
</step>

Revision 1-14.

LKB

Comment 6 Lana Brindley 2011-07-29 04:49:59 UTC
This book is now available at http://docs.redhat.com/docs/en-US/Red_Hat_Update_Infrastructure/2.0/html/Installation_Guide/index.html

Please raise a new bug for any further changes.

LKB

