Bug 1368037 - [RFE] Support multiple pools when deploying OpenStack using Director (RHN Registration)
Keywords:
Status: CLOSED DUPLICATE of bug 1430545
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Jiri Stransky
QA Contact: Arik Chernetsky
URL:
Whiteboard:
Duplicates: 1322915
Depends On:
Blocks:
 
Reported: 2016-08-18 07:49 UTC by Anil Dhingra
Modified: 2019-12-16 06:24 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-13 06:14:27 UTC
Target Upstream Version:



Description Anil Dhingra 2016-08-18 07:49:06 UTC
Description of problem:
The customer purchased the OpenStack Platform and RHCI subscriptions in separate orders, so the subscriptions are not in the same pool. When deploying OpenStack via Director, only one pool can be specified in /home/stack/templates/rhel-registration/environment-rhel-registration.yaml. Because of this, we can't deploy enough hypervisors to match the customer's subscriptions.
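
For reference, the stock environment file accepts only a single pool parameter, along these lines (a sketch; the pool value is a placeholder and rhel_reg_method is shown only for context):

rhel_reg_method: "portal"
rhel_reg_pool_id: "xxx-xxx"

There is no second pool parameter, so only one pool can be attached at registration time.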

Version-Release number of selected component (if applicable):


How reproducible:

Always
Steps to Reproduce:
1.
2.
3.

Actual results:

The customer's subscriptions cannot be fully utilized via OSP Director.
Expected results:
Support multiple pools when deploying OpenStack using Director.

Additional info:

Comment 3 Anil Dhingra 2016-08-18 08:51:30 UTC
The customer has to scale out the environment but can't because of this issue, and the onsite SA has a deadline to finish and hand over this setup.

I am testing the following in a test environment to add another pool ID:

1. Modify environment-rhel-registration.yaml:

rhel_reg_pool_id: "xxx-xxx"
rhel_reg_pool_id1: "yyy-yyy"

2. Modify rhel-registration.yaml to add the new parameters:

  rhel_reg_pool_id:
    type: string
  rhel_reg_pool_id1:
    type: string


# under the software config's inputs:
- name: REG_POOL_ID
- name: REG_POOL_ID1

# under the deployment's input_values:
REG_POOL_ID: {get_param: rhel_reg_pool_id}
REG_POOL_ID1: {get_param: rhel_reg_pool_id1}
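
For context, this is roughly where those fragments sit in rhel-registration.yaml (a sketch; resource names and surrounding structure are assumed from the stock template, with unrelated parts omitted):

resources:
  RHELRegistration:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        # ... existing inputs ...
        - name: REG_POOL_ID
        - name: REG_POOL_ID1
      config: {get_file: scripts/rhel-registration}

  RHELRegistrationDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: {get_param: server}
      config: {get_resource: RHELRegistration}
      input_values:
        # ... existing values ...
        REG_POOL_ID: {get_param: rhel_reg_pool_id}
        REG_POOL_ID1: {get_param: rhel_reg_pool_id1}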


3. Modify scripts/rhel-registration:

    if [ -n "${REG_POOL_ID:-}" ]; then
        attach_opts="$attach_opts --pool=$REG_POOL_ID"
    fi
    if [ -n "${REG_POOL_ID1:-}" ]; then  # guard separately so an empty value doesn't add a bare --pool=
        attach_opts="$attach_opts --pool=$REG_POOL_ID1"
    fi

I'm not sure if this will work.
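
If it does, with both variables set the script should end up running something like this (placeholder pool IDs):

subscription-manager attach --pool=xxx-xxx --pool=yyy-yyy

As far as I know, subscription-manager accepts --pool more than once, so the attach itself should be fine.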

Comment 4 Mike Burns 2016-08-18 11:34:03 UTC
The above might work, but it's pretty risky since you're modifying a bunch of files that we ship, and they'll be overwritten when you update.

I'm not sure if this is the best way, but here's another option.  Instead of using the built-in registration code, you can write a custom template to do the explicit registration you need.  A guide to this is here:

https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/614-customizing-overcloud-pre-configuration

The example simply adds a nameserver to /etc/resolv.conf, but it shows how to run an arbitrary command.  You could run the subscription-manager commands there instead.

You could also look at the post-config example for how to run a static script.
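
To make that concrete, here's a minimal sketch of such a pre-configuration template (assuming the OS::TripleO::NodeExtraConfig hook; the guide may use a role-specific hook such as OS::TripleO::ControllerExtraConfigPre instead. Credentials and pool IDs are placeholders.):

heat_template_version: 2015-04-30

description: Register a node and attach two subscription pools (sketch only).

parameters:
  server:
    type: string

resources:
  CustomExtraConfigPre:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/sh
        subscription-manager register --username=<user> --password=<pass>
        subscription-manager attach --pool=<pool1> --pool=<pool2>

  CustomExtraDeploymentPre:
    type: OS::Heat::SoftwareDeployment
    properties:
      server: {get_param: server}
      config: {get_resource: CustomExtraConfigPre}
      actions: ['CREATE']

outputs:
  deploy_stdout:
    value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}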

Comment 5 Anil Dhingra 2016-08-18 12:04:11 UTC
Thanks Mike
- Will this pre-config script also run on existing nodes and try to attach to the new pool?
- Do we need to use the new pool ID, and will this script run only on new scale-out nodes?

Comment 6 Mike Burns 2016-08-18 12:13:38 UTC
(In reply to Anil Dhingra from comment #5)
> Thanks Mike
> - Will this pre-config script also run on existing nodes and try to attach
> to the new pool?

I'm not an expert, but I think pre-config runs when each node is deployed and not again.  It would not run on existing nodes.

> - Do we need to use the new pool ID, and will this script run only on new
> scale-out nodes?

I'm not following this question.  You have 2 pool IDs today.  I'm suggesting that you have a template that does something like:

...
...
  template: |
    #!/bin/sh
    subscription-manager register ...
    subscription-manager attach --pool=<pool1> --pool=<pool2>
...
...
...
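
For reference, wiring a template like that in (per the guide in comment 4) would look something like this, with hypothetical paths:

resource_registry:
  OS::TripleO::NodeExtraConfig: /home/stack/templates/custom_registration.yaml

and then passing the environment file to the deploy:

openstack overcloud deploy --templates -e /home/stack/templates/custom_registration_env.yaml

Since the hook runs once per node at deploy time, it would cover new scale-out nodes but not nodes that are already deployed.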

Comment 7 Jaromir Coufal 2016-09-01 15:50:38 UTC
Anil, did the workaround work for your case?

Comment 8 Jon Thomas 2016-09-01 16:57:58 UTC
The customer had to postpone the workaround. Still waiting for feedback.

Comment 12 Jaromir Coufal 2017-01-19 20:27:56 UTC
*** Bug 1322915 has been marked as a duplicate of this bug. ***

