Bug 1391847

Summary: [RFE] Fixed IP assignment to physical node during openstack deployment
Product: Red Hat OpenStack Reporter: Petr Barta <pbarta>
Component: rhosp-director    Assignee: Angus Thomas <athomas>
Status: CLOSED DUPLICATE QA Contact: Omri Hochman <ohochman>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 9.0 (Mitaka)    CC: aschultz, bfournie, chris.brown, dbecker, ipilcher, jcoufal, mburns, mcornea, morazi, rhel-osp-director-maint, sbaker, shardy, srevivo
Target Milestone: ---    Keywords: FutureFeature, Triaged
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-08-15 17:26:19 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Petr Barta 2016-11-04 08:55:45 UTC
Description of problem:

During OpenStack deployment it is possible to assign fixed IP addresses to a node class (compute, controller, storage, etc.), as described in http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html, but as far as I understand it is not possible to specify an exact mapping between a specific node in a class and an IP address. As a result, when deploying the cloud, there is no way to ensure that a specific node in a class will get a specific IP.

For instance, when creating a cloud with three controller nodes, it's possible to say that the IPs used for the controllers are 172.16.1.100, 172.16.1.101 and 172.16.1.102, but it's not possible to say that the physical machine with a specific UUID, as reported in ironic node-list, gets IP ...100, another one gets ...101 and the third gets ...102.

It would be good to be able to specify the relation between an ironic UUID and the IP that node receives during deployment.

Use case:
I want to have three controllers in three different places, and I want to know that when connecting to IP ...100 (identified as overcloud-controller-0 in nova list) I will always reach the machine in place1 after each deployment.

Version-Release number of selected component (if applicable):
NA

How reproducible:
NA

Steps to Reproduce:
1. For deployment of overcloud create list of predictable IPs for a class of nodes (for instance compute), as described in http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#predictable-ips
2. Deploy the overcloud


Actual results:
It is not possible to control which physical node is assigned which IP address from the fixed IP pool; the assignment can vary between deployments on the same environment.


Expected results:
A specific node (as identified by its UUID from ironic node-list) gets a pre-defined IP from the pool after deployment.


Additional info:

https://bugs.launchpad.net/openstack-requirements/+bug/1639172

Comment 1 Zane Bitter 2016-11-04 14:26:19 UTC
I don't think your premise is correct. The IP addresses are assigned in order, and you can control which physical servers are used for which controller nodes: http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#assign-per-node-capabilities
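
A minimal sketch of what that document describes (the capability name `node` and the `controller-0` tag are the conventions from that guide, shown here for illustration): each ironic node is first tagged with a unique capability, and a scheduler-hints environment file then pins each deployment index to that tag, so the in-order IP assignment lands on known hardware.

```yaml
# Hypothetical environment file sketch. Each ironic node is first tagged, e.g.:
#   ironic node-update <UUID> replace properties/capabilities=node:controller-0,boot_option:local
# The hint below then maps controller index 0/1/2 to the node tagged
# controller-0/1/2, making the physical-node-to-index mapping deterministic.
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
```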

In any event this is not a Heat bug, so I'm reassigning to Director.

Incidentally, openstack-requirements is the upstream project for managing which versions of libraries OpenStack projects can use, and is a bizarre place for TripleO bugs.

Comment 2 Petr Barta 2016-11-04 14:37:46 UTC
Hello Zane,
  Thanks for the update, and sorry for the confusion about the classification of the bug.

  It may well be that I simply misunderstood the situation and made an incorrect configuration in the reproducer; I will try it once more and report back.

Thanks,
Petr

Comment 3 Petr Barta 2016-11-11 13:03:16 UTC
Hello Zane,
  We have tested the configuration, and you are partially right: predictable address configuration combined with per-node capabilities and scheduler hints works. However, it only works on networks other than ctlplane. Is there some way to set this up for ctlplane that I was not able to find, or is it missing for that network?

  We used the following setup:

Prepare YAML template:

parameter_defaults:
    ControllerSchedulerHints:
        'capabilities:node': 'controller-%index%'
    ComputeSchedulerHints:
        'capabilities:node': 'compute-%index%'
    HostnameMap:
        controller-0: controller-0
        controller-1: controller-1
        controller-2: controller-2
        compute-0: compute-0
 
Edit environments/ips-from-pool-all.yaml
 
parameter_defaults:
  ControllerIPs:
    # Each controller will get an IP from the lists below, first controller, first IP
    external:
    - 172.16.23.113
    - 172.16.23.111
    - 172.16.23.112
    internal_api:
    - 172.16.20.14
    - 172.16.20.13
    - 172.16.20.15
    storage:
    - 172.16.21.14
    - 172.16.21.11
    - 172.16.21.13
    storage_mgmt:
    - 172.16.3.7
    - 172.16.3.5
    - 172.16.3.6
    tenant:
    - 172.16.22.13
    - 172.16.22.11
    - 172.16.22.12
  NovaComputeIPs:
    # Each compute will get an IP from the lists below, first compute, first IP
    internal_api:
    - 172.16.20.12
    storage:
    - 172.16.21.12
    tenant:
    - 172.16.22.10
 
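The per-index assignment these lists rely on (the first controller gets the first IP on each network, and so on) can be sketched as follows. This is an illustrative model of the documented behaviour, not TripleO code:

```python
# Illustrative model of predictable-IP assignment: the node at index N on
# each network simply receives the N-th address from that network's list.
controller_ips = {
    "external": ["172.16.23.113", "172.16.23.111", "172.16.23.112"],
    "internal_api": ["172.16.20.14", "172.16.20.13", "172.16.20.15"],
}

def ip_for(network: str, index: int) -> str:
    """Return the fixed IP for the controller at the given index on a network."""
    return controller_ips[network][index]

# controller-1 gets the second entry in each list:
print(ip_for("external", 1))      # 172.16.23.111
print(ip_for("internal_api", 1))  # 172.16.20.13
```

Note that the index here is the deployment index (controller-0, controller-1, ...), which is exactly why, without ctlplane support or scheduler hints, the physical node behind each index can differ between deployments.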

Thanks,
Petr

Comment 4 VIKRANT 2016-11-16 14:01:09 UTC
I opened an RFE [1] for predictable provisioning IPs some time back. Currently, AFAIK, it is not possible to have predictable IPs. Looking forward to hearing confirmation from Zane.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1344174

Comment 5 Ray Wang 2017-01-17 06:49:55 UTC
Hi

This feature is still not implemented in OSP10 yet, right?

Comment 6 Christopher Brown 2017-03-27 02:25:17 UTC
(In reply to Ray Wang from comment #5)
> Hi
> 
> This feature is still not implemented in OSP10 yet, right?

Not as far as I can see.

This would be useful to have as some storage subsystems (e.g. GPFS/Spectrum Scale) require DNS and fixed IP addresses for communication.

Comment 9 Bob Fournier 2018-08-15 17:26:19 UTC

*** This bug has been marked as a duplicate of bug 1337770 ***