Bug 1110891 - [vmware] packstack doesn't support multiple vmware clusters and nova-computes correctly
Summary: [vmware] packstack doesn't support multiple vmware clusters and nova-computes correctly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-packstack
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z5
Target Release: 6.0 (Juno)
Assignee: Lukas Bezdicka
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks: 1055536
 
Reported: 2014-06-18 17:05 UTC by Jaroslav Henner
Modified: 2016-06-20 13:27 UTC
CC List: 8 users

Fixed In Version: openstack-packstack-2014.2-0.27.dev1474.gd85ee76.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-20 13:27:25 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 243115 0 None MERGED [vmware] Add support for multiple VMware vCenter clusters 2020-10-29 16:10:27 UTC
OpenStack gerrit 249806 0 None MERGED [vmware] Add support for multiple VMware vCenter clusters 2020-10-29 16:10:27 UTC
OpenStack gerrit 249817 0 None MERGED [vmware] Add support for multiple VMware vCenter clusters 2020-10-29 16:10:27 UTC
Red Hat Product Errata RHBA-2016:1260 0 normal SHIPPED_LIVE openstack-packstack bug fix advisory 2016-06-20 17:26:27 UTC

Description Jaroslav Henner 2014-06-18 17:05:50 UTC
Description of problem:
 * One nova-compute can talk to one vCenter and control N clusters there.
 * It is probably not desirable for the same cluster to be controlled by multiple nova-computes.
 * Packstack blindly copies the value

  CONFIG_VCENTER_CLUSTER_NAME=foo, bar

into nova.conf as

  cluster_name=foo, bar

There should be a way to specify which nova-compute controls which cluster. Supporting multiple vCenters is probably overkill.

When multiple clusters are assigned to one nova-compute, packstack should instead write multiple cluster_name lines into nova.conf:
  cluster_name=foo
  cluster_name=bar
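
For illustration only, here is a minimal sketch (a hypothetical helper, not the actual packstack/puppet code) of how the comma-separated answer-file value could be expanded into one cluster_name line per cluster:

  # Hypothetical sketch -- not the packstack implementation.
  def render_cluster_names(clusters_value):
      # Expand "foo, bar" into one cluster_name= line per cluster.
      clusters = [c.strip() for c in clusters_value.split(",") if c.strip()]
      return "\n".join("cluster_name=%s" % c for c in clusters)

  # render_cluster_names("foo, bar") returns:
  #   cluster_name=foo
  #   cluster_name=bar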

Version-Release number of selected component (if applicable):
openstack-packstack-2014.1.1-0.22.dev1117.el7ost.noarch

How reproducible:
always

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Stephen Gordon 2014-06-20 13:18:18 UTC
This is really an RFE for PackStack to support deployment against multiple ESXi clusters *but* there must be one nova-compute deployed per cluster.

We do not want to make use of the ability to have one nova-compute manage multiple clusters.

The way I see this working is that you have your install-hosts:

192.168.122.1,192.168.122.2,192.168.122.3

and your clusters:

cluster_name=foo
cluster_name=bar

The first install host is the controller (same as always), the next two are the compute hosts (same as always), but each of them points at one of the ESXi clusters.

If there is a mismatch between the number of hosts (excluding the controller) and the number of clusters, error out. If there is only one address then of course it's an all-in-one deployment.
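
A minimal sketch of that pairing logic (hypothetical names, not the actual packstack code):

  # Hypothetical sketch of the proposed host/cluster pairing -- not packstack code.
  def assign_clusters(install_hosts, cluster_names):
      controller, computes = install_hosts[0], install_hosts[1:]
      if not computes:
          # Single address: all-in-one, the controller also runs nova-compute.
          computes = [controller]
      if len(computes) != len(cluster_names):
          raise ValueError("%d compute hosts but %d clusters"
                           % (len(computes), len(cluster_names)))
      return dict(zip(computes, cluster_names))

  # assign_clusters(["192.168.122.1", "192.168.122.2", "192.168.122.3"],
  #                 ["foo", "bar"])
  # -> {"192.168.122.2": "foo", "192.168.122.3": "bar"}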

Comment 6 Tzach Shefi 2016-06-06 10:45:40 UTC
Need some help. I've installed three RHEL 7.2 VMs: one controller and two nova-compute nodes (they are all VMs running inside ESXi, but that shouldn't matter).

Running packstack I hit this problem:

When I wrote:
CONFIG_VCENTER_CLUSTER_NAMES=
cluster_name=cluster1
cluster_name=cluster2 

This is the error I got:


Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries              [ ERROR ]

ERROR : global name 'exceptions' is not defined
Please check log file /var/tmp/packstack/20160606-120212-8fwptN/openstack-setup.log for more information
Additional information:
 * Note temporary directory /var/tmp/packstack/d7e803abca69477c9a6568b8bff873b5 on host 10.35.160.219 was not deleted for debugging purposes.
 * Note temporary directory /var/tmp/packstack/9b7a2d404fcc400dbb5cf3db3592359c on host 10.35.160.242 was not deleted for debugging purposes.
 * Note temporary directory /var/tmp/packstack/976de1223a594f028b8d203baa2b10b2 on host 10.35.163.38 was not deleted for debugging purposes.


I've got a vCenter running and two clusters under my datacenter:
cluster1, cluster2

Not sure if something is wrong in my answer file or if this is an unrelated new bug.


When I used: 
CONFIG_VCENTER_CLUSTER_NAMES=cluster1,cluster2

It completed the deployment, but I don't see any mention of cluster1/2 in the compute nodes' nova.conf.


This value is set fine:
compute_driver=vmwareapi.VMwareVCDriver 

But these remain unset; is this OK?
# Hostname or IP address for connection to VMware VC host.
# (string value)
#host_ip=<None>

# Port for connection to VMware VC host. (integer value)
#host_port=443

# Username for connection to VMware VC host. (string value)
#host_username=<None>

# Password for connection to VMware VC host. (string value)
#host_password=<None>

# Name of a VMware Cluster ComputeResource. (multi valued)
#cluster_name=<None>

Comment 7 Tzach Shefi 2016-06-06 11:22:54 UTC
Lukas, ignore my comment #6 and the needinfo.
I missed the end of the nova.conf files.

Verified on:
rhel7.2
openstack-packstack-puppet-2014.2-0.27.dev1474.gd85ee76.el7ost.noarch
openstack-packstack-2014.2-0.27.dev1474.gd85ee76.el7ost.noarch

In the answer file:
CONFIG_VCENTER_CLUSTER_NAMES=cluster1,cluster2

On compute node 1, nova.conf:
[VMWARE]
task_poll_interval=5.0
use_linked_clone=True
host_ip=***********
host_password=*******
host_username=root
api_retry_count=5
cluster_name=cluster1
maximum_objects=100

On compute node 2, nova.conf:

[VMWARE]
task_poll_interval=5.0
use_linked_clone=True
host_ip=***********
host_password=*******
host_username=root
api_retry_count=5
cluster_name=cluster2
maximum_objects=100
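
As a quick sanity check (an illustrative snippet, not part of the verification above), the cluster_name line can be pulled out of each compute node's nova.conf with a few lines of Python:

  # Print every cluster_name line from a compute node's nova.conf.
  with open("/etc/nova/nova.conf") as conf:
      for line in conf:
          if line.strip().startswith("cluster_name"):
              print(line.strip())
  # Expected: cluster_name=cluster1 on node 1, cluster_name=cluster2 on node 2.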

Comment 9 errata-xmlrpc 2016-06-20 13:27:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1260

