Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1429892

Summary: Director tries to associate two instances with the same machine.
Product: Red Hat OpenStack
Reporter: Asaf Hirshberg <ahirshbe>
Component: rhosp-director
Assignee: Angus Thomas <athomas>
Status: CLOSED NOTABUG
QA Contact: Amit Ugol <augol>
Severity: urgent
Priority: unspecified
Version: 11.0 (Ocata)
CC: dbecker, mburns, morazi, rhel-osp-director-maint, ushkalim
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2017-03-09 08:48:16 UTC
Type: Bug
Bug Blocks: 1242422, 1558241

Description Asaf Hirshberg 2017-03-07 12:07:07 UTC
Description of problem:
While trying to deploy my environment with both "name" and "capabilities" set for each node, the Director tries to associate two instances with the same machine, which results in an error. When "name" is omitted from instackenv.json, the deployment passes.
Version-Release number of selected component (if applicable):

[overcloud.Controller.0.Controller]: CREATE_FAILED  ResourceInError: resources.Controller: Went to status ERROR due to "Message: Node 5fbc835a-4b41-4736-8507-7243f9365a82 is associated with instance 0f5e1e45-2023-4706-bba6-5fd30fa4b12e. (HTTP 409), Code: 500"
 
[stack@puma33 ~]$ nova list
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks               |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
| 01b5c115-2ef6-4d43-9214-1644ddf5622c | compute-0    | BUILD  | spawning   | NOSTATE     | ctlplane=192.168.24.15 |
| 48da57f5-64d4-43ba-ad42-b79a089d479c | controller-0 | ERROR  | -          | NOSTATE     |                        |
| 0f5e1e45-2023-4706-bba6-5fd30fa4b12e | controller-1 | BUILD  | spawning   | NOSTATE     | ctlplane=192.168.24.7  |
| 5b9604b4-efd5-421d-b8d4-db77f243f95c | controller-2 | BUILD  | spawning   | NOSTATE     |                        |
+--------------------------------------+--------------+--------+------------+-------------+------------------------+
[stack@puma33 ~]$ ironic node-list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| 6d12f93d-966b-4d46-bbaa-cc489f5ea756 | pum04  | f835f8db-e0e8-4f99-811a-615c7f97308e | power on    | wait call-back     | False       |
| f5e2be54-03bc-418c-b400-d5ae637cafa2 | puma05 | 5b9604b4-efd5-421d-b8d4-db77f243f95c | power on    | wait call-back     | False       |
| 5fbc835a-4b41-4736-8507-7243f9365a82 | puma34 | 0f5e1e45-2023-4706-bba6-5fd30fa4b12e | power on    | wait call-back     | False       |
| 9cebf962-5d65-4e97-9b16-a55e86986016 | puma40 | 01b5c115-2ef6-4d43-9214-1644ddf5622c | power on    | wait call-back     | False       |
| fad2bf4e-5c0d-493c-a8ae-70ec66027d5b | puma14 | None                                 | power off   | available          | False       |
| 87b7c001-9a74-4b3e-8a33-87dd2bec3bf6 | puma16 | None                                 | power off   | available          | False       |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
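
For diagnosis, the node named in the Heat error can be inspected directly. A hypothetical check (node UUID taken from the error above, using the ironic CLI already shown in this session):

[stack@puma33 ~]$ ironic node-show 5fbc835a-4b41-4736-8507-7243f9365a82

Its instance_uuid field would report 0f5e1e45-2023-4706-bba6-5fd30fa4b12e (controller-1 in the nova list above), which would confirm the node was already claimed while Heat was still trying to build controller-0 on it.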

[stack@puma33 ~]$ cat instackenv.json 
{
    "nodes":[
        { 
            "name": "pum04", 
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:73:39:8f"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.18",
            "capabilities":"profile:control,boot_option:local"
        },
        {
            "name": "puma05",
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:73:3d:41"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.20",
            "capabilities":"profile:control,boot_option:local"
        },
        {
            "name": "puma34",
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:73:36:2f"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.78",
            "capabilities":"profile:control,boot_option:local"
        },
        {
            "name": "puma40",
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:73:39:58"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.90",
            "capabilities":"profile:compute,boot_option:local"
        },
        {
            "name": "puma14",
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:71:a7:46"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.38",
            "capabilities":"profile:compute,boot_option:local"
        },
        {
            "name": "puma16",
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:71:a5:ee"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.42",
             "capabilities":"profile:compute,boot_option:local"
        }
    ]
}
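
For reference, the workaround mentioned in the description is simply to drop the "name" key and let the "capabilities" profile drive scheduling on its own. The first node entry would then look like this (same values as above, only "name" removed):

        {
            "pm_type":"pxe_ipmitool",
            "mac":[
                "44:1e:a1:73:39:8f"
            ],
            "cpu":"1",
            "memory":"4096",
            "disk":"40",
            "arch":"x86_64",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"10.35.160.18",
            "capabilities":"profile:control,boot_option:local"
        }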

[stack@puma33 ~]$ cat network-environment.yaml 
resource_registry:
  OS::TripleO::BlockStorage::Net::SoftwareConfig: /home/stack/nic-configs/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/nic-configs/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/nic-configs/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/nic-configs/ceph-storage.yaml
parameter_defaults:
  InternalApiNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.35.180.0/24
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  ExternalAllocationPools: [{'start': '10.35.180.10', 'end': '10.35.180.50'}]
  DnsServers: ["10.35.28.28","10.35.28.1"]
  InternalApiNetworkVlanID: 189
  StorageNetworkVlanID: 202
  StorageMgmtNetworkVlanID: 203
  TenantNetworkVlanID: 201
  ExternalNetworkVlanID: 195
  ExternalInterfaceDefaultRoute: 10.35.180.254
  BondInterfaceOvsOptions:
     "mode=802.3ad"
  ControlPlaneSubnetCidr: "24"
  ControlPlaneDefaultRoute: 192.0.2.1
  EC2MetadataIp: 192.0.2.1
  NeutronExternalNetworkBridge: "''"
  # Nova flavor to use
  OvercloudControlFlavor: control
  OvercloudComputeFlavor: compute
  # Number of nodes to deploy
  ControllerCount: 3
  ComputeCount: 1
  
  # http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_placement.html#custom-hostnames
  ControllerHostnameFormat: 'controller-%index%'
  ComputeHostnameFormat: 'compute-%index%'
  CephStorageHostnameFormat: 'ceph-%index%'
  ObjectStorageHostnameFormat: 'swift-%index%'
[stack@puma33 ~]$ 
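
A note on the flavors referenced above: OvercloudControlFlavor and OvercloudComputeFlavor rely on profile matching, where each flavor carries a capabilities:profile property that the scheduler matches against the profile:... entry in a node's capabilities. A sketch of how such flavors are typically created on the undercloud (standard TripleO commands; the flavor sizes here are illustrative, not taken from this environment):

[stack@puma33 ~]$ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 control
[stack@puma33 ~]$ openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control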




Steps to Reproduce:
1. Configure both "name" and "capabilities" in instackenv.json for every server.
2. Deploy the overcloud.

Comment 2 Asaf Hirshberg 2017-03-09 08:48:16 UTC
Sasha Chuzhoy:
"""
Profile matching is redundant when precise node placement is used. To avoid scheduling failures you should use the default “baremetal” flavor for deployment in this case, not the flavors designed for profile matching (“compute”, “control”, etc.).


Once I used the baremetal (default) flavor, it worked.

2017-03-08 17:56:25Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

Stack overcloud CREATE_COMPLETE
"""

The corresponding change in network-environment.yaml, with the profile-matching flavors commented out so the deployment falls back to the default baremetal flavor:

  # Nova flavor to use
#  OvercloudControlFlavor: control
#  OvercloudComputeFlavor: compute
#  OvercloudCephStorageFlavor: ceph-storage
  # Number of nodes to deploy
  ControllerCount: 3
  ComputeCount: 1
#  CephStorageCount: 1
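
For completeness, the node_placement document linked in network-environment.yaml describes how to pin specific nodes once the default baremetal flavor is in use: tag each Ironic node with a node:... capability and pass matching scheduler hints. A sketch, assuming the node names registered above (capability syntax per the TripleO node placement docs; verify against your release):

[stack@puma33 ~]$ ironic node-update pum04 replace properties/capabilities=node:controller-0,boot_option:local

and in an environment file:

parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  ComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal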