Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1807602

Summary: [OSP16] DCN with OVS and Spine-Leaf network topology deployment failed on Error while evaluating a Function Call, 'join' parameter 'arg' expects an Array value
Product: Red Hat OpenStack
Reporter: Yuri Obshansky <yobshans>
Component: puppet-neutron
Assignee: Bernard Cafarelli <bcafarel>
Status: CLOSED ERRATA
QA Contact: Eran Kuris <ekuris>
Severity: urgent
Docs Contact:
Priority: medium
Version: 16.0 (Train)
CC: aschultz, bcafarel, beagles, bfournie, dsneddon, fiezzi, hjensas, jjoyce, jlibosva, jschluet, jslagle, mburns, slinaber, tvignaud
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: puppet-neutron-15.4.1-0.20200310161748.afc5750.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-05-14 12:16:10 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  ansible log

Description Yuri Obshansky 2020-02-26 17:26:43 UTC
Created attachment 1666028 [details]
ansible log

Description of problem:
OSP 16 DCN deployment with OVS and a Spine-Leaf network topology failed on:
Feb 26 17:12:11 puppet-user: Error: Evaluation Error: Error while evaluating a Function Call, 'join' parameter 'arg' expects an Array value, got String (file: /etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp, line: 271, column: 19) on node central-controller0-1.redhat.local", "+ rc=1", "+ '[' False = false ']'", "+ set -e", "+ '[' 1 -ne 2 -a 1 -ne 0 ']'", "+ exit 1", " attempt(s): 3", "2020-02-26 17:12:14,165 INFO: 39551 -- Removing container: container-puppet-neutron", "2020-02-26 17:12:14,323 WARNING: 39551 -- Retrying running container: neutron", "2020-02-26 17:12:14,323 ERROR: 39551 -- Failed running container for neutron", "2020-02-26 17:12:14,323 INFO: 39551 -- Finished processing puppet configs for neutron", "2020-02-26 17:12:14,324 ERROR: 39550 -- ERROR configuring neutron"]}


Version-Release number of selected component (if applicable):
RHOS_TRUNK-16.0-RHEL-8-20200220.n.0

How reproducible:
I have an environment; ping me to access it.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Alex Schultz 2020-02-26 19:44:11 UTC
This points to NeutronBridgeMappings being a string and not a list as expected. Since the error is in the puppet-neutron module, kicking this over to DFG:Networking. It might just need an any2array, but it's hard to know without additional template information.
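For illustration, a rough Python model of the any2array() coercion from Puppet stdlib that Alex is suggesting (this is a sketch of the semantics, not the Puppet implementation):

```python
def any2array(value):
    """Rough model of Puppet stdlib's any2array():
    pass arrays through unchanged, wrap anything else
    in a one-element array so join() always gets an array."""
    if isinstance(value, list):
        return value
    return [value]

# The role-specific hieradata delivers a plain string; after
# coercion, joining with ',' works instead of raising a type error:
print(",".join(any2array("leaf0:br-ex")))
# A real list is left untouched:
print(",".join(any2array(["leaf0:br-ex", "leaf1:br-ex2"])))
```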

Comment 2 Yuri Obshansky 2020-02-26 20:19:34 UTC
$ cat network-environment.yaml
resource_registry:
  OS::TripleO::Controller0::Net::SoftwareConfig: spine-leaf-nics/controller0.yaml
  OS::TripleO::Compute0::Net::SoftwareConfig: spine-leaf-nics/compute0.yaml
  OS::TripleO::Compute1::Net::SoftwareConfig: spine-leaf-nics/compute1.yaml
  OS::TripleO::Compute2::Net::SoftwareConfig: spine-leaf-nics/compute2.yaml

parameter_defaults:
  DnsServers: 
  - 10.11.5.19
  ControlPlaneSubnet: leaf0
  Controller0ControlPlaneSubnet: leaf0
  Compute0ControlPlaneSubnet: leaf0
  Compute1ControlPlaneSubnet: leaf1
  Compute2ControlPlaneSubnet: leaf2

  Controller0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"
  Compute0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"
  NeutronEnableDVR: 'false'
  NeutronExternalNetworkBridge: ''
  NeutronNetworkTypes: 'vlan'
  NeutronNetworkVLANRanges: 'leaf0:1:1000,leaf1:1:1000,leaf2:1:1000'
  NeutronTunnelTypes: ''

  ControlPlaneDefaultRoute: 192.168.24.254
  ControlPlaneSubnetCidr: '24'
  ControlPlane1DefaultRoute: 192.168.34.254
  ControlPlane1SubnetCidr: '24'
  ControlPlane2DefaultRoute: 192.168.44.254
  ControlPlane2SubnetCidr: '24'

  Leaf0EC2MetadataIp: 192.168.24.1
  Leaf1EC2MetadataIp: 192.168.34.254
  Leaf2EC2MetadataIp: 192.168.44.254

  ExternalSupernet: 10.0.10.0/16
  InternalApiSupernet: 172.25.0.0/16
  StorageSupernet: 172.23.0.0/16
  StorageMgmtSupernet: 172.18.0.0/16
  TenantSupernet: 172.19.0.0/16

  # Customize the IP subnets to match the local environment
  StorageNetCidr: '172.23.1.0/24'
  Storage1NetCidr: '172.23.2.0/24'
  Storage2NetCidr: '172.23.3.0/24'
  StorageMgmtNetCidr: '172.18.1.0/24'
  StorageMgmt1NetCidr: '172.18.2.0/24'
  StorageMgmt2NetCidr: '172.18.3.0/24'
  InternalApiNetCidr: '172.25.1.0/24'
  InternalApi1NetCidr: '172.25.2.0/24'
  InternalApi2NetCidr: '172.25.3.0/24'
  TenantNetCidr: '172.19.1.0/24'
  Tenant1NetCidr: '172.19.2.0/24'
  Tenant2NetCidr: '172.19.3.0/24'
  ExternalNetCidr: '10.0.10.0/24'
  # Customize the VLAN IDs to match the local environment
  StorageNetworkVlanID: 1183
  Storage1NetworkVlanID: 1173
  Storage2NetworkVlanID: 1163
  StorageMgmtNetworkVlanID: 1188
  StorageMgmt1NetworkVlanID: 1178
  StorageMgmt2NetworkVlanID: 1168
  InternalApiNetworkVlanID: 1185
  InternalApi1NetworkVlanID: 1175
  InternalApi2NetworkVlanID: 1165
  TenantNetworkVlanID: 1189
  Tenant1NetworkVlanID: 1179
  Tenant2NetworkVlanID: 1169
  ExternalNetworkVlanID: 10
  StorageAllocationPools: [{'start': '172.23.1.4', 'end': '172.23.1.250'}]
  Storage1AllocationPools: [{'start': '172.23.2.4', 'end': '172.23.2.250'}]
  Storage2AllocationPools: [{'start': '172.23.3.4', 'end': '172.23.3.250'}]
  StorageMgmtAllocationPools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}]
  StorageMgmt1AllocationPools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}]
  StorageMgmt2AllocationPools: [{'start': '172.18.3.4', 'end': '172.18.3.250'}]
  InternalApiAllocationPools: [{'start': '172.25.1.4', 'end': '172.25.1.250'}]
  InternalApi1AllocationPools: [{'start': '172.25.2.4', 'end': '172.25.2.250'}]
  InternalApi2AllocationPools: [{'start': '172.25.3.4', 'end': '172.25.3.250'}]
  TenantAllocationPools: [{'start': '172.19.1.4', 'end': '172.19.1.250'}]
  Tenant1AllocationPools: [{'start': '172.19.2.4', 'end': '172.19.2.250'}]
  Tenant2AllocationPools: [{'start': '172.19.3.4', 'end': '172.19.3.250'}]
  # Leave room if the external network is also used for floating IPs
  ExternalAllocationPools: [{'start': '10.0.10.100', 'end': '10.0.10.119'}]
  # Gateway routers for routable networks
  StorageInterfaceDefaultRoute: '172.23.1.254'
  Storage1InterfaceDefaultRoute: '172.23.2.254'
  Storage2InterfaceDefaultRoute: '172.23.3.254'
  StorageMgmtInterfaceDefaultRoute: '172.18.1.254'
  StorageMgmt1InterfaceDefaultRoute: '172.18.2.254'
  StorageMgmt2InterfaceDefaultRoute: '172.18.3.254'
  InternalApiInterfaceDefaultRoute: '172.25.1.254'
  InternalApi1InterfaceDefaultRoute: '172.25.2.254'
  InternalApi2InterfaceDefaultRoute: '172.25.3.254'
  TenantInterfaceDefaultRoute: '172.19.1.254'
  Tenant1InterfaceDefaultRoute: '172.19.2.254'
  Tenant2InterfaceDefaultRoute: '172.19.3.254'
  ExternalInterfaceDefaultRoute: '10.0.10.1'

Comment 3 Yuri Obshansky 2020-02-26 20:20:15 UTC
With OVN the same setting works:
  Controller0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"
  Compute0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"

Comment 4 Yuri Obshansky 2020-02-27 16:13:34 UTC
As a temporary workaround I used:
  NeutronBridgeMappings: ["leaf0:br-ex"]

Comment 5 Bernard Cafarelli 2020-03-04 13:07:31 UTC
It is strange to see this happen, and it should not be DCN-specific; we have quite a few template examples setting NeutronBridgeMappings to a similar value.

But yes, as Alex mentioned, forcing this in puppet-neutron will not hurt and should fix the issue.
@Yuri, can you test and confirm this fixes the deployment?

diff --git a/manifests/agents/ml2/ovs.pp b/manifests/agents/ml2/ovs.pp
index d61d2118..a3274197 100644
--- a/manifests/agents/ml2/ovs.pp
+++ b/manifests/agents/ml2/ovs.pp
@@ -278,7 +278,7 @@ class neutron::agents::ml2::ovs (
 
     # Set config for bridges that we're going to create
     # The OVS neutron plugin will talk in terms of the networks in the bridge_mappings
-    $br_map_str = join($bridge_mappings, ',')
+    $br_map_str = join(any2array($bridge_mappings), ',')
     neutron_agent_ovs {
       'ovs/bridge_mappings': value => $br_map_str;
     }

Comment 6 Bernard Cafarelli 2020-03-04 16:52:50 UTC
I know deployments work with a list without brackets, e.g. "site1:br-link1,site2:br-link2", in both 13 and 16. I think the root cause here is a single-value list (if that is the case, the fix should be backported up to queens/13).

Comment 7 Brent Eagles 2020-03-05 14:57:49 UTC
I think this has to do with the fact that NeutronBridgeMappings is being defined as a role-specific parameter as opposed to the usual method. I encountered something similar before, and it seems there is a difference between how heat processes the value of a regular heat parameter versus one provided through role-specific data. For example:

parameter_defaults:
  NeutronBridgeMappings: "site1:br-link1"

versus

parameter_defaults:
  Controller0Parameters:
    NeutronBridgeMappings: "leaf0:br-ex"

lead to different results in the hieradata. 

This would apply anywhere we use role-specific data and rely on heat to do something type-related when constructing the config data. We probably need defensive code in any puppet manifest where the value source might be a role-specific parameter. A general note in the documentation for role-specific parameters and types is probably also a good idea.
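A hypothetical sketch of the hieradata difference described above (the exact rendered keys and values are assumptions for illustration, not taken from this deployment): a regular parameter is coerced to its declared heat type before reaching hiera, while a role-specific value is passed through as-is.

```yaml
# Regular parameter path — heat coerces the value to the parameter's
# declared list type, so puppet-neutron receives an array:
neutron::agents::ml2::ovs::bridge_mappings:
  - site1:br-link1

# Role-specific parameter path — the value is passed through untyped,
# so puppet-neutron receives the raw string and join() fails:
neutron::agents::ml2::ovs::bridge_mappings: leaf0:br-ex
```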

Comment 8 Bernard Cafarelli 2020-03-09 16:13:04 UTC
Master puppet-neutron change merged, backports in progress

Comment 15 errata-xmlrpc 2020-05-14 12:16:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2114