Bug 1602833 - [Deployment] Deployment fails, OF controller set to use OVSDB port 6640
Summary: [Deployment] Deployment fails, OF controller set to use OVSDB port 6640
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-tripleo
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z3
Target Release: 13.0 (Queens)
Assignee: Tim Rozet
QA Contact: Noam Manos
URL:
Whiteboard: Deployment
Depends On:
Blocks:
 
Reported: 2018-07-18 15:14 UTC by Sai Sindhur Malleni
Modified: 2022-06-30 13:00 UTC
CC List: 11 users

Fixed In Version: puppet-tripleo-8.3.6-2.el7ost
Doc Type: Bug Fix
Doc Text:
During deployment, an OVS switch may be configured with the incorrect OpenFlow controller port (6640, the OVSDB port, instead of 6653) for two out of the three controllers. This causes either a deployment failure, or a later functional failure in which incorrect flows are programmed into the switch. This release correctly sets all of the OpenFlow controller ports to 6653 for each OVS switch, so that every OVS switch has the correct OpenFlow controller configuration: three URIs, one to each OpenDaylight instance, each using port 6653.
Clone Of:
Environment:
N/A
Last Closed: 2018-11-13 22:27:09 UTC
Target Upstream Version:
Embargoed:


Attachments
controller-0 (4.16 MB, text/plain) - 2018-07-18 16:48 UTC, Sai Sindhur Malleni
controller-1 (7.46 MB, text/plain) - 2018-07-18 16:48 UTC, Sai Sindhur Malleni
controller-2 (7.25 MB, text/plain) - 2018-07-18 16:49 UTC, Sai Sindhur Malleni
console output: Could not find class opendaylight (28.35 KB, text/plain) - 2018-11-07 12:50 UTC, Noam Manos


Links
Launchpad 1786037 - 2018-08-08 13:17:54 UTC
OpenStack gerrit 590582 (MERGED): Fixes ODL issue where OF port may be set wrong - 2020-06-19 04:04:03 UTC
Red Hat Issue Tracker ODL-48 - 2022-06-30 13:00:35 UTC
Red Hat Issue Tracker OSP-16179 - 2022-06-30 13:00:38 UTC
Red Hat Product Errata RHBA-2018:3587 - 2018-11-13 22:27:46 UTC

Description Sai Sindhur Malleni 2018-07-18 15:14:48 UTC
Description of problem:
Deployment with 3 OSP controller + ODL nodes and 7 computes fails in step 5 with the following error:

 Stack overcloud CREATE_FAILED

overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1:
  resource_type: OS::Heat::StructuredDeployment
  physical_resource_id: fe962044-7905-4a53-a1db-9fd68fca5fdf
  status: CREATE_FAILED
  status_reason: |
    Error: resources[1]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 2
  deploy_stdout: |
    ...
            "                    with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 89]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]",
            "Warning: ModuleLoader: module 'neutron' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules",
            "Error: Evaluation Error: Error while evaluating a Function Call, Failed to validate OVS OpenFlow pipeline at /etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ovs/opendaylight.pp:138:7 on node overcloud-controller-1.localdomain"
        ]
    }
        to retry, use: --limit @/var/lib/heat-config/heat-config-ansible/1d8d244d-6db5-4ee7-a1a1-98c2bf56ac7b_playbook.retry

Version-Release number of selected component (if applicable):
OSP 13
Puddle/container tag: 2018-07-13.1

How reproducible:
Not very often

Steps to Reproduce:
1. Deploy OSP with ODL

Actual results:
Deployment fails in step 5

Expected results:
Deployment should succeed

Additional info:


Per a discussion with Tim Rozet, one of the nodes appears to be missing flows in table 17:
[root@overcloud-controller-1 heat-admin]# ovs-ofctl -O openflow13 dump-flows br-int | grep table=17
[root@overcloud-controller-1 heat-admin]#

Comment 1 Tim Rozet 2018-07-18 15:42:02 UTC
I originally thought the problem might just be table 17 missing after deployment, per:
https://bugs.launchpad.net/tripleo/+bug/1781616

However, there looks to be a bigger issue here. On controller-0, the OpenFlow plugin (OFP) never loaded correctly and ODL is not listening on port 6653. So on controller-0 we see one node not connected via OF:
2018-07-18T14:01:04,676 | ERROR | Blueprint Extender: 1 | BlueprintContainerImpl           | 81 - org.apache.aries.blueprint.core - 1.8.3 | Unable to start blueprint container for bundle org.opendaylight.openflowplugin.openflowjava-extension-nicira/0.6.3.redhat-1 due to unresolved dependencies [(objectClass=org.opendaylight.openflowjava.protocol.spi.connection.SwitchConnectionProvider), (objectClass=org.opendaylight.openflowjava.protocol.

Controller-1 has no OpenFlow flows at all. If we look at the OVS output, we can see that two of the controller ports are wrong (set to the OVSDB port 6640):
    Bridge br-int
        Controller "tcp:172.16.0.12:6640"
        Controller "tcp:172.16.0.23:6653"
        Controller "tcp:172.16.0.13:6640"

Not sure how that is possible; there must be some bug in how the controller ports are set. The controller entry above with the right port is controller-0, which never booted OFP correctly, so that's why there are no flows.
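
For anyone checking a node for this condition, the controller targets can be listed directly (a minimal check; br-int and the expected port are the ones described above):

# Every controller URI should use the OpenFlow port :6653, never the OVSDB port :6640
sudo ovs-vsctl get-controller br-int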

The features loaded include mdsal-trace:
 OpenDaylightFeatures: ["odl-jolokia","odl-mdsal-trace","odl-netvirt-openstack"]

Maybe some feature-loading issue caused controller-0 not to boot OFP correctly, but there is still definitely a bug here in how controller-1 could be assigned an OF controller address with the OVSDB port.

Please include the karaf logs from each controller.

Comment 2 Sai Sindhur Malleni 2018-07-18 16:48:20 UTC
Created attachment 1459763 [details]
controller-0

Comment 3 Sai Sindhur Malleni 2018-07-18 16:48:52 UTC
Created attachment 1459764 [details]
controller-1

Comment 4 Sai Sindhur Malleni 2018-07-18 16:49:48 UTC
Created attachment 1459765 [details]
controller-2

Comment 8 Mike Kolesnik 2018-08-07 12:52:14 UTC

*** This bug has been marked as a duplicate of bug 1613115 ***

Comment 9 Tim Rozet 2018-08-07 13:15:27 UTC
This bug is about the OVSDB port being assigned as the OF port in the controller set strings (6640 instead of 6653).

Comment 10 Tim Rozet 2018-08-08 13:10:27 UTC
Thanks to shague for finding that the problem is actually in puppet-neutron, when the OF controller is reset due to flow sync issues:
https://github.com/openstack/puppet-tripleo/blob/master/manifests/profile/base/neutron/plugins/ovs/opendaylight.pp#L135
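
In effect (a hypothetical illustration of the defect class, not the actual manifest code), the resync reset the controller targets while leaving the OVSDB port on some URIs, when every URI should be forced to the OpenFlow port:

# Hypothetical illustration only (IPs are the ones from comment 1):
# what the buggy reset effectively produced
sudo ovs-vsctl set-controller br-int \
    "tcp:172.16.0.12:6640" "tcp:172.16.0.23:6653" "tcp:172.16.0.13:6640"
# what a correct reset should produce: port 6653 on every URI
sudo ovs-vsctl set-controller br-int \
    "tcp:172.16.0.12:6653" "tcp:172.16.0.23:6653" "tcp:172.16.0.13:6653"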

Comment 11 Tim Rozet 2018-08-08 13:15:51 UTC
I meant puppet-tripleo in the last comment.

Comment 12 Mike Kolesnik 2018-09-03 10:33:24 UTC
Tim,

What are the exact steps to reproduce and verify that this issue is fixed?

Comment 13 Tim Rozet 2018-09-05 13:35:50 UTC
To reproduce the scenario that triggers the problem, the flow pipeline sync mechanism needs to be triggered. That means that during deployment, after OVS connects to ODL, a flow table needs to be deleted in OVS. That will trigger the pipeline resync, which had a bug where it set the wrong OF controller ports. From a system-level point of view it's going to be hard to reproduce, given that ODL bugs have been fixed to ensure that the OF pipeline is correct when OVS first connects. However, it is possible to test by doing the following:

1. Post-deployment, log in to a compute node and use the ovs-ofctl command to delete all of the flows for any table listed here:
https://github.com/openstack/puppet-tripleo/blob/master/lib/puppet/functions/synchronize_odl_ovs_flows.rb#L8

2. Run 'grep -r step /etc/puppet/hieradata' and set the step value to 5.

3. puppet apply -e 'include tripleo::profile::base::neutron::plugins::ovs::opendaylight' --debug

4. You should see the flow resync mechanism triggered in the output.

5. Check ovs-vsctl and ensure all of the OF controllers on br-int have the right port (6653). (These steps are consolidated into a shell sketch below.)
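
Consolidated into a shell sketch (assumptions: the table list here is abbreviated, see the synchronize_odl_ovs_flows function linked in step 1 for the authoritative set; per comments 27 and 28, puppet apply needs to run as root):

# 1. Delete the flows for the tables the resync mechanism watches
#    (abbreviated list; synchronize_odl_ovs_flows.rb has the full set)
for table in 17 18 19 20 22 43 48 50 51 60 80 81; do
    sudo ovs-ofctl -O OpenFlow13 del-flows br-int "table=$table"
done

# 2. Confirm the deployment step value is 5
sudo grep -r step /etc/puppet/hieradata

# 3. Re-apply the profile and watch for the flow resync in the debug output
sudo puppet apply -e 'include tripleo::profile::base::neutron::plugins::ovs::opendaylight' --debug

# 4-5. Verify every OF controller URI on br-int uses port 6653
sudo ovs-vsctl get-controller br-int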

Comment 22 Noam Manos 2018-11-06 10:30:45 UTC
On puddle 2018-10-18.1 (with rpm openstack-tripleo-common-8.6.3-13.el7ost), trying to run:
$ puppet apply -e 'include tripleo::profile::base::neutron::plugins::ovs::opendaylight'

fails with:

Could not find class ::tripleo::profile::base::neutron::plugins::ovs::opendaylight for compute-1.localdomain

 
[root@titan13 ~]# ssh stack@undercloud-0

(overcloud) [stack@undercloud-0 ~]$ ssh heat-admin@compute-1


[heat-admin@compute-1 ~]$ for table in $flow_tables; do
>   sudo ovs-ofctl -O openflow13 dump-flows br-int | grep table=$table,
> done
 cookie=0x6800000, duration=1034608.122s, table=18, n_packets=0, n_bytes=0, priority=0 actions=goto_table:38
 cookie=0x8220015, duration=1034608.092s, table=19, n_packets=2881, n_bytes=121002, priority=100,arp,arp_op=1 actions=resubmit(,17)
 cookie=0x8220016, duration=1034608.092s, table=19, n_packets=0, n_bytes=0, priority=100,arp,arp_op=2 actions=resubmit(,17)
 cookie=0x1080000, duration=1034608.092s, table=19, n_packets=3162, n_bytes=515836, priority=0 actions=resubmit(,17)
 cookie=0x1030000, duration=1034608.109s, table=20, n_packets=0, n_bytes=0, priority=0 actions=goto_table:80
 cookie=0x8000004, duration=945804.563s, table=22, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x33c22/0xfffffe,nw_dst=10.0.0.255 actions=drop
 cookie=0x8000004, duration=1034608.125s, table=22, n_packets=0, n_bytes=0, priority=0 actions=CONTROLLER:65535
 cookie=0x1080000, duration=1034608.143s, table=23, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
 cookie=0x1080000, duration=1034608.208s, table=24, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
 cookie=0x822002d, duration=1034608.225s, table=43, n_packets=2881, n_bytes=121002, priority=100,arp,arp_op=1 actions=group:5000
 cookie=0x822002e, duration=1034608.224s, table=43, n_packets=0, n_bytes=0, priority=100,arp,arp_op=2 actions=CONTROLLER:65535,resubmit(,48)
 cookie=0x8220000, duration=1034608.225s, table=43, n_packets=3162, n_bytes=515836, priority=0 actions=goto_table:48
 cookie=0x4000000, duration=1034608.243s, table=45, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
 cookie=0x8500000, duration=1034608.259s, table=48, n_packets=6043, n_bytes=636838, priority=0 actions=resubmit(,49),resubmit(,50)
 cookie=0x8050001, duration=1034608.275s, table=50, n_packets=1528, n_bytes=65176, priority=10,reg4=0x1 actions=goto_table:51
 cookie=0x8050000, duration=1034608.275s, table=50, n_packets=4328, n_bytes=563516, priority=0 actions=CONTROLLER:65535,learn(table=49,hard_timeout=10,priority=0,cookie=0x8600000,NXM_OF_ETH_SRC[],NXM_NX_REG1[0..19],load:0x1->NXM_NX_REG4[0..7]),goto_table:51
 cookie=0x8030000, duration=1034608.294s, table=51, n_packets=0, n_bytes=0, priority=15,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop
 cookie=0x8030000, duration=1034608.294s, table=51, n_packets=6041, n_bytes=636406, priority=0 actions=goto_table:52
 cookie=0x6800000, duration=1034608.312s, table=60, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
 cookie=0x1030000, duration=1034608.282s, table=80, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
 cookie=0x8220000, duration=1034608.348s, table=81, n_packets=2881, n_bytes=121002, priority=0 actions=drop
 cookie=0x4000001, duration=945805.006s, table=90, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=63020,icmp6,icmp_type=134,icmp_code=0 actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=61010,udp,tp_src=67,tp_dst=68 actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=61010,udp6,tp_src=547,tp_dst=546 actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=63009,arp actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=61009,ipv6 actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=61009,ip actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.384s, table=210, n_packets=0, n_bytes=0, priority=0 actions=write_metadata:0x4/0x4,goto_table:217
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=100,tcp6 actions=write_metadata:0/0x2,goto_table:212
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=100,udp6 actions=write_metadata:0/0x2,goto_table:212
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=100,tcp actions=write_metadata:0/0x2,goto_table:212
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=100,udp actions=write_metadata:0/0x2,goto_table:212
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=100,icmp6 actions=write_metadata:0/0x2,goto_table:212
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=100,icmp actions=write_metadata:0/0x2,goto_table:212
 cookie=0x6900000, duration=1034608.402s, table=211, n_packets=0, n_bytes=0, priority=0 actions=write_metadata:0x2/0x2,goto_table:214
 cookie=0x6900000, duration=1034608.418s, table=212, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=1034608.435s, table=213, n_packets=0, n_bytes=0, priority=0 actions=goto_table:214
 cookie=0x6900000, duration=1034608.452s, table=214, n_packets=0, n_bytes=0, priority=62030,ct_state=-new+est-rel-inv+trk,ct_mark=0x1/0x1 actions=ct_clear,resubmit(,17)
 cookie=0x6900000, duration=1034608.452s, table=214, n_packets=0, n_bytes=0, priority=62030,ct_state=-new-est+rel-inv+trk,ct_mark=0x1/0x1 actions=ct_clear,resubmit(,17)
 cookie=0x6900000, duration=1034608.452s, table=214, n_packets=0, n_bytes=0, priority=62030,ct_state=-trk,metadata=0/0x2 actions=ct_clear,resubmit(,212)
 cookie=0x6900000, duration=1034608.452s, table=214, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=1034608.469s, table=215, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,214)
 cookie=0x6900000, duration=1034608.486s, table=216, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,214)
 cookie=0x6900000, duration=1034608.505s, table=217, n_packets=0, n_bytes=0, priority=62019,metadata=0x4/0x4 actions=drop
 cookie=0x6900000, duration=1034608.505s, table=217, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=1034608.523s, table=239, n_packets=0, n_bytes=0, priority=100,ipv6 actions=ct_clear,goto_table:240
 cookie=0x6900000, duration=1034608.523s, table=239, n_packets=0, n_bytes=0, priority=100,ip actions=ct_clear,goto_table:240
 cookie=0x6900000, duration=1034608.523s, table=239, n_packets=0, n_bytes=0, priority=0 actions=goto_table:240
 cookie=0x6900000, duration=1034608.542s, table=240, n_packets=0, n_bytes=0, priority=61010,ip,dl_dst=ff:ff:ff:ff:ff:ff,nw_dst=255.255.255.255 actions=goto_table:241
 cookie=0x6900000, duration=1034608.542s, table=240, n_packets=0, n_bytes=0, priority=61005,dl_dst=ff:ff:ff:ff:ff:ff actions=resubmit(,220)
 cookie=0x6900000, duration=1034608.542s, table=240, n_packets=0, n_bytes=0, priority=0 actions=write_metadata:0x4/0x4,goto_table:247
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=100,tcp actions=write_metadata:0/0x2,goto_table:242
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=100,tcp6 actions=write_metadata:0/0x2,goto_table:242
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=100,udp actions=write_metadata:0/0x2,goto_table:242
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=100,udp6 actions=write_metadata:0/0x2,goto_table:242
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=100,icmp6 actions=write_metadata:0/0x2,goto_table:242
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=100,icmp actions=write_metadata:0/0x2,goto_table:242
 cookie=0x6900000, duration=1034608.561s, table=241, n_packets=0, n_bytes=0, priority=0 actions=write_metadata:0x2/0x2,goto_table:244
 cookie=0x6900000, duration=1034608.580s, table=242, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=1034608.598s, table=243, n_packets=0, n_bytes=0, priority=0 actions=goto_table:244
 cookie=0x6900000, duration=1034608.615s, table=244, n_packets=0, n_bytes=0, priority=62030,ct_state=-new-est+rel-inv+trk,ct_mark=0x1/0x1 actions=ct_clear,resubmit(,220)
 cookie=0x6900000, duration=1034608.615s, table=244, n_packets=0, n_bytes=0, priority=62030,ct_state=-new+est-rel-inv+trk,ct_mark=0x1/0x1 actions=ct_clear,resubmit(,220)
 cookie=0x6900000, duration=1034608.615s, table=244, n_packets=0, n_bytes=0, priority=62030,ct_state=-trk,metadata=0/0x2 actions=ct_clear,resubmit(,242)
 cookie=0x6900000, duration=1034608.615s, table=244, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=1034608.634s, table=245, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,244)
 cookie=0x6900000, duration=1034608.652s, table=246, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,244)
 cookie=0x6900000, duration=1034608.668s, table=247, n_packets=0, n_bytes=0, priority=62019,metadata=0x4/0x4 actions=drop
 cookie=0x6900000, duration=1034608.668s, table=247, n_packets=0, n_bytes=0, priority=0 actions=drop
[heat-admin@compute-1 ~]$ 
[heat-admin@compute-1 ~]$ for table in $flow_tables; do
>   sudo ovs-ofctl -O OpenFlow13 del-flows br-int "table=$table"
> done
[heat-admin@compute-1 ~]$ for table in $flow_tables; do
>   sudo ovs-ofctl -O openflow13 dump-flows br-int | grep table=$table,
> done
[heat-admin@compute-1 ~]$ 

[heat-admin@compute-1 ~]$ sudo grep -r step /etc/puppet/hieradata
/etc/puppet/hieradata/config_step.json:{"step": 5}

[heat-admin@compute-1 ~]$ 
[heat-admin@compute-1 ~]$ puppet apply -e 'include tripleo::profile::base::neutron::plugins::ovs::opendaylight' --debug
Debug: Runtime environment: puppet_version=4.8.2, ruby_version=2.0.0, run_mode=user, default_encoding=UTF-8
Debug: Evicting cache entry for environment 'production'
Debug: Caching environment 'production' (ttl = 0 sec)
Debug: Loading external facts from /home/heat-admin/.puppet/cache/facts.d
Debug: Facter: Found no suitable resolves of 1 for ec2_metadata
Debug: Facter: value for ec2_metadata is still nil
Debug: Facter: value for agent_specified_environment is still nil
Debug: Facter: Found no suitable resolves of 1 for gce
Debug: Facter: value for gce is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbdistid
Debug: Facter: value for lsbdistid is still nil
Debug: Facter: Found no suitable resolves of 1 for ec2_metadata
Debug: Facter: value for ec2_metadata is still nil
Debug: Facter: Found no suitable resolves of 1 for ec2_userdata
Debug: Facter: value for ec2_userdata is still nil
Debug: Facter: Found no suitable resolves of 1 for processor
Debug: Facter: value for processor is still nil
Debug: Facter: value for is_rsc is still nil
Debug: Facter: value for is_rsc is still nil
Debug: Facter: Found no suitable resolves of 1 for rsc_region
Debug: Facter: value for rsc_region is still nil
Debug: Facter: value for is_rsc is still nil
Debug: Facter: Found no suitable resolves of 1 for rsc_instance_id
Debug: Facter: value for rsc_instance_id is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbrelease
Debug: Facter: value for lsbrelease is still nil
Debug: Facter: Found no suitable resolves of 1 for system32
Debug: Facter: value for system32 is still nil
Debug: Facter: value for dhcp_servers is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbdistdescription
Debug: Facter: value for lsbdistdescription is still nil
Debug: Facter: Found no suitable resolves of 1 for zonename
Debug: Facter: value for zonename is still nil
Debug: Facter: value for zfs_version is still nil
Debug: Facter: value for cfkey is still nil
Debug: Facter: value for ipaddress6 is still nil
Debug: Facter: value for ipaddress6_br_ex is still nil
Debug: Facter: value for ipaddress_br_int is still nil
Debug: Facter: value for ipaddress6_br_int is still nil
Debug: Facter: value for netmask_br_int is still nil
Debug: Facter: value for ipaddress_br_isolated is still nil
Debug: Facter: value for ipaddress6_br_isolated is still nil
Debug: Facter: value for netmask_br_isolated is still nil
Debug: Facter: value for ipaddress6_docker0 is still nil
Debug: Facter: value for ipaddress6_eth0 is still nil
Debug: Facter: value for ipaddress_eth1 is still nil
Debug: Facter: value for ipaddress6_eth1 is still nil
Debug: Facter: value for netmask_eth1 is still nil
Debug: Facter: value for ipaddress_eth2 is still nil
Debug: Facter: value for ipaddress6_eth2 is still nil
Debug: Facter: value for netmask_eth2 is still nil
Debug: Facter: value for ipaddress6_lo is still nil
Debug: Facter: value for macaddress_lo is still nil
Debug: Facter: value for ipaddress_ovs_system is still nil
Debug: Facter: value for ipaddress6_ovs_system is still nil
Debug: Facter: value for netmask_ovs_system is still nil
Debug: Facter: value for ipaddress6_vlan20 is still nil
Debug: Facter: value for ipaddress6_vlan30 is still nil
Debug: Facter: value for ipaddress6_vlan50 is still nil
Debug: Facter: value for ipaddress_vxlan_sys_4789 is still nil
Debug: Facter: value for ipaddress6_vxlan_sys_4789 is still nil
Debug: Facter: value for netmask_vxlan_sys_4789 is still nil
Debug: Facter: value for vlans is still nil
Debug: Facter: value for sshdsakey is still nil
Debug: Facter: value for sshdsakey is still nil
Debug: Facter: value for sshfp_dsa is still nil
Debug: Facter: Found no suitable resolves of 2 for iphostnumber
Debug: Facter: value for iphostnumber is still nil
Debug: Facter: value for zpool_version is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbdistrelease
Debug: Facter: value for lsbdistrelease is still nil
Debug: Facter: Found no suitable resolves of 2 for swapencrypted
Debug: Facter: value for swapencrypted is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbminordistrelease
Debug: Facter: value for lsbminordistrelease is still nil
Debug: Facter: value for network_br_int is still nil
Debug: Facter: value for network_br_isolated is still nil
Debug: Facter: value for network_eth1 is still nil
Debug: Facter: value for network_eth2 is still nil
Debug: Facter: value for network_ovs_system is still nil
Debug: Facter: value for network_vxlan_sys_4789 is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbdistcodename
Debug: Facter: value for lsbdistcodename is still nil
Debug: Facter: Found no suitable resolves of 1 for xendomains
Debug: Facter: value for xendomains is still nil
Debug: Facter: Found no suitable resolves of 1 for lsbmajdistrelease
Debug: Facter: value for lsbmajdistrelease is still nil
Debug: Evicting cache entry for environment 'production'
Debug: Caching environment 'production' (ttl = 0 sec)
Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::tripleo::profile::base::neutron::plugins::ovs::opendaylight for compute-1.localdomain  at line 1:1 on node compute-1.localdomain

Comment 23 Noam Manos 2018-11-07 12:50:24 UTC
Created attachment 1502971 [details]
console output: Could not find class opendaylight

On OSP version 13, puddle 2018-11-05.3 (Z3 candidate), trying to run:
$ puppet apply -e 'include tripleo::profile::base::neutron::plugins::ovs::opendaylight'

fails with:
Could not find class ::tripleo::profile::base::neutron::plugins::ovs::opendaylight for compute-1.localdomain

Please see the attached console output.

Comment 27 Janki 2018-11-08 09:29:38 UTC
Noam, run "puppet apply" with sudo or as the root user on the compute nodes, and then check the ports with "sudo ovs-vsctl show".

Comment 28 Noam Manos 2018-11-08 09:35:40 UTC
Running the same command with sudo made it work:
sudo puppet apply -e 'include tripleo::profile::base::neutron::plugins::ovs::opendaylight'

Then "ovs-vsctl show" all the OpenFlow controllers on br-int have the right port (6653):

[heat-admin@compute-1 ~]$ sudo ovs-vsctl show
3b6c1b95-ca2d-47d1-8275-225043fc6b15
    Manager "tcp:172.17.1.18:6640"
        is_connected: true
    Manager "tcp:172.17.1.15:6640"
        is_connected: true
    Manager "tcp:172.17.1.21:6640"
        is_connected: true
    Manager "ptcp:6639:127.0.0.1"
    Bridge br-ex
        fail_mode: standalone
        Port br-ex-int-patch
            Interface br-ex-int-patch
                type: patch
                options: {peer=br-ex-patch}
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge br-int
        Controller "tcp:172.17.1.15:6653"
            is_connected: true
        Controller "tcp:172.17.1.18:6653"
            is_connected: true
        Controller "tcp:172.17.1.21:6653"
            is_connected: true
        fail_mode: secure
        Port "tun7ddfa02d8eb"
            Interface "tun7ddfa02d8eb"
                type: vxlan
                options: {key=flow, local_ip="172.17.2.17", remote_ip="172.17.2.31"}
        Port br-ex-patch
            Interface br-ex-patch
                type: patch
                options: {peer=br-ex-int-patch}
        Port "tap9206ac3c-1d"
            Interface "tap9206ac3c-1d"
        Port "tunf4f8717c04e"
            Interface "tunf4f8717c04e"
                type: vxlan
                options: {key=flow, local_ip="172.17.2.17", remote_ip="172.17.2.32"}
        Port br-int
            Interface br-int
                type: internal
        Port "tun616ca45f52b"
            Interface "tun616ca45f52b"
                type: vxlan
                options: {key=flow, local_ip="172.17.2.17", remote_ip="172.17.2.18"}
        Port "tun69c778d9ef4"
            Interface "tun69c778d9ef4"
                type: vxlan
                options: {key=flow, local_ip="172.17.2.17", remote_ip="172.17.2.20"}
    Bridge br-isolated
        fail_mode: standalone
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "vlan50"
            tag: 50
            Interface "vlan50"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
        Port br-isolated
            Interface br-isolated
                type: internal
    ovs_version: "2.9.0"
[heat-admin@compute-1 ~]$

Comment 32 errata-xmlrpc 2018-11-13 22:27:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3587

