Bug 2124877 - Instance with one port gets two ip addresses or more
Summary: Instance with one port gets two ip addresses or more
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-ovsdbapp
Version: 16.2 (Train)
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: z4
Target Release: 16.2 (Train on RHEL 8.4)
Assignee: Terry Wilson
QA Contact: Toni Freger
URL:
Whiteboard:
Duplicates: 2124874
Depends On:
Blocks: 2137682 2137698
 
Reported: 2022-09-07 10:50 UTC by Eduard Barrera
Modified: 2023-04-12 19:18 UTC
CC: 33 users

Fixed In Version: python-ovsdbapp-0.17.6-2.20220923174727.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2137682 2137698
Environment:
Last Closed: 2022-12-07 19:24:40 UTC
Target Upstream Version:
Embargoed:
skaplons: needinfo-




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 862524 0 None MERGED Don't force_reconnect() on unhandled Idl exception 2022-11-14 17:27:53 UTC
Red Hat Issue Tracker OSP-18590 0 None None None 2022-09-07 11:11:04 UTC
Red Hat Product Errata RHBA-2022:8794 0 None None None 2022-12-07 19:25:31 UTC

Comment 21 ldenny 2022-09-27 07:40:20 UTC
Regarding the metadata connections being split across all DBs, I believe this[1] can safely be run on the customer's environment after the deployment has been run:

Kuba, could you please confirm this is what you did? I don't see the Bomgar recording on the case, so sadly I couldn't verify it myself.

Cheers,
Lewis


[1] https://paste.opendev.org/show/bVcdTl92Z099upORWsss/

# Configuring metadata agents to connect to all southbound databases

Back up the current config on the nodes running the ovn-metadata agent:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a 'crudini --get /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection'
```

Output:
```bash
compute-0 | CHANGED | rc=0 >>
tcp:172.17.1.21:6642

compute-1 | CHANGED | rc=0 >>
tcp:172.17.1.21:6642
```
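The `crudini --get` above records the current value; a minimal sketch of also copying the file aside before editing it, demonstrated here on a temporary stand-in file (on a real node, point `CFG` at the networking-ovn-metadata-agent.ini path used above):

```shell
# Demo on a temp file; on a compute node, set CFG to the real agent ini path.
CFG=$(mktemp /tmp/networking-ovn-metadata-agent.XXXXXX.ini)
printf '[ovn]\novn_sb_connection = tcp:172.17.1.21:6642\n' > "$CFG"

# Timestamped backup next to the original, preserving attributes.
BACKUP="${CFG}.bak.$(date +%Y%m%d%H%M%S)"
cp -a "$CFG" "$BACKUP"
cmp -s "$CFG" "$BACKUP" && echo "backup OK: $BACKUP"
```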

Check which controller is hosting the master ovn-dbs container:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_dbs -bm shell -a 'pcs status| egrep ovn-dbs-bundle.*Master'
```

Output:
```bash
controller-1 | CHANGED | rc=0 >>
    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0

controller-2 | CHANGED | rc=0 >>
    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0

controller-0 | CHANGED | rc=0 >>
    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
```
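If you need the master hostname in a script rather than by eyeballing the output, it can be pulled from the pcs line (sample line from the output above embedded; the `awk` assumes the hostname is the last field, which holds for the output shown):

```shell
# Parse the "Master <host>" line from the sample pcs output above.
PCS_LINE='    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0'
MASTER=$(printf '%s\n' "$PCS_LINE" | awk '/Master/ {print $NF}')
echo "$MASTER"
```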

Capture the current connection settings for the northbound and southbound DBs from the master node:
```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl get-connection'
```

```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-nbctl get-connection'
```
Output:
```bash
controller-0 | CHANGED | rc=0 >>
read-write role="" ptcp:6642:172.17.1.21
```

```bash
controller-0 | CHANGED | rc=0 >>
read-write role="" ptcp:6641:172.17.1.21
```

On the master node set the ovn-dbs server to listen on any address (note the southbound DB listens on 6642, per the get-connection output above):
```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl set-connection ptcp:6642:0.0.0.0'
```

Output:
```bash
controller-0 | CHANGED | rc=0 >>
```

On the master node set the ovndb_servers resource `listen_on_master_ip_only` attribute to `no`:

```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) pcs resource update ovndb_servers listen_on_master_ip_only="no"'
```

Restart the bundle:
```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'pcs resource restart ovn-dbs-bundle'
```

Capture the internal IP addresses for each controller from the undercloud:
```bash
openstack port list --network internal_api -f value -c Name -c 'Fixed IP Addresses' | grep -i controller
```

Output:
```bash
controller-1_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.20'}]
controller-2_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.137'}]
controller-0_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.62'}]
```
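The comma-separated `ovn_sb_connection` value used in the next step can be built from that output; a sketch using the sample lines above (the order of the endpoints doesn't matter):

```shell
# Sample `openstack port list` output from above.
PORTS="controller-1_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.20'}]
controller-2_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.137'}]
controller-0_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.62'}]"

# Extract each ip_address, prefix tcp:, suffix :6642, join with commas.
SB_CONN=$(printf '%s\n' "$PORTS" \
  | sed -n "s/.*'ip_address': '\([0-9.]*\)'.*/tcp:\1:6642/p" \
  | paste -sd, -)
echo "$SB_CONN"
```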

On all nodes running ovn_metadata containers, set the ovn-metadata agent to connect to any of the controllers using their internal_api IP addresses (the output below shows this was run via ansible across the computes):
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a 'crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642'
```

Output:
```bash
compute-1 | CHANGED | rc=0 >>

compute-0 | CHANGED | rc=0 >>
```

Restart the ovn-metadata service:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a 'systemctl restart tripleo_ovn_metadata_agent.service'
```

Proof:
```
2022-09-27 07:18:05.086 477633 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642 log_opt_values /usr/lib/python3.6/site-packages/oslo_config/cfg.py:2589

$ curl -I http://169.254.169.254/openstack/2012-08-10/meta_data.json
HTTP/1.1 200 OK
```

Comment 23 Jakub Libosvar 2022-09-27 13:56:50 UTC
(In reply to ldenny from comment #21)
> Regarding the metadata connections being split to all DBs I believe this[1]
> can be safely ran on the customers environment after the deployment has been
> ran:
> 
> Kuba could you please confirm this is what you did? I don't see the bomgar
> recording on the case so I couldn't verify myself sadly.
> 
> Cheers,
> Lewis
> 
> 
> [1] https://paste.opendev.org/show/bVcdTl92Z099upORWsss/
> 
> # Configuring metadata agents to connect to all southbound databases
> 
> Backup current config on nodes running ovn-metadata agent:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a
> 'crudini --get
> /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-
> ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection'
> ```
> 
> output:
> ```bash
> compute-0 | CHANGED | rc=0 >>
> tcp:172.17.1.21:6642
> 
> compute-1 | CHANGED | rc=0 >>
> tcp:172.17.1.21:6642
> ```
> 
> Check which controller is hosting the master ovn-dbs container:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) ovn_dbs -bm shell -a 'pcs
> status| egrep ovn-dbs-bundle.*Master'
> ```
> 
> Output:
> ```bash
> controller-1 | CHANGED | rc=0 >>
>     * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
> 
> controller-2 | CHANGED | rc=0 >>
>     * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
> 
> controller-0 | CHANGED | rc=0 >>
>     * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
> ```
> 
> Capture current connection settings for nb and southbound dbs from the
> master node:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl
> get-connection'
> ```
> 
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-nbctl
> get-connection'
> ```
> Output:
> ```shell
> controller-0 | CHANGED | rc=0 >>
> read-write role="" ptcp:6642:172.17.1.21
> ```
> 
> ```bash
> controller-0 | CHANGED | rc=0 >>
> read-write role="" ptcp:6641:172.17.1.21
> ```
> 
> On master node set the ovs-dbs server to listen on any address:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl
> set-connection ptcp:6641:0.0.0.0'
> ```

You don't need to find the master node if you use `--db=tcp:172.17.1.21:6641` or `--db=tcp:172.17.1.21:6642` with each ovn-nbctl or ovn-sbctl call, respectively. For example, you can pick just controller-0 and do:

```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl --db=tcp:172.17.1.21:6642 set-connection ptcp:6642:0.0.0.0'
```

This can make it a bit easier and lets you omit some steps. Just note, as Terry pointed out, the DBs will now also listen on the external network, not just the internal one (I actually don't know how their networks are designed), so it would be good to firewall ports 6641 and 6642 from that network.
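A hedged sketch of what that firewalling could look like with iptables. The internal_api subnet 172.17.1.0/24 is an assumption based on the addresses shown earlier; the block only prints the rules, so they can be reviewed against the actual network layout before being applied as root:

```shell
# Assumed internal subnet; adjust to the real internal_api CIDR.
SUBNET=172.17.1.0/24

for PORT in 6641 6642; do
  RULE="-p tcp --dport $PORT ! -s $SUBNET -j DROP"
  # Print only; run the printed command as root to apply it.
  echo "iptables -A INPUT $RULE"
done
```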

> 
> Output:
> ```bash
> controller-0 | CHANGED | rc=0 >>
> ```
> 
> On the master node set the ovndb_servers resource `listen_on_master_ip_only`
> attribute to `no`:
> 
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) pcs resource
> update ovndb_servers listen_on_master_ip_only="no"'
> ```
> 
> Restart the bundle:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'pcs
> resource restart ovn-dbs-bundle'
> ```

Don't do the restart here, because the 0.0.0.0 address was already set before; the listen_on_master_ip_only="no" part is just for future restarts.

> 
> Capture the internal IP address' for each controller from the undercloud:
> ```bash
> openstack port list --network internal_api -f value -c Name -c 'Fixed IP
> Addresses' | grep -i controller
> ```
> 
> Output:
> ```bash
> controller-1_InternalApi [{'subnet_id':
> 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.20'}]
> controller-2_InternalApi [{'subnet_id':
> 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.137'}]
> controller-0_InternalApi [{'subnet_id':
> 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.62'}]
> ```
> 
> On all nodes running ovn_metadata containers set the ovn-metadata agent to
> connect to any of the controllers using there internal_api IP address:
> ```bash
> crudini --set
> /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-
> ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection
> tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642
> ```

```bash
crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini agent report_agent True
```

> 
> Output:
> ```bash
> compute-1 | CHANGED | rc=0 >>
> 
> compute-0 | CHANGED | rc=0 >>
> ```
> 
> Restart the ovn-metadata service:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a
> 'systemctl restart tripleo_ovn_metadata_agent.service'
> ```
> 
> Proof:
> ```
> 2022-09-27 07:18:05.086 477633 DEBUG oslo_service.service [-]
> ovn.ovn_sb_connection          =
> tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642
> log_opt_values /usr/lib/python3.6/site-packages/oslo_config/cfg.py:2589
> 
> $ curl -I http://169.254.169.254/openstack/2012-08-10/meta_data.json
> HTTP/1.1 200 OK
> ```

Otherwise looks good to me. Thanks for putting it together!

Comment 25 Slawek Kaplonski 2022-09-28 07:34:49 UTC
*** Bug 2124874 has been marked as a duplicate of this bug. ***

Comment 76 errata-xmlrpc 2022-12-07 19:24:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.4), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8794

