Bug 2124877

Summary: Instance with one port gets two ip addresses or more
Product: Red Hat OpenStack
Component: python-ovsdbapp
Reporter: Eduard Barrera <ebarrera>
Assignee: Terry Wilson <twilson>
Status: CLOSED ERRATA
QA Contact: Toni Freger <tfreger>
Severity: urgent
Priority: urgent
Version: 16.2 (Train)
Target Milestone: z4
Target Release: 16.2 (Train on RHEL 8.4)
Keywords: Triaged
Flags: skaplons: needinfo-
Hardware: All
OS: All
CC: abhijadh, bcafarel, bmv, chopark, chrisw, dasmith, egarciar, eglynn, ekuris, eolivare, gkadam, hchatter, jbeaudoi, jhakimra, jhardee, jlibosva, jschluet, kchamart, ldenny, lmiccini, mflusche, nnavarat, praveen.k.dubey, sbauza, schhabdi, scohen, sgordon, skaplons, smooney, stchen, tvignaud, twilson, vromanso
Fixed In Version: python-ovsdbapp-0.17.6-2.20220923174727.el8ost
Clones: 2137682, 2137698 (view as bug list)
Bug Blocks: 2137682, 2137698
Last Closed: 2022-12-07 19:24:40 UTC
Type: Bug

Comment 21 ldenny 2022-09-27 07:40:20 UTC
Regarding the metadata connections being split to all DBs, I believe this[1] can be safely run on the customer's environment after the deployment has been run:

Kuba, could you please confirm this is what you did? I don't see the Bomgar recording on the case, so sadly I couldn't verify it myself.

Cheers,
Lewis


[1] https://paste.opendev.org/show/bVcdTl92Z099upORWsss/

# Configuring metadata agents to connect to all southbound databases

Record the current config on the nodes running the ovn-metadata agent:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a 'crudini --get /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection'
```

output:
```bash
compute-0 | CHANGED | rc=0 >>
tcp:172.17.1.21:6642

compute-1 | CHANGED | rc=0 >>
tcp:172.17.1.21:6642
```
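The `crudini --get` above only records the current value in the ansible output; to keep an actual backup copy of the file before changing it, a `cp` through the same ansible wrapper would work. A local sketch of the idea (`$cfg` is a stand-in for the real networking-ovn-metadata-agent.ini path on the overcloud nodes):

```shell
# Sketch: copy the agent config aside before modifying it.
# "$cfg" stands in for the real networking-ovn-metadata-agent.ini path.
cfg=$(mktemp /tmp/metadata-agent.XXXXXX)
printf '[ovn]\novn_sb_connection = tcp:172.17.1.21:6642\n' > "$cfg"
cp -p "$cfg" "$cfg.bak"   # a dated name also works: "$cfg.$(date +%F).bak"
diff "$cfg" "$cfg.bak" && echo "backup matches"
```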

Check which controller is hosting the master ovn-dbs container:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_dbs -bm shell -a 'pcs status| egrep ovn-dbs-bundle.*Master'
```

Output:
```bash
controller-1 | CHANGED | rc=0 >>
    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0

controller-2 | CHANGED | rc=0 >>
    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0

controller-0 | CHANGED | rc=0 >>
    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
```
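If you only want the master's hostname out of that output, the last field of the pcs line carries it. A sketch against the sample line above (the awk field position assumes the `pcs status` format shown):

```shell
# Sketch: extract the controller hosting the OVN DB master from a pcs-style line.
pcs_line='    * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0'
master=$(printf '%s\n' "$pcs_line" | awk '{print $NF}')   # last field is the host
echo "$master"
```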

Capture the current connection settings for the northbound and southbound DBs from the master node:
```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl get-connection'
```

```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-nbctl get-connection'
```
Output:
```bash
controller-0 | CHANGED | rc=0 >>
read-write role="" ptcp:6642:172.17.1.21
```

```bash
controller-0 | CHANGED | rc=0 >>
read-write role="" ptcp:6641:172.17.1.21
```

On the master node, set the ovn-dbs server to listen on any address (the southbound DB listens on 6642, as shown above):
```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl set-connection ptcp:6642:0.0.0.0'
```

Output:
```bash
controller-0 | CHANGED | rc=0 >>
```

On the master node set the ovndb_servers resource `listen_on_master_ip_only` attribute to `no`:

```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) pcs resource update ovndb_servers listen_on_master_ip_only="no"'
```

Restart the bundle:
```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'pcs resource restart ovn-dbs-bundle'
```

Capture the internal IP addresses of each controller from the undercloud:
```bash
openstack port list --network internal_api -f value -c Name -c 'Fixed IP Addresses' | grep -i controller
```

Output:
```bash
controller-1_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.20'}]
controller-2_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.137'}]
controller-0_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.62'}]
```
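If you want the bare addresses rather than the full port listing, the `ip_address` values can be grepped out of that output. A sketch over the sample output above:

```shell
# Sketch: pull the ip_address values out of the port-list output above.
ports="controller-1_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.20'}]
controller-2_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.137'}]
controller-0_InternalApi [{'subnet_id': 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.62'}]"
printf '%s\n' "$ports" | grep -o "'ip_address': '[^']*'" | cut -d"'" -f4
```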

On all nodes running ovn_metadata containers, set the ovn-metadata agent to connect to any of the controllers using their internal_api IP addresses:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a 'crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642'
```

Output:
```bash
compute-1 | CHANGED | rc=0 >>

compute-0 | CHANGED | rc=0 >>
```
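The comma-separated `ovn_sb_connection` value can also be assembled from the controller IPs rather than typed by hand. A small sketch:

```shell
# Sketch: join the controller internal_api IPs into an ovn_sb_connection value.
ips="172.17.1.62 172.17.1.20 172.17.1.137"
conn=""
for ip in $ips; do
    conn="${conn:+$conn,}tcp:$ip:6642"   # comma-separate all but the first entry
done
echo "$conn"
```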

Restart the ovn-metadata service:
```bash
ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a 'systemctl restart tripleo_ovn_metadata_agent.service'
```

Proof:
```
2022-09-27 07:18:05.086 477633 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642 log_opt_values /usr/lib/python3.6/site-packages/oslo_config/cfg.py:2589

$ curl -I http://169.254.169.254/openstack/2012-08-10/meta_data.json
HTTP/1.1 200 OK
```

Comment 23 Jakub Libosvar 2022-09-27 13:56:50 UTC
(In reply to ldenny from comment #21)
> Regarding the metadata connections being split to all DBs I believe this[1]
> can be safely ran on the customers environment after the deployment has been
> ran:
> 
> Kuba could you please confirm this is what you did? I don't see the bomgar
> recording on the case so I couldn't verify myself sadly.
> 
> Cheers,
> Lewis
> 
> 
> [1] https://paste.opendev.org/show/bVcdTl92Z099upORWsss/
> 
> # Configuring metadata agents to connect to all southbound databases
> 
> Backup current config on nodes running ovn-metadata agent:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a
> 'crudini --get
> /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-
> ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection'
> ```
> 
> output:
> ```bash
> compute-0 | CHANGED | rc=0 >>
> tcp:172.17.1.21:6642
> 
> compute-1 | CHANGED | rc=0 >>
> tcp:172.17.1.21:6642
> ```
> 
> Check which controller is hosting the master ovn-dbs container:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) ovn_dbs -bm shell -a 'pcs
> status| egrep ovn-dbs-bundle.*Master'
> ```
> 
> Output:
> ```bash
> controller-1 | CHANGED | rc=0 >>
>     * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
> 
> controller-2 | CHANGED | rc=0 >>
>     * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
> 
> controller-0 | CHANGED | rc=0 >>
>     * ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-0
> ```
> 
> Capture current connection settings for nb and southbound dbs from the
> master node:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl
> get-connection'
> ```
> 
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-nbctl
> get-connection'
> ```
> Output:
> ```shell
> controller-0 | CHANGED | rc=0 >>
> read-write role="" ptcp:6642:172.17.1.21
> ```
> 
> ```bash
> controller-0 | CHANGED | rc=0 >>
> read-write role="" ptcp:6641:172.17.1.21
> ```
> 
> On master node set the ovs-dbs server to listen on any address:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl
> set-connection ptcp:6641:0.0.0.0'
> ```

You don't need to find the master node if you use `--db=tcp:172.17.1.21:6641` or `--db=tcp:172.17.1.21:6642` with each ovn-nbctl or ovn-sbctl call, respectively. For example, you can pick just controller-0 and do:

```bash
ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) ovn-sbctl --db=tcp:172.17.1.21:6642 set-connection ptcp:6642:0.0.0.0'
```

This can make it a bit easier and omit some steps. Just note, as Terry pointed out, that the DBs will now also listen on the external network, not just the internal one (I actually don't know how their networks are designed), so it would be good to firewall ports 6641 and 6642 from that network.
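As a sketch of that firewalling (dry run only — it just prints the rules; `ens3` is a hypothetical external interface name, and the real rules would depend on how the networks are laid out):

```shell
# Dry run: print iptables rules that would drop external access to the OVN DB ports.
ext_if=ens3   # hypothetical external interface name
for port in 6641 6642; do
    echo "iptables -A INPUT -i $ext_if -p tcp --dport $port -j DROP"
done
```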

> 
> Output:
> ```bash
> controller-0 | CHANGED | rc=0 >>
> ```
> 
> On the master node set the ovndb_servers resource `listen_on_master_ip_only`
> attribute to `no`:
> 
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a
> 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle-podman) pcs resource
> update ovndb_servers listen_on_master_ip_only="no"'
> ```
> 
> Restart the bundle:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) controller-0 -bm shell -a 'pcs
> resource restart ovn-dbs-bundle'
> ```

Don't do the restart here, because the 0.0.0.0 address was already set before; the listen_on_master_ip_only="no" part only takes effect on future restarts.

> 
> Capture the internal IP address' for each controller from the undercloud:
> ```bash
> openstack port list --network internal_api -f value -c Name -c 'Fixed IP
> Addresses' | grep -i controller
> ```
> 
> Output:
> ```bash
> controller-1_InternalApi [{'subnet_id':
> 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.20'}]
> controller-2_InternalApi [{'subnet_id':
> 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.137'}]
> controller-0_InternalApi [{'subnet_id':
> 'dfab868f-abff-4024-9d63-5b3e06590ccb', 'ip_address': '172.17.1.62'}]
> ```
> 
> On all nodes running ovn_metadata containers set the ovn-metadata agent to
> connect to any of the controllers using there internal_api IP address:
> ```bash
> crudini --set
> /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-
> ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection
> tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642
> ```

You could also enable agent reporting in the same file:

```bash
crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini agent report_agent True
```

> 
> Output:
> ```bash
> compute-1 | CHANGED | rc=0 >>
> 
> compute-0 | CHANGED | rc=0 >>
> ```
> 
> Restart the ovn-metadata service:
> ```bash
> ansible -i $(which tripleo-ansible-inventory) ovn_metadata -bm shell -a
> 'systemctl restart tripleo_ovn_metadata_agent.service'
> ```
> 
> Proof:
> ```
> 2022-09-27 07:18:05.086 477633 DEBUG oslo_service.service [-]
> ovn.ovn_sb_connection          =
> tcp:172.17.1.62:6642,tcp:172.17.1.20:6642,tcp:172.17.1.137:6642
> log_opt_values /usr/lib/python3.6/site-packages/oslo_config/cfg.py:2589
> 
> $ curl -I http://169.254.169.254/openstack/2012-08-10/meta_data.json
> HTTP/1.1 200 OK
> ```

Otherwise looks good to me. Thanks for putting it together!

Comment 25 Slawek Kaplonski 2022-09-28 07:34:49 UTC
*** Bug 2124874 has been marked as a duplicate of this bug. ***

Comment 76 errata-xmlrpc 2022-12-07 19:24:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.4), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8794