Bug 2138135 - ovs-monitor-ipsec being single-threaded causes long delays in large ipsec OpenShift clusters
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch2.17
Version: FDP 22.F
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Mohammad Heib
QA Contact: ovs-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-10-27 11:12 UTC by PX_OpenShift
Modified: 2024-10-08 17:49 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-10-08 17:49:14 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Issue Tracker FD-2409 (last updated 2022-10-27 15:18:38 UTC)
Red Hat Issue Tracker OCPBUGS-2598 (last updated 2022-10-27 11:12:26 UTC)

Description PX_OpenShift 2022-10-27 11:12:24 UTC
Description of problem:
This is a follow-up of https://issues.redhat.com/browse/OCPBUGS-2598 but for the ovs-ipsec-monitor script.

TL;DR:
a) the Python UnixctlServer appears to be single-threaded and causes head-of-line blocking with long-running commands. In clusters with many tunnel endpoints (>= 100 tunnels), one can easily trigger this by running:
{code}
ovs-appctl -t ovs-monitor-ipsec refresh
{code}

If one now tries to query the status from a second shell, the command will hang until the earlier refresh finishes:
{code}
ovs-appctl -t ovs-monitor-ipsec ipsec/status
{code}
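For illustration, the blocking can be measured with a small script like the one below (a minimal sketch; it assumes ovs-appctl is on PATH and ovs-monitor-ipsec is running with many tunnels configured):
{code}
#!/usr/bin/env python3
# Minimal sketch: measure how long ipsec/status is blocked behind refresh.
import subprocess
import threading
import time

# Occupy the single-threaded UnixctlServer with the long-running refresh.
refresh = threading.Thread(
    target=subprocess.call,
    args=(["ovs-appctl", "-t", "ovs-monitor-ipsec", "refresh"],))
refresh.start()
time.sleep(1)  # give the refresh a head start

# This should return almost immediately, but it hangs until the refresh is done.
start = time.time()
subprocess.call(["ovs-appctl", "-t", "ovs-monitor-ipsec", "ipsec/status"])
print("ipsec/status blocked for %.1f seconds" % (time.time() - start))

refresh.join()
{code}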

b) connection activation during `ovs-appctl -t ovs-monitor-ipsec refresh` is single-threaded as well, and extremely slow, as it iterates over the in and out side of each tunnel and attempts to start each connection with
{code}
/usr/bin/sh /usr/libexec/ipsec/auto --config /etc/ipsec.conf --ctlsocket /run/pluto/pluto.ctl --start --asynchronous ovn-5f74d4-0-out-1
{code} 

The combination of a) and b) causes two issues:
i) our probes time out and thus the ipsec containers never come up; we are going to address this by adjusting our probes in https://issues.redhat.com/browse/OCPBUGS-2598
ii) there is a wider issue as well: workloads are being scheduled onto a node even when that node's ovn-ipsec pod has not yet established all of its tunnels, which causes connectivity to fail during tunnel setup.

We are talking about tunnel setup delays on the order of minutes; IMO that is not acceptable, and we should bring this down to the order of seconds, but definitely under a minute.
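As one possible direction (a sketch only, not a patch against ovs-monitor-ipsec), the per-connection `ipsec auto --start --asynchronous` invocations could be fanned out over a small thread pool instead of being run one after another. The connection names and worker count below are hypothetical placeholders; the config and ctlsocket paths mirror the command shown above:
{code}
# Sketch only: start the IPsec connections concurrently instead of serially.
# The config/ctlsocket paths mirror the command shown above; "conns" stands in
# for the per-tunnel connection names that ovs-monitor-ipsec derives from OVSDB.
from concurrent.futures import ThreadPoolExecutor
import subprocess

IPSEC_CONF = "/etc/ipsec.conf"
IPSEC_CTL = "/run/pluto/pluto.ctl"

def start_conn(conn):
    # Same command as in the serial loop, just issued from a worker thread.
    return subprocess.call(["ipsec", "auto",
                            "--config", IPSEC_CONF,
                            "--ctlsocket", IPSEC_CTL,
                            "--start", "--asynchronous", conn])

def start_all(conns, workers=16):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(start_conn, conns))

# Hypothetical connection names, e.g. the in/out sides of one tunnel:
start_all(["ovn-5f74d4-0-in-1", "ovn-5f74d4-0-out-1"])
{code}
Whether this helps in practice depends on how well pluto copes with concurrent requests on its control socket, so treat it as an illustration of the direction rather than a tested fix.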

Comment 2 Andreas Karis 2022-10-27 11:21:05 UTC
This can easily be reproduced on a standalone VM with an OVS setup; e.g., something with the following nmstate configuration:
{code}
---
interfaces:
- name: br-ex
  description: ovs bridge with eth1 as a port
  type: ovs-bridge
  state: up
  bridge:
    options:
      stp: false
    port:
    - name: br-ex
- name: br-ex
  type: ovs-interface
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.0.2.11
      prefix-length: 24
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.123.11
      prefix-length: 24
{code}

Then, add a bunch of dummy geneve tunnels (the destinations do not have to exist):
{code}
for i in {20..200}; do
  ip_1=192.168.123.11
  ip_2=192.168.123.$i
  ovs-vsctl add-port br-ex tun$i -- set interface tun$i type=geneve options:remote_ip=$ip_2 options:psk=swordfish
  ovs-vsctl set Interface tun$i options:local_ip=$ip_1
done
{code}

Then, hack the script to add some debug output as needed:
{code}
vim /usr/share/openvswitch/scripts/ovs-monitor-ipsec
(...)
 587     def refresh(self, monitor):                                                                                         
 588         vlog.info("Refreshing LibreSwan configuration")                                                                 
 589         start = time.time()                                                                                             
 590         subprocess.call([self.IPSEC, "auto", "--ctlsocket", self.IPSEC_CTL,                                             
 591                         "--config", self.IPSEC_CONF, "--rereadsecrets"])                                                
(...)                                             
 677                 subprocess.call([self.IPSEC, "auto",                                                                    
 678                             "--config", self.IPSEC_CONF,                                                                
 679                             "--ctlsocket", self.IPSEC_CTL,                                                              
 680                             "--delete",                                                                                 
 681                             "--asynchronous", "prevent_unencrypted_vxlan"])                                             
 682             monitor.conf_in_use["skb_mark"] = monitor.conf["skb_mark"]                                                  
 683         end = time.time()                                                                                               
 684         vlog.warn("Refresh elapsed time: %f" %  (end - start))  
{code}

And start the openvswitch-ipsec service:
{code}
systemctl restart openvswitch-ipsec.service
{code}

Then, trigger a refresh:
{code}
ovs-appctl -t ovs-monitor-ipsec refresh
{code}

And with the aforementioned debug output:
{code}
Oct 25 13:49:01 ovn-ipsec1 ovs-monitor-ips[12385]: ovs| 350 | ovs-monitor-ipsec | INFO | Refreshing LibreSwan configuration
Oct 25 13:51:50 ovn-ipsec1 ovs-monitor-ips[12385]: ovs| 351 | ovs-monitor-ipsec | WARN | Refresh elapsed time: 169.737755
{code}
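For scale: 169.7 seconds for the 181 dummy tunnels from the loop above works out to roughly 0.9 seconds per tunnel, i.e. around half a second for each serial `ipsec auto` call (one in and one out connection per tunnel).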

And with the following, one can observe that it loops through the activation commands one at a time:
{code}
watch "ps aux | grep auto"
{code}

Comment 3 Mohammad Heib 2023-07-05 11:47:42 UTC
Hi @akaris 

Sorry for the late comment, I was on a long PTO.
Is this issue still relevant?

Thanks,
Mohammad

Comment 4 Andreas Karis 2023-07-05 18:15:31 UTC
Disclaimer: I switched teams and I'm not on the code-writing side of things any more. But I think this is still relevant. We worked around the issue in OCP by removing some monitoring, but I assume that the initial startup delay is still there - it would be nice if the script / ipsec tunnel activation could be parallelized, if at all possible. But if you want a more definitive answer, reach out to the SDN team :-)

Comment 5 Mohammad Heib 2023-07-05 19:48:47 UTC
Thank you for your quick response, I will try to reproduce it with the reproducer above.
Again, thank you so much for the response and good luck with the new team 🙏

Comment 6 ovs-bot 2024-10-08 17:49:14 UTC
This bug did not meet the criteria for automatic migration and is being closed.
If the issue remains, please open a new ticket in https://issues.redhat.com/browse/FDP

