Bug 2138135 - ovs-monitor-ipsec being single-threaded causes long delays in large ipsec OpenShift clusters
Summary: ovs-monitor-ipsec being single-threaded causes long delays in large ipsec OpenShift clusters
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch2.17
Version: FDP 22.F
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Mohammad Heib
QA Contact: ovs-qe
Depends On:
Reported: 2022-10-27 11:12 UTC by PX_OpenShift
Modified: 2023-07-05 19:48 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed:
Target Upstream Version:

Attachments: none

System ID                            Last Updated
Red Hat Issue Tracker FD-2409        2022-10-27 15:18:38 UTC
Red Hat Issue Tracker OCPBUGS-2598   2022-10-27 11:12:26 UTC

Description PX_OpenShift 2022-10-27 11:12:24 UTC
Description of problem:
This is a follow-up of an earlier bug, but for the ovs-monitor-ipsec script.

a) the python UnixctlServer seems to be single-threaded and causes head-of-line blocking with long-running commands. In clusters with many tunnel endpoints (>= 100 tunnels), one can easily trigger this by running:
ovs-appctl -t ovs-monitor-ipsec refresh

If one now tries to query the status in a second CLI, the command will hang until the earlier refresh finishes:
ovs-appctl -t ovs-monitor-ipsec ipsec/status
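
The same head-of-line blocking can be demonstrated from a single shell (illustrative only; `time` just makes the stall visible):
ovs-appctl -t ovs-monitor-ipsec refresh &
time ovs-appctl -t ovs-monitor-ipsec ipsec/status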

b) connection activation during `ovs-appctl -t ovs-monitor-ipsec refresh` is single-threaded as well, and is extremely slow, as it iterates over the in and out side of each tunnel and attempts to start each connection with
/usr/bin/sh /usr/libexec/ipsec/auto --config /etc/ipsec.conf --ctlsocket /run/pluto/pluto.ctl --start --asynchronous ovn-5f74d4-0-out-1
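
To get a feel for the per-connection cost, one can time a single activation call (the connection name here is just the example from above; actual names differ per cluster):
time /usr/bin/sh /usr/libexec/ipsec/auto --config /etc/ipsec.conf --ctlsocket /run/pluto/pluto.ctl --start --asynchronous ovn-5f74d4-0-out-1
With two such calls per tunnel (one for the -in and one for the -out connection), even a fraction of a second per call adds up to minutes once there are >= 100 tunnels.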

The combination of a) and b) causes 2 issues:
i) timeouts of our probes and thus ipsec containers never coming up; we are going to address this by adjusting our probes.
ii) we have a wider issue as well: workloads are being scheduled onto a node even when that node's ovn-ipsec pod has not yet established all of its tunnels. That causes connectivity to fail during tunnel setup.

We are talking about tunnel-setup delays on the order of minutes; IMO that's not acceptable and we should do this much faster, on the order of seconds, but definitely under a minute.

Comment 2 Andreas Karis 2022-10-27 11:21:05 UTC
This can easily be reproduced on a standalone VM with an OVS setup; e.g., something like the following nmstate configuration (the addresses below are placeholders):
interfaces:
- name: br-ex
  description: ovs bridge with eth1 as a port
  type: ovs-bridge
  state: up
  bridge:
    options:
      stp: false
    port:
    - name: br-ex
    - name: eth1
- name: br-ex
  type: ovs-interface
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.0.2.1        # placeholder address
      prefix-length: 24
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 198.51.100.1     # placeholder address
      prefix-length: 24
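
Save the snippet to a file (the file name is arbitrary, e.g. br-ex.yml) and apply it:
nmstatectl apply br-ex.yml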

Then, add a bunch of dummy geneve tunnels (the destinations do not have to exist; $ip_1 and $ip_2 are the local and remote tunnel endpoint addresses, respectively):
for i in {20..200}; do
  ovs-vsctl add-port br-ex tun$i -- set interface tun$i type=geneve options:remote_ip=$ip_2 options:psk=swordfish
  ovs-vsctl set Interface tun$i options:local_ip=$ip_1
done
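
Optionally, sanity-check that the tunnel ports were created:
ovs-vsctl list-ports br-ex | grep -c '^tun'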

Then, hack the script to add some debug output as needed, e.g. by timing the start and end of the LibreSwan helper's refresh() method:
vim /usr/share/openvswitch/scripts/ovs-monitor-ipsec

    def refresh(self, monitor):
        vlog.info("Refreshing LibreSwan configuration")
        start = time.time()    # added: record when the refresh starts
        subprocess.call([self.IPSEC, "auto", "--ctlsocket", self.IPSEC_CTL,
                         "--config", self.IPSEC_CONF, "--rereadsecrets"])
        ...
            subprocess.call([self.IPSEC, "auto",
                             "--config", self.IPSEC_CONF,
                             "--ctlsocket", self.IPSEC_CTL,
                             "--delete",
                             "--asynchronous", "prevent_unencrypted_vxlan"])
            monitor.conf_in_use["skb_mark"] = monitor.conf["skb_mark"]
        end = time.time()      # added: record when the refresh ends
        vlog.warn("Refresh elapsed time: %f" % (end - start))

And start the openvswitch-ipsec service:
systemctl restart openvswitch-ipsec.service

Then, trigger a refresh:
ovs-appctl -t ovs-monitor-ipsec refresh
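
The timing output added above can then be followed in the journal (assuming the service logs to the journal, as the output below suggests):
journalctl -u openvswitch-ipsec.service -f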

And with the aforementioned debug output:
Oct 25 13:49:01 ovn-ipsec1 ovs-monitor-ips[12385]: ovs| 350 | ovs-monitor-ipsec | INFO | Refreshing LibreSwan configuration
Oct 25 13:51:50 ovn-ipsec1 ovs-monitor-ips[12385]: ovs| 351 | ovs-monitor-ipsec | WARN | Refresh elapsed time: 169.737755
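
As a rough back-of-the-envelope check (assuming the elapsed time is dominated by the per-connection 'ipsec auto' calls, two per tunnel for the 181 tunnels created above):
echo "scale=2; 169.74 / (181 * 2)" | bc    # roughly half a second per 'ipsec auto' invocation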

And with the following one can observe that it's looping through the activation commands:
watch "ps aux | grep auto"

Comment 3 Mohammad Heib 2023-07-05 11:47:42 UTC
Hi @akaris

Sorry for the late comment, I was on a long PTO.
Is this issue still relevant?


Comment 4 Andreas Karis 2023-07-05 18:15:31 UTC
Disclaimer: I switched teams and I'm not on the code-writing side of things any more. But I think this is still relevant. We worked around the issue in OCP by removing some monitoring, but I assume that the initial startup delay is still there - it would be nice if the script / ipsec tunnel activation could be parallelized, if at all possible. But if you want a more definitive answer, reach out to the SDN team :-)
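
Purely as a sketch of that idea (this is not what ovs-monitor-ipsec does today, and conns.txt is a hypothetical file holding one connection name per line), the per-connection activation commands could in principle be issued concurrently, e.g.:
xargs -P 8 -I{} /usr/libexec/ipsec/auto --config /etc/ipsec.conf --ctlsocket /run/pluto/pluto.ctl --start --asynchronous {} < conns.txt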

Comment 5 Mohammad Heib 2023-07-05 19:48:47 UTC
Thank you for your quick response, I will try to reproduce it with the reproducer above.
Again, thank you so much for the response and good luck with the new team 🙏
