Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1995631

Summary: VRF CNI needs additional kernel options to be installed cgroup_no_v1=net_prio,net_cls
Product: OpenShift Container Platform Reporter: Nikita <nkononov>
Component: Networking Assignee: Federico Paolinelli <fpaoline>
Networking sub component: multus QA Contact: Weibin Liang <weliang>
Status: CLOSED WONTFIX Docs Contact: Padraig O'Grady <pogrady>
Severity: high    
Priority: high CC: cgoncalves, dosmith, elevin, fpaoline, pogrady, sscheink, yjoseph
Version: 4.9 Keywords: AutomationBlocker, Reopened
Target Milestone: ---   
Target Release: ---   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Known Issue
Doc Text:
Cause: Using a VRF via "ip vrf exec ..." does not work due to a cgroups mismatch.
Consequence: "ip vrf exec" cannot be used inside OpenShift pods.
Workaround (if any):
Result: Applications that want to use a VRF must be VRF-aware and bind directly to the VRF interface.
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-08-31 07:48:48 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Nikita 2021-08-19 14:11:23 UTC
Description of problem:

In order to use VRFs inside a pod, we need to disable the v1 cgroup controllers, as shown in the example below:
cgroup_no_v1=net_prio,net_cls

It looks like a known kernel issue: https://bugzilla.kernel.org/show_bug.cgi?id=203483
 

Without these kernel options, an application cannot bind a socket inside a VRF.
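
On OpenShift, kernel arguments are normally applied to nodes through a MachineConfig. A minimal sketch of applying the kernel options above might look like the following (the object name and the worker role label are illustrative, not taken from this bug):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  # illustrative name; the 99- prefix keeps it after the rendered defaults
  name: 99-worker-cgroup-no-v1
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    # disable the v1 net_prio and net_cls cgroup controllers
    - cgroup_no_v1=net_prio,net_cls
```

Applying this triggers a rolling reboot of the affected machine pool.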
 

Version-Release number of selected component (if applicable):
4.7/4.8/4.9

How reproducible:
Run a pod inside OCP with VRF "red" and try to bind a socket using the following command:
ip vrf exec red httpd -X -C "ServerName 10.128.2.212" -c "Listen 10.128.2.212:80"

You will get an error:
(99)Cannot assign requested address: AH00072: make_sock: could not bind to address 10.128.2.212:80
no listening sockets available, shutting down

The same happens with the nginx application.

"ip vrf exec" is the recommended way to run an application inside a VRF according to: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_starting-a-service-within-an-isolated-vrf-network_configuring-and-managing-networking


Steps to Reproduce:
1. Run pod with VRF configured
2. Try to run httpd inside vrf
ip vrf exec red httpd -X -C "ServerName 10.128.2.212" -c "Listen 

Actual results:
(99)Cannot assign requested address: AH00072: make_sock: could not bind to address 10.128.2.212:80
no listening sockets available, shutting down

Expected results:
The httpd process should run inside the VRF.

Additional info:
Workaround:
Disable the v1 net_cls and net_prio cgroup controllers in the kernel:

cgroup_no_v1=net_prio,net_cls
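
For VRF-aware applications, the alternative named in the Doc Text above is to bind directly to the VRF interface. A minimal sketch of how that could be done (assuming Linux, an existing VRF device such as "red", and CAP_NET_RAW/CAP_NET_ADMIN in the pod; the helper name is hypothetical):

```python
import socket

def vrf_bound_socket(vrf_name: str) -> socket.socket:
    """Create a TCP socket bound to a VRF master device (Linux only).

    SO_BINDTODEVICE on the VRF master interface makes the socket use
    that VRF's routing table, without needing "ip vrf exec".
    """
    # SO_BINDTODEVICE is 25 on Linux; some Python builds do not expose it
    so_bindtodevice = getattr(socket, "SO_BINDTODEVICE", 25)
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # the option value is the NUL-terminated interface name
    s.setsockopt(socket.SOL_SOCKET, so_bindtodevice, vrf_name.encode() + b"\0")
    return s
```

Inside the pod, the application would then call e.g. `vrf_bound_socket("red").bind(("10.128.2.212", 80))` instead of relying on "ip vrf exec".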

Comment 4 Douglas Smith 2021-08-24 15:52:12 UTC
One thing that's worth noting is that as we move to cgroups v2, there are no net_cls or net_prio cgroups. So, from the looks of it, this should work OK with cgroups v2.

As far as impact on other systems, I haven't got any input, yet.
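
Whether a node is on the unified (v2) hierarchy that the comment above refers to can be checked with a small sketch like this (assuming the standard /sys/fs/cgroup mount point; the function name is illustrative):

```python
from pathlib import Path

def cgroup_mode(root: str = "/sys/fs/cgroup") -> str:
    """Detect the cgroup hierarchy mode on a Linux host.

    A pure cgroups v2 (unified) hierarchy exposes a cgroup.controllers
    file at its root; otherwise v1 (or a hybrid setup) is in use.
    """
    if (Path(root) / "cgroup.controllers").is_file():
        return "v2"
    return "v1-or-hybrid"
```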

Comment 7 Federico Paolinelli 2021-09-15 13:43:43 UTC
Closing as not a bug

Comment 11 Red Hat Bugzilla 2023-09-15 01:35:37 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days