Description of problem:
To use VRFs inside a pod, the cgroup v1 network controllers must be disabled, as in the example below:
cgroup_no_v1=net_prio,net_cls
This looks like a known kernel issue: https://bugzilla.kernel.org/show_bug.cgi?id=203483
Without that kernel option, an application cannot bind a socket inside a VRF.

Version-Release number of selected component (if applicable): 4.7/4.8/4.9

How reproducible:
Run a pod inside OCP with a VRF named "red" and try to bind a socket using the following command:
ip vrf exec red httpd -X -C "ServerName 10.128.2.212" -c "Listen 10.128.2.212:80"
You will get an error:
(99)Cannot assign requested address: AH00072: make_sock: could not bind to address 10.128.2.212:80
no listening sockets available, shutting down
The same happens with nginx. "ip vrf exec" is the recommended way to run an application inside a VRF according to:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_starting-a-service-within-an-isolated-vrf-network_configuring-and-managing-networking

Steps to Reproduce:
1. Run a pod with a VRF configured
2. Try to run httpd inside the VRF:
ip vrf exec red httpd -X -C "ServerName 10.128.2.212" -c "Listen 10.128.2.212:80"

Actual results:
(99)Cannot assign requested address: AH00072: make_sock: could not bind to address 10.128.2.212:80
no listening sockets available, shutting down

Expected results:
The httpd process should be running inside the VRF.

Additional info:
Workaround: disable the v1 network controllers in the kernel:
cgroup_no_v1=net_prio,net_cls
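On OpenShift, extra kernel command-line arguments are typically applied through a MachineConfig, so the workaround above could be rolled out cluster-wide roughly as sketched below. This is an illustrative config fragment, not taken from the report: the object name and the worker role label are assumptions, and the argument list simply mirrors the workaround given above.

```yaml
# Hypothetical MachineConfig applying the workaround kernel arguments
# to worker nodes; name and role label are illustrative.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-disable-cgroup-v1-net
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    # Keep the net_cls and net_prio v1 controllers from being mounted,
    # per the workaround described in this bug:
    - cgroup_no_v1=net_prio,net_cls
```

Applying a MachineConfig like this triggers a rolling reboot of the affected nodes, since kernel arguments only take effect at boot.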
One thing worth noting is that as we move to cgroups v2, there are no net_cls or net_prio controllers. So, from what it looks like, this should work fine with cgroups v2. As for the impact on other systems, I haven't received any input yet.
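Whether a node is already on the unified (v2) hierarchy can be checked from a shell: on a v2 host, /sys/fs/cgroup is a cgroup2fs mount and the net_cls/net_prio controller directories are absent. A minimal sketch, assuming a standard /sys/fs/cgroup mount:

```shell
# Print the filesystem type backing /sys/fs/cgroup:
#   cgroup2fs -> unified cgroups v2 hierarchy (no net_cls/net_prio controllers)
#   tmpfs     -> legacy or hybrid cgroups v1 layout
stat -fc %T /sys/fs/cgroup

# On a v1/hybrid host, the controller mount that triggers this issue exists:
ls -d /sys/fs/cgroup/net_cls,net_prio 2>/dev/null \
  || echo "net_cls/net_prio not mounted (consistent with cgroups v2)"
```

On a v2-only node the second command prints the fallback message, which matches the observation above that the problem should not occur once net_cls/net_prio no longer exist.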
Closing as not a bug
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days