Hello,

Description of problem:

On a Red Hat 7.x machine used as a router (default gateway to the Internet), the radvd process does not give IPv6 addresses (a /64 prefix) to internal hosts because it claims (wrongly) that IPv6 forwarding is disabled.

Scenario - on the router:

1. radvd is configured (/etc/radvd.conf) and enabled via systemctl (systemctl enable radvd.service)
2. The DHCPv6 server is configured (/etc/dhcp/dhcpd6.conf) and enabled via systemctl (systemctl enable dhcpd6.service), in order to convey IPv6 DNS addresses to internal hosts.
3. IPv6 forwarding is enabled both in /etc/sysctl.conf and in /etc/sysconfig/network.

In /etc/sysctl.conf:

net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1

In /etc/sysconfig/network:

IPV6FORWARDING=yes

(as instructed in /usr/share/doc/initscripts-9.49.24/sysconfig.txt)

However, after a restart, the default gateway does not give IPv6 addresses to internal hosts. "systemctl -l status radvd.service" says:

"IPv6 forwarding setting is: 0, should be 1"

The whole configuration does become active and functional if I manually run "sysctl -p" after the restart (or force IPv6 forwarding via "echo 1 > /proc/sys/net/ipv6/conf/all/forwarding" in /etc/rc.d/rc.local). However, doing so is suboptimal!

Version-Release number of selected component (if applicable):
radvd-1.9.2-7.el7.x86_64
kernel-3.10.0-229.7.2.el7.x86_64
initscripts-9.49.24-1.el7.x86_64
systemd-208-20.el7_1.5.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Cleanly install the system in the default configuration, with two Ethernet interfaces ("Basic server with GUI")
2. Statically configure both IPv4 and IPv6 addresses for each interface (external and internal)
3. Connect the external interface to the Internet and verify that you can freely access it via both IPv4 and IPv6
4. Configure and start radvd on the machine (as above), in order to give a /64 prefix to internal hosts
5. Configure and start DHCPv6 on the machine (as above), in order to give IPv6 DNS servers to internal hosts
6. Configure IPv6 forwarding (as above) in /etc/sysctl.conf and/or /etc/sysconfig/network
7. Connect the internal interface to an IPv6-enabled LAN (stations expect RAs in order to form IPv6 addresses)
8. Restart the gateway

Actual results:
Internal workstations don't get IPv6 addresses after the restart without manually forcing a "sysctl -p". "systemctl -l status radvd.service" says:

"IPv6 forwarding setting is: 0, should be 1"

Expected results:
Workstations should get IPv6 addresses and DNS servers from the gateway immediately after the restart and should be able to navigate the Internet via IPv6, with no manual "correction". The radvd and dhcpd6 processes should start quietly and flawlessly when enabled via "systemctl enable", since they are critical for the LAN. Official documentation should exist about how to configure such a SIMPLE setup and about correctly enabling IPv6 forwarding.

Additional info:
The same glitches are present on the RHEL 6.x line; please see related bugs #995693, #1197319, #1197324, #996727.

Best regards,
Răzvan
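For reference, a minimal radvd configuration matching step 4 might look like the sketch below. The internal interface name (eth1) and the prefix (2001:db8:1::/64, a documentation prefix) are placeholders, not values taken from the report:

```
# /etc/radvd.conf -- minimal sketch for the scenario above.
# "eth1" and 2001:db8:1::/64 are assumed placeholder values.
interface eth1
{
    AdvSendAdvert on;            # actually send RAs on this interface
    AdvOtherConfigFlag on;       # tell hosts to query DHCPv6 for DNS info
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;            # prefix is on-link
        AdvAutonomous on;        # hosts may form SLAAC addresses from it
    };
};
```

radvd refuses to start with this configuration unless net.ipv6.conf.all.forwarding is already 1, which is the symptom described above.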
That seems to be due to asymmetric shorewall configuration in IPv4 and IPv6.

The systems I've tested on are using shorewall (http://www.shorewall.net) as their firewall solution.

While the *default* /etc/shorewall/shorewall.conf says:

IP_FORWARDING=On

the equivalent *default* /etc/shorewall6/shorewall6.conf says:

IP_FORWARDING=Off

On a systemd system, the exact order in which the processes (radvd, shorewall, shorewall6) are started at boot is still not clear to me, so, if /etc/shorewall6/shorewall6.conf was not modified, it is possible that radvd finds IP_FORWARDING=Off for IPv6, *despite* manually putting

net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1

in /etc/sysctl.conf.

Pretty confusing...

Răzvan
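One way to see the mismatch is to compare the shorewall6 default against the kernel's runtime value. The sketch below extracts IP_FORWARDING from a sample shorewall6.conf-style file; the sample content is hypothetical and only mirrors the default quoted above. On a real router you would point CONF at /etc/shorewall6/shorewall6.conf and compare the result with `sysctl -n net.ipv6.conf.all.forwarding`:

```shell
# Sketch: read the IP_FORWARDING setting from a shorewall6.conf-style file.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# shorewall6.conf (excerpt, hypothetical sample)
IP_FORWARDING=Off
EOF

# Strip everything up to '=', keep the last assignment if repeated.
value=$(sed -n 's/^IP_FORWARDING=//p' "$CONF" | tail -n 1)
echo "shorewall6 IP_FORWARDING: $value"
rm -f "$CONF"
```

If this prints Off while /etc/sysctl.conf says forwarding should be 1, whichever service runs last wins, and radvd's startup check sees whatever value is current at that moment.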
(In reply to Răzvan Sandu from comment #3)

> That seems to be due to asymmetric shorewall configuration in IPv4 and IPv6.

Then we need to move the bug to shorewall. The problem is that shorewall is not part of RHEL; instead it is part of the Fedora EPEL repository. Luckily, EPEL is in the same bugzilla, so it is not hard to move the bug report.

> The systems I've tested on are using shorewall (http://www.shorewall.net) as
> their firewall solution.
>
> While the *default* /etc/shorewall/shorewall.conf says:
>
> IP_FORWARDING=On
>
> the equivalent *default* /etc/shorewall6/shorewall6.conf says:
>
> IP_FORWARDING=Off

That sounds like the problem. Could you please confirm that it works correctly with the configuration file changed?

We should also check the Fedora versions of the package. Although the bug is reported for EPEL, I'm adding it as a dependency for the dualstack networking tracker.

> On a systemd system, the exact order in which processes are started at
> boot (radvd, shorewall, shorewall6) is still not too clear,

Then, in my opinion, the shorewall unit file might need to be modified to provide explicit ordering. As radvd is not the only option for router advertisements (have you tried using dnsmasq instead, by the way?), we will probably need to cooperate across components.

> so, if
> /etc/shorewall6/shorewall6.conf was not modified, it is possible that radvd
> may find IP_FORWARDING=off for IPv6, *despite* manually putting

Just to avoid confusion, it's `net.ipv6.conf.*.forwarding` that radvd reads; therefore the service order, the /etc/sysctl.conf defaults and the shorewall defaults all matter.

> net.ipv6.conf.default.forwarding=1
> net.ipv6.conf.all.forwarding=1
>
> in /etc/sysctl.conf

That would IMO confirm that it's shorewall.
(In reply to Pavel Šimerda (pavlix) from comment #4)

> (In reply to Răzvan Sandu from comment #3)
> > The systems I've tested on are using shorewall (http://www.shorewall.net) as
> > their firewall solution.
> >
> > While the *default* /etc/shorewall/shorewall.conf says:
> >
> > IP_FORWARDING=On
> >
> > the equivalent *default* /etc/shorewall6/shorewall6.conf says:
> >
> > IP_FORWARDING=Off
>
> That sounds like the problem. Could you please confirm that it works
> correctly with the configuration file changed?
>
> We should also check Fedora versions of the package. Although the bug is
> reported for EPEL, I'm adding it as a dependency for the dualstack
> networking tracker.

Note that these are the default upstream settings. I'm querying upstream about it.
In the latest release of shorewall6 in EPEL7, the file /etc/shorewall6/shorewall6.conf now has:

IP_FORWARDING=keep

which is pretty confusing and not consistent with the similar IPv4 parameter in /etc/shorewall/shorewall.conf (which is Off).

There's still no clear, documented (and supported) procedure for enabling/disabling the machine's "transparency" for packets, on both IPv4 and IPv6: which parameters to modify, in which file(s), etc.

Please see bugs #130195, #1318644, #995478.

Răzvan
(In reply to Răzvan Sandu from comment #6)

> In latest release of shorewall6 in EPEL7, the file
> /etc/shorewall6/shorewall6.conf now has:
>
> IP_FORWARDING=keep
>
> which is pretty confusing and not consistent with the similar IPv4 parameter
> in /etc/shorewall/shorewall.conf (which is Off)
>
> There's still no clear, documented (and supported) procedure on
> enabling/disabling the machine's "transparency" for packets, on both IPv4
> and IPv6: what parameters to modify, in which file(s), etc.

I am not entirely sure what the issue is here. If you have "keep" in your shorewall conf, it will do nothing to the sysctl values, whereas it will set them to 1 or 0 if you turn it On or Off respectively. You can either set the values yourself via sysctl and use "keep" in shorewall, or not set them via sysctl and let shorewall do it. In any case, shorewall does need configuration to be done. What am I missing here?
Hello,

Explanation:

IMHO (after a few investigations), it is not a shorewall matter.

The radvd process seems to be picky (it refuses to start) if the machine (used as a gateway/router here) is not "transparent", i.e. if passing IPv6 packets from one interface to another is not allowed. The "transparency" is set in the firewall (shorewall in my case), via the IP_FORWARDING= setting in /etc/shorewall/shorewall.conf and the corresponding /etc/shorewall6/shorewall6.conf.

Now, if in the whole new systemd logic radvd tries to start *BEFORE* the firewall, it will find the machine "non-transparent" at that moment and refuse to initialize. Later, after the machine has fully booted (all interfaces active, "transparency" set by the firewall), manually restarting radvd will work like magic.

So it is important that the firewall starts *first* (it might be that the whole initialization process was tested with firewalld, not shorewall).

A particular case for this (the need for "transparency") is the way my ISP (RCS&RDS in Romania) allocates *static* IPv6 addresses to its clients, namely:

- a fixed LINK LOCAL IPv6 address (/10) is assigned to the client, for the external interface of his router. This is to be set *manually* on the router.
- the default gateway for IPv6 is *always* the LINK LOCAL address "fe80::1", which corresponds to the ISP CPE physically located in the same Ethernet segment as the router's external interface
- a true (public, routable) /64 prefix is assigned to the client (both to the gateway itself and to the various workstations in the LAN "behind" it)

So an IPv6 packet coming from the LAN (with a true IPv6 address) will travel like this:

workstation -> router internal NIC true IPv6 -> router external NIC true IPv6 -> router external IPv6 LINK LOCAL -> ISP CPE LINK LOCAL (fe80::1) -> Internet

For this whole process, the router *needs* to be "transparent" even between its *local* addresses, loopback interfaces, etc.
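For what it's worth, the external-interface setup described above might be expressed in initscripts syntax roughly as the sketch below. The device name and the specific link-local address are hypothetical placeholders; only the fe80::1 gateway comes from the description:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- sketch of the external
# interface for the ISP scheme described above (placeholder values).
DEVICE=eth0
ONBOOT=yes
IPV6INIT=yes
# Fixed link-local address assigned by the ISP (hypothetical value):
IPV6ADDR=fe80::1234/10
# Default gateway is always the ISP CPE's link-local address; the %eth0
# zone suffix is required because link-local addresses are per-interface:
IPV6_DEFAULTGW=fe80::1%eth0
```

Because the next hop is link-local, forwarding must be enabled on every hop inside the router, which is why the box needs to be fully "transparent" for this to work.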
Sorry for the long story; this is the result of my empirical tests and personal understanding.

So we need to make sure that shorewall starts and makes the machine "transparent" BEFORE radvd needs this "transparency" when initializing.

Put in these terms, it seems to be a matter of the *order* of initialization via systemd.

Thanks a lot,
Răzvan
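Until an official ordering fix exists, the ordering described above could be imposed locally with a systemd drop-in, along these lines (the unit names assume the EPEL shorewall packages; this is a local workaround, not the official fix):

```
# /etc/systemd/system/radvd.service.d/after-firewall.conf
# Make radvd start only after shorewall/shorewall6 have run, so IPv6
# forwarding is already enabled when radvd performs its startup check.
[Unit]
After=shorewall.service shorewall6.service
Wants=shorewall6.service
```

After creating the drop-in, run `systemctl daemon-reload` so systemd picks up the new ordering.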
Hi Răzvan,

OK, thanks for the explanation. I think it is much clearer now.

I think for the time being your best approach is to set the forwarding sysctls via the sysctl scripts and set IP_FORWARDING to "keep" in shorewall. Once https://bugzilla.redhat.com/show_bug.cgi?id=1317240 gets implemented, we can then make sure radvd depends on firewall.target, which would solve this use case more elegantly.

Thanks,
Michele
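Concretely, the suggested workaround amounts to owning the forwarding sysctls in a sysctl.d snippet and telling shorewall not to touch them. The file name below is an arbitrary choice:

```
# /etc/sysctl.d/90-ipv6-forwarding.conf -- sketch of the workaround:
# these are the same values the reporter had in /etc/sysctl.conf.
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
```

combined with `IP_FORWARDING=keep` in /etc/shorewall6/shorewall6.conf, so that shorewall leaves the kernel values exactly as the sysctl scripts set them at boot.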