After installing RedHat 6.1, and adding an ip alias on the loopback
interface, upon reboot I lose my default gateway. If I remove the ip
alias on the loopback interface, I get the gateway back. The ip aliases on
the loopback interface worked wonderfully in RedHat 6.0. Any ideas what I
can do? I've already upgraded my linuxconf and initscripts after seeing
mention of ip alias problems caused by the initscripts, but that doesn't
resolve the issue.
What does /etc/sysconfig/network-scripts/ifcfg-* and /etc/sysconfig/network say?
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
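For reference, a minimal sketch of what those files might contain in a setup like this (all addresses and hostnames hypothetical):

```
# /etc/sysconfig/network (sketch; values hypothetical)
NETWORKING=yes
HOSTNAME=server.example.com
GATEWAY=192.168.1.1

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.1.100
NETMASK=255.255.255.255
ONBOOT=yes
```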
I can't reproduce this here with the current Raw Hide
initscripts; do they work for you?
If you mean this package, initscripts (RHSA-1999:052-04): it is currently
installed, and the problem is 100% reproducible on my system. I could arrange ssh
access for you to one of them if you wanted to see it in action.
Actually, I mean the initscripts at
ftp://ftp.redhat.com/pub/rawhide/i386/ - I wasn't
able to reproduce it with that package.
Well, I tried the initscripts in the rawhide directory, rebooted, put in the
virtual IP on the loopback interface, rebooted again and voila, no default
route. :( Do I need to update any other packages when I update the initscripts?
OK, confirmed, finally. IFF the loopback alias is on the same
network as the ethernet address, it will fail.
Out of curiosity, is there a reason you're putting the alias
on the loopback as opposed to the ethernet device?
Looks to be kernel weirdness; it refuses to assign a default
route through the ethernet device if the loopback alias exists.
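A by-hand reproduction of the sequence the boot scripts end up running would look something like this (all addresses hypothetical; requires root):

```shell
# Sketch of the failing sequence on a 2.2-era system.
# eth0 and the loopback alias share the hypothetical 192.168.1.0/24 network:
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
ifconfig lo:0 192.168.1.100 netmask 255.255.255.255 up
# With the lo:0 alias present, this default-route add is what fails at boot:
route add default gw 192.168.1.1
# Inspect the routing table to see whether the default route took:
route -n
```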
The reason we're adding it to the loopback interface and not the ethernet
interface is because in order to use the Direct Server Return feature of our
load balancing foundry switch, you must assign an ip alias to the loopback
interface that is the same as the virtual ip address on the foundry switch.
That way the server recognizes the IP as its own and answers directly back to
the client and not back through the switch.
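Concretely, the setup described above boils down to giving the server the switch's virtual IP on the loopback interface; a sketch, using 192.168.1.100 as a hypothetical VIP:

```shell
# Hypothetical VIP owned by the Foundry switch; the /32 netmask keeps the
# loopback alias from claiming the whole subnet:
ifconfig lo:0 192.168.1.100 netmask 255.255.255.255 up
```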
Part of the problem, of course, is that your config for eth0 and lo:0 both have
IP addresses that are part of the same subnet. As a result, when you try to add
a default route to a host on the subnet shared by both interfaces, the kernel
rightly doesn't know which of the interfaces you would prefer to use (generic
multi-path problem here). One thing to try is adding a line that reads:

GATEWAYDEV=eth0

to the /etc/sysconfig/network file and see if the scripts are able to cope after
that. If not, I would think that using an alias on the ethernet device would be
sufficient, regardless of what the Foundry Networks stuff might say. I would
have to have a more descriptive analysis of the network requirements before I
would be able to say for sure that an eth0:0 alias wouldn't work.
I'll try the GATEWAYDEV statement tomorrow and see if that makes it work. As
for why an alias on eth0:0 won't work: the way direct server return works on
the foundry is this: the foundry receives an incoming request on its ethernet
interface for a given virtual IP address, and it then forwards that packet to the
actual server while leaving the destination IP address in the packet set to the
virtual IP (the one attached to one of the foundry's ethernet links). The
actual server then has to recognize that IP address as one of its own (thus
the IP address on the loopback interface) and respond back through its normal
route. If we put the alias on eth0:0, we'll have an IP address
conflict between the Linux box and the foundry, so it has to go on the
loopback. Hopefully I ran through that explanation correctly.
Given your explanation, an alias on eth0:0 should work, but you would have to go
about the business of keeping any ARP packets for that IP address from going out
the ethernet interface, which would mean some special ARP address setup in order
to stop the ARP packets. Your current setup will require less work if the
GATEWAYDEV= parameter works.
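On the 2.2 kernels of this era, suppressing ARP for a VIP took a special patch or careful interface flags, but for what it's worth, later kernels (2.4.26/2.6.4 and up) grew sysctls for exactly this; a hedged sketch:

```shell
# Suppress ARP replies/announcements for VIPs held on loopback.
# These sysctls exist only in kernels 2.4.26/2.6.4 and later, not the
# 2.2 kernels discussed in this thread:
sysctl -w net.ipv4.conf.all.arp_ignore=1    # reply only for addresses configured on the receiving interface
sysctl -w net.ipv4.conf.all.arp_announce=2  # use the best local source address in outgoing ARP requests
```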
The GATEWAYDEV directive didn't work, so now we'll try the eth0:0 alias with
blocking the ARP packets and see how that goes.
I'm assuming that worked since there was no further traffic, but the foundry setup
doesn't appear to be valid internet behaviour, and relies on stuff like undefined behaviour.
Actually, no, it didn't work. We opted to go with a NAT implementation, even
though Direct Server Return offers better performance, since we couldn't get
DSR working with Linux.