Description of problem:
I have tinydns running with "IP=0.0.0.0" in the configuration file on a multihomed server; the server accepts queries on one IP but always sends replies back from the first free IP in the same segment.

In detail:

1) The server has several IPs:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.21.12/32 brd 192.168.21.12 scope host lo
    inet 192.168.21.13/32 brd 192.168.21.13 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:8b:70:e3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.23.43/25 brd 192.168.23.127 scope global eth0
    inet 192.168.23.44/25 brd 192.168.23.127 scope global secondary eth0
    inet 192.168.23.45/25 brd 192.168.23.127 scope global secondary eth0
    inet 192.168.23.46/25 brd 192.168.23.127 scope global secondary eth0
    inet 192.168.23.47/25 brd 192.168.23.127 scope global secondary eth0
    inet6 fe80::250:56ff:fe8b:70e3/64 scope link
       valid_lft forever preferred_lft forever

I want to make tinydns listen only on the following:
192.168.23.44
192.168.21.12
192.168.21.13

All requests coming through 192.168.23.44 generate responses from 192.168.23.43. tcpdump log:

13:44:56.422176 IP 10.254.146.2.34931 > 192.168.23.44.53: 2+ PTR? 44.23.168.192.in-addr.arpa. (44)
13:44:56.422701 IP 192.168.23.43.53 > 10.254.146.2.34931: 2*- 1/1/0 PTR dns.cpe.m2b.int. (98)

Version-Release number of selected component (if applicable):
ndjbdns-1.05.6-1.fc19

How reproducible:
Always

Steps to Reproduce:
1. Install on a server with some loopback addresses (e.g. 192.168.100.1 and 192.168.100.2).
2. Run the tinydns daemon with IP=0.0.0.0.
3. Query 192.168.100.2 for records; the answers are sent out from 192.168.100.1, thus breaking responses to clients.

Actual results:
Responses are sent out with the wrong source IP.

Expected results:
All responses to clients should come from the IP address that was queried.

Additional info:
The bug is still present in the code committed after bug #913667.
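A quick way to demonstrate the above (a sketch following the steps listed; it assumes dig and tcpdump are available on the host):

# Step 1: add two host addresses on the loopback interface
ip addr add 192.168.100.1/32 dev lo
ip addr add 192.168.100.2/32 dev lo

# Step 2: start tinydns with IP=0.0.0.0 in its configuration

# Step 3: watch port 53 and query the second address
tcpdump -ni lo port 53 &
dig @192.168.100.2 example.com A

# With the bug, tcpdump shows the reply leaving from 192.168.100.1;
# the client socket is expecting a reply from 192.168.100.2, so the
# reply is discarded and dig retries until it times out.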
This is fixed. Needs review and testing though. -> https://github.com/pjps/ndjbdns/commit/e8ab14e06a66f032483f81705d510f22da739e67
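For context, the standard way on Linux to make a UDP server bound to 0.0.0.0 reply from the address a query arrived on is the IP_PKTINFO ancillary-data mechanism: recvmsg() reports the destination address of each datagram, and sendmsg() can hand that address back to the kernel as the reply's source. The sketch below shows the technique in isolation; it is not the literal commit, and the function names are mine:

/* Sketch of the IP_PKTINFO technique; illustrative only, the
 * identifiers below are not taken from the ndjbdns source. */
#define _GNU_SOURCE             /* struct in_pktinfo on glibc */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int enable_pktinfo(int fd)
{
    int on = 1;
    /* Ask the kernel to report each datagram's destination address. */
    return setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));
}

/* Receive a query, recording which local address it was addressed to. */
ssize_t recv_query(int fd, void *buf, size_t len,
                   struct sockaddr_in *client, struct in_addr *local)
{
    char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
    struct iovec iov = { buf, len };
    struct msghdr msg;
    struct cmsghdr *cm;
    ssize_t n;

    memset(&msg, 0, sizeof(msg));
    msg.msg_name = client;
    msg.msg_namelen = sizeof(*client);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if ((n = recvmsg(fd, &msg, 0)) < 0)
        return n;
    for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm))
        if (cm->cmsg_level == IPPROTO_IP && cm->cmsg_type == IP_PKTINFO)
            *local = ((struct in_pktinfo *)CMSG_DATA(cm))->ipi_addr;
    return n;
}

/* Send the reply with its source forced to the address the query hit. */
ssize_t send_reply(int fd, const void *buf, size_t len,
                   struct sockaddr_in *client, struct in_addr *local)
{
    char cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
    struct iovec iov = { (void *)buf, len };
    struct msghdr msg;
    struct cmsghdr *cm;
    struct in_pktinfo *pi;

    memset(&msg, 0, sizeof(msg));
    memset(cbuf, 0, sizeof(cbuf));
    msg.msg_name = client;
    msg.msg_namelen = sizeof(*client);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = IPPROTO_IP;
    cm->cmsg_type = IP_PKTINFO;
    cm->cmsg_len = CMSG_LEN(sizeof(*pi));
    pi = (struct in_pktinfo *)CMSG_DATA(cm);
    pi->ipi_spec_dst = *local;  /* reply leaves from this address */

    return sendmsg(fd, &msg, 0);
}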
I can confirm the latest patches work in both cases; I get the correct reply from the DNS server when querying one specific IP.

With IP=0.0.0.0:

13:50:45.756611 IP 10.103.1.34.nfs > 192.168.23.44.domain: 43894+ A? program.avast.com. (35)
13:50:45.756998 IP 192.168.23.44.domain > 10.103.1.34.nfs: 43894 NXDomain*- 0/1/0 (95)

With IP=192.168.23.43,192.168.23.44:

13:52:43.888752 IP 10.0.44.130.nfs > 192.168.23.44.domain: 37353+ A? download.microsoft.com. (40)
13:52:43.890513 IP 192.168.23.44.domain > 10.0.44.130.nfs: 37353 NXDomain*- 0/1/0 (100)

Perfect :) Thank you very much.
ndjbdns-1.05.7-1.fc17 has been submitted as an update for Fedora 17. https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.fc17
ndjbdns-1.05.7-1.fc18 has been submitted as an update for Fedora 18. https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.fc18
ndjbdns-1.05.7-1.el6 has been submitted as an update for Fedora EPEL 6. https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.el6
ndjbdns-1.05.7-1.el5 has been submitted as an update for Fedora EPEL 5. https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.el5
ndjbdns-1.05.7-1.fc18 has been pushed to the Fedora 18 stable repository. If problems still persist, please make note of it in this bug report.
ndjbdns-1.05.7-1.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.
Note that in the normal multi-homed case you can use 0.0.0.0. I had that setup for a short time when changing ISPs. You need to set up routing so that the source IP address determines which route to take; this works even if both networks are seen on the same NIC. The particular case here is not a standard multi-homed case, as the host has multiple IP addresses on the same attached network.
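For reference, a source-routed setup of that kind looks roughly like this (a sketch; the addresses, devices, and table numbers are made up):

# Replies sourced from ISP 1's address leave via ISP 1's gateway
ip rule add from 198.51.100.2 table 100
ip route add default via 198.51.100.1 dev eth0 table 100
# Replies sourced from ISP 2's address leave via ISP 2's gateway
ip rule add from 203.0.113.2 table 200
ip route add default via 203.0.113.1 dev eth1 table 200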
I take back most of what I said. My mail server was listening on 0.0.0.0, but dnscache and tinydns were listening on specific addresses so as not to conflict with each other, since two instances of tinydns were running. That was setting the source address, which in turn sent the traffic out the correct device for each ISP. I suspect even normal multi-homed setups would be broken without setting a source address in tinydns. Being attached to multiple networks should still work as long as requests are sent to the address on the network where the request is coming from.
(In reply to comment #10)
> I suspect even normal multi-homed setups would be broken without setting a
> source address in tinydns.

Yes, true.

> Being attached to multiple networks should still work as long as requests
> are sent to the address on the network where the request is coming from.

...??
The example I was thinking about there was a dnscache visible on both a globally visible address and a private network address, say 1.2.3.4 and 192.168.0.1. As long as requests from the 192.168.0.0 network go to 192.168.0.1 and external requests go to 1.2.3.4, then routing should properly set the source addresses on the return packets: replies to packets sent to 192.168.0.1 get source 192.168.0.1, and packets going out through the default gateway get source 1.2.3.4. This isn't a multi-homed situation, since packets can go only one way to reach their destination. It also doesn't cover the case of having more than one address on the same network.
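A quick way to check which source address the kernel will pick for a given destination is "ip route get"; the src field in its output is what a socket bound to 0.0.0.0 will use unless the application overrides it. Using the illustrative addresses above:

# Internal destination: expect src 192.168.0.1
ip route get 192.168.0.42
# Destination via the default gateway: expect src 1.2.3.4
ip route get 8.8.8.8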
ah, right.
ndjbdns-1.05.7-1.el5 has been pushed to the Fedora EPEL 5 stable repository. If problems still persist, please make note of it in this bug report.
ndjbdns-1.05.7-1.el6 has been pushed to the Fedora EPEL 6 stable repository. If problems still persist, please make note of it in this bug report.