Bug 917580 - Server accepts connections on one IP but always sends replies from the first IP in the same segment
Status: CLOSED ERRATA
Product: Fedora
Classification: Fedora
Component: ndjbdns
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: pjp
QA Contact: Fedora Extras Quality Assurance
URL: https://github.com/pjps/ndjbdns/
Depends On:
Blocks:
 
Reported: 2013-03-04 06:29 EST by Simone Caronni
Modified: 2013-03-29 17:30 EDT (History)
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-23 19:52:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Simone Caronni 2013-03-04 06:29:35 EST
Description of problem:
I have tinydns running with "IP=0.0.0.0" in the configuration file on a multihomed server, and the server accepts connections on one IP but always sends replies from the first free IP in the same segment.

In detail:

1) The server has several IPs:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.21.12/32 brd 192.168.21.12 scope host lo
    inet 192.168.21.13/32 brd 192.168.21.13 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:8b:70:e3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.23.43/25 brd 192.168.23.127 scope global eth0
    inet 192.168.23.44/25 brd 192.168.23.127 scope global secondary eth0
    inet 192.168.23.45/25 brd 192.168.23.127 scope global secondary eth0
    inet 192.168.23.46/25 brd 192.168.23.127 scope global secondary eth0
    inet 192.168.23.47/25 brd 192.168.23.127 scope global secondary eth0
    inet6 fe80::250:56ff:fe8b:70e3/64 scope link
       valid_lft forever preferred_lft forever

I want to make tinydns listen only on the following:

192.168.23.44
192.168.21.12
192.168.21.13

All requests coming in through 192.168.23.44 generate responses from 192.168.23.43. tcpdump log:

13:44:56.422176 IP 10.254.146.2.34931 > 192.168.23.44.53: 2+ PTR? 44.23.168.192.in-addr.arpa. (44)
13:44:56.422701 IP 192.168.23.43.53 > 10.254.146.2.34931: 2*- 1/1/0 PTR dns.cpe.m2b.int. (98)

Version-Release number of selected component (if applicable): ndjbdns-1.05.6-1.fc19


How reproducible:
Always

Steps to Reproduce:
1. Install on a server with some loopback addresses (e.g. 192.168.100.1 and 192.168.100.2)
2. Run the tinydns daemon with IP=0.0.0.0
3. Query 192.168.100.2 for records; the records will be sent out from 192.168.100.1, breaking responses to clients.
  
Actual results:
Responses are sent out with the wrong source IP.

Expected results:
Replies to clients should come from the IP address that was queried.

Additional info:
The bug is still present in the code committed after bug #913667.
Comment 1 pjp 2013-03-12 10:16:59 EDT
This is fixed. Needs review and testing though.

 -> https://github.com/pjps/ndjbdns/commit/e8ab14e06a66f032483f81705d510f22da739e67
Comment 2 Simone Caronni 2013-03-13 12:57:04 EDT
I can confirm the latest patches work in both cases; I get the correct reply from the DNS server when querying one specific ip.

With IP=0.0.0.0:

13:50:45.756611 IP 10.103.1.34.nfs > 192.168.23.44.domain: 43894+ A? program.avast.com. (35)
13:50:45.756998 IP 192.168.23.44.domain > 10.103.1.34.nfs: 43894 NXDomain*- 0/1/0 (95)

With IP=192.168.23.43,192.168.23.44:

13:52:43.888752 IP 10.0.44.130.nfs > 192.168.23.44.domain: 37353+ A? download.microsoft.com. (40)
13:52:43.890513 IP 192.168.23.44.domain > 10.0.44.130.nfs: 37353 NXDomain*- 0/1/0 (100)

Perfect :)

Thank you very much.
Comment 3 Fedora Update System 2013-03-14 11:08:03 EDT
ndjbdns-1.05.7-1.fc17 has been submitted as an update for Fedora 17.
https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.fc17
Comment 4 Fedora Update System 2013-03-14 11:08:32 EDT
ndjbdns-1.05.7-1.fc18 has been submitted as an update for Fedora 18.
https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.fc18
Comment 5 Fedora Update System 2013-03-14 11:55:44 EDT
ndjbdns-1.05.7-1.el6 has been submitted as an update for Fedora EPEL 6.
https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.el6
Comment 6 Fedora Update System 2013-03-14 11:56:49 EDT
ndjbdns-1.05.7-1.el5 has been submitted as an update for Fedora EPEL 5.
https://admin.fedoraproject.org/updates/ndjbdns-1.05.7-1.el5
Comment 7 Fedora Update System 2013-03-23 19:52:55 EDT
ndjbdns-1.05.7-1.fc18 has been pushed to the Fedora 18 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 8 Fedora Update System 2013-03-23 20:04:05 EDT
ndjbdns-1.05.7-1.fc17 has been pushed to the Fedora 17 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 9 Bruno Wolff III 2013-03-25 12:17:05 EDT
Note that in the normal multi-homed case you can use 0.0.0.0. I had that setup for a short time when changing ISPs. You need to set up routing so that the source IP address determines which route to take. This works even if both networks are seen on the same NIC. The particular case here is not a standard multi-homed case, as there are multiple IP addresses for the host on the same attached network.
Comment 10 Bruno Wolff III 2013-03-25 12:54:02 EDT
I take back most of what I said. My mail server was listening on 0.0.0.0, but dnscache and tinydns were listening on specific addresses so as not to conflict with each other, with two instances of tinydns running. This set the source address, which determined that traffic went out the correct device for each ISP. I suspect even normal multi-homed setups would be broken without setting a source address in tinydns. Being attached to multiple networks should still work as long as requests are sent to the address on the network where the request is coming from.
Comment 11 pjp 2013-03-25 13:38:46 EDT
(In reply to comment #10)
> I suspect even normal multi-homed setups would be broken without setting a
> source address in tinydns.

  Yes true.

> Being attached to multiple networks should still work as long as requests
> are sent to the address on the network where the request is coming from.

   ...??
Comment 12 Bruno Wolff III 2013-03-25 15:15:00 EDT
The example I was thinking of there was a dnscache visible on both a globally visible address and a private network address, say 1.2.3.4 and 192.168.0.1. As long as requests from the 192.168.0.0 network go to 192.168.0.1 and external requests go to 1.2.3.4, then routing should properly set the source addresses on return packets. Replies to queries for 192.168.0.1 get source 192.168.0.1, and packets going out the default gateway get source 1.2.3.4. This isn't a multi-homed situation, since packets can go only one way to reach their destination. It also doesn't cover the case of having more than one address on the same network.
Comment 13 pjp 2013-03-26 01:45:25 EDT
ah, right.
Comment 14 Fedora Update System 2013-03-29 17:28:23 EDT
ndjbdns-1.05.7-1.el5 has been pushed to the Fedora EPEL 5 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 15 Fedora Update System 2013-03-29 17:30:56 EDT
ndjbdns-1.05.7-1.el6 has been pushed to the Fedora EPEL 6 stable repository.  If problems still persist, please make note of it in this bug report.
