Description of problem:
Currently Docker ignores any localhost address in /etc/resolv.conf. Docker upstream plans to solve this in the future with some DNS proxy service; a solution using iptables rules has also been proposed.
- the localhost address from resolv.conf is ignored
- the local DNS resolver should be used by containers
upstream ticket - https://github.com/docker/docker/issues/14627
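For context, the behavior in question can be emulated in a few lines of shell. This is only an illustrative sketch (the /tmp paths and nameserver addresses are made up), assuming Docker simply drops loopback entries when generating the container's resolv.conf:

```shell
# Simulate a host /etc/resolv.conf that lists a local resolver plus a
# router address; paths and addresses are illustrative only.
cat > /tmp/host-resolv.conf <<'EOF'
# Generated by NetworkManager
nameserver 127.0.0.1
nameserver 192.168.1.1
EOF

# Docker drops loopback nameservers when building the container's
# resolv.conf (falling back to public DNS if none remain).
grep -v '^nameserver 127\.' /tmp/host-resolv.conf > /tmp/container-resolv.conf

cat /tmp/container-resolv.conf
```

On a host where the only nameserver is 127.0.0.1, this filtering leaves the container with no usable entry, which is exactly the problem described above.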
I assigned this bug to PJP right away, since he is driving these changes in Docker.
This issue seems to have died, or at least gone to a deep deep sleep.
PJP, any news?
Hello Tomas, Dan,
(In reply to Tomas Hozza from comment #3)
> PJP, any news?
Sorry, I was super occupied at work and could not spend time on this. I will start on it again over the weekend.
(In reply to Daniel Walsh from comment #5)
We are restarting the change process for "Default DNS resolver" as well as the work on the necessary parts. Most of the discussion is happening upstream.
PJP, can you please determine whether we will need a downstream patch for F24, or whether there is anything usable upstream?
(In reply to Tomas Hozza from comment #6)
> (In reply to Daniel Walsh from comment #5)
> PJP, can you please determine whether we will need a downstream patch
> for F24, or whether there is anything usable upstream?
Yes, will do and update here at the earliest.
Sorry I got caught up with a lot of things before. Thank you.
Docker is working on this also, I believe, or is that you guys?
(In reply to Daniel Walsh from comment #8)
> Docker is working on this also, I believe, or is that you guys?
It's the Docker team. They wanted to implement a whole DNS proxy service instead of just solving our one use case. Based on their comments, this should also be fixed by their proposed changes.
Some more info is here:
Yes, I was following that; I just wanted to confirm that it fixes this issue. I say we go with their solution and wait for it. Hopefully this will be in docker-1.10.
Just a note that we discovered a problem caused by the fact that Docker ignores local resolver.
If the local resolver has forward zones configured, e.g. for a specific domain reachable over a VPN, and the VPN-provided resolver has an internal view of that domain, then the internal names are not resolvable from within the container. So when you e.g. want to build a container using 'docker build' and the build references the internal domain, the build will fail.
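For illustration, the setup described above might look like the following Unbound forward-zone fragment; the file name, domain, and resolver address are all hypothetical:

```
# /etc/unbound/conf.d/vpn.conf (hypothetical example)
# Send queries for the VPN-internal domain to the VPN-provided resolver.
forward-zone:
    name: "internal.example.com"
    forward-addr: 10.8.0.1
```

Because Docker strips the 127.0.0.1 entry from the container's resolv.conf, queries from the container never reach this forward zone, so the internal names fail to resolve.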
I know this is basically a consequence of the current situation in Docker, but I wanted to record this use case.
Tomas does docker-1.10 fix this issue?
(In reply to Daniel Walsh from comment #12)
> Tomas does docker-1.10 fix this issue?
IIUC, the Docker 1.10 embedded DNS server still does not connect to a local resolver on the host. It requires the host resolver to listen on a non-localhost address (i.e. not 127.0.0.1) and that address to be supplied via the --dns option. It forwards to the external resolver only those requests it could not resolve itself.
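A hedged sketch of what such a setup could look like, assuming the default docker0 address 172.17.0.1 and dnsmasq as the bridge-facing resolver (the file name is hypothetical):

```
# /etc/dnsmasq.d/docker-bridge.conf (hypothetical)
interface=docker0
bind-interfaces
listen-address=172.17.0.1
# Don't read /etc/resolv.conf; forward everything to the resolver on lo.
no-resolv
server=127.0.0.1
```

The daemon would then be pointed at the bridge address, e.g. OPTIONS='--dns=172.17.0.1' in /etc/sysconfig/docker.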
I started hitting this on Fedora 23, and updating to 1.10 didn't resolve it (although it did introduce a wrinkle: I needed to install docker-v1.10-migrator and run "v1.10-migrator-local -s devicemapper" to convert the local images to the new format).
My current host name resolution configuration:
$ cat /etc/resolv.conf
# Generated by NetworkManager
The docker0 bridge is on 172.17.0.1, so would it be possible to bind the local resolver to that, and tell docker to use it for external DNS resolution?
(In reply to Nick Coghlan from comment #14)
> The docker0 bridge is on 172.17.0.1, so would it be possible to bind the
> local resolver to that, and tell docker to use it for external DNS
I think that would defeat the purpose of having a local resolver. The steps below should help.
Please let us know if you face any issues. Thank you.
I went down a different path, which was to create a dedicated dnsmasq instance for Docker to use: http://stackoverflow.com/questions/35693117/giving-docker-containers-access-to-a-dnsmasq-local-dns-resolver-on-the-host/35693118
There are some aspects of my current configuration that I definitely don't like (mainly that I wasn't able to figure out a nice way of configuring firewalld, so ended up dropping the network firewall between the host and containers entirely), but it does work.
If I've understood the way dnsmasq configures itself correctly, then the instance binding itself to docker0 should be passing queries to the resolver on lo, rather than directly to the external DNS servers.
(Also setting this back to ASSIGNED, since 1.10 didn't fix it)
(In reply to Nick Coghlan from comment #16)
> I went down a different path, which was to create a dedicated dnsmasq
> instance for Docker to use:
Thank you for the link. This is definitely something users can do to work around the problem. The same setup should also work with Unbound.
> If I've understood the way dnsmasq configures itself correctly, then the
> instance binding itself to docker0 should be passing queries to the resolver
> on lo, rather than directly to the external DNS servers.
It depends on your configuration, but dnsmasq reads /etc/resolv.conf by default, so it should use the local DNS resolver. Note that dnsmasq does not use DNSSEC by default, though that may not be something you care about.
I've discovered another interesting problem with my setup: it doesn't play nicely with docker-compose.
Setting the composed services to "network_mode: bridge" gets things working again, so I suspect the problem is that the dedicated resolver is not reachable from the networks docker-compose creates.
(In reply to Nick Coghlan from comment #18)
> Setting the composed services to "network_mode: bridge" gets things
> working again, so I suspect it's a problem with the dedicated resolver not
> being present on the docker-compose created networks
Can you please explain what you mean by "dedicated resolver not being present on the docker-compose created network"? How does this work when there is no local resolver on the machine on which you run docker-compose? Do these machines directly use the resolvers from /etc/resolv.conf?
E.g. when you use libvirt, it runs its own dnsmasq instance on the network it creates, and that instance forwards all queries to the local resolver running on localhost. I would expect docker-compose to do something similar.
I haven't looked into the details of what docker-compose is getting wrong, but the behaviour I see is:
1. I have "OPTIONS='--selinux-enabled --log-driver=journald --dns=172.17.0.1'" configured in /etc/sysconfig/docker
2. This works correctly for containers started on the default Docker bridge network (including allowing lookup of services only available via VPN from the host)
3. For containers started by docker-compose in the default per-app network mode, /etc/resolv.conf had the DNS address as something like "127.0.0.11"
4. Name resolution in those containers didn't work at all (not even for finding other services in the compose file)
I don't know how it works when there's no local resolver on the host - I don't have that configuration readily available.
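For reference, the workaround of forcing the default bridge would look roughly like this in a compose file; the service name and image are made up:

```yaml
# docker-compose.yml sketch (compose v2 format): forces the default
# bridge so the daemon-level --dns=172.17.0.1 option applies, at the
# cost of losing the per-app network and its service discovery.
version: '2'
services:
  web:
    image: nginx
    network_mode: bridge
```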
So is this a docker-compose bug?
Not sure why this bug was reassigned to docker-compose. It's pretty obviously a docker upstream issue.
If there's a problem with docker-compose, which doesn't merely reproduce this bug, please open a new bug report.
Since the git pull request got merged upstream, does this fix the problem?
I am closing this bug as fixed in the current release. Reopen if it still happens in docker-1.12
I will test this once the version is available in a stable Fedora. I will reopen if it does not work with 1.12.
This message is a reminder that Fedora 23 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 23. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '23'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.
Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 23 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.
Thank you for reporting this bug and we are sorry it could not be fixed.