Bug 1814404 - dnsmasq seems to be ignoring server in config files
Summary: dnsmasq seems to be ignoring server in config files
Keywords:
Status: CLOSED DUPLICATE of bug 1814468
Alias: None
Product: Fedora
Classification: Fedora
Component: dnsmasq
Version: 31
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Petr Menšík
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-17 19:25 UTC by James McDermott
Modified: 2020-03-23 19:03 UTC
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-23 19:03:29 UTC
Type: Bug



Description James McDermott 2020-03-17 19:25:07 UTC
Description of problem:
dnsmasq seems to be ignoring the server= line in its config files.

I use dnsmasq to locally resolve the names of virtual machines in my KVM environment.  I have the following setup:

/etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf 
server=/home.mcd/192.168.122.1
address=/.apps.home.mcd/192.168.122.200

/etc/NetworkManager/conf.d/localdns.conf 
[main]
dns=dnsmasq
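The server=/domain/ip syntax tells dnsmasq to forward queries for that domain (and its subdomains) to the given upstream server. As a rough illustration of how that line splits apart, here is plain shell string handling (not dnsmasq's own parsing code):

```shell
# Split the server= line from libvirt_dnsmasq.conf into its two parts.
line='server=/home.mcd/192.168.122.1'
body=${line#server=/}   # -> home.mcd/192.168.122.1
domain=${body%%/*}      # -> home.mcd
upstream=${body#*/}     # -> 192.168.122.1
echo "queries for $domain go to $upstream"
```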

virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>8a627562-0188-493c-b84b-340ac340ffc9</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='62:52:00:94:f5:9f'/>
  <domain name='home.mcd' localOnly='yes'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.50'/>
      <host mac='52:54:00:b3:1c:96' name='tower' ip='192.168.122.100'/>
      <host mac='52:54:00:5a:38:4a' name='ocp' ip='192.168.122.200'/>
      <host mac='52:54:00:5a:38:4b' name='ocp-int' ip='192.168.122.201'/>
      <host mac='52:54:00:5a:38:4c' name='ocp-infra' ip='192.168.122.202'/>
      <host mac='52:54:00:5a:38:4d' name='ocp-app' ip='192.168.122.203'/>
    </dhcp>
  </ip>
</network>

virsh dominfo sb-31
Id:             -
Name:           sb-31
UUID:           2852d0df-640b-4045-82b5-c011aeaca057
OS Type:        hvm
State:          shut off
CPU(s):         2
Max memory:     4194304 KiB
Used memory:    4194304 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: selinux
Security DOI:   0

With dnsmasq-2.80-10.fc31.x86_64 I can run nslookup on sb-31 and it resolves.  This also lets me ssh to sb-31 without having to know its IP address, which comes in handy when you have 20 VMs running doing different things.
nslookup sb-31.home.mcd 
Server:		127.0.0.1
Address:	127.0.0.1#53

Name:	sb-31.home.mcd
Address: 192.168.122.47



With the new version, dnsmasq-2.80-12.fc31.x86_64, that is no longer the case.

nslookup fails for anything under "home.mcd".
It does still resolve anything under *.apps.home.mcd, which it gets from the line address=/.apps.home.mcd/192.168.122.200 in /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf (this was for OpenShift testing).

As far as I can tell everything is loaded fine and there are no errors in journalctl for NetworkManager or libvirtd.  Looking at journalctl -u NetworkManager -b 0, it even looks like DNS was set up correctly:
Mar 17 14:28:51 wormhole.home.mcd dnsmasq[1215]: setting upstream servers from DBus
Mar 17 14:28:51 wormhole.home.mcd dnsmasq[1215]: using nameserver 192.168.122.1#53 for domain home.mcd


Version-Release number of selected component (if applicable):
dnsmasq-2.80-12.fc31.x86_64

How reproducible:
Every time

Steps to Reproduce:
1. Install dnsmasq-2.80-10.fc31.x86_64
2. Set up dnsmasq to resolve a local domain with KVM (https://liquidat.wordpress.com/2017/03/03/howto-automated-dns-resolution-for-kvmlibvirt-guests-with-a-local-domain/)
3. Update to dnsmasq-2.80-12.fc31.x86_64
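For reference, the setup in step 2 boils down to the two small files shown in the description. A sketch writing them to a scratch directory (on a real system the paths are /etc/NetworkManager/dnsmasq.d/ and /etc/NetworkManager/conf.d/, and NetworkManager must be restarted afterwards):

```shell
# Write the two config files to a scratch dir; on a real system they go in
# /etc/NetworkManager/dnsmasq.d/ and /etc/NetworkManager/conf.d/ respectively.
tmp=$(mktemp -d)
cat > "$tmp/libvirt_dnsmasq.conf" <<'EOF'
server=/home.mcd/192.168.122.1
address=/.apps.home.mcd/192.168.122.200
EOF
cat > "$tmp/localdns.conf" <<'EOF'
[main]
dns=dnsmasq
EOF
```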


Actual results:
KVM virtual machines no longer resolve with nslookup/dig

Expected results:
KVM virtual machines should still resolve with nslookup/dig

Additional info:

Comment 1 James 2020-03-17 23:47:49 UTC
Looks like a nasty regression somewhere here -- not just with VMs. Just upgraded to dnsmasq-2.80-12.fc31.x86_64 and no machine on the LAN was able to resolve addresses stored in hosts files for the LAN's internal domain. Upstream forwarding to the outside world still works. Rolled back to -10 and everything works as before.

Comment 2 Petr Menšík 2020-03-18 11:50:52 UTC
Is there a reason why address= starts with a dot? It seems it would work better without the dot.

In any case, dnsmasq-2.80-10.fc30.x86_64, dnsmasq-2.80-12.fc30.x86_64, and dnsmasq-2.81-1.rc3.fc30.x86_64 all seem to work fine on my machine. Could it be that just the f31 build is broken?

Comment 3 James McDermott 2020-03-18 14:04:12 UTC
Petr,

  I don't have any indication that it is a Fedora problem, but I guess it's possible.  I would gladly run some tests if there is anything I can do.

The . in my address line "address=/.apps.home.mcd/192.168.122.200" acts like a wildcard pointing to my OpenShift router.  Without it, none of my pods could be reached from the host system.

e.g., tomcat-app.apps.home.mcd would automatically go to 192.168.122.200 (the OpenShift router), which would forward the request to the correct pod.  If I remember correctly it did not work with just apps.home.mcd.
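As a rough sketch of that wildcard behaviour in plain shell pattern matching (not dnsmasq's actual implementation; dnsmasq's exact treatment of the leading dot has varied between versions):

```shell
# Names ending in .apps.home.mcd (and the bare name) map to the router
# address; everything else falls through to normal resolution.
matches() {
  case "$1" in
    *.apps.home.mcd|apps.home.mcd) echo "192.168.122.200" ;;
    *) echo "no match" ;;
  esac
}
matches tomcat-app.apps.home.mcd   # router address
matches sb-31.home.mcd             # no match
```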


I have been using this same dnsmasq config since Fedora 29 (possibly longer), and this is the first time I have had this issue.  Please let me know if there is anything else I can provide.

--James

Comment 4 Petr Menšík 2020-03-23 16:09:07 UTC
I made a mistake with the latest build in F31. Please check bug #1814468, which is indeed broken. It is just that build; the next one is corrected.

Could you try build [1]? This is probably just a duplicate of that fix.

1. https://koji.fedoraproject.org/koji/buildinfo?buildID=1480969

Comment 5 Petr Menšík 2020-03-23 19:03:29 UTC
I am now convinced it is the same issue, so I am closing this bug as a duplicate. The referenced bug describes the exact issue better, though this one was also well documented.
I think it was a mistake during testing that I reported dnsmasq-2.80-12.fc30.x86_64 as OK. A proper test showed that just 2.80-12 had the wrong patch, which caused all DNS queries to receive an empty reply.

*** This bug has been marked as a duplicate of bug 1814468 ***

