Problem Description
---------------------------------
On the client, an attempt to mount an NFSv4 filesystem from the server fails with "mount.nfs: Connection timed out" on Fedora Alpha PPC64.

--- On NFSv4 Client ---

[root@miz09 ~]# mount -t nfs4 9.3.110.107:/ /NFSVCLI/
mount.nfs4: Connection timed out
[root@miz09 ~]# mount 9.3.110.107:/ /NFSVCLI/
mount.nfs: Connection timed out
[root@miz09 ~]# tail -f /var/log/messages
Oct 12 08:43:16 miz09 dbus[453]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper)
Oct 12 08:43:17 miz09 dbus-daemon[453]: dbus[453]: [system] Successfully activated service 'org.freedesktop.PackageKit'
Oct 12 08:43:17 miz09 dbus[453]: [system] Successfully activated service 'org.freedesktop.PackageKit'
Oct 12 09:00:14 miz09 systemd-logind[447]: New session 2 of user root.
Oct 12 09:02:25 miz09 kernel: [ 1301.779627] FS-Cache: Loaded
Oct 12 09:02:25 miz09 kernel: [ 1301.784057] Key type dns_resolver registered
Oct 12 09:02:25 miz09 kernel: [ 1301.809290] FS-Cache: Netfs 'nfs' registered for caching
Oct 12 09:02:25 miz09 kernel: [ 1301.835044] NFS: Registering the id_resolver key type
Oct 12 09:02:25 miz09 kernel: [ 1301.835058] Key type id_resolver registered
Oct 12 09:02:25 miz09 kernel: [ 1301.835059] Key type id_legacy registered
Oct 12 09:04:52 miz09 systemd[1]: Reloading.
Oct 12 09:04:57 miz09 systemd[1]: Reloading.
Oct 12 09:05:01 miz09 dbus-daemon[453]: dbus[453]: avc: received setenforce notice (enforcing=0)

--- Details of Server ---

[root@miz12 ~]# uname -a
Linux miz12.austin.ibm.com 3.6.0-3.fc18.ppc64p7 #1 SMP Wed Oct 10 07:34:22 MST 2012 ppc64 ppc64 ppc64 GNU/Linux
[root@miz12 ~]# rpm -qa | grep -i nfs
nfs-utils-1.2.6-12.fc18.ppc64
libnfsidmap-0.25-4.fc18.ppc64
[root@miz12 ~]# mount | grep -i nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/sdb1 on /nfs/nfs1 type ext3 (rw,relatime,seclabel,data=ordered)
/dev/sdc1 on /nfs/nfs2 type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdd1 on /nfs/nfs3 type xfs (rw,relatime,seclabel,attr2,noquota)
/dev/sde1 on /nfs/nfs4 type reiserfs (rw,relatime)
/dev/sdb1 on /NFS/NFS1 type ext3 (rw,relatime,seclabel,data=ordered)
/dev/sdc1 on /NFS/NFS2 type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdd1 on /NFS/NFS3 type xfs (rw,relatime,seclabel,attr2,noquota)
/dev/sde1 on /NFS/NFS4 type reiserfs (rw,relatime)
[root@miz12 ~]# exportfs
/NFS            <world>
/NFS/NFS1       <world>
/NFS/NFS2       <world>
/NFS/NFS3       <world>
/NFS/NFS4       <world>

--- Details of Client ---

[root@miz09 ~]# uname -a
Linux miz09.austin.ibm.com 3.6.0-3.fc18.ppc64p7 #1 SMP Wed Oct 10 07:34:22 MST 2012 ppc64 ppc64 ppc64 GNU/Linux
[root@miz09 ~]# rpm -qa | grep -i nfs
nfs-utils-1.2.6-12.fc18.ppc64
libnfsidmap-0.25-4.fc18.ppc64

--- Steps to reproduce ---
Please see the attached file named steps-to-reproduce.txt.

--- Attached logs ---
var-log-messages-nfsv4server.txt
dmesg-nfsv4server.txt
var-log-messages-nfsv4client.txt
dmesg-nfsv4client.txt

Thanks...
Manas

== Comment: #1 - Vaishnavi Bhat <vaish123.com> ==
Hi Manas,

Can the client do a "rpcinfo -p <server>" and get a list of the services?
I see the following on the server side:

Oct 12 07:54:13 localhost rpcbind: Cannot open '/var/lib/rpcbind/rpcbind.xdr' file for reading, errno 2 (No such file or directory)
Oct 12 07:54:13 localhost rpcbind: Cannot open '/var/lib/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory)
Oct 12 08:58:02 miz12 rpc.mountd[1213]: Caught signal 15, un-registering and exiting.
Oct 12 08:58:02 miz12 systemd[1]: nfs-mountd.service: main process exited, code=exited, status=1
Oct 12 08:58:02 miz12 systemd[1]: Unit nfs-mountd.service entered failed state.

Client side:

Oct 12 07:54:15 localhost rpcbind: Cannot open '/var/lib/rpcbind/rpcbind.xdr' file for reading, errno 2 (No such file or directory)
Oct 12 07:54:15 localhost rpcbind: Cannot open '/var/lib/rpcbind/portmap.xdr' file for reading, errno 2 (No such file or directory)
Oct 12 08:36:29 miz09 rpc.statd[793]: Caught signal 15, un-registering and exiting
Oct 12 08:36:29 miz09 systemd[1]: nfs-lock.service: main process exited, code=exited, status=1
Oct 12 08:36:29 miz09 sysctl[15330]: fs.nfs.nlm_tcpport = 0
Oct 12 08:36:29 miz09 sysctl[15330]: fs.nfs.nlm_udpport = 0
Oct 12 08:36:29 miz09 systemd[1]: Unit nfs-lock.service entered failed state.

== Comment: #2 - MANAS K. NAYAK <maknayak.com> - 2012-10-13 01:38:47 ==
Hi Vaishnavi,

Here are the details of the NFSv4 server & client:

--- On NFSv4 server: ---

[root@miz12 ~]# systemctl status nfs-server.service
nfs-server.service - NFS Server
	  Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
	  Active: active (exited) since Fri, 12 Oct 2012 09:10:46 -0400; 20h ago
	Main PID: 4398 (code=exited, status=0/SUCCESS)
	  CGroup: name=systemd:/system/nfs-server.service

[root@miz12 ~]# exportfs -v
/NFS            <world>(rw,wdelay,insecure,root_squash,no_subtree_check,fsid=0)
/NFS/NFS1       <world>(rw,wdelay,nohide,insecure,root_squash,no_subtree_check)
/NFS/NFS2       <world>(rw,wdelay,nohide,insecure,root_squash,no_subtree_check)
/NFS/NFS3       <world>(rw,wdelay,nohide,insecure,root_squash,no_subtree_check)
/NFS/NFS4       <world>(rw,wdelay,nohide,insecure,root_squash,no_subtree_check)

[root@miz12 ~]# mount | grep -i rpc
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)

[root@miz12 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  50148  status
    100024    1   tcp  59930  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049  nfs_acl
    100227    3   tcp   2049  nfs_acl
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049  nfs_acl
    100227    3   udp   2049  nfs_acl
    100021    1   udp  56410  nlockmgr
    100021    3   udp  56410  nlockmgr
    100021    4   udp  56410  nlockmgr
    100021    1   tcp  43482  nlockmgr
    100021    3   tcp  43482  nlockmgr
    100021    4   tcp  43482  nlockmgr
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100011    1   udp    875  rquotad
    100011    2   udp    875  rquotad
    100011    1   tcp    875  rquotad
    100011    2   tcp    875  rquotad

[root@miz12 ~]# rpcinfo -u localhost nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
program 100003 version 4 ready and waiting

[root@miz12 ~]# ps -ef | egrep 'nfs|rpc|portmap|lockd' | grep -v grep
root        51     2  0 Oct12 ?        00:00:00 [kblockd]
root       813     1  0 Oct12 ?        00:00:00 /sbin/rpcbind -w
root       827     2  0 Oct12 ?        00:00:00 [rpciod]
rpcuser   1001     1  0 Oct12 ?        00:00:00 /sbin/rpc.statd
root      4399     2  0 Oct12 ?        00:00:00 [lockd]
root      4400     2  0 Oct12 ?        00:00:00 [nfsd4]
root      4401     2  0 Oct12 ?        00:00:00 [nfsd4_callbacks]
root      4402     2  0 Oct12 ?        00:00:00 [nfsd]
root      4403     2  0 Oct12 ?        00:00:00 [nfsd]
root      4404     2  0 Oct12 ?        00:00:00 [nfsd]
root      4405     2  0 Oct12 ?        00:00:00 [nfsd]
root      4406     2  0 Oct12 ?        00:00:00 [nfsd]
root      4407     2  0 Oct12 ?        00:00:00 [nfsd]
root      4408     2  0 Oct12 ?        00:00:00 [nfsd]
root      4409     2  0 Oct12 ?        00:00:00 [nfsd]
root      4417     1  0 Oct12 ?        00:00:00 /usr/sbin/rpc.idmapd
root      4419     1  0 Oct12 ?        00:00:00 /usr/sbin/rpc.mountd
root      4420     1  0 Oct12 ?        00:00:00 /usr/sbin/rpc.rquotad

--- On NFSv4 Client: ---

[root@miz09 ~]# ping -c 3 9.3.110.107
PING 9.3.110.107 (9.3.110.107) 56(84) bytes of data.
64 bytes from 9.3.110.107: icmp_req=1 ttl=64 time=0.117 ms
64 bytes from 9.3.110.107: icmp_req=2 ttl=64 time=0.095 ms
64 bytes from 9.3.110.107: icmp_req=3 ttl=64 time=0.159 ms

--- 9.3.110.107 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.095/0.123/0.159/0.029 ms

[root@miz09 ~]# rpcinfo -p 9.3.110.107
rpcinfo: can't contact portmapper: RPC: Remote system error - No route to host
[root@miz09 ~]# rpcinfo -u 9.3.110.107 nfs
rpcinfo: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
program 100003 is not available
[root@miz09 ~]# ps -ef | egrep 'nfs|rpc|portmap|lockd' | grep -v grep
root        51     2  0 Oct12 ?        00:00:00 [kblockd]
root       773     1  0 Oct12 ?        00:00:00 /sbin/rpcbind -w
root       785     2  0 Oct12 ?        00:00:00 [rpciod]
rpcuser    800     1  0 Oct12 ?        00:00:00 /sbin/rpc.statd
root      1102     2  0 Oct12 ?        00:00:00 [nfsiod]

So you can see the NFSv4 server is up and running correctly.
But somehow the client is unable to connect to the server.

Thanks...
Manas
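The symptom above (ping succeeds, but rpcinfo and mount report "No route to host") can be narrowed down without any RPC machinery by probing the TCP ports directly. A minimal sketch, assuming bash's /dev/tcp redirection and GNU `timeout` are available; the address and ports are the ones from this report:

```shell
#!/usr/bin/env bash
# Probe a TCP port directly, bypassing RPC. If ping works but this
# fails, something on the path (typically a host firewall) is blocking
# the port, rather than the NFS service being down.
probe() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port unreachable"
    fi
}
probe 9.3.110.107 2049   # nfs
probe 9.3.110.107 111    # rpcbind/portmapper
```

Seeing "unreachable" here for 2049 while ping works is consistent with a firewall that permits ICMP echo but rejects TCP.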
Created attachment 627286 [details] steps-to-reproduce.txt
Created attachment 627287 [details] var-log-messages-nfsv4server.txt
Created attachment 627288 [details] dmesg-nfsv4server.txt
Created attachment 627289 [details] var-log-messages-nfsv4client.txt
Created attachment 627290 [details] dmesg-nfsv4client.txt
I'm a little confused by this:

    I see the following on the server side :
    ...
    Oct 12 08:58:02 miz12 rpc.mountd[1213]: Caught signal 15, un-registering and exiting.
    Oct 12 08:58:02 miz12 systemd[1]: nfs-mountd.service: main process exited, code=exited, status=1
    Oct 12 08:58:02 miz12 systemd[1]: Unit nfs-mountd.service entered failed state.

versus:

    # ps -ef | egrep 'nfs|rpc|portmap|lockd' | grep -v grep
    ...
    root      4419     1  0 Oct12 ?        00:00:00 /usr/sbin/rpc.mountd

Well, I guess I'll believe the logs and not ps (perhaps it was a failure from some earlier attempt?)

Would it be possible to get a network trace showing the failure? (So, tcpdump -s0 -wtmp.pcap, then try the mount, then kill tcpdump and attach tmp.pcap.)
------- Comment From maknayak.com 2012-10-17 06:56 EDT-------
(In reply to comment #10)
> Well, I guess I'll believe the logs and not ps (perhaps it was a failure
> from some earlier attempt?)
>
> Would it be possible to get a network trace showing the failure? (So,
> tcpdump -s0 -wtmp.pcap, then try the mount, then kill tcpdump and attach
> tmp.pcap.)

Hello Red Hat,

While trying to mount the NFSv4 exports on the client side, I captured tcpdump traces on the NFSv4 client and server, in "nfsv4client-tmp.pcap" and "nfsv4server-tmp.pcap" respectively. I have attached the network trace nfsv4client-tmp.pcap to the bugzilla.
Created attachment 628569 [details] nfsv4server-tmp.pcap
------- Comment (attachment only) From maknayak.com 2012-10-17 06:57 EDT-------
Created attachment 628570 [details] nfsv4client-tmp.pcap
------- Comment (attachment only) From maknayak.com 2012-10-17 06:58 EDT-------
Looks like a firewalling issue:

2005 2012-10-17 06:39:21.951176 9.3.110.195 9.3.110.107 TCP 723 > nfs [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=42441783 TSecr=0 WS=128
2006 2012-10-17 06:39:21.951224 9.3.110.107 9.3.110.195 ICMP Destination unreachable (Host administratively prohibited)

The client tries to open a socket to the server's NFS port, but the server rejects it with icmp-host-prohibited.
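The two trace lines above are the whole story: a SYN goes out, and an ICMP "Host administratively prohibited" comes back. On Linux that ICMP type is usually generated by an iptables REJECT rule with --reject-with icmp-host-prohibited. A sketch of scanning a text export of such a trace for the same pattern (the sample lines are taken from this trace):

```shell
# Scan a tcpdump/tshark text export for ICMP administrative rejections.
# A match means a firewall on the destination (or on the path) is
# actively rejecting the connection, not that routing is broken.
trace='2005 2012-10-17 06:39:21.951176 9.3.110.195 9.3.110.107 TCP 723 > nfs [SYN] Seq=0 Win=14600 Len=0
2006 2012-10-17 06:39:21.951224 9.3.110.107 9.3.110.195 ICMP Destination unreachable (Host administratively prohibited)'
echo "$trace" | grep -i 'administratively prohibited'
```

The "No route to host" errors from rpcinfo and mount.nfs are the userspace rendering of exactly this ICMP message (errno 113, EHOSTUNREACH).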
Interestingly, I see the same issue if I try to mount a firewalled server over IPv4:

# mount -v tlielax:/foo /mnt/foo -o vers=4,proto=tcp
mount.nfs: timeout set for Wed Oct 17 06:35:39 2012
mount.nfs: trying text-based options 'vers=4,proto=tcp,addr=192.168.1.3,clientaddr=192.168.1.22'
mount.nfs: mount(2): No route to host
mount.nfs: trying text-based options 'vers=4,proto=tcp,addr=192.168.1.3,clientaddr=192.168.1.22'
mount.nfs: mount(2): No route to host

...and it keeps repeating, I guess until it eventually times out.

If I try IPv6 though, I get a different error and it fails immediately:

# mount -v tlielax:/foo /mnt/foo -o vers=4,proto=tcp6
mount.nfs: timeout set for Wed Oct 17 06:37:27 2012
mount.nfs: trying text-based options 'vers=4,proto=tcp6,addr=2001:470:8:d63:3a60:77ff:fe93:a95d,clientaddr=2001:470:8:d63:5054:ff:fe9b:3976'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting tlielax:/foo
Ok, I think the difference there is due to the default firewall rules of the stock firewall in Fedora. Either way, it certainly sounds like firewalling is the problem.

Can you confirm whether that's the case?
------- Comment From maknayak.com 2012-10-26 02:49 EDT-------
(In reply to comment #17)
> Ok, I think the difference there is due to the default firewall rules for
> the stock firewall in fedora. Either way, it certainly sounds like
> firewalling is the problem.
>
> Can you confirm whether that's the case?

While mounting the NFSv4 directories, both the client and the server had the iptables services disabled.

On NFSv4 Server:

[root@miz12 ~]# systemctl status iptables
iptables.service - IPv4 firewall with iptables
	  Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled)
	  Active: inactive (dead)
	  CGroup: name=systemd:/system/iptables.service

[root@miz12 ~]# systemctl status ip6tables
ip6tables.service - IPv6 firewall with ip6tables
	  Loaded: loaded (/usr/lib/systemd/system/ip6tables.service; disabled)
	  Active: inactive (dead)
	  CGroup: name=systemd:/system/ip6tables.service

On NFSV4 Client:

[root@miz09 ~]# systemctl status iptables.service
iptables.service - IPv4 firewall with iptables
	  Loaded: loaded (/usr/lib/systemd/system/iptables.service; disabled)
	  Active: inactive (dead)
	  CGroup: name=systemd:/system/iptables.service

[root@miz09 ~]# systemctl status ip6tables.service
ip6tables.service - IPv6 firewall with ip6tables
	  Loaded: loaded (/usr/lib/systemd/system/ip6tables.service; disabled)
	  Active: inactive (dead)
	  CGroup: name=systemd:/system/ip6tables.service
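One caveat worth noting here: on Fedora 18 the default firewall front-end is firewalld, not the iptables.service unit, so iptables.service being "disabled" does not mean no netfilter rules are loaded. A sketch of how to check which front-end is actually in control (guarded so it degrades gracefully on hosts without firewalld):

```shell
# iptables.service and firewalld are separate front-ends to the same
# kernel netfilter tables; firewalld's rules stay loaded even while
# iptables.service reports "inactive (dead)".
if command -v firewall-cmd >/dev/null 2>&1; then
    firewall-cmd --state || true           # "running" or "not running"
    firewall-cmd --list-services || true   # services allowed in default zone
else
    # Fall back to reading the loaded rules directly (needs root).
    iptables -L -n 2>/dev/null || echo "cannot read netfilter rules here"
fi
```

Chain names like IN_ZONE_public and FWDI_ZONE_public in `iptables -L` output are a telltale sign that the rules were installed by firewalld.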
Can you also run these commands on both hosts and paste the output here?

# iptables -L

...also, are there any non-comment lines in /etc/hosts.allow or /etc/hosts.deny?
------- Comment From maknayak.com 2012-10-30 14:35 EDT-------
(In reply to comment #19)
> Can you also run these commands on both hosts and paste the output here?
>
> # iptables -L

On server:

[root@miz12 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     all  --  anywhere    anywhere     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere    anywhere
INPUT_direct  all  --  anywhere    anywhere
INPUT_ZONES   all  --  anywhere    anywhere
ACCEPT     icmp --  anywhere    anywhere
REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source      destination
ACCEPT     all  --  anywhere    anywhere     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere    anywhere
FORWARD_direct  all  --  anywhere    anywhere
FORWARD_ZONES   all  --  anywhere    anywhere
ACCEPT     icmp --  anywhere    anywhere
REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source      destination
OUTPUT_direct  all  --  anywhere    anywhere

Chain FORWARD_ZONES (1 references)
target     prot opt source      destination
FWDO_ZONE_public  all  --  anywhere    anywhere
FWDI_ZONE_public  all  --  anywhere    anywhere

Chain FORWARD_direct (1 references)
target     prot opt source      destination

Chain FWDI_ZONE_public (1 references)
target     prot opt source      destination
FWDI_ZONE_public_deny   all  --  anywhere    anywhere
FWDI_ZONE_public_allow  all  --  anywhere    anywhere

Chain FWDI_ZONE_public_allow (1 references)
target     prot opt source      destination

Chain FWDI_ZONE_public_deny (1 references)
target     prot opt source      destination

Chain FWDO_ZONE_external (0 references)
target     prot opt source      destination
FWDO_ZONE_external_deny   all  --  anywhere    anywhere
FWDO_ZONE_external_allow  all  --  anywhere    anywhere

Chain FWDO_ZONE_external_allow (1 references)
target     prot opt source      destination
ACCEPT     all  --  anywhere    anywhere

Chain FWDO_ZONE_external_deny (1 references)
target     prot opt source      destination

Chain FWDO_ZONE_public (1 references)
target     prot opt source      destination
FWDO_ZONE_public_deny   all  --  anywhere    anywhere
FWDO_ZONE_public_allow  all  --  anywhere    anywhere

Chain FWDO_ZONE_public_allow (1 references)
target     prot opt source      destination

Chain FWDO_ZONE_public_deny (1 references)
target     prot opt source      destination

Chain INPUT_ZONES (1 references)
target     prot opt source      destination
IN_ZONE_public  all  --  anywhere    anywhere

Chain INPUT_direct (1 references)
target     prot opt source      destination

Chain IN_ZONE_dmz (0 references)
target     prot opt source      destination
IN_ZONE_dmz_deny   all  --  anywhere    anywhere
IN_ZONE_dmz_allow  all  --  anywhere    anywhere

Chain IN_ZONE_dmz_allow (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:ssh ctstate NEW

Chain IN_ZONE_dmz_deny (1 references)
target     prot opt source      destination

Chain IN_ZONE_external (0 references)
target     prot opt source      destination
IN_ZONE_external_deny   all  --  anywhere    anywhere
IN_ZONE_external_allow  all  --  anywhere    anywhere

Chain IN_ZONE_external_allow (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:ssh ctstate NEW

Chain IN_ZONE_external_deny (1 references)
target     prot opt source      destination

Chain IN_ZONE_home (0 references)
target     prot opt source      destination
IN_ZONE_home_deny   all  --  anywhere    anywhere
IN_ZONE_home_allow  all  --  anywhere    anywhere

Chain IN_ZONE_home_allow (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere       tcp dpt:ssh ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:ipp ctstate NEW
ACCEPT     udp  --  anywhere    224.0.0.251    udp dpt:mdns ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:netbios-ns ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:netbios-dgm ctstate NEW

Chain IN_ZONE_home_deny (1 references)
target     prot opt source      destination

Chain IN_ZONE_internal (0 references)
target     prot opt source      destination
IN_ZONE_internal_deny   all  --  anywhere    anywhere
IN_ZONE_internal_allow  all  --  anywhere    anywhere

Chain IN_ZONE_internal_allow (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere       tcp dpt:ssh ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:ipp ctstate NEW
ACCEPT     udp  --  anywhere    224.0.0.251    udp dpt:mdns ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:netbios-ns ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:netbios-dgm ctstate NEW

Chain IN_ZONE_internal_deny (1 references)
target     prot opt source      destination

Chain IN_ZONE_public (1 references)
target     prot opt source      destination
IN_ZONE_public_deny   all  --  anywhere    anywhere
IN_ZONE_public_allow  all  --  anywhere    anywhere

Chain IN_ZONE_public_allow (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere       tcp dpt:ssh ctstate NEW
ACCEPT     udp  --  anywhere    224.0.0.251    udp dpt:mdns ctstate NEW

Chain IN_ZONE_public_deny (1 references)
target     prot opt source      destination

Chain IN_ZONE_work (0 references)
target     prot opt source      destination
IN_ZONE_work_deny   all  --  anywhere    anywhere
IN_ZONE_work_allow  all  --  anywhere    anywhere

Chain IN_ZONE_work_allow (1 references)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere       tcp dpt:ssh ctstate NEW
ACCEPT     udp  --  anywhere    224.0.0.251    udp dpt:mdns ctstate NEW
ACCEPT     udp  --  anywhere    anywhere       udp dpt:ipp ctstate NEW

Chain IN_ZONE_work_deny (1 references)
target     prot opt source      destination

Chain OUTPUT_direct (1 references)
target     prot opt source      destination

> ...also, are there any non-comment lines in /etc/hosts.allow or
> /etc/hosts.deny ?

Nope, there are no such non-comment lines.

[root@miz12 ~]# cat /etc/hosts.allow
#
# hosts.allow	This file contains access rules which are used to
#		allow or deny connections to network services that
#		either use the tcp_wrappers library or that have been
#		started through a tcp_wrappers-enabled xinetd.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#
[root@miz12 ~]# cat /etc/hosts.deny
#
# hosts.deny	This file contains access rules which are used to
#		deny connections to network services that either use
#		the tcp_wrappers library or that have been
#		started through a tcp_wrappers-enabled xinetd.
#
# The rules in this file can also be set up in
# /etc/hosts.allow with a 'deny' option instead.
#
# See 'man 5 hosts_options' and 'man 5 hosts_access'
# for information on rule syntax.
# See 'man tcpd' for information on tcp_wrappers
#

Thanks...
Manas
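The "no non-comment lines" claim can be checked mechanically rather than by eye. A small sketch that prints only the effective (non-comment, non-blank) tcp_wrappers rules:

```shell
# Print only the lines tcp_wrappers will actually act on: everything
# that is not blank and not a comment starting with '#'.
effective_rules() {
    grep -Ev '^[[:space:]]*(#|$)' "$1" 2>/dev/null
}
for f in /etc/hosts.allow /etc/hosts.deny; do
    echo "== $f =="
    effective_rules "$f" || echo "(no effective rules)"
done
```

Empty output for both files confirms tcp_wrappers is not the culprit here, which leaves the netfilter REJECT rule as the remaining explanation for the icmp-host-prohibited responses.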
The server clearly has some firewalling set up, even though iptables.service and ip6tables.service are disabled: the INPUT chain ends in a REJECT rule with icmp-host-prohibited, and nothing in the public zone permits port 2049. I think you'll need to do some investigation to determine why that is and how to fix it to allow traffic to port 2049 in.

At this point, I'm not seeing any evidence of a bug here. I'm going to go ahead and close this as NOTABUG. Please reopen if you find evidence to the contrary.
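For anyone landing on this report with the same symptom: assuming the active firewall is firewalld (the Fedora default since F18), a read-only sketch of checking whether NFS traffic is allowed, and what to run if it is not. The "nfs" service name is standard in current firewalld releases but worth verifying on older ones:

```shell
# Read-only check: is the nfs service allowed in the default zone?
# If not, the printed command (run as root) opens TCP 2049, which is
# sufficient for NFSv4-only clients; NFSv3 additionally needs rpcbind
# and mountd.
if command -v firewall-cmd >/dev/null 2>&1; then
    firewall-cmd --query-service=nfs || \
        echo "run: firewall-cmd --permanent --add-service=nfs && firewall-cmd --reload"
else
    echo "firewall-cmd not available; inspect the iptables rules directly"
fi
```

A quicker (if blunt) way to confirm the diagnosis is to temporarily flush the rules with `iptables -F` on the server and retry the mount; if it then succeeds, the firewall was the whole story.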