Description of problem:
Under the default rules provided by s-c-securitylevel with NFSv4 checked (allowed), client machines cannot access NFS services.

Version-Release number of selected component (if applicable):
iptables-1.3.5-1.2
s-c-securitylevel-1.6.16-3

How reproducible:
always

Steps to Reproduce:
1.
2.
3.

Actual results:
The client machine (Fedora development) cannot access portmap or NFS, and wrongly reports "No route to host".

Expected results:
rpcinfo and NFS should work.

Additional info:
Turning off iptables, or adding a network "accept" rule for the LAN, allows rpcinfo and NFS to work. Adding a UDP accept for portmap does *not* by itself allow rpcinfo to work. netstat shows active listeners in place for the services concerned. Turning off iptables is not a good alternative.
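For reference, the LAN-wide "accept" workaround mentioned above looks something like this (a sketch only; the 10.0.0.0/8 range and the RH-Firewall-1-INPUT chain name are assumptions based on a typical s-c-securitylevel setup, not taken from the reporter's machine):

```
# Accept everything from the local non-routable LAN -- a workaround, not a fix
iptables -I RH-Firewall-1-INPUT -s 10.0.0.0/8 -j ACCEPT
service iptables save
```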
Assigning to system-config-securitylevel which generates the firewall config file.
I'm having nothing but machine lockups trying to test NFSv4 stuff here recently. Any ideas what the problems in the current firewall setup are? Which ports do I still need to open?
Well, portmapper (sunrpc) needs to be opened at least, and additionally the rpc.mountd and probably rpc.lockd ports too. Here I punted and added a firewall rule treating the local LAN as a trusted source. I'm behind a router/firewall, so I feel (somewhat) safe in doing this; I'm also using a non-routable address range for the LAN (10.x.x.x).

I've been playing tonight with a variety of ports, but the nfs init.d script is trying to use some /proc/fs/nfs entries that don't exist (nlm_<something>) to configure ports and such. The /etc/sysconfig/nfs file probably needs to be created to set up relatively static ports for the lockd and mountd daemons. The inherent flexibility of the rpc/portmapper mapping makes the static nature of the firewalling process a real problem to overcome.

As for the NFSv4 stuff, I'm having no problems with NFS on the i686 boxes I'm using for testing. I'm not sure I'm actually using v4, though, so take that with a grain of salt.
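Before the firewall can usefully be opened for v2/v3 service, the normally-dynamic rpc ports have to be pinned. A sketch of what /etc/sysconfig/nfs might contain (the variable names are the ones the init scripts are known to honor, but the specific port numbers here are assumptions, chosen only for illustration):

```
# /etc/sysconfig/nfs -- pin the normally-dynamic rpc ports
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
```

With those fixed, matching holes can be punched for portmapper (111), nfsd (2049), and the pinned ports (again a sketch; the RH-Firewall-1-INPUT chain name is assumed from a typical s-c-securitylevel configuration):

```
iptables -I RH-Firewall-1-INPUT -p tcp -m multiport --dports 111,2049,892,662,32803 -j ACCEPT
iptables -I RH-Firewall-1-INPUT -p udp -m multiport --dports 111,2049,892,662,32769 -j ACCEPT
```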
It looks to me like you are not using NFSv4 here. I was finally able to get a machine set up that doesn't panic, and ran some tests. I set up an NFS server with the ssh and nfsv4 ports open, and a client with only the ssh port open. Then:

# mount exeter:/home/clumens/nfs /misc/
mount: mount to NFS server 'exeter' failed: System Error: No route to host.
# mount -t nfs4 exeter:/ /misc/
# mount | grep exeter
exeter:/ on /misc type nfs4 (rw,addr=172.16.80.157)

The difference in the paths I'm attempting to mount is intentional - that's the way things work with NFSv4. Since the checkbox does clearly say "NFSv4", I'm inclined to say this is not actually a bug.
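This matches how v4 differs from v2/v3: an NFSv4-only client talks to the one well-known nfsd port rather than asking portmapper, so the server-side rule the checkbox generates can be as simple as (a sketch; the chain name is an assumption):

```
# NFSv4 only needs nfsd itself reachable
iptables -I RH-Firewall-1-INPUT -p tcp --dport 2049 -j ACCEPT
```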
Closing based on my previous comments. If you disagree and think this is still a bug, feel free to reopen and explain what you think I should do to clear it up.
Perhaps it would be an option to add a checkbox for "NFS legacy" or something like that?