Bug 1285299 - NFS file system mounting failure after upgrade to F23
Status: CLOSED EOL
Product: Fedora
Classification: Fedora
Component: nfs-utils
Version: 23
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Assigned To: Steve Dickson
QA Contact: Fedora Extras Quality Assurance
Depends On:
Blocks:
Reported: 2015-11-25 06:28 EST by mohammed
Modified: 2016-12-20 11:16 EST
CC: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-12-20 11:16:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description mohammed 2015-11-25 06:28:22 EST
Description of problem:
After upgrading from F16 to F23 I cannot get NFS to work correctly. I can mount the exported NFS shares locally on the same machine and from some other machines (Kubuntu 12.04), but not from certain other machines.


Version-Release number of selected component (if applicable):
nfs-utils-1.3.3-1.rc1.fc23.x86_64


How reproducible:
Always


Steps to Reproduce:
1. Disable SELinux, firewalld and iptables:
$ /usr/sbin/getenforce
Disabled
#systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

2. Make sure the NFS services are started (I tried everything I could think of):
$ sudo systemctl start nfs.service
$ sudo systemctl start rpcbind.service
$ sudo systemctl start nfs-lock.service
(I also enabled the services so they are available after boot; a sketch of the equivalent enable commands follows.)
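For reference, enabling the same units at boot would look roughly like this (a sketch; on F23 nfs.service should resolve to nfs-server.service, and systemctl's --now flag starts the unit in the same step):
$ sudo systemctl enable --now rpcbind.service     # RPC portmapper
$ sudo systemctl enable --now nfs-server.service  # the NFS server itself
$ sudo systemctl enable --now nfs-lock.service    # rpc.statd for NFSv3 locking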

3. Mounting NFS works locally and also from some other remote machines on the subnet:
sudo mount -t nfs 10.10.30.219:/home/meodou/zdev/mock/fedora-23-x86_64-st_tc/root/opt//target /mnt/

but from some other machines it fails. The kernel log prints:
"[504347.279668] RPC: AUTH_GSS upcall failed. Please check user daemon is running.
[504436.790083] RPC: AUTH_GSS upcall failed. Please check user daemon is running."


Actual results:
Mounting NFS works locally and from some remote machines on the subnet, but from certain other machines the same mount command fails with the "RPC: AUTH_GSS upcall failed. Please check user daemon is running." errors shown above.

Booting with an NFS-mounted root file system also fails:
IP-Config: Complete:
     device=eth0, addr=10.20.20.12, mask=255.255.252.0, gw=10.20.20.1,
     host=b2067, domain=, nis-domain=(none),
     bootserver=255.255.255.255, rootserver=10.20.23.34, rootpath=
Looking up port of RPC 100003/3 on 10.20.23.34
PHY: 0:01 - Link is Up - 100/Full
Looking up port of RPC 100005/3 on 10.20.23.34
Root-NFS: Server returned error -13 while mounting /home/meodou/zenDev/mock/fedora-23-x86_64-st_tc/root/opt/target
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
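Error -13 here is -EACCES (permission denied) returned by the server, which is most likely the same authentication problem as above, though it can also mean the export does not allow this client. If the server only accepts vers=3/sec=sys mounts from this client, the NFS root can presumably be pinned to v3 as well; a rough sketch of the kernel command line, assuming the standard nfsroot option syntax and keeping the server and path from the log above (plus whatever ip= autoconfiguration is already in use):
root=/dev/nfs nfsroot=10.20.23.34:/home/meodou/zenDev/mock/fedora-23-x86_64-st_tc/root/opt/target,vers=3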


Expected results:
The NFS export should mount successfully.


Additional info:
The status of the NFS server:
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
   Active: active (exited) since Wed 2015-11-25 12:01:51 CET; 2min 31s ago
  Process: 7021 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 7018 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 7005 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 7052 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 7050 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 7052 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

Note that the active state is "active (exited)".
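"active (exited)" is the expected state here if nfs-server.service is the stock oneshot unit with RemainAfterExit=yes: rpc.nfsd only configures and starts the kernel nfsd threads and then exits. A quick way to confirm that assumption, which for the stock unit should print something like:
$ systemctl show -p Type -p RemainAfterExit nfs-server.service
Type=oneshot
RemainAfterExit=yes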

Some services were failing on ConditionPathExists=/etc/krb5.keytab. I understood this is somehow related to secure NFS and Kerberos; since I don't need that, I tried disabling it:
1- Changing /etc/nfsmount.conf (the resulting file is sketched just below):
  * Testing with Sec=sys and none, but this does not disable GSS.
  * Forcing the default version to 3 (as version 3 has no Kerberos security): Defaultvers=3.
  * Editing the file /etc/sysconfig/nfs:
   #GSS_USE_PROXY="yes"
After all those changes the secure service was still being started and failing (after restarting all the services and even rebooting the machine).
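For reference, the client-side part of that attempt would look roughly like this in /etc/nfsmount.conf (a sketch following nfsmount.conf(5); note these are only client mount defaults, so they do not stop the server-side GSS units from being started):
[ NFSMount_Global_Options ]
# Default to protocol version 3, which has no GSS/Kerberos flavours
Defaultvers=3
# Use plain AUTH_SYS instead of any krb5 security flavour
Sec=sys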

2- Trying to remove the problematic services directly (a less invasive alternative is sketched below):
  * Editing /usr/lib/systemd/system/nfs-server.service
    and disabling the GSS dependencies by commenting out the lines:
    #Wants=auth-rpcgss-module.service
    #After=rpc-gssd.service gssproxy.service rpc-svcgssd.service
  * Disabling auth-rpcgss-module.service and rpc-gssd.service
After those changes and a reboot, nfs-secure is not started any more, but nfs.service still shows "active (exited)".
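A less invasive way to get the same effect, instead of editing the packaged unit file (which the next nfs-utils update would overwrite), would be to mask the GSS-related units; a sketch using the unit names mentioned above:
$ sudo systemctl mask auth-rpcgss-module.service rpc-gssd.service rpc-svcgssd.service gssproxy.service
$ sudo systemctl restart nfs-server.service
Note that nfs-server.service showing "active (exited)" afterwards is still expected (see the note on the unit type above).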

I really don't know how to proceed now. I have spent the last day and a half searching the internet and cannot find anything more. All I need is basic NFS mounting without security; it should not be that hard to get.

If needed:
[meodou@localhost mock]$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100021    1   udp  38868  nlockmgr
    100021    3   udp  38868  nlockmgr
    100021    4   udp  38868  nlockmgr
    100021    1   tcp  34313  nlockmgr
    100021    3   tcp  34313  nlockmgr
    100021    4   tcp  34313  nlockmgr
    100024    1   udp  41416  status
    100024    1   tcp  44705  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl

[meodou@localhost mock]$ cat /etc/exports
/home/meodou/zdev/mock/fedora-23-x86_64-st_tc/root/opt/target *(rw,no_root_squash,sync,insecure)
/home/meodou/zdev/mock/fedora-23-x86_64-st_tc/root/opt/target *(rw,no_root_squash,sync,insecure)
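After changing /etc/exports (note the entry appears twice; one line is enough), re-exporting and then checking from one of the failing clients what the server actually advertises might look like this (a sketch using the standard nfs-utils/rpcbind tools):
$ sudo exportfs -rav            # on the server: re-read /etc/exports and list the exports
$ showmount -e 10.10.30.219     # on a failing client: is the export visible at all?
$ rpcinfo -p 10.10.30.219       # on a failing client: are mountd and nfs reachable over the network?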


Thanks in advance.
meodou
Comment 1 mohammed 2015-11-25 06:45:53 EST
[meodou@localhost mock]$ sudo systemctl restart nfs-server.service
[ 5385.799219] nfsd: last server has exited, flushing export cache
[ 5385.872778] NFSD: starting 90-second grace period (net ffffffff81cef980)
Comment 2 Steve Dickson 2015-12-14 09:39:26 EST
(In reply to mohammed from comment #0)

> 3.Mounting nfs works locally and also from some other remote machine in the
> subnetwork
> sudo mount -t nfs
> 10.10.30.219:/home/meodou/zdev/mock/fedora-23-x86_64-st_tc/root/opt//target
> /mnt/
> 
> but for some other machines it fails:
> The kernel messages prints:
> "[504347.279668] RPC: AUTH_GSS upcall failed. Please check user daemon is
> running.
> [504436.790083] RPC: AUTH_GSS upcall failed. Please check user daemon is
> running."
> 

What is happening is the kernel is doing an upcall to the 
rpc.gssd daemon to get some Kerberos creds.

Is Kerberos installed (aka does /etc/krb5.keytab exist)
on the failing machines?
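On a machine where the mount fails, one quick way to check both would be, for example:
$ ls -l /etc/krb5.keytab
$ systemctl status rpc-gssd.service gssproxy.service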
Comment 3 mohammed 2016-02-04 16:52:53 EST
First thanks for your response.
Yes the file doesnt exist. And I got around this problem forcing using NFs version 3.
The point that I didnt understand is why for version running with GSS disabled was not possible. I was expecting to be able to disable GSS on version 4, and run without the extra security as I didnt need it. Now I m forcing the client to set version 3 on mount option then it works.
Sorry for the late reply I missed the email notification.
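For anyone hitting the same thing, the workaround amounts to something like this on the client (a sketch; the export path is the one from the original report):
$ sudo mount -t nfs -o vers=3 10.10.30.219:/home/meodou/zdev/mock/fedora-23-x86_64-st_tc/root/opt/target /mnt
or, persistently, an /etc/fstab line such as:
10.10.30.219:/home/meodou/zdev/mock/fedora-23-x86_64-st_tc/root/opt/target  /mnt  nfs  vers=3  0 0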
Comment 4 Fedora End Of Life 2016-11-24 08:43:36 EST
This message is a reminder that Fedora 23 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 23. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '23'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 23 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.
Comment 5 Fedora End Of Life 2016-12-20 11:16:45 EST
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
