With gssproxy-0.7.0-24.fc27, NFS4 mounts with krb5 security fail with the server denying access. Mounts work OK after I downgrade to gssproxy-0.7.0-22.fc27. They also work if I manually stop and restart gssproxy.
Setting component and blocks to match the RHEL bug this is cloned from.
(In reply to Robbie Harwood from comment #1)
> Setting component and blocks to match the RHEL bug this is cloned from.

I don't understand how this is an nfs-utils problem when downgrading gssproxy fixes the problem. rpc.gssd hasn't changed in a while... so how is this an nfs-utils problem?
Steve, ever heard of latent bugs triggered by a change in another component?
(In reply to Simo Sorce from comment #3)
> Steve, ever heard of latent bugs triggered by a change in another component?

So a change in one package breaks something in another package, and that's a latent bug? Hmm...
I can confirm: downgrading to gssproxy-0.7.0-22.fc27 restores mounting NFS4 with krb5. Both server and client are fc27. Let me know how to debug this.
(In reply to louisgtwo from comment #5)
> I can confirm: downgrading to gssproxy-0.7.0-22.fc27 restores mounting NFS4
> with krb5. Both server and client are fc27. Let me know how to debug this.

Just curious... what KDC are you using?
krb5-server-1.15.2-4.fc27.x86_64
(In reply to louisgtwo from comment #7)
> krb5-server-1.15.2-4.fc27.x86_64

Thank you!
I did some testing and found that with the server running gssproxy-0.7.0-24.fc27.x86_64 and the client running gssproxy-0.7.0-22.fc27.x86_64, everything is fine. As soon as I upgrade the client to gssproxy-0.7.0-24.fc27.x86_64, NFS4 with krb5 stops working.
Please re-test with -25 and without KCM (yum erase sssd-kcm). Thanks!
My system (Fedora Workstation) does not have sssd-kcm installed. I've upgraded to -25 on both server and workstation, and so far so good: NFS mounts are working normally.
(In reply to Robbie Harwood from comment #10)
> Please re-test with -25 and without KCM (yum erase sssd-kcm). Thanks!

What should GSS_USE_PROXY be in /etc/sysconfig/nfs for testing -25?
(In reply to James from comment #12)
> (In reply to Robbie Harwood from comment #10)
> > Please re-test with -25 and without KCM (yum erase sssd-kcm). Thanks!
>
> What should GSS_USE_PROXY be in /etc/sysconfig/nfs for testing -25?

GSS_USE_PROXY=yes always when testing gssproxy; otherwise, gssproxy isn't involved and you just end up testing something else instead. Please remove sssd-kcm prior to testing and restart rpc-gssd after upgrading gssproxy.

Thanks louisgtwo! Looks like this is my bug then, so taking it.
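A minimal sketch of that client-side retest sequence, assuming the stock Fedora 27 package and unit names (yum works the same as dnf here, and the updates-testing flag is only needed while -25 is still in testing):

  dnf remove sssd-kcm
  dnf upgrade --enablerepo=updates-testing gssproxy
  grep GSS_USE_PROXY /etc/sysconfig/nfs   # should show GSS_USE_PROXY=yes
  systemctl restart rpc-gssd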
The Bodhi update has landed, so I'm assuming this is fixed unless I hear otherwise.
Robbie, I've tried gssproxy-0.7.0-25.fc27 (https://bodhi.fedoraproject.org/updates/FEDORA-2017-cb5743bcb0) and it doesn't seem to resolve this issue for me.

I am, however, using sssd-kcm. In comment 10 you mentioned erasing sssd-kcm. Is gssproxy/nfs/krb5 not going to work when using sssd-kcm? If so, are there plans to address this (as sssd-kcm became the default)?
(In reply to Anthony Messina from comment #15)
> I am, however, using sssd-kcm. In comment 10 you mentioned erasing sssd-kcm.
> Is gssproxy/nfs/krb5 not going to work when using sssd-kcm? If so, are there
> plans to address this (as sssd-kcm became the default)?

gssproxy can't be responsible for KCM-specific bugs. https://bugzilla.redhat.com/show_bug.cgi?id=1521110 is the bug I have filed for the failure I find when trying to use KCM.
Created attachment 1364119 [details]
vagrant reproducer
Hello,

I've added a reproducer that proves this is still an issue on Fedora 27.

On the fedoraclient (using gssproxy-0.7.0-25.fc27), with GSS_USE_PROXY="yes" in /etc/sysconfig/nfs, try mounting the homes share on /media with:

  mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

It will fail.

Change to GSS_USE_PROXY="no" in /etc/sysconfig/nfs, reboot, and try again:

  mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

It will succeed.

The vagrant setup is kinda heavy, so bring up the VMs one at a time:

  vagrant up ipaserver && vagrant up nfsserver && vagrant up centosclient && vagrant up fedoraclient

The centosclient is there to validate that the NFS setup is working OK.

Accounts in this setup are: root/admin/vagrant/testuser
All passwords are: centos74

Rob Verduijn
I'm using vagrant 2.0.1-1 on CentOS (because it is broken on Fedora).

Rob Verduijn
Hi Rob, thanks for the very detailed reproducer. Unfortunately I'm having some issues with vagrant at the moment. (I'm not really sure what's up with the eth1 management you're doing, among other things.)

Does the problem continue to manifest if you restart rpc-gssd after updating gssproxy? (The gssproxy update affects what we call the "mechglue" - it's a shim that runs in the process of applications consuming GSSAPI, like rpc-gssd does.)
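In other words, something like this right after the update (a sketch; `service rpc-gssd restart` is equivalent on Fedora):

  dnf upgrade gssproxy && systemctl restart rpc-gssd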
There's an element of 'stale state' to this problem. I just upgraded a machine to gssproxy-0.7.0-26.fc27.x86_64 without any service restarts and immediately rebooted, only to find I had no NFS4 mounts. I reverted back to -25 and rebooted; still nothing. I then restarted rpc-gssd, gssproxy and autofs, and it came back, surviving reboots.

I then upgraded to -26 again, but this time emptied /var/lib/gssproxy/clients/ before rebooting. It's working now. (This is with GSS_USE_PROXY="yes".)
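For reference, the recovery steps above amount to roughly this (unit names as on stock Fedora; emptying that directory is what cleared the stale state here):

  rm -f /var/lib/gssproxy/clients/*
  systemctl restart gssproxy rpc-gssd autofs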
Hello,

After running:

  yum update -y --enablerepo=updates-testing gssproxy
  systemctl restart gssproxy.service

the command:

  mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

still gives:

  mount.nfs: access denied by server while mounting 192.168.122.3:/homes

A reboot doesn't help either.

I've found that booting 4 vagrant systems at the same time causes quite a few issues, so I start them one at a time. Also, if you are not running as root, you need to add:

  domain.uri = 'qemu_tcp://<fqdn of vagrant host>/system'

to the :libvirt section of each system.

The eth1 stuff is a trick to make the vagrant guests use the IPA server as their DNS service (that way you can use the example.com domain for IPA). I also like "eth1" better on the command line than "Wired connection 1". And I avoid any issues that might come from DHCP race conditions by setting up static IP addresses on eth1 for each machine. That, and my lack of knowledge of how to manipulate vagrant network settings, is more or less the reason for all the eth1 stuff.

Rob
PS: if resources are a problem on your vagrant host, do not start the centosclient. Its only function is to validate that the kerberized NFS exports are working.

Rob
Right, so what I was asking you to do was `service rpc-gssd restart` after you run the `yum update`.
Hi,

It seems I misread your question. After updating the system again (now at gssproxy-0.7.0-26), the command `service rpc-gssd restart` did help, but only once. After a reboot I could not get the mount to work again, no matter how often I issued `service rpc-gssd restart`.

Rob Verduijn
Hi Rob, I have a new version of gssproxy that may fix this issue. Could you test gssproxy-0.7.0-29 and let me know if it works for you? Thanks!
Hello,

I pulled it from the build system since it wasn't in testing yet:
https://koji.fedoraproject.org/koji/buildinfo?buildID=1009320

That version seems to be working fine on my system. With GSS_USE_PROXY="yes" in /etc/sysconfig/nfs I can now mount the kerberized share again with:

  mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

Thanx
Rob
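For anyone grabbing it the same way before it hits updates-testing, something like this should work (assuming the fc27 NVR shown on that koji page):

  koji download-build --arch=x86_64 gssproxy-0.7.0-29.fc27
  dnf install ./gssproxy-0.7.0-29.fc27.x86_64.rpm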