Bug 1514241 - [Regression] gssproxy-0.7.0-24.fc27 breaks NFS4 krb5i mounts
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: gssproxy
Version: 27
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Robbie Harwood
QA Contact: Fedora Extras Quality Assurance
Blocks: 1507817
Reported: 2017-11-16 17:09 EST by James Ettle
Modified: 2017-12-13 15:29 EST (History)
CC List: 10 users

Fixed In Version: gssproxy-0.7.0-25.fc27
Last Closed: 2017-12-05 10:14:15 EST
Type: Bug


Attachments
vagrant reproducer (17.54 KB, text/plain)
2017-12-07 04:20 EST, rob.verduijn

Description James Ettle 2017-11-16 17:09:23 EST
With gssproxy-0.7.0-24.fc27, NFS4 mounts with krb5 security fail with the server denying access.

Works OK after I downgrade to gssproxy-0.7.0-22.fc27.

Also works if I manually stop and restart gssproxy.
Comment 1 Robbie Harwood 2017-11-16 17:33:08 EST
Setting component and blocks to match the rhel bug this is cloned from.
Comment 2 Steve Dickson 2017-11-20 10:55:24 EST
(In reply to Robbie Harwood from comment #1)
> Setting component and blocks to match the rhel bug this is cloned from.

I don't understand how this is an nfs-utils problem when downgrading
gssproxy fixes the problem. rpc.gssd has not changed in a while...
So how is this an nfs-utils problem?
Comment 3 Simo Sorce 2017-11-20 15:45:32 EST
Steve,
ever heard of latent bugs triggered by another component change?
Comment 4 Steve Dickson 2017-11-21 07:07:24 EST
(In reply to Simo Sorce from comment #3)
> Steve,
> ever heard of latent bugs triggered by another component change ?

So something is changed in one package which breaks something in 
another package and that is a latent bug? Hmm...
Comment 5 louisgtwo 2017-11-29 21:25:09 EST
I can confirm, downgrading to gssproxy-0.7.0-22.fc27 restores mounting nfs4 with krb5. Both server and client are fc27. Let me know how to debug this.
Comment 6 Steve Dickson 2017-11-30 16:53:36 EST
(In reply to louisgtwo from comment #5)
> I can confirm, downgrading to gssproxy-0.7.0-22.fc27 restores mounting nfs4
> with krb5. Both server and client are fc27. Let me know how to debug this.

Just curious... What KDC are you using?
Comment 7 louisgtwo 2017-11-30 17:12:50 EST
krb5-server-1.15.2-4.fc27.x86_64
Comment 8 Steve Dickson 2017-11-30 17:21:00 EST
(In reply to louisgtwo from comment #7)
> krb5-server-1.15.2-4.fc27.x86_64

Thank you!
Comment 9 louisgtwo 2017-11-30 20:44:26 EST
Just did some testing and found that with the server running gssproxy-0.7.0-24.fc27.x86_64 and the client running gssproxy-0.7.0-22.fc27.x86_64, everything is fine. As soon as I upgrade the client to gssproxy-0.7.0-24.fc27.x86_64, NFS4 with krb5 stops working.
Comment 10 Robbie Harwood 2017-12-01 15:21:24 EST
Please re-test with -25 and without KCM (yum erase sssd-kcm).  Thanks!
Comment 11 louisgtwo 2017-12-01 16:25:08 EST
My system (fedora-workstation) does not have sssd-kcm installed. I've upgraded to -25 on both server and workstation and so far so good. NFS mounts are working normally.
Comment 12 James Ettle 2017-12-01 19:37:08 EST
(In reply to Robbie Harwood from comment #10)
> Please re-test with -25 and without KCM (yum erase sssd-kcm).  Thanks!

What should GSS_USE_PROXY be in /etc/sysconfig/nfs for testing -25?
Comment 13 Robbie Harwood 2017-12-02 10:23:56 EST
(In reply to James from comment #12)
> (In reply to Robbie Harwood from comment #10)
> > Please re-test with -25 and without KCM (yum erase sssd-kcm).  Thanks!
> 
> What should GSS_USE_PROXY be in /etc/sysconfig/nfs for testing -25?

GSS_USE_PROXY=yes always when testing gssproxy; otherwise, gssproxy isn't involved and you just end up testing something else instead.  Please remove sssd-kcm prior to testing and restart rpc-gssd after upgrading gssproxy.
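
Concretely, a test run on the client would look roughly like the following (a sketch; rpc-gssd is the systemd unit that nfs-utils ships on Fedora, adjust if your setup differs):

yum erase sssd-kcm
# make sure /etc/sysconfig/nfs still has GSS_USE_PROXY=yes
yum update --enablerepo=updates-testing gssproxy
systemctl restart rpc-gssd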

Thanks louisgtwo!  Looks like this is my bug then, so taking it.
Comment 14 Robbie Harwood 2017-12-05 10:14:15 EST
The Bodhi update has landed, so assuming this is fixed unless I hear otherwise.
Comment 15 Anthony Messina 2017-12-05 21:12:57 EST
Robbie, I've tried gssproxy-0.7.0-25.fc27 (https://bodhi.fedoraproject.org/updates/FEDORA-2017-cb5743bcb0) and it doesn't seem to resolve this issue for me.

I am however using sssd-kcm.  In comment 10 you mentioned erasing sssd-kcm.  Is gssproxy/nfs/krb5 not going to work when using sssd-kcm?  If so, are there plans to address this (as sssd-kcm became the default)?
Comment 16 Robbie Harwood 2017-12-06 12:44:44 EST
(In reply to Anthony Messina from comment #15)
> Robbie, I've tried gssproxy-0.7.0-25.fc27
> (https://bodhi.fedoraproject.org/updates/FEDORA-2017-cb5743bcb0) and it
> doesn't seem to resolve this issue for me.
> 
> I am however using sssd-kcm.  In comment 10 you mentioned erasing sssd-kcm. 
> Is gssproxy/nfs/krb5 not going to work when using sssd-kcm?  If so are there
> plans to address this (as sssd-kcm became the default),

gssproxy can't be responsible for KCM-specific bugs.  https://bugzilla.redhat.com/show_bug.cgi?id=1521110 is the bug I have filed for the failure I find when trying to use KCM.
Comment 17 rob.verduijn 2017-12-07 04:20 EST
Created attachment 1364119 [details]
vagrant reproducer

vagrant reproducer
Comment 18 rob.verduijn 2017-12-07 04:24:52 EST
Hello,

I've added a reproducer that proves this is still an issue on Fedora 27.

on the fedoraclient:

using gssproxy 0.7.0-25.fc27
with GSS_USE_PROXY="yes" in /etc/sysconfig/nfs

try mounting the homes share on /media with
mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

it will fail 

change to GSS_USE_PROXY="no" in /etc/sysconfig/nfs
reboot and try again 
mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

it will succeed
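
(For reference, flipping the switch amounts to something like the following, assuming the GSS_USE_PROXY line is already present in /etc/sysconfig/nfs:)

sed -i 's/^GSS_USE_PROXY=.*/GSS_USE_PROXY="no"/' /etc/sysconfig/nfs
reboot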


the vagrant setup is kinda heavy, so bring up the VMs one at a time.
vagrant up ipaserver && vagrant up nfsserver && vagrant up centosclient && vagrant up fedoraclient

the centosclient is there to validate that the NFS setup is working OK

accounts in this setup are : root/admin/vagrant/testuser
all passwords are : centos74

Rob Verduijn
Comment 19 rob.verduijn 2017-12-07 04:26:03 EST
using vagrant 2.0.1-1 on centos (because it is broken on fedora)

Rob Verduijn
Comment 21 Robbie Harwood 2017-12-08 14:32:04 EST
Hi Rob, thanks for the very detailed reproducer.  Unfortunately I'm having some issues with vagrant at the moment.  (I'm not really sure what's up with the eth1 management you're doing, among other things.)

Does the problem continue to manifest if you restart rpc-gssd after updating gssproxy?

(The gssproxy update affects what we call the "mechglue" - it's a shim that runs in the process of applications consuming GSSAPI, like rpc-gssd does.)
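
(A rough way to check whether a running rpc.gssd still has the old copy of the gssproxy interposer mapped, assuming the plugin is installed under /usr/lib64/gssproxy/, is to look at its memory maps:)

grep gssproxy /proc/$(pidof rpc.gssd)/maps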
Comment 22 James Ettle 2017-12-08 14:38:56 EST
There's an element of 'stale state' to this problem. I just upgraded a machine to
gssproxy-0.7.0-26.fc27.x86_64 without any service restarts and immediately rebooted to find I had no NFS4 mounts. I reverted to -25 and rebooted; still nothing. I then restarted rpc-gssd, gssproxy and autofs and it came back, surviving reboots.

I then upgraded to -26 again but this time emptied /var/lib/gssproxy/clients/ before rebooting. It's working now. (This is with GSS_USE_PROXY="yes".)
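
Written out, the recovery steps above amount to roughly the following (a sketch; exact unit names and whether autofs is involved depend on the setup):

rm -f /var/lib/gssproxy/clients/*
systemctl restart gssproxy rpc-gssd autofs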
Comment 23 rob.verduijn 2017-12-09 07:46:32 EST
Hello,

After running  

yum update -y --enablerepo=updates-testing gssproxy

and 
systemctl restart gssproxy.service

The command
mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

Still gives
mount.nfs: access denied by server while mounting 192.168.122.3:/homes

A reboot doesn't help either.


I've found that booting 4 vagrant systems at the same time gives quite a few issues, so I start them one at a time.
Also, if you are not running as root, you need to add
domain.uri = 'qemu_tcp://<fqdn of vagrant host>/system'
to the :libvirt section of each system.

The eth1 stuff is a trick to make the vagrant guests use the IPA server as their DNS service (that way you can use the example.com domain for IPA).
And I like eth1 better on the command line than 'Wired connection 1'.
Also I avoid all the issues that might come from DHCP race conditions by setting up static IP addresses on eth1 for each machine.
That and my lack of knowledge of how to manipulate vagrant network settings are more or less the reasons I do all that eth1 stuff.

Rob
Comment 24 rob.verduijn 2017-12-09 07:47:45 EST
PS: if resources are a problem on your vagrant host, do not start the centosclient.
Its only function is to validate the working of the kerberized NFS exports.

Rob
Comment 25 Robbie Harwood 2017-12-12 13:43:19 EST
Right, so what I was asking you to do was `service rpc-gssd restart` after you run the `yum update`.
Comment 26 rob.verduijn 2017-12-12 16:13:44 EST
Hi,

Seems I misread your question.

After updating the system again (now at gssproxy-0.7.0-26),
the command `service rpc-gssd restart` did help.

But only once.
After a reboot I could not get the mount to work again, no matter how often I issued 'service rpc-gssd restart'.

Rob Verduijn
Comment 27 Robbie Harwood 2017-12-13 14:44:51 EST
Hi Rob, I have a new version of gssproxy that may fix this issue.  Could you test gssproxy-0.7.0-29 and let me know if it works for you?  Thanks!
Comment 28 rob.verduijn 2017-12-13 15:29:22 EST
Hello,

I pulled it from the build system since it wasn't in testing yet.
https://koji.fedoraproject.org/koji/buildinfo?buildID=1009320

That version seems to be working fine on my system,
with GSS_USE_PROXY="yes" in /etc/sysconfig/nfs.

I can now mount the kerberized share again with:
mount -overs=4,rw,async,noatime,timeo=14,soft,sec=krb5p,acl 192.168.122.3:/homes /media

Thanx

Rob
