Bug 207670
Summary: wrong access rights on NFS mount
Product: [Fedora] Fedora
Component: kernel
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: medium
Version: 6
Hardware: i386
OS: Linux
Whiteboard: bzcl34nup
Fixed In Version: 2.6.25-0.121.rc5.git4.fc9
Doc Type: Bug Fix
Reporter: Derrien <derrien>
Assignee: Steve Dickson <steved>
QA Contact: Ben Levenson <benl>
CC: davej, k.georgiou, kzak, richardfearn, tbeattie, triage
Last Closed: 2008-04-21 13:40:16 UTC
Description
Derrien
2006-09-22 13:44:34 UTC
Just curious: regardless of the bookkeeping on the client (which is obviously a bit confused), is the filesystem truly read-only for everybody other than '@m-irisa', as the exports say?

Yes, the fs is read-only for everybody on our network (and only our network, I hope ;-)).

NB: it works as expected on FC5:

    [root@d380 (FedoraCore 5) ~]$ mount nas1b:/vol/ren1b_soft_unix/linux /mnt/linux -w
    [root@d380 (FedoraCore 5) ~]$ mount nas1b:/vol/ren1b_soft_unix/local/i686_linux /mnt/local -r
    [root@d380 (FedoraCore 5) ~]$ grep mnt /proc/mounts
    nas1b:/vol/ren1b_soft_unix/linux /mnt/linux nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,addr=nas1b 0 0
    nas1b:/vol/ren1b_soft_unix/local/i686_linux /mnt/local nfs ro,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,addr=nas1b 0 0

We have the same problem with FC5 kernel 2.6.18-1.2200 and with FC6, and for us these kernels are unusable. Do I have to open a new bug report?

    [root@d380 (FedoraCore 5) ~]$ uname -a
    Linux d380.irisa.fr 2.6.18-1.2200.fc5smp #1 SMP Sat Oct 14 17:15:35 EDT 2006 i686 i686 i386 GNU/Linux
    [root@d380 (FedoraCore 5) ~]$ grep linux /proc/mounts
    nas1b:/vol/ren1b_soft_unix/local/i686_linux /net/usr/local nfs ro,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=nas1b 0 0
    nas1b:/vol/ren1b_soft_unix/linux /soft/Linux nfs ro,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=nas1b 0 0

Both filesystems are mounted read-only. With the same kernel without the FC patches it works as expected:

    [root@d380 (FedoraCore 5) ~]$ uname -a
    Linux d380.irisa.fr 2.6.18 #1 SMP Thu Oct 26 17:33:22 CEST 2006 i686 i686 i386 GNU/Linux
    [root@d380 (FedoraCore 5) ~]$ grep linux /proc/mounts
    nas1b:/vol/ren1b_soft_unix/local/i686_linux /net/usr/local nfs ro,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=nas1b 0 0
    nas1b:/vol/ren1b_soft_unix/linux /soft/Linux nfs rw,vers=3,rsize=65536,wsize=65536,hard,proto=tcp,timeo=600,retrans=2,sec=sys,addr=nas1b 0 0

One filesystem is read-only and the other is read-write.

On FC6 I tested the FC kernel minus the NFS patches (Patch1430 to Patch1443): the problem is solved. Same problem with 2.6.18-1.2849.fc6. It seems a security issue that ro mounts become rw mounts, and it's very annoying that rw mounts become ro mounts. So could you tell me whether this is really a bug or a new feature?

It's not clear at this point... I'm still looking into it... Here is the upstream discussion on this subject ("[RFC] [PATCH] Per-mountpoint read-only and noatime revisited"): http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-01/9038.html It seems VFS support is needed to fix this problem... This seems relevant: http://lkml.org/lkml/2006/10/18/264 A new feature...

So it seems we will have to re-examine our NFS infrastructure :-(

Created attachment 158984 [details]
Proposed Upstream patch.
Created attachment 158985 [details]
Second Proposed Upstream patch.
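The patches above target what the upstream thread calls per-mountpoint read-only. As a toy model (plain Python, not kernel code, written purely to illustrate the behaviour described in this report): NFS mounts from one server share a superblock, so the ro/rw flag of the first mount silently applies to every later mount of that server.

```python
# Toy model (not kernel code) of the reported pre-fix behaviour: the first
# mount of a server establishes the shared superblock, and its ro/rw flag
# wins for every later mount of the same server.
class SharedSuperblockModel:
    def __init__(self):
        self.superblocks = {}   # server -> 'ro' or 'rw' (first mount's flag)
        self.mounts = {}        # mountpoint -> effective mode

    def mount(self, server, mountpoint, mode):
        effective = self.superblocks.setdefault(server, mode)
        self.mounts[mountpoint] = effective
        return effective

m = SharedSuperblockModel()
m.mount("nas1b", "/mnt/linux", "rw")   # first mount: rw is recorded...
m.mount("nas1b", "/mnt/local", "ro")   # ...so this 'ro' mount comes up rw
print(m.mounts)
```

Mounting rw first turns a requested ro mount into rw (the security issue noted above); mounting ro first turns a requested rw mount into ro (the annoyance).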
Because of this new 'feature' we had to change our NFS infrastructure, so it is now difficult for us to test these patches.

IMHO Trond is not fully right. If I want two different mounts on my machine, then they should look like the same two mounts on separate machines, with synchronization done purely by NFS between those machines/mounts. Sure, one shouldn't run applications working with both mount points simultaneously and expect POSIX semantics, but in general everything should work as if two applications, each looking at its own mount, were running on separate machines.

+1: this behaviour is poor and illogical.

I've verified this problem on both RHEL 5 (2.6.18-8.el5) and FC 6 (2.6.20-1.2962.fc6). A few more details:

* Permission masking only affects NFS mounts from the same server. If mounting several NFS directories from different servers, the rw/ro attributes *of previous mounts* have no effect on the *first* NFS mount from each new server.
* Server-side export restrictions are still honored. That is, if the server exports one directory (call it "/readable") read-only and another directory (call it "/writable") read-write, and *if* the client mounts /writable first with "rw" followed by /readable (regardless of its ro/rw setting), then the client will be able to write to /writable but not to /readable.

I'm encountering a different problem after upgrading my FC6 kernel to 2.6.22.14-72.fc6. Now I am completely unable to mount two NFS volumes from the same server with different options! The second mount fails with the error: "mount.nfs: /mnt/nfs2 is already mounted or busy". It's not just the rw/ro options, either. If, for example, I have the intr option set on one mount but not the other, and all other options are the same, I still can't mount both volumes.
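The "already mounted or busy" failure suggests the newer kernel refuses to share a superblock when two mounts of the same server request differing options. A toy comparison consistent with the behaviour reported here (flag options such as ro or intr must match, while timing values such as timeo apparently are not compared) is sketched below; this is inferred from the report, not read out of the kernel source.

```python
# Toy model of the option check inferred from the behaviour reported for
# 2.6.22.14-72.fc6: a second mount of the same server is refused when the
# option sets differ, except (apparently) for timing values like timeo.
# The exempt set is an assumption based on this report, not kernel source.
TIMING_OPTIONS = {"timeo", "retrans"}

def would_share(opts_a, opts_b):
    """True if two mount option strings would be considered compatible."""
    def significant(opts):
        return {o for o in opts.split(",")
                if o.split("=")[0] not in TIMING_OPTIONS}
    return significant(opts_a) == significant(opts_b)

print(would_share("rw,intr,timeo=600", "rw,timeo=600"))     # differs in intr -> refused
print(would_share("rw,intr,timeo=600", "rw,intr,timeo=14")) # only timeo differs -> allowed
```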
On the other hand, if I have timeo set to different values on each and the rest of the options are the same, I can mount them both (though at this point I doubt that it's actually using a different timeout for each mount). This is unacceptable, since it means I have to unmount the first mount point in order to use the other mount every time I want to switch, instead of just fixing the mount order once after each reboot; and it's impossible to use both mounts at the same time unless they're both made writable, which is unsafe.

Fedora apologizes that these issues have not been resolved yet. We're sorry it's taken so long for your bug to be properly triaged and acted on. We appreciate the time you took to report this issue and want to make sure no important bugs slip through the cracks.

If you're currently running a version of Fedora Core between 1 and 6, please note that Fedora no longer maintains these releases. We strongly encourage you to upgrade to a current Fedora release. In order to refocus our efforts as a project we are flagging all of the open bugs for releases which are no longer maintained and closing them. http://fedoraproject.org/wiki/LifeCycle/EOL

If this bug is still open against Fedora Core 1 through 6 thirty days from now, it will be closed 'WONTFIX'. If you can reproduce this bug in the latest Fedora version, please change the bug to the respective version. If you are unable to do this, please add a comment to this bug requesting the change.

Thanks for your help, and we apologize again that we haven't handled these issues to this point. The process we are following is outlined here: http://fedoraproject.org/wiki/BugZappers/F9CleanUp We will be following the process here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this doesn't happen again.
And if you'd like to join the bug triage team to help make things better, check out http://fedoraproject.org/wiki/BugZappers

It looks like this problem has been resolved in kernel 2.6.25-0.121.rc5.git4.fc9. The system properly respects the ro/rw flag on each of two different mount points from the same NFS server, regardless of the order in which they're mounted, and I am able to mount the two volumes with different mount options.
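The fix noted above coincides with the upstream per-mountpoint read-only work referenced earlier in this report, under which each mount carries its own read-only flag on top of the still-shared superblock; whether that is the exact mechanism in 2.6.25 is an assumption here. A toy sketch of the idea (a model, not kernel code):

```python
# Toy sketch of per-mountpoint read-only: the superblock is still shared
# per server, but each mount carries its own read-only flag, and a write
# is allowed only if neither the mount nor the superblock is read-only.
class PerMountModel:
    def __init__(self):
        self.sb_readonly = {}   # server -> superblock-level read-only
        self.mounts = {}        # mountpoint -> (server, mount-level read-only)

    def mount(self, server, mountpoint, readonly):
        self.sb_readonly.setdefault(server, False)
        self.mounts[mountpoint] = (server, readonly)

    def can_write(self, mountpoint):
        server, mnt_ro = self.mounts[mountpoint]
        return not mnt_ro and not self.sb_readonly[server]

pm = PerMountModel()
pm.mount("nas1b", "/mnt/linux", readonly=False)  # rw mount
pm.mount("nas1b", "/mnt/local", readonly=True)   # ro mount, same server
print(pm.can_write("/mnt/linux"), pm.can_write("/mnt/local"))  # True False
```

With per-mount flags the ro/rw request of each mount is honored independently of mount order, which matches the resolved behaviour described in the closing comment.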