Bug 501848 - on-the-fly varying device numbers on an NFS mount point
Status: ASSIGNED
Product: Fedora
Classification: Fedora
Component: kernel
rawhide
All Linux
Priority: medium  Severity: medium
Assigned To: Steve Dickson
Fedora Extras Quality Assurance
: Reopened
Depends On:
Blocks: 538536
Reported: 2009-05-20 22:29 EDT by Issue Tracker
Modified: 2015-08-31 23:55 EDT (History)
14 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 538536 (view as bug list)
Environment:
Last Closed: 2013-04-23 13:26:31 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
strace of failure (343.61 KB, text/plain)
2009-05-21 13:14 EDT, Charlie Wyse
Workaround for ".." directories and ?race conditions? (576 bytes, patch)
2009-07-21 07:27 EDT, Ondrej Vasik
Better one ;) workaround for ".." directories and ?race conditions? (577 bytes, patch)
2009-07-21 07:37 EDT, Ondrej Vasik

Description Issue Tracker 2009-05-20 22:29:39 EDT
Escalated to Bugzilla from IssueTracker
Comment 1 Issue Tracker 2009-05-20 22:29:41 EDT
Description of problem:
If you run du -h on a directory with .snapshot subdirectories with coreutils-6.10+ (could be lower, but >5.97-20), you will get an fts_read error:
du: fts_read failed: No such file or directory

How reproducible:
Every time.

Steps to Reproduce:
1. Use F10 or anything with a higher version of coreutils, and a machine with the .snapshot directories created by NetApp.
2. du -h
3. wait.

Actual results:
du: fts_read failed: No such file or directory

Expected results:
The size listing of all files and/or directories

Additional info:
This event sent from IssueTracker by cwyse  [Pixar Animation Studios - Fedora Queue]
 issue 298936
Comment 2 Issue Tracker 2009-05-20 23:18:21 EDT
I guess I spoke too soon.  With coreutils-6.9-2 the problem is less noticeable.  On smaller directories it doesn't show up at all, but with larger directories (directories with many subdirectories) it is still there, so some users will notice it and some will not.  The problem is now somewhere between coreutils-5.97-19 and coreutils-6.9-2.

I tried compiling the 6.7.* coreutils package, but it keeps failing to build and isn't saying what or why.  I will look into this more and update with what I find.


This event sent from IssueTracker by cwyse 
 issue 298936
Comment 3 Kamil Dudka 2009-05-21 03:36:22 EDT
It can be caused by on-the-fly changes within the directory: du tries to traverse a directory (or file?) which no longer exists. I am pretty sure you won't see the errors if you mount the file system read-only.

But there is no doubt the error message could be more verbose; it is listed as a FIXME in du.c.
Comment 4 Ondrej Vasik 2009-05-21 05:44:01 EDT
Additionally - I guess the Fedora version should be changed to something that is not EOL (F-8 is EOL and F-9 will be EOL in ~2 months). From the comments I think the version should be changed to F-10 - correct? Or some RHEL version?
Comment 5 Ondrej Vasik 2009-05-21 05:47:01 EDT
Additionally, an strace of the failure could be useful to better analyze the culprit...
Comment 6 Charlie Wyse 2009-05-21 13:14:53 EDT
Created attachment 344994 [details]
strace of failure

I agree; changing the version to F10 - I originally just set it to the first version I noticed this problem in. Also, here is an strace of the failure on an F10 machine.  I'm gonna try some of the 6.7 packages again and see if I can narrow down the window in which this fails.
Comment 7 Issue Tracker 2009-05-21 13:49:54 EDT
Finally got 6.7-1 compiled.  It shows the same fts_read issue, so the window is between 5.97-22 and 6.7-1.
This is about as narrow as I can get it. I'm gonna try diff'ing up a patch between du.c and... see what happens.


This event sent from IssueTracker by cwyse 
 issue 298936
Comment 8 Issue Tracker 2009-05-21 14:46:02 EDT
Patching fts.c failed to compile.  There appears to be an fts.c.du file in 6.7 and an fts.c.inaccessibledirs in 5.7.  Since the files do not exist in both trees, I'm not sure how to test patching that.


This event sent from IssueTracker by cwyse 
 issue 298936
Comment 10 Kamil Dudka 2009-05-21 15:55:42 EDT
Could you please try the following on the same directory?

$ find -printf %b\\n

Does it give the same errors? Different errors? No errors?
Comment 11 Issue Tracker 2009-05-21 16:54:59 EDT
I ran "find -printf %b\n" and didn't get any errors.  It took over an hour and ran my CPU at 121.6%, but no errors; I'm running it again to verify.


This event sent from IssueTracker by cwyse 
 issue 298936
Comment 12 Kamil Dudka 2009-05-21 17:23:49 EDT
Does 'du' print just one error message and then die? Is the output obviously incomplete? Or is the problem only the error message and return code?
Comment 14 Ondrej Vasik 2009-06-10 08:03:46 EDT
Something interesting to read (about the same issue and how to reduce impact):
http://www.unixtutorial.org/2009/02/troubleshooting-du-fts_read-no-such-file-or-directory-error/

From what I have quickly checked, if find's fts_read() returns NULL, it just closes the FTS structure and goes on to the next argument. If du's fts_read() returns NULL, it checks errno and emits the corresponding diagnostics. The difference is in the checking function: find has a somewhat more complex checking function, consider_visiting(); maybe some parts of it should be adapted into du's process_file() function.
Comment 16 Ondrej Vasik 2009-07-20 15:08:25 EDT
Played a bit with that bz again: the fts_read error is set at lib/fts.c:2000, which hardcodes ENOENT into errno. The error occurs when the ".." entry is not cached yet, so with repeated runs after mounting it seems possible to get rid of those errors and obtain a correct result. The check at fts.c:1997/1998 apparently has to be extended to handle the situation with the NetApp .snapshot directory properly. Using du -Lsh also helps in some cases.
Comment 17 Ondrej Vasik 2009-07-21 07:27:05 EDT
Created attachment 354463 [details]
Workaround for ".." directories and ?race conditions?

Played a bit more with that fts_read failure; the attached patch works around the issue. It seems that, due to a possible caching race condition, after fstat the fts entry for ".." sometimes has the device number of the parent directory (on the first run after mount).

e.g. (variable: fts_value : fstat_value):
devicenum: 25 :  33
inode: 8217100 : 8217100
The next run on the same place correctly has the same values for fts_value and fstat_value, and it looks like:
devicenum: 33 :  33
inode: 8217100 : 8217100

I'm quite sure the patch is NOT the correct way to solve the issue; that race condition should be eliminated - but I'm not really sure where. The filesystem? Kamil - any idea?
Comment 18 Ondrej Vasik 2009-07-21 07:37:27 EDT
Created attachment 354464 [details]
Better one ;) workaround for ".." directories and ?race conditions?

Damn, the previous one was obviously not correct... this one should be better...
Comment 19 Charlie Wyse 2009-07-21 15:31:50 EDT
I added the patch to the latest coreutils package and I haven't seen the error yet.  I ran a du -h over my lunch break.  I'm letting my customer try it out and give it his stamp of approval.  But so far it looks like it resolves the issue.  I'll let you know if anything changes.
Comment 20 Kamil Dudka 2009-07-22 10:01:18 EDT
I've narrowed down the strange behavior to sort of minimal example (/mnt/archive is a NetApp mount point):

umount /mnt/archive && mount /mnt/archive \
    && stat --printf "%d\t%i\t%n\n" /mnt/archive/.snapshot \
    && stat --printf "%d\t%i\t%n\n" /mnt/archive/.snapshot/hourly.0 \
    && stat --printf "%d\t%i\t%n\n" /mnt/archive/.snapshot

The output is the following:

    20      67      /mnt/archive/.snapshot
    26      222     /mnt/archive/.snapshot/hourly.0
    26      67      /mnt/archive/.snapshot

The device number is being changed on the fly while the inode number stays unchanged. It sounds like a file system bug to me. It's 100% reproducible on my box.
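The invariant this reproducer shows being broken can be sketched as a small check (an editor's Python sketch, not part of the bug report; the directory names are made up). On a local filesystem, stat-ing a child never changes the device number reported for its parent directory, which is exactly what happens on the NFS client above when the child stat triggers a submount:

```python
import os
import tempfile

def dev_stable_across_descent(parent, child):
    """Return True if stat-ing a child does not change the parent's
    reported device number -- the invariant that the NFS client breaks
    in this bug when the child stat triggers a submount."""
    before = os.stat(parent).st_dev
    os.stat(os.path.join(parent, child))  # on NFS this may trigger the submount
    after = os.stat(parent).st_dev
    return before == after

# On a local filesystem the invariant always holds:
d = tempfile.mkdtemp()
os.mkdir(os.path.join(d, "sub"))
print(dev_stable_across_descent(d, "sub"))  # True on a local filesystem
```

In the reproducer above, the same check applied to /mnt/archive/.snapshot and hourly.0 would return False on the first run after mounting.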
Comment 21 Charlie Wyse 2009-07-22 14:36:17 EDT
I ran the patched coreutils package on my .snapshot directory 3 times and didn't see a single error.  It takes about 30 minutes to go through the .snapshot directory.  Before Ovasik's patch it would run for about 30 seconds then fail.  Kdudka, are you using the patch and still noticing this?
Comment 22 Issue Tracker 2009-07-22 14:51:16 EDT
Event posted on 07-22-2009 02:51pm EDT by cwyse

Customer just got back to me with some comments.  This new package creates
.snapshot directories on his desktop.  This was a problem in F10 which
went away with F11.  So it looks like a slight regression?


This event sent from IssueTracker by cwyse 
 issue 298936
Comment 23 Charlie Wyse 2009-07-22 14:53:17 EDT
Here are the previous bugs that were related to the .snapshots showing up on the desktop.  Just posting them here in case they help.

As noted in https://bugzilla.redhat.com/show_bug.cgi?id=472778 and https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb44598, NetApp filers use different FSIDs for the hidden snapshot directories they provide.
Comment 24 Kamil Dudka 2009-07-23 06:52:33 EDT
(In reply to comment #21)
> Kdudka, are you using the patch and still noticing this?

The patch is only a workaround for the 'du' utility. It works for me, too. But it does not fix the file system bug. The minimal example uses 'stat', so it has nothing to do with that patch.

Comment #22 is missing some context here. Which package creates the .snapshot directories on the customer's desktop? I am quite sure that coreutils does not.
Comment 28 Kamil Dudka 2009-08-05 10:53:25 EDT
The problem persists with latest rawhide kernel:
Linux 2.6.31-0.122.rc5.git2.fc12.x86_64 #1 SMP Mon Aug 3 12:58:47 EDT 2009 x86_64

/etc/fstab:
filer-eng.brq.redhat.com:/vol/engineering/share /mnt/archive nfs ro 0 0
Comment 29 Jeff Layton 2009-08-05 13:34:32 EDT
Looking at the capture, it doesn't appear that the server is returning inconsistent inode info. However Kamil's reproducer seems to indicate that the client is changing the device number after it traverses into the directory.

I suspect that this means that the client isn't doing the shrinkable mount before returning the info on the first stat call.
Comment 30 Jeff Layton 2009-09-02 07:54:16 EDT
Confirmed...same behavior in rawhide too. I can also reproduce this with a non-netapp server simply by exporting a filesystem and then exporting another filesystem mounted onto a subdir of the first fs. Nothing netapp-specific here.

There's also a somewhat related problem...if a submount is done and then gets automatically unmounted, then the device numbers can change and even be reused for a completely different submount.

This is a bit tricky. On the one hand, the device number seems to change and that's probably bad for some apps. On the other hand, do we really want to trigger a mount just because someone did a stat() on the directory where we would eventually do a submount?

If I have a ton of exports that are subdirs of another exported filesystem I don't think I really want to do submounts of all of those filesystems just because someone did a "ls -l" in that directory.

Unfortunately, the device numbers for NFS are allocated on the fly during mount. So we can't easily "fake up" the device numbers and expect them to remain consistent without actually triggering a mount. The device number may be different once the submount gets done.

I suspect that the best we can probably do is to just make sure the device number is different from that of the parent filesystem, but we probably won't be able to make it consistent. That is, it'll change as soon as you walk into the dir...

I'll plan to do a writeup of this problem in the near future and post it to the upstream mailing list.
Comment 31 Jeff Layton 2009-09-02 12:47:45 EDT
This problem is really no different than how autofs works. When you run stat on an autofs mountpoint, you'll just get the directory until you walk into that directory.

That's actually correct behavior since you're adding a new mount when that occurs. This is almost completely the same thing, it's just that the kernel does a new mount w/o needing autofs.

I'm not sure this is actually a bug; rather, you're just seeing the expected result when the kernel adds a new mount on the fly.
Comment 32 Kamil Dudka 2009-09-02 13:07:37 EDT
Jeff, thanks for the analysis. I'll look at the fts code again and possibly reassign back to coreutils. Good to know it's reproducible independently of the NetApp mount point.
Comment 33 Jeff Layton 2009-09-03 13:55:10 EDT
Sounds good. I'll reassign this back to you for now.

Let me know if you need further clarification.
Comment 38 Kamil Dudka 2009-10-18 11:51:21 EDT
making the bug public...
Comment 39 Kamil Dudka 2009-10-21 14:27:51 EDT
Reported upstream:
http://lists.gnu.org/archive/html/bug-gnulib/2009-10/msg00207.html
Comment 40 Jim Meyering 2009-10-31 08:11:36 EDT
Hello,

Is this happening because the device number is assigned one value initially, and later another value -- all during a single hierarchy traversal?

If so, I'll have to push this back into the kernel/file-system court.
I think we'll have to make the file system present a consistent device and inode number for any file it serves.
Comment 41 Kamil Dudka 2009-10-31 08:34:27 EDT
(In reply to comment #40)
> Is this happening because the device number is assigned first to one value
> initially, and later to another value -- all during a single hierarchy
> traversal?

It looks like a sort of expected behavior to me. If the file system is not mounted, the device number describes the directory which belongs to the surrounding file system. Once you trigger the mount, the same path (directory) belongs to the newly mounted file system, thus gets a new device number.

In fact, I was more surprised that the inode number could stay consistent across the mounts.

> If so, I'll have to push this back into the kernel/file-system court.
> I think we'll have to make the file system present a consistent device and
> inode number for any file it serves.

Well, I'll try to prepare a complete client/server reproducer first, since the one from comment #20 uses our internal server, which is not available to others for testing.
Comment 42 Jim Meyering 2009-10-31 08:58:53 EDT
What event triggers the mount?
Comment 43 Kamil Dudka 2009-10-31 09:13:44 EDT
(In reply to comment #42)
> What event triggers the mount?  

From my observation with gdb:
1. calling fstatat() with AT_SYMLINK_NOFOLLOW does NOT trigger the mount.
2. calling fstatat() without AT_SYMLINK_NOFOLLOW triggers the mount, as does opening the directory.

If you are asking which events are guaranteed to trigger the mount and/or which events are guaranteed to NOT trigger the mount, kernel guys might give you a reliable answer.

Jeff, any idea?
Comment 44 Jeff Layton 2009-11-02 14:01:06 EST
Submounts are triggered via the follow_link inode operation, so in some ways these are treated like symlinks...

The short answer is that the mount will be triggered whenever you walk a path in such a way that, if this component were a symlink it would be resolved to its target.

Longer answer:

If the place where you transition into a new filesystem is in the middle of a path, then generally the path will be resolved. If it's the last component of the path, then it depends on whether the LOOKUP_FOLLOW link flag is set in nameidata in the kernel. That varies with the type of operation -- for instance, lstat() won't have that set, but a "normal" stat() generally will.
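Jeff's follow/no-follow distinction is the same one that governs symlink resolution, so it can be illustrated with a plain symlink (an editor's Python sketch, not from the bug report): os.stat() corresponds to fstatat() without AT_SYMLINK_NOFOLLOW and resolves the last component, while os.lstat() corresponds to fstatat() with AT_SYMLINK_NOFOLLOW and does not.

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "target")
os.mkdir(target)
link = os.path.join(d, "link")
os.symlink(target, link)

# os.stat() follows the last component, like fstatat() without
# AT_SYMLINK_NOFOLLOW -- for a submount point, this is the kind of
# lookup that would trigger the mount.
followed = os.stat(link)

# os.lstat() does not follow, like fstatat() with AT_SYMLINK_NOFOLLOW --
# the kind of lookup that leaves the submount untriggered.
unfollowed = os.lstat(link)

print(followed.st_ino == os.stat(target).st_ino)  # True: resolved to the target
print(unfollowed.st_ino == followed.st_ino)       # False: the link's own inode
```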
Comment 45 Kamil Dudka 2009-11-03 07:41:30 EST
Minimal example which works reliably on my Fedora 11 installation:

# mount | grep ^/
/dev/sda1 on / type ext3 (rw)
/dev/sda3 on /home type ext4 (rw)

# ls -d /home/test
/home/test

# printf "/ *(fsid=0,crossmnt)\n/home *(crossmnt)\n" \
    > /etc/exports

# service nfs restart
# mkdir /tmp/mnt
# mount -t nfs4 localhost:/ /tmp/mnt \
    && stat --printf "%d\t%i\t%n\n" /tmp/mnt/home \
    && stat --printf "%d\t%i\t%n\n" /tmp/mnt/home/test \
    && stat --printf "%d\t%i\t%n\n" /tmp/mnt/home

29      2       /tmp/mnt/home
30      12      /tmp/mnt/home/test
30      2       /tmp/mnt/home
Comment 46 Kamil Dudka 2009-11-03 15:22:57 EST
A patch for gnulib proposed upstream:

http://lists.gnu.org/archive/html/bug-gnulib/2009-11/msg00027.html
Comment 47 Kamil Dudka 2009-11-04 11:37:03 EST
(In reply to comment #46)
> A patch for gnulib proposed upstream:
> 
> http://lists.gnu.org/archive/html/bug-gnulib/2009-11/msg00027.html  

The patch has been rejected by upstream because of its performance impact in some obscure situations (namely, traversing a tree of 200000 directories nested in each other):

http://lists.gnu.org/archive/html/bug-gnulib/2009-11/msg00032.html

As a solution, it was proposed to find (or perhaps implement?) a low-cost way of recognizing a mount point during the traversal; "low cost" here means cheaper than a stat call.

Since there seems to be nothing I can do with this bug at the moment, I am reassigning it back to kernel.
Comment 48 Jim Meyering 2009-11-04 13:32:30 EST
Hi Kamil,

Using your reproducer (above, thanks!) let's print one more dev/ino pair
(this is on F12):

$ stat --printf "%d %i %n\n" /tmp/mnt/home /tmp/mnt   
24 2 /tmp/mnt/home
24 2 /tmp/mnt

That shows a big problem: two distinct directories have the same dev/ino pair, and fts rightly objects, returning FTS_DC to indicate a directory cycle. When fts encounters the same dev/ino pair twice in a traversal while not traversing symlinks, that represents a hard-linked directory cycle, which is usually a big problem.  [Note that currently du does not diagnose this problem, but I'll fix that shortly.]

Even if the above kernel/nfs bug is fixed, I am becoming more and more convinced that this varying-device-number problem is something that must be addressed in the kernel, and not in every single application that must perform dev/ino checks for security.  Thanks for reassigning to the kernel.
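The cycle check Jim describes can be sketched roughly as follows (an editor's Python sketch illustrating the idea only; fts' real implementation is in C and differs in detail): track the (st_dev, st_ino) pair of every ancestor directory and flag any repeat as a cycle. A device number that changes mid-traversal, as in this bug, makes two distinct directories collide on that key.

```python
import os
import tempfile

def find_dir_cycles(root):
    """If the same (st_dev, st_ino) pair appears twice on the current
    ancestor chain during a no-follow traversal, report it as a
    directory cycle (what fts signals with FTS_DC)."""
    ancestors = []  # (dev, ino) of each directory on the current path
    cycles = []

    def visit(path):
        st = os.lstat(path)
        key = (st.st_dev, st.st_ino)
        if key in ancestors:
            cycles.append(path)  # fts would return FTS_DC here
            return
        ancestors.append(key)
        try:
            entries = sorted(os.listdir(path))
        except OSError:
            entries = []
        for name in entries:
            child = os.path.join(path, name)
            if os.path.isdir(child) and not os.path.islink(child):
                visit(child)
        ancestors.pop()

    visit(root)
    return cycles

# A freshly created local tree has no dev/ino collisions:
d = tempfile.mkdtemp()
os.makedirs(os.path.join(d, "a", "b"))
print(find_dir_cycles(d))  # []
```

With the broken stat output above, /tmp/mnt and /tmp/mnt/home would share the key (24, 2), so the traversal would falsely report a cycle.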
Comment 49 Kamil Dudka 2009-11-04 13:51:58 EST
(In reply to comment #48)
> $ stat --printf "%d %i %n\n" /tmp/mnt/home /tmp/mnt   
> 24 2 /tmp/mnt/home
> 24 2 /tmp/mnt

Good catch! Though I don't think you hit the cause of the original bug report, this indeed looks broken. The dev/ino pair should be unique across the whole VFS, or am I wrong?

Jeff, what do you think about the example?
Comment 50 Jeff Layton 2009-11-04 14:18:08 EST
I'd have to look at the example more closely, but it's likely that the kernel code is picking up the inode number of the root inode of the underlying filesystem. 

I think what's happening is that the server sends the inode number of /tmp/mnt/home and a new fsid, but the client doesn't actually spawn a new submount there. So the device ID ends up the same. In fact, all of my ext3/4 filesystems seem to give the root inode st_ino == 2, so that's probably what's happening.

The trivial workaround here is probably to use stat() instead of lstat() (the -L option to the stat program), but I imagine that won't be suitable?

How to fix this? I don't think there is a way to do so without triggering a submount even when we don't want to follow symlinks.

That's going to be very costly for performance in many cases (if it's even reasonably doable). Imagine cd'ing into a directory that has a 1000 exported filesystems under it. Simply doing a readdir() in there is going to make the client spawn 1000 new mounts.
Comment 51 Kamil Dudka 2009-11-04 14:34:32 EST
(In reply to comment #50)
> The trivial workaround here is to probably use stat() instead of lstat() here
> (-L option to the stat program), but I imagine that won't be suitable?

Yep, this suppresses the bug, as does du -L in the original bug report. But we get a different result, so it's really not suitable.

> How to fix this? I don't think there is a way to do so without triggering a
> submount even when we don't want to follow symlinks.

I think this *should* be fixed since it breaks one of the basic axioms about VFS.

> That's going to be very costly for performance in many cases (if it's even
> reasonably doable). Imagine cd'ing into a directory that has a 1000 exported
> filesystems under it. Simply doing a readdir() in there is going to make the
> client spawn 1000 new mounts.  

No chance to get unique dev/ino pairs without triggering the mount first?
Comment 52 Peter Staubach 2009-11-04 14:50:18 EST
No, sorry, no way to determine what the ino is for the new file system
without talking to the server.

Doing an ls in a directory full of many autofs mounted file systems
should not trigger mounts for all of those file systems.  This will
cause a bigger performance problem than the original perceived
problem ever did.

Perhaps the right way to address this is to flag the returned
directory entries to user level with something indicating that the
metadata for such an entry will change once the file system that
would be mounted there is actually mounted.  This would eliminate
most of the extra stat calls that Jim Meyering is worried about.
Comment 53 Jim Meyering 2009-11-04 15:07:51 EST
FYI, I've (re)raised the issue on LKML:

    http://lkml.org/lkml/2009/11/4/451
Comment 54 Jeff Layton 2009-11-04 15:13:13 EST
Minor nit...we get the correct st_ino for the directory. The problem is that we don't have accurate st_dev info at that point since the mount hasn't occurred yet.

That said...it would be nice to be able to flag the entries in the way that Peter suggests. The question is how to do that in a way that's compatible with POSIX here.

Maybe we could declare a new S_IF* value for st_mode:

S_IFXDEV       020000

That should allow us to leave the S_IFDIR bit set and it employs a bit that's outside of __S_IFMT. The kernel could set this bit in the statbuf when it detects that the fsid on the inode is not the same as that of the parent directory.

The big question is whether someone wants to implement this and then sell it upstream :)
Comment 55 Kamil Dudka 2009-11-05 11:18:53 EST
Another question is how coreutils will detect that running kernel has the ability to indicate mount points, thus decide whether to use the optimization or not.
Comment 56 Peter Staubach 2009-11-05 11:35:56 EST
If an approach similar to what Jeff has suggested is used, then it won't matter.
If the kernel sets S_IFXDEV, then coreutils can use the optimization.  If
it doesn't, then it won't?
Comment 57 Kamil Dudka 2009-11-05 11:50:03 EST
Nope, if I understand it correctly, the semantics of the S_IFXDEV bit are exactly the opposite. If the bit is set, we need to call stat again after opening the directory. But if it's not set and we don't know whether the kernel provides this feature, we can't use the optimization and need to call stat anyway. Or am I wrong?
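The semantics under discussion can be written out as decision logic (a hypothetical editor's sketch in Python: S_IFXDEV was only ever a proposal in this bug and never merged, and the kernel_has_xdev capability flag is an assumed input with no real detection mechanism -- which is precisely the gap Kamil points out):

```python
S_IFXDEV = 0o020000  # value proposed in comment 54; hypothetical, never merged

def must_restat_after_open(st_mode, kernel_has_xdev):
    """Decision logic from the discussion above: skipping the second
    stat after opening a directory is only safe when the kernel is
    known to set S_IFXDEV and has not set it for this entry."""
    if st_mode & S_IFXDEV:
        return True   # a submount will be triggered; the dev number will change
    if not kernel_has_xdev:
        return True   # bit absence proves nothing on a kernel without the feature
    return False      # safe to skip the re-stat

print(must_restat_after_open(0o040000 | S_IFXDEV, True))  # True
print(must_restat_after_open(0o040000, False))            # True
print(must_restat_after_open(0o040000, True))             # False
```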
Comment 58 Peter Staubach 2009-11-05 12:06:58 EST
Yes, sorry, I was looking at it the other way around.
Comment 59 Kamil Dudka 2009-11-05 12:21:09 EST
I think we need either a bit with the exactly inverse meaning, or some other mechanism indicating that the kernel is able to set the S_IFXDEV bit reliably.
Comment 60 Jim Meyering 2009-11-07 06:54:03 EST
(In reply to comment #48)
> Using your reproducer (above, thanks!) let's print one more dev/ino pair
> (this is on F12):
> 
> $ stat --printf "%d %i %n\n" /tmp/mnt/home /tmp/mnt   
> 24 2 /tmp/mnt/home
> 24 2 /tmp/mnt
> 
> That shows a big problem: two distinct directories have the same dev/ino pair,

FYI, I've opened a new BZ to track this separate problem:

https://bugzilla.redhat.com/show_bug.cgi?id=533569
Comment 62 Bug Zapper 2010-04-27 10:26:07 EDT
This message is a reminder that Fedora 11 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 11.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '11'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 11's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 11 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 63 Bug Zapper 2010-06-28 08:38:09 EDT
Fedora 11 changed to end-of-life (EOL) status on 2010-06-25. Fedora 11 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.
Comment 64 Jim Meyering 2010-06-28 09:59:17 EDT
I wish it could be closed...
Still afflicts rawhide.
Comment 65 Bug Zapper 2010-07-30 06:39:47 EDT
This bug appears to have been reported against 'rawhide' during the Fedora 14 development cycle.
Changing version to '14'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 66 Jim Meyering 2010-11-24 11:57:34 EST
still affects rawhide.
Comment 67 Kamil Dudka 2011-01-11 08:47:33 EST
(In reply to comment #45)
> # mount -t nfs4 localhost:/ /tmp/mnt \
>     && stat --printf "%d\t%i\t%n\n" /tmp/mnt/home \
>     && stat --printf "%d\t%i\t%n\n" /tmp/mnt/home/test \
>     && stat --printf "%d\t%i\t%n\n" /tmp/mnt/home
> 
> 29      2       /tmp/mnt/home
> 30      12      /tmp/mnt/home/test
> 30      2       /tmp/mnt/home

FYI, I tried the same example on my RHEL-5 machine and, surprisingly, there seems to be no such optimization.  The first lstat() syscall on /tmp/mnt/home triggers the mount of /tmp/mnt/home and picks up the final dev/ino pair.
Comment 68 Kamil Dudka 2011-01-11 09:01:37 EST
... but it is still reproducible with autofs mount points even on RHEL-5.
Comment 69 Jeff Layton 2011-01-11 13:55:31 EST
I concur. I can't reproduce this any more either on nfsv4:

# mount /mnt/dantu && stat --printf "%d\t%i\t%n\n" /mnt/dantu && stat --printf "%d\t%i\t%n\n" /mnt/dantu/ext3 && stat --printf "%d\t%i\t%n\n" /mnt/dantu/ext3/testfile && stat --printf "%d\t%i\t%n\n" /mnt/dantu/ext3
24	2	/mnt/dantu
25	2	/mnt/dantu/ext3
25	49153	/mnt/dantu/ext3/testfile
25	2	/mnt/dantu/ext3

...in my setup the host exports a filesystem and "ext3" is a mounted and exported filesystem under that. It seems like something has changed and lstat() calls now trigger the mount. I'm going back through the changelogs to see why it's different now.
Comment 70 Jeff Layton 2011-01-11 13:59:18 EST
I should point out that those last results were with my latest RHEL5 test kernels.
Comment 71 Kamil Dudka 2011-01-11 14:13:36 EST
Jeff, sorry if my comment was confusing, but I think we both have exactly the same results.  This bug (501848) is against Fedora.  RHEL-5 didn't reproduce the bug with nfsv4 for me, but I am still able to reproduce it on RHEL-5 with autofs.  I wrote the comment here only as an auxiliary observation while investigating bug 537463, which is against RHEL-5.
Comment 72 Jeff Layton 2011-01-11 14:44:42 EST
No problem. It wasn't confusing. Steve asked me to have a look at this and I was just surprised that I was unable to reproduce this on recent RHEL5 kernels with NFSv4. Not sure why that is so far...
Comment 75 Fedora End Of Life 2013-04-03 15:57:42 EDT
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it could affect also pre-Fedora 19 development
cycle bugs. We are very sorry. It will help us with cleanup during Fedora 19 End Of Life. Thank you.)

More information and reason for this action is here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19
Comment 76 Justin M. Forbes 2013-04-05 11:52:38 EDT
Is this still a problem with 3.9 based F19 kernels?
Comment 77 Justin M. Forbes 2013-04-23 13:26:31 EDT
This bug is being closed with INSUFFICIENT_DATA as there has not been a
response in 2 weeks.  If you are still experiencing this issue,
please reopen and attach the relevant data from the latest kernel you are
running and any data that might have been requested previously.
Comment 78 Kamil Dudka 2013-04-23 18:03:13 EDT
The problem still exists in kernel-3.9.0-0.rc7.git3.1.fc20.x86_64.  The reproducer from comment #45 works for me:

[root@f20 ~]# mount -t nfs4 localhost:/ /tmp/mnt && stat --printf "%d\t%i\t%n\n" /tmp/mnt/boot && stat --printf "%d\t%i\t%n\n" /tmp/mnt/boot/grub2 && stat --printf "%d\t%i\t%n\n" /tmp/mnt/boot
36      2       /tmp/mnt/boot
37      65025   /tmp/mnt/boot/grub2
37      2       /tmp/mnt/boot
