Bug 1197463 - df command misses out NFS mounts
Summary: df command misses out NFS mounts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: coreutils
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Ondrej Vasik
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On: 920806
Blocks:
 
Reported: 2015-03-01 12:38 UTC by Pádraig Brady
Modified: 2019-08-15 04:18 UTC (History)
19 users

Fixed In Version: coreutils-8.22-13.el7
Doc Type: Enhancement
Doc Text:
Previously, if multiple NFS mount points shared a superblock, the "df" utility only listed one of them unless the "-a" option was used. This update improves the filtering methods of "df", and the utility now properly displays all NFS mount points on the same superblock.
Clone Of: 920806
Environment:
Last Closed: 2015-11-19 12:44:58 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2160 0 normal SHIPPED_LIVE coreutils bug fix and enhancement update 2015-11-19 11:10:56 UTC

Description Pádraig Brady 2015-03-01 12:38:21 UTC
+++ This bug was initially created as a clone of Bug #920806 +++

Description of problem:
I have 5 NFS volumes mounted from my NAS. Running 'mount' shows all 5, but running 'df' will only show 1 volume out of the 5.

# mount | grep nas
nas.example.com:/Photos on /mnt/Photos type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.64,mountvers=3,mountport=35825,mountproto=udp,local_lock=none,addr=192.168.1.64,user)
nas.example.com:/Download on /mnt/Download type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.64,mountvers=3,mountport=35825,mountproto=udp,local_lock=none,addr=192.168.1.64,user)
nas.example.com:/Multimedia on /mnt/Multimedia type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.64,mountvers=3,mountport=35825,mountproto=udp,local_lock=none,addr=192.168.1.64,user)
nas.example.com:/VirtualMachines on /mnt/VirtualMachines type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.64,mountvers=3,mountport=35825,mountproto=udp,local_lock=none,addr=192.168.1.64,user)
nas.example.com:/Backups on /mnt/Backups type nfs (rw,nosuid,nodev,noexec,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.1.64,mountvers=3,mountport=35825,mountproto=udp,local_lock=none,addr=192.168.1.64,user)


# df | grep nas
nas.example.com:/Photos   2879673344 2489566368 389582688  87% /mnt/Photos


I believe this broken behaviour arrived in Fedora 18, though I no longer have any F17 install to confirm this categorically.

Version-Release number of selected component (if applicable):
coreutils-8.17-8.fc18.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Export multiple NFS volumes from a remote server
2. Mount all volumes on the client
3. Run 'df'
  
Actual results:
df only shows usage for one random mount

Expected results:
df shows all usage for every single mount point that the 'mount' command reports

Additional info:

--- Additional comment from Pádraig Brady on 2013-03-12 15:58:34 EDT ---

Daniel that's probably fixed in 8.21 (in rawhide)

--- Additional comment from Pádraig Brady on 2013-03-12 16:03:48 EDT ---

Ondrej, either the duplicate suppression patch should be backed out,
or augmented with the alternative duplicate suppression method
we came up with here:
http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=commitdiff;h=bb116d35

thanks.

--- Additional comment from Daniel Berrange on 2013-03-12 16:32:43 EDT ---

I've done a local RPM rebuild with the df dup patch removed and that solved this. Happy to try the upstream fix if someone wants to provide a backport of it to f18

--- Additional comment from Pádraig Brady on 2013-03-12 16:55:04 EDT ---

I'm not convinced the current upstream duplicate suppression
method suffices here either. Will need to look further into it.
Perhaps remote file systems should be removed from duplicate suppression.

--- Additional comment from Daniel Berrange on 2013-03-12 19:13:26 EDT ---

Indeed, the 'df' from 8.21 has the same problem as that seen in 8.17 dup patch with my NFS mounts.

--- Additional comment from Pádraig Brady on 2013-03-12 19:30:25 EDT ---

So in 8.21 the deduping is done based on the "device ID".
df is by default trying to impart the storage available,
without displaying multiple entries to the same storage,
which may be confusing, especially with --total.

So I presume `stat -c %D /mnt/Photos /mnt/Downloads` give
the same device ID for you. Hopefully that implies that they correspond
to the same storage on the NFS server.

df 8.21 should display /mnt/Photos by default as it is the shortest path,
which is a heuristic used to display to most appropriate entry
avoiding bind mounts, remounts, multiple mounts, pivot roots, ...

If you want to display all nfs entries, you now need to be explicit:
  df -h -t nfs -a
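The device-ID heuristic described above can be checked from the shell. A minimal sketch, assuming GNU coreutils `stat` on Linux; the `/mnt/...` paths are the reporter's and are shown here only as hypothetical examples:

```shell
# df 8.21+ suppresses duplicate mount entries keyed on the device ID,
# which stat(1) prints in hex via the %D format:
stat -c '%D %n' /
# On a setup like the reporter's, mounts sharing a superblock would show
# the same ID (paths hypothetical):
#   stat -c '%D %n' /mnt/Photos /mnt/Download
# To see every NFS entry regardless of deduplication:
#   df -h -t nfs -a
```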

--- Additional comment from Ondrej Vasik on 2013-03-13 04:35:13 EDT ---

I tend to NOTABUG this report - as this is one of the side effects of the /proc/mounts -> /etc/mtab symlink change... with that change, df showed a lot of duplicate entries. 
Users started to complain about it. 
->
Ok, deduplication was done - but it started to not show the entry for root filesystem. 
->
So the "shortest match" improvement was done.
->
Still not perfect for nfs.

Only improvement I can imagine is kind of "exception list" for "preferred filesystem types" - which would be hard to do properly anyway. Any thoughts?

--- Additional comment from Pádraig Brady on 2013-03-13 05:16:58 EDT ---

I wanted to wait a little while for possible feedback from Daniel before closing,
but yes this is an expected change in behaviour, the reasoning being detailed in comment 6.

We have another bug #887763 to track issues with deduplication, though TBH the issues mentioned there are all outside of coreutils AFAICS. I do think we should consider adjusting the dedupe patch to what upstream currently uses, though that's not the issue being seen here.

The orig bug for the dedup was bug #709351

thanks.

--- Additional comment from Daniel Berrange on 2013-03-13 05:53:43 EDT ---

(In reply to comment #6)
> So in 8.21 the deduping is done based on the "device ID".
> df is by default trying to impart the storage available,
> without displaying multiple entries to the same storage,
> which may be confusing, especially with --total.
> 
> So I presume `stat -c %D /mnt/Photos /mnt/Downloads` give
> the same device ID for you. Hopefully that implies that they correspond
> to the same storage on the NFS server.

Yes, that is sort of correct. On one of my clients they all have the same device ID; on another F18 client some of them have different device IDs, so 'df' in fact shows 2 out of the 5 volumes


> df 8.21 should display /mnt/Photos by default as it is the shortest path,
> which is a heuristic used to display to most appropriate entry
> avoiding bind mounts, remounts, multiple mounts, pivot roots, ...
> 
> If you want to display all nfs entries, you now need to be explicit:
>   df -h -t nfs -a

I think this is a really very unhelpful, unpleasant behaviour. As a user I want 'df' to report usage of each volume that I have explicitly mounted.

--- Additional comment from Daniel Berrange on 2013-03-13 05:56:23 EDT ---

(In reply to comment #7)
> I tend to NOTABUG this report - as this is one of the side effects of
> /proc/mounts -> /etc/mtab symlink... with this change, df shown a lot of
> duplicate entries. 
> Users started to complain about it. 
> ->
> Ok, deduplication was done - but it started to not show the entry for root
> filesystem. 
> ->
> So the "shortest match" improvement was done.
> ->
> Still not perfect for nfs.

If choosing between seeing some duplicate entries, vs missing out random volumes entirely, I absolutely prefer to see a few duplicates. Throwing away / hiding data is not good.

--- Additional comment from Pádraig Brady on 2013-03-13 06:20:57 EDT ---

(In reply to comment #9)
> (In reply to comment #6)
> > So in 8.21 the deduping is done based on the "device ID".
> > df is by default trying to impart the storage available,
> > without displaying multiple entries to the same storage,
> > which may be confusing, especially with --total.
> > 
> > So I presume `stat -c %D /mnt/Photos /mnt/Downloads` give
> > the same device ID for you. Hopefully that implies that they correspond
> > to the same storage on the NFS server.
> 
> Yes, that is sort of correct. On one of my clients they all have the same
> device ID, one another F18 client some of them have different device IDs, so
> 'df' in fact shows 2 out of the 5 volumes

I didn't see that inconsistency in my testing last night.
I wonder are there different options used or something
on those mount points that would trigger different device IDs.

> > df 8.21 should display /mnt/Photos by default as it is the shortest path,
> > which is a heuristic used to display to most appropriate entry
> > avoiding bind mounts, remounts, multiple mounts, pivot roots, ...
> > 
> > If you want to display all nfs entries, you now need to be explicit:
> >   df -h -t nfs -a
> 
> I think this is a really very unhelpful, unpleasant behaviour. As a user I
> want 'df' to report usage of each volume that I have explicitly mounted.

So df is in a quandary here to distinguish user vs system mounts.
The recent changes beneath df in the area haven't been fully
designed, and never really were TBH given both the old and
new heuristics present in df. Maybe we could heap more heuristics
into the mix, like giving entries in fstab precedence, though
we probably don't want to open that can of worms.

I do think that the current compromise to show one entry
per unique device ID in the system isn't a bad scheme, and
allows df to report "storage available" rather than "File systems mounted".
Noting Daniel's inconsistency above with distinct IDs from
apparently the same storage on the NFS server, that inconsistency
seems best addressed outside of df.

--- Additional comment from Daniel Berrange on 2013-03-13 06:31:30 EDT ---

(In reply to comment #11)
> > > df 8.21 should display /mnt/Photos by default as it is the shortest path,
> > > which is a heuristic used to display to most appropriate entry
> > > avoiding bind mounts, remounts, multiple mounts, pivot roots, ...
> > > 
> > > If you want to display all nfs entries, you now need to be explicit:
> > >   df -h -t nfs -a
> > 
> > I think this is a really very unhelpful, unpleasant behaviour. As a user I
> > want 'df' to report usage of each volume that I have explicitly mounted.
> 
> So df is in a quandary here to distinguish user vs system mounts.
> The recent changes beneath df in the area haven't been fully
> designed, and never really were TBH given both the old and
> new heuristics present in df. Maybe we could heap more heuristics
> into the mix, like giving entries in fstab precedence, though
> we probably don't want to open that can of worms.
> 
> I do think that the current compromise to show one entry
> per unique device ID in the system isn't a bad scheme, and
> allows df to report "storage available" rather than "File systems mounted".

The issue I have is that when you show a list like

# df
Filesystem                       1K-blocks       Used Available Use% Mounted on
devtmpfs                           1958380          0   1958380   0% /dev
tmpfs                              1970892        192   1970700   1% /dev/shm
tmpfs                              1970892       1180   1969712   1% /run
tmpfs                              1970892          0   1970892   0% /sys/fs/cgroup
/dev/mapper/vg_t500wlan-lv_root  151476396  135012952   8762148  94% /
tmpfs                              1970892        564   1970328   1% /tmp
/dev/sda1                           194241      98663     85338  54% /boot
nas.example.com:/Photos   2879673344 2494189312 384959744  87% /mnt/Photos

This is implicitly telling the user that '/mnt/Download' (which is not shown) is inheriting "free space" from '/'. Unless the user knows ahead of time that /mnt/Download is actually a separate mount from the same NFS volume as /mnt/Photos, the user is simply being misled by 'df' into thinking that '/mnt/Download' is just part of the root filesystem mount volume & thus shares its free space.

Seeing duplicates doesn't have the potential to cause harm, since all the info is still accurate even if duplicated. Leaving out mounts entirely does cause harm by giving the user a misleading view of space on various volumes.

> Noting the Daniel's inconsistency above with distinct IDs from
> apparently the same storage on the NFS server, that inconsistency
> seems best addressed outside of df.

Yep, lets ignore that inconsistency for sake of these discussions.

--- Additional comment from Ondrej Vasik on 2013-03-13 06:55:04 EDT ---

Just as an anti-argument to "see a few duplicates" - https://bugzilla.redhat.com/show_bug.cgi?id=709351#c16 ... with 300 bind mounts, df gets completely unusable without the deduplication. Of course, the heuristic may be improved; the shortest match was the easiest one. As Padraig said, we may probably skip deduplication for all remote filesystems or something like that. Still - I think the current behaviour is better than having many duplicates - you can always get a full listing if you want with df -a .

--- Additional comment from Pádraig Brady on 2013-03-13 07:07:47 EDT ---

(In reply to comment #12)
> (In reply to comment #11)
> 
> The issue I have is that when you show a list like
> 
> # df
> Filesystem                       1K-blocks       Used Available Use% Mounted
> on
> devtmpfs                           1958380          0   1958380   0% /dev
> tmpfs                              1970892        192   1970700   1% /dev/shm
> tmpfs                              1970892       1180   1969712   1% /run
> tmpfs                              1970892          0   1970892   0%
> /sys/fs/cgroup
> /dev/mapper/vg_t500wlan-lv_root  151476396  135012952   8762148  94% /
> tmpfs                              1970892        564   1970328   1% /tmp
> /dev/sda1                           194241      98663     85338  54% /boot
> nas.example.com:/Photos   2879673344 2494189312 384959744  87% /mnt/Photos
> 
> This is implicitly telling the user that '/mnt/Download' (which is not
> shown) is inheriting "free space" from '/'.

Yes I agree in part, and by that argument we should only be
doing dedup for /'s device id.  But it's a bit of a stretch to
infer anything about a path not shown by df.
By the same argument one might infer that /proc is inheriting
"free space" from '/'?

Is it too onerous to require `df -a` to display all mount points?
That is a bit verbose I suppose, as is `findmnt`.
Perhaps we should add a pseudo '[:dummy:]' param to -x so
that one can enable the traditional behaviour of displaying
all devices with storage even if duplicates like:
alias df='df -a -x "[:dummy:]"'

--- Additional comment from Daniel Berrange on 2013-03-13 07:17:46 EDT ---

(In reply to comment #14)
> (In reply to comment #12)
> > (In reply to comment #1)
> > This is implicitly telling the user that '/mnt/Download' (which is not
> > shown) is inheriting "free space" from '/'.
> 
> Yes I agree in part, and by that argument we should only be
> doing dedup for /'s device id.  But it's a bit of a stretch to
> infer anything about a path not shown by df.
> By the same argument one might infer that /proc is inheriting
> "free space" from '/'?
> 
> Is it too onerous to require `df -a` to display all mount points?

The issue is that users have to know there is something wrong in order to decide that they need to issue 'df -a'. Since there is no indication that df is dropping valid volumes, they may never realize the need to run 'df -a' to see the correct data. 

Also, if the user has a combination of many bind mounts and a number of NFS volumes at the same time, they're back where they started from a usability POV. There's no way to get the valid info for NFS without also polluting their display with the bind mounts. 

I can't help thinking we're in a no-win situation here and that the kernel needs to provide more information in /proc/mounts to let userspace do the right job. ie provide some indication that the mount came from a bind mount, so userspace can then filter them ?

--- Additional comment from Pádraig Brady on 2013-03-13 08:02:04 EDT ---

Yes in general the kernel could provide a lot more info to help here.
Some notes for possible future dev:

We added a find_bind_mount() function to stat(1) a while back
which might be of use:
http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/stat.c;h=2326698#l805

I see that findmnt now supports --df, and that it outputs duplicates by default.
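As a usage note on the comment above: util-linux `findmnt` in its `--df` mode prints a df-style table with one row per mount and no device-ID deduplication, which sidesteps the suppression being discussed. A short sketch:

```shell
# df-style table, one row per mount, duplicate superblocks included:
findmnt --df
# Restricted to NFS mounts only:
findmnt --df --types nfs
```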

--- Additional comment from Ondrej Vasik on 2014-01-15 02:13:56 EST ---



--- Additional comment from Fridolín Pokorný on 2014-09-01 08:59:37 EDT ---

(In reply to Daniel Berrange from comment #15)
> I can't help thinking we're in a no-win situation here and that the kernel
> needs to provide more information in /proc/mounts to let userspace do the
> right job. ie provide some indication that the mount came from a bind mount,
> so userspace can then filter them ?

There is no "bind" flag stored in the kernel, so there is nothing to propagate to userspace - there is no difference between a bind mount and a regular mount from the kernel's POV.
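This point can be checked directly: /proc/self/mountinfo records per-mount IDs, device numbers, and options, but nothing marking a mount as a bind mount. A minimal, Linux-only sketch:

```shell
# One record per mount; leading fields are:
#   mount-ID parent-ID major:minor root mount-point ...
# A bind mount gets its own mount ID but carries no flag distinguishing
# it from the mount it was created from.
awk '{print "id=" $1, "dev=" $3, "mountpoint=" $5}' /proc/self/mountinfo | head -n 5
```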

(In reply to Pádraig Brady from comment #16)
> Yes in general the kernel could provide a lot more info to help here.
> Some notes for possible future dev:
> 
> We added a find_bind_mount() function to stat(1) a while back
> which might be of use:
> http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/stat.c;
> h=2326698#l805

I don't see any advantage in using find_bind_mount(), since the kernel does not provide a bind flag in /proc/self/mounts or /proc/self/mountinfo (as stated above).

(In reply to Pádraig Brady from comment #11)
> > > So I presume `stat -c %D /mnt/Photos /mnt/Downloads` give
> > > the same device ID for you. Hopefully that implies that they correspond
> > > to the same storage on the NFS server.
> > 
> > Yes, that is sort of correct. On one of my clients they all have the same
> > device ID, one another F18 client some of them have different device IDs, so
> > 'df' in fact shows 2 out of the 5 volumes
> 
> I didn't see that inconsistency in my testing last night.
> I wonder are there different options used or something
> on those mount points that would trigger different device IDs.

I cannot reproduce this either. Does this issue persist? Could you please provide the mount options?

--- Additional comment from Ondrej Vasik on 2014-10-03 07:50:22 EDT ---

ping?...

--- Additional comment from Daniel Berrange on 2014-10-03 08:10:18 EDT ---

This bug is still a problem on F20 coreutils-8.21-21.fc20.x86_64  I'm not using any special mount flags

# mount nas.home:/share/Photos /mnt/Photos/
# mount nas.home:/share/Backups /mnt/Backups/
# df | grep /mnt
/dev/dm-3                            240231096  182501608  45503332  81% /mnt/1c5321f5-14d6-40a2-8ca1-dd61524c9576
nas.home:/share/Photos 2882739712 2838224224  44515488  99% /mnt/Photos
# umount /mnt/Photos 
# df | grep /mnt
/dev/dm-3                             240231096  182501608  45503332  81% /mnt/1c5321f5-14d6-40a2-8ca1-dd61524c9576
nas.home:/share/Backups 2882739712 2838224224  44515488  99% /mnt/Backups
# mount nas.home:/share/Photos /mnt/Photos/
# df | grep /mnt
/dev/dm-3                            240231096  182501608  45503332  81% /mnt/1c5321f5-14d6-40a2-8ca1-dd61524c9576
nas.home:/share/Photos 2882739712 2838224224  44515488  99% /mnt/Photos

I no longer have the machine that previously showed inconsistent device IDs for the same NFS server, so I can't verify that weird behaviour.

--- Additional comment from Pádraig Brady on 2014-10-28 23:52:57 EDT ---

I'm coming around to the idea of not suppressing _separate_ exports, as they're probably explicitly mounted and may have separate ACLs etc. Proposed patch against upstream trunk is attached.

--- Additional comment from Ondrej Vasik on 2014-10-29 02:06:54 EDT ---

Thanks Pádraig!

Comment 6 errata-xmlrpc 2015-11-19 12:44:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2160.html

