Bug 104317 - df shows incorrect % full and available values.
Summary: df shows incorrect % full and available values.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: filesystem
Version: 8.0
Hardware: i586
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Bill Nottingham
QA Contact: Mike McLean
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2003-09-12 14:40 UTC by Matt Schillinger
Modified: 2014-03-17 02:38 UTC
CC List: 1 user

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2003-09-12 14:52:31 UTC
Embargoed:


Attachments:

Description Matt Schillinger 2003-09-12 14:40:04 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20021003

Description of problem:
We have a NexSAN ataboy RAID connected to our RH 8 server. One of its partitions
(an EXT3 partition of roughly 961 GB) reached 100% full earlier this week. Since
then, over 30 GB of data has been removed from the partition.

When running df -k on /export/imagery (the partition in question), df shows:

[root@loki root]# df /export/imagery/
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/Imagery/lvol1   961211736 916595968         0 100% /export/imagery
[root@loki root]#

NOTE that Used is nowhere near the 961 GB total, yet Available reads 0 and
Use% reads 100%.

I can add and remove files locally as normal, but the df counters don't change.
We have also had problems with NFS since this problem occurred: clients
(in particular Solaris 8) cannot write files larger than a few bytes to the
partition.

What I've done:

I have unexported and unmounted the partition, then remounted it, and found
the problem still there.

I then rebooted the machine, and the problem still exists.

I have attempted multiple times to fsck the partition, but fsck always reports
that the partition is clean.

I have looked at tune2fs but have been unable to find any options that might
help my situation. Please advise.

Version-Release number of selected component (if applicable):


How reproducible:
Didn't try

Steps to Reproduce:
1. umount the partition
2. remount the partition
3. the problem still exists

Alternatively:
1. reboot the machine
2. once the partition is mounted, the problem still exists

Additional info:

I have set this bug's severity to High because I fear that, as this is a
filesystem problem, it may lead to data loss on the partition or other
serious consequences.

OUTPUT OF 'tune2fs -l /dev/Imagery/lvol1'

[root@loki root]# tune2fs -l /dev/Imagery/lvol1
tune2fs 1.27 (8-Mar-2002)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          e85c895a-de26-4dc0-81c7-36296f42cc35
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery sparse_super large_file
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122077184
Block count:              244133888
Reserved block count:     12206694
Free blocks:              11153942
Free inodes:              121931155
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Last mount time:          Thu Sep 11 14:49:26 2003
Last write time:          Thu Sep 11 14:49:26 2003
Mount count:              17
Maximum mount count:      37
Last checked:             Tue Aug 26 09:40:42 2003
Check interval:           15552000 (6 months)
Next check after:         Sun Feb 22 08:40:42 2004
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal UUID:             <none>
Journal inode:            8
Journal device:           0x0000
First orphan inode:       0
[root@loki root]#

Comment 1 Bill Nottingham 2003-09-12 14:52:31 UTC
The % is the percentage of space available to normal users. Since 5% of the
blocks are reserved for the superuser only, Available reads 0 and Use% reads
100% until the free-block count rises above the reserved-block count. By the
tune2fs figures above, that means freeing roughly another 4 GB (if I'm reading
it right).

You can change the reserved percentage with tune2fs -m.
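For example (using the device path from this report), tune2fs -m 1
/dev/Imagery/lvol1 would shrink the reserve from 5% to 1% of the filesystem
(roughly 10 GB instead of about 50 GB here), immediately making the difference
available to normal users.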


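To see why df reports Available 0 and Use% 100%, here is a small illustrative
sketch in Python. The constants are copied from the df and tune2fs output
above; the arithmetic mirrors how df derives Available and Use% from the
statfs() results, so this is a back-of-the-envelope check rather than df's
actual code.

#!/usr/bin/env python3
# Back-of-the-envelope check of the df output in this report.
# Constants copied from the df and tune2fs -l output above.

BLOCK_SIZE = 4096             # "Block size" (tune2fs)
FREE_BLOCKS = 11_153_942      # "Free blocks" (tune2fs)
RESERVED_BLOCKS = 12_206_694  # "Reserved block count" (the 5% root reserve)
USED_KB = 916_595_968         # "Used" (df, in 1K blocks)

# Available is free space minus the root reserve, clamped at zero.
available_blocks = max(FREE_BLOCKS - RESERVED_BLOCKS, 0)
available_kb = available_blocks * (BLOCK_SIZE // 1024)

# Use% is used / (used + available), not used / total.
use_pct = 100 * USED_KB / (USED_KB + available_kb)

print(f"Available: {available_kb} KB")  # 0, matching df
print(f"Use%:      {use_pct:.0f}%")     # 100%, matching df

# Space still to free before Available goes positive:
shortfall_bytes = (RESERVED_BLOCKS - FREE_BLOCKS) * BLOCK_SIZE
print(f"Shortfall: {shortfall_bytes / 2**30:.1f} GiB")  # about 4.0 GiB

In other words, about 45 GB of the partition is genuinely free, but because
the free-block count (11153942) is still below the reserved-block count
(12206694), Available clamps to zero; freeing roughly 4 GiB more tips it
positive and lets Use% drop below 100%.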