Bug 154959 - df -h shows -64Z after resize2fs filesystem shrink
Summary: df -h shows -64Z after resize2fs filesystem shrink
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Fedora
Classification: Fedora
Component: e2fsprogs
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Stephen Tweedie
QA Contact:
URL:
Whiteboard: bzcl34nup
Depends On:
Blocks: FC4Target
 
Reported: 2005-04-15 03:17 UTC by Charles R. Anderson
Modified: 2008-05-07 00:08 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2008-05-07 00:08:41 UTC
Type: ---
Embargoed:


Attachments
strace df (5.07 KB, text/plain), 2005-04-15 03:32 UTC, Charles R. Anderson
e2image of /home (170.89 KB, application/x-bzip2), 2005-04-15 18:39 UTC, Charles R. Anderson

Description Charles R. Anderson 2005-04-15 03:17:11 UTC
Description of problem:

I ran out of space for updates after I did an everything install of
FC4T2, so I decided to try out the LVM/ext3 resizing features.  I
shrank /home and expanded /.  The resize appears to have been successful,
and fsck finds no errors on either filesystem; however, df now shows this:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGSystem-LVRoot
                       11G  8.8G  1.1G  90% /
/dev/hda1              99M   23M   72M  25% /boot
/dev/shm              251M     0  251M   0% /dev/shm
/dev/mapper/VGSystem-LVHome
                       20G  -64Z   25G 101% /home

I don't have the exact figures for filesystem size before the shrink other than
a "df -h" report from logcheck (see the end).

Version-Release number of selected component (if applicable):

e2fsprogs-1.37-2
kernel-smp-2.6.11-1.1234_FC4

How reproducible:
didn't try

Steps to Reproduce:
1. umount /home; resize2fs /dev/VGSystem/LVHome smallersize
2. lvreduce -L -shrinksize /dev/VGSystem/LVHome 
3. e2fsck -f /dev/VGSystem/LVHome (no errors found)
4. mount /home; df -h   (see the cross-check sketch below)
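
A way to cross-check the shrink before remounting, comparing the
superblock's view with the LV's (a sketch; the free-block count should
never exceed the block count, and block count times block size should fit
within the LV size):

dumpe2fs -h /dev/VGSystem/LVHome | grep -iE 'block count|block size|free blocks'
lvdisplay --units b /dev/VGSystem/LVHome | grep 'LV Size'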
  
Actual results:

# Currently:
 sh-3.00# lvdisplay --units b
  --- Logical volume ---
  LV Name                /dev/VGSystem/LVRoot
  VG Name                VGSystem
  LV UUID                TnkG5j-OYqA-HUwZ-oSCX-RGIf-kNfw-w4HiaD
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                11542724608 B
  Current LE             344
  Segments               2
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VGSystem/LVHome
  VG Name                VGSystem
  LV UUID                L7PANq-ydxX-wyUx-vfAQ-KaSM-QbJP-0pnZ1X
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                21474836480 B
  Current LE             640
  Segments               3
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1 
  --- Logical volume ---
  LV Name                /dev/VGSystem/LVSwap
  VG Name                VGSystem
  LV UUID                xqo1QC-s4hq-GKJP-9kmL-eemr-GGWD-bmIavm
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1073741824 B
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2

sh-3.00# vgdisplay --units b
  --- Volume group ---
  VG Name               VGSystem
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               39862665216 B
  PE Size               33554432 B
  Total PE              1188
  Alloc PE / Size       1016 / 34091302912 B
  Free  PE / Size       172 / 5771362304 B
  VG UUID               hlcOdx-x93C-EDgH-m7tH-H5xA-1qpT-wBX92k

sh-3.00# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VGSystem" using metadata type lvm2

sh-3.00# pvscan
  PV /dev/hda2   VG VGSystem   lvm2 [12.38 GB / 0    free]
  PV /dev/hda3   VG VGSystem   lvm2 [12.38 GB / 0    free]
  PV /dev/hda4   VG VGSystem   lvm2 [12.38 GB / 5.38 GB free]
  Total: 3 [37.12 GB] / in use: 3 [37.12 GB] / in no VG: 0 [0   ]

sh-3.00# pvdisplay --units b
  --- Physical volume ---
  PV Name               /dev/hda2
  VG Name               VGSystem
  PV Size               13287555072 B  / not usable 0 B
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              396
  Free PE               0
  Allocated PE          396
  PV UUID               qLV0Yp-acTK-XeKY-XaGU-Rrwa-xGbK-02dYbv

  --- Physical volume ---
  PV Name               /dev/hda3
  VG Name               VGSystem
  PV Size               13287555072 B  / not usable 0 B
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              396
  Free PE               0
  Allocated PE          396
  PV UUID               eDC9NO-h6VM-c2FB-LrjW-SHHX-1FxM-tIoz5U

  --- Physical volume ---
  PV Name               /dev/hda4
  VG Name               VGSystem
  PV Size               13287555072 B  / not usable 0 B
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              396
  Free PE               172
  Allocated PE          224
  PV UUID               LkmStD-5rbe-VyCg-bQ8c-eUlO-oSk6-EgkcFZ

sh-3.00# fdisk -l

Disk /dev/hda: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        1630    12988552+  8e  Linux LVM
/dev/hda3            1631        3247    12988552+  8e  Linux LVM
/dev/hda4            3248        4863    12980520   8e  Linux LVM
sh-3.00# cat /proc/mdstat
Personalities :
unused devices: <none>

sh-3.00# df -a
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VGSystem-LVRoot
                      10919168   9216964   1138608  90% /
/dev/proc                    0         0         0   -  /proc
/dev/sys                     0         0         0   -  /sys
/dev/devpts                  0         0         0   -  /dev/pts
/dev/hda1               101086     23133     72734  25% /boot
/dev/shm                256928         0    256928   0% /dev/shm
none                         0         0         0   -  /proc/sys/fs/binfmt_misc
sunrpc                       0         0         0   -  /var/lib/nfs/rpc_pipefs
/dev/mapper/VGSystem-LVHome
                      20107976 -73786976294832210704  25264876 101% /home

sh-3.00# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGSystem-LVRoot
                       11G  8.8G  1.1G  90% /
/dev/hda1              99M   23M   72M  25% /boot
/dev/shm              251M     0  251M   0% /dev/shm
/dev/mapper/VGSystem-LVHome
                       20G  -64Z   25G 101% /home

# Before the resize:

/dev/mapper/VGSystem-LVRoot
                      9.5G  8.3G  740M  92% /
/dev/hda1              99M   23M   72M  25% /boot
/dev/shm              251M     0  251M   0% /dev/shm
/dev/mapper/VGSystem-LVHome
                       26G  577M   24G   3% /home

Comment 1 Charles R. Anderson 2005-04-15 03:25:04 UTC
If this helps, here is an excerpt from the anaconda.ks file that should give the
exact pre-resize filesystem sizes:

# Kickstart file automatically generated by anaconda.
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --linux
#part /boot --fstype ext3 --onpart hda1
#part pv.2 --noformat --onpart hda2
#part pv.3 --noformat --onpart hda3
#part pv.4 --noformat --onpart hda4
#volgroup VGSystem --pesize=32768 pv.2 pv.3 pv.4
#logvol / --fstype ext3 --name=LVRoot --vgname=VGSystem --size=9984
#logvol swap --fstype swap --name=LVSwap --vgname=VGSystem --size=1024
#logvol /home --noformat --fstype ext3 --name=LVHome --vgname=VGSystem --size=26912


Comment 2 Charles R. Anderson 2005-04-15 03:32:31 UTC
Created attachment 113209 [details]
strace df

Comment 3 Stephen Tweedie 2005-04-15 12:18:57 UTC
3. e2fsck -f /dev/VGSystem/LVHome (no errors found)
4. mount /home; df -h

That's weird --- fsck finds no problems, yet we're still seeing bad df output,
with strace on statfs giving:

f_blocks=5026994, f_bfree=6525934, f_bavail=6316219

Could you please send me more information about this filesystem?  Ideally, a raw
dump of the fs would be perfect (e2image -r /dev/mapper/VGSystem-LVHome - |
bzip2 > $file) --- that will contain all of the metadata, but none of the data
itself.  You can use e2image's "-s" option to scramble directory entry contents
too if you're worried about that.

If that's too big, "dumpe2fs" output may help.
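
For example, something like this (header-only and full forms; the output
filename is arbitrary):

dumpe2fs -h /dev/mapper/VGSystem-LVHome
dumpe2fs /dev/mapper/VGSystem-LVHome > dumpe2fs-home.txt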

Thanks!


Comment 4 Charles R. Anderson 2005-04-15 18:39:44 UTC
Created attachment 113244 [details]
e2image of /home

Here is the output of:

e2image -r -s /dev/mapper/VGSystem-LVHome - | bzip2 -c > /tmp/home-e2image.bz2

I took a look at the output with "strings", and it doesn't appear that it
actually scrambled anything.  Not a big deal in this case, though.

Comment 5 Charles R. Anderson 2005-04-15 19:08:49 UTC
I just tried another fsck.  After a supposedly clean umount, it recovers the
journal and modifies the filesystem.  Why?

# umount /home
# e2fsck -f /dev/mapper/VGSystem-LVHome
e2fsck 1.37 (21-Mar-2005)
/dev/mapper/VGSystem-LVHome: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/mapper/VGSystem-LVHome: ***** FILE SYSTEM WAS MODIFIED *****
/dev/mapper/VGSystem-LVHome: 300/5227520 files (9.3% non-contiguous),
311365/5242880 blocks
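
One way to tell whether the umount really left things clean would be to
look at the superblock before running e2fsck (a sketch; "needs_recovery"
in the feature list, or a state other than "clean", means the journal
still had to be replayed):

dumpe2fs -h /dev/mapper/VGSystem-LVHome | grep -iE 'filesystem state|filesystem features'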



Comment 6 Bug Zapper 2008-04-03 16:05:37 UTC
Based on the date this bug was created, it appears to have been reported
against rawhide during the development of a Fedora release that is no
longer maintained. In order to refocus our efforts as a project we are
flagging all of the open bugs for releases which are no longer
maintained. If this bug remains in NEEDINFO thirty (30) days from now,
we will automatically close it.

If you can reproduce this bug in a maintained Fedora version (7, 8, or
rawhide), please change this bug to the respective version and change
the status to ASSIGNED. (If you're unable to change the bug's version
or status, add a comment to the bug and someone will change it for you.)

Thanks for your help, and we apologize again that we haven't handled
these issues to this point.

The process we're following is outlined here:
http://fedoraproject.org/wiki/BugZappers/F9CleanUp

We will be following the process here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.

Comment 7 Bug Zapper 2008-05-07 00:08:39 UTC
This bug has been in NEEDINFO for more than 30 days since feedback was
first requested. As a result we are closing it.

If you can reproduce this bug in the future against a maintained Fedora
version please feel free to reopen it against that version.

The process we're following is outlined here:
http://fedoraproject.org/wiki/BugZappers/F9CleanUp

