Bug 895903 - scrub-freespace didn't fill any free space
Status: CLOSED NOTABUG
Product: Virtualization Tools
Classification: Community
Component: libguestfs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Richard W.M. Jones
Blocks: 892274
Reported: 2013-01-16 04:06 EST by Richard W.M. Jones
Modified: 2013-01-25 03:45 EST

Doc Type: Bug Fix
Clone Of: 892274
Cloned to: 903890
Last Closed: 2013-01-25 03:44:18 EST
Type: Bug

Attachments: None
Description Richard W.M. Jones 2013-01-16 04:06:10 EST
+++ This bug was initially created as a clone of Bug #892274 +++

Description of problem:

scrub-freespace didn't fill any free space; scrub-device and scrub-file work correctly. See below:

><fs> scrub-freespace  /scrub
libguestfs: trace: scrub_freespace "/scrub"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 74 | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x34
scrub -X /sysroot/scrub
scrub: created directory /sysroot/scrub
scrub: unlinked /sysroot/scrub/scrub.000
scrub: removed /sysroot/scrub
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gueslibguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 74 | 00 00 00 01 | 00 12 34 15 | ...
libguestfs: trace: scrub_freespace = 0

><fs> df
libguestfs: trace: df
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 7d | 00 00 00 00 | ...
tfsd: main_loop: proc 116 (scrub_freespace) took 1.59 seconds
guestfsd: main_loop: new request, len 0x28
df
libguestfs: recv_from_daemon: 208 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 7d | 00 00 00 01 | 00 12 34 16 | ...
libguestfs: trace: df = "Filesystem     1K-blocks  Used Available Use% Mounted on\n/dev              243680     0    243680   0% /dev\n/dev/sda1          99035  1551     92371   2% /sysroot\n"
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev              243680     0    243680   0% /dev
/dev/sda1          99035  1551     92371   2% /sysroot



Version-Release number of selected component (if applicable):
libguestfs-1.20.1-4.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. guestfish -x -v -N fs mount /dev/sda1 / : scrub-freespace /scrub : df

  
Actual results:


Expected results:


Additional info:
Comment 1 Richard W.M. Jones 2013-01-21 05:51:14 EST
moli: What exactly are you expecting here?
Comment 2 Mohua Li 2013-01-22 01:14:58 EST
(In reply to comment #1)
> moli: What exactly are you expecting here?

OK, I admit I'm a little confused. According to the help:

"  This command creates the directory "dir" and then fills it with files
    until the filesystem is full, and scrubs the files as for "scrub_file",
    and deletes them. The intention is to scrub any free space on the
    partition containing "dir".
"


In RHEL 6 it fills the free space with scrub.*** files and the filesystem ends up 100% full, but in RHEL 7 there is no scrub.*** file and the filesystem is unchanged. The help does say "deletes them", but given what scrub actually did, I think the behaviour in RHEL 7 is a bug.


><fs> scrub-freespace /scrub
libguestfs: trace: scrub_freespace "/scrub"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 74 | 00 00 00 00 | ...
guestfsd: main_loop: proc 1 (mount) took 0.05 seconds
guestfsd: main_loop: new request, len 0x34
scrub -X /sysroot/scrub
libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 74 | 00 00 00 01 | 00 12 34 03 | ...
libguestfs: trace: scrub_freespace = 0
><fs> ls /scrub
libguestfs: trace: ls "/scrub"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 06 | 00 00 00 00 | ...
guestfsd: main_loop: proc 116 (scrub_freespace) took 6.68 seconds
guestfsd: main_loop: new request, len 0x34
guestfsd: main_loop: proc 6 (ls) took 0.00 seconds
libguestfs: recv_from_daemon: 60 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 06 | 00 00 00 01 | 00 12 34 04 | ...
libguestfs: trace: ls = ["scrub.000"]
scrub.000
><fs> df
libguestfs: trace: df
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 7d | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x28
df
[   40.662316] df used greatest stack depth: 3056 bytes left
guestfsd: main_loop: proc 125 (df) took 0.00 seconds
libguestfs: recv_from_daemon: 240 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 7d | 00 00 00 01 | 00 12 34 05 | ...
libguestfs: trace: df = "Filesystem           1K-blocks      Used Available Use% Mounted on\n/dev                    232704       132    232572   1% /dev\n/dev/vda1                99035     99034         0 100% /sysroot\n"
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev                    232704       132    232572   1% /dev
/dev/vda1                99035     99034         0 100% /sysroot
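The difference between the two releases can be emulated with plain shell. This is a hypothetical sketch using dd as a stand-in for scrub (real `scrub -X` fills the whole filesystem, so it is not run here; the paths and sizes are illustrative): while the fill file is left in place, as on RHEL 6, df reports the space as used; the cleanup step restores it.

```shell
# Hypothetical stand-in for the scrub -X fill/cleanup cycle (dd, not scrub).
dir=$(mktemp -d)
# Fill phase: create a fill file, as scrub creates scrub.000.
dd if=/dev/zero of="$dir/scrub.000" bs=1M count=4 2>/dev/null
ls "$dir"                    # scrub.000 present: df counts this space as used
# Cleanup phase that RHEL 6 scrub skipped: unlink the file, remove the dir.
rm -f "$dir/scrub.000"
rmdir "$dir"                 # after this, df is back to the pre-test figure
```

On a fixed scrub, both phases run, which is why moli's RHEL 7 `ls /scrub` should have found nothing once the command completed.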
Comment 3 Richard W.M. Jones 2013-01-22 08:38:10 EST
My test for this is:

$ guestfish -v -N fs -m /dev/sda1 scrub-freespace /scrub

For me this produces the output:

[...]
scrub -X /sysroot/scrub
scrub: created directory /sysroot/scrub
scrub: unlinked /sysroot/scrub/scrub.000
scrub: removed /sysroot/scrub
[...]

Also the disk image file is fully allocated after the scrub,
which indicates that data (scrubbing patterns) were written
all the way through to the block device:

$ du -sh test1.img 
99M	test1.img

I tried this on Fedora 18 and RHEL 7 with the same results.

moli: Do you agree that my test indicates that this is not a bug?

I do not believe the output of 'df' inside the guest is relevant,
since scrub removes the scrubbed file afterwards, so disk free space
should be substantially the same before and after the test.
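The allocation check above can be illustrated with a sparse file (a sketch with made-up sizes; test1.img itself is the image guestfish created): writing data through the file grows its on-disk allocation as reported by du, even though nothing above that layer looks different afterwards.

```shell
# Sparse-file analogue of the du check on test1.img (sizes are illustrative).
img=$(mktemp)
truncate -s 8M "$img"                      # sparse: few or no blocks allocated
before=$(du -k "$img" | cut -f1)
# Write through the whole file, as scrub patterns reach the block device.
dd if=/dev/zero of="$img" bs=1M count=8 conv=notrunc 2>/dev/null
after=$(du -k "$img" | cut -f1)
echo "allocated KiB: $before -> $after"    # allocation grows after the writes
rm -f "$img"
```

This is why `du -sh test1.img` reporting 99M is evidence the scrub patterns hit the device, independent of what df inside the guest shows.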
Comment 4 Mohua Li 2013-01-22 21:54:25 EST
(In reply to comment #3)
> My test for this is:
> 
> $ guestfish -v -N fs -m /dev/sda1 scrub-freespace /scrub
> 
> For me this produces the output:
> 
> [...]
> scrub -X /sysroot/scrub
> scrub: created directory /sysroot/scrub
> scrub: unlinked /sysroot/scrub/scrub.000
> scrub: removed /sysroot/scrub
> [...]
> 
> Also the disk image file is fully allocated after the scrub,
> which indicates that data (scrubbing patterns) were written
> all the way through to the block device:
> 
> $ du -sh test1.img 
> 99M	test1.img
> 
> I tried this out on Fedora 18 and RHEL 7 with the same results.
> 
> moli: Do you agree that my test indicates that this is not a bug?
> 
> I do not believe the output of 'df' inside the guest is relevant,
> since scrub removes the scrubbed file afterwards, so disk free space
> should be substantially the same before and after the test.

Hi Rich,

That's what confuses me: if it's not a bug in RHEL 7, then there is a bug in RHEL 6, since it doesn't remove the scrub file, even though the help implies it should. Any thoughts?
Comment 5 Richard W.M. Jones 2013-01-23 05:25:02 EST
Yes, I agree; this does seem to be a bug on RHEL 6.

$ guestfish -v -N fs -m /dev/sda1 scrub-freespace /scrub : df-h
[...]
scrub -X /sysroot/scrub
[...]
Filesystem            Size  Used Avail Use% Mounted on
/dev                  237M  132K  237M   1% /dev
/dev/vda1              97M   97M     0 100% /sysroot

On RHEL 6.3 the scrub directory is not removed and so it
ends up filling the disk to 100%.

This is a bug in RHEL 6 scrub:
http://code.google.com/p/diskscrub/issues/detail?id=9
which was fixed upstream:
http://code.google.com/p/diskscrub/source/detail?r=98d36b4836d741c7cb7664cbd21b3ae1c138881e

This fix needs to be backported to RHEL 6 scrub (NB:
it's a bug in scrub, not in libguestfs).
Comment 6 Richard W.M. Jones 2013-01-25 03:44:18 EST
Closing because this is NOTABUG in everything except RHEL 6.
