Bug 895903
| Field | Value |
|---|---|
| Summary | scrub-freespace didn't fill any free space |
| Product | [Community] Virtualization Tools |
| Component | libguestfs |
| Status | CLOSED NOTABUG |
| Severity | high |
| Priority | high |
| Version | unspecified |
| Reporter | Richard W.M. Jones <rjones> |
| Assignee | Richard W.M. Jones <rjones> |
| CC | bfan, dyasny, leiwang, mbooth, moli, qguan, wshi |
| Target Milestone | --- |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Doc Type | Bug Fix |
| Story Points | --- |
| Clone Of | 892274 |
|  | 903890 (view as bug list) |
| Last Closed | 2013-01-25 08:44:18 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | --- |
| Cloudforms Team | --- |
| Bug Blocks | 892274 |
Description
Richard W.M. Jones, 2013-01-16 09:06:10 UTC
moli: What exactly are you expecting here?

---

**moli:**

(In reply to comment #1)
> moli: What exactly are you expecting here?

OK, I admit I'm a little confused. According to the help:

> This command creates the directory "dir" and then fills it with files until the filesystem is full, and scrubs the files as for "scrub_file", and deletes them. The intention is to scrub any free space on the partition containing "dir".

On RHEL 6 it fills the free space with scrub.*** files and the filesystem ends up 100% full, but on RHEL 7 there are no scrub.*** files and the filesystem is unchanged. The help does say "deletes them", but based on what scrub actually did, I think the behaviour in RHEL 7 is a bug:

```
><fs> scrub-freespace /scrub
libguestfs: trace: scrub_freespace "/scrub"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 74 | 00 00 00 00 | ...
guestfsd: main_loop: proc 1 (mount) took 0.05 seconds
guestfsd: main_loop: new request, len 0x34
scrub -X /sysroot/scrub
libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 74 | 00 00 00 01 | 00 12 34 03 | ...
libguestfs: trace: scrub_freespace = 0
><fs> ls /scrub
libguestfs: trace: ls "/scrub"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 06 | 00 00 00 00 | ...
guestfsd: main_loop: proc 116 (scrub_freespace) took 6.68 seconds
guestfsd: main_loop: new request, len 0x34
guestfsd: main_loop: proc 6 (ls) took 0.00 seconds
libguestfs: recv_from_daemon: 60 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 06 | 00 00 00 01 | 00 12 34 04 | ...
libguestfs: trace: ls = ["scrub.000"]
scrub.000
><fs> df
libguestfs: trace: df
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 7d | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x28
df
[   40.662316] df used greatest stack depth: 3056 bytes left
guestfsd: main_loop: proc 125 (df) took 0.00 seconds
libguestfs: recv_from_daemon: 240 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 7d | 00 00 00 01 | 00 12 34 05 | ...
libguestfs: trace: df = "Filesystem 1K-blocks Used Available Use% Mounted on\n/dev 232704 132 232572 1% /dev\n/dev/vda1 99035 99034 0 100% /sysroot\n"
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev              232704   132    232572   1% /dev
/dev/vda1          99035 99034         0 100% /sysroot
```

---

**Richard W.M. Jones:**

My test for this is:

```
$ guestfish -v -N fs -m /dev/sda1 scrub-freespace /scrub
```

For me this produces the output:

```
[...]
scrub -X /sysroot/scrub
scrub: created directory /sysroot/scrub
scrub: unlinked /sysroot/scrub/scrub.000
scrub: removed /sysroot/scrub
[...]
```

Also the disk image file is fully allocated after the scrub, which indicates that data (scrubbing patterns) were written all the way through to the block device:

```
$ du -sh test1.img
99M	test1.img
```

I tried this out on Fedora 18 and RHEL 7 with the same results.

moli: Do you agree that my test indicates that this is not a bug?

I do not believe the output of 'df' inside the guest is relevant, since scrub removes the scrubbed file afterwards, so disk free space should be substantially the same before and after the test.

---

**moli:**

(In reply to comment #3)
> moli: Do you agree that my test indicates that this is not a bug?
>
> I do not believe the output of 'df' inside the guest is relevant,
> since scrub removes the scrubbed file afterwards, so disk free space
> should be substantially the same before and after the test.

Hi Rich,

That's what confuses me: if it's not a bug in RHEL 7, then there is a bug in RHEL 6, because RHEL 6 didn't remove the scrub file, and the help implies that it does need to remove it. Any thoughts?

---

**Richard W.M. Jones:**

Yes, I agree, this does seem to be a bug on RHEL 6.

```
$ guestfish -v -N fs -m /dev/sda1 scrub-freespace /scrub : df-h
[...]
scrub -X /sysroot/scrub
[...]
Filesystem      Size  Used Avail Use% Mounted on
/dev            237M  132K  237M   1% /dev
/dev/vda1        97M   97M     0 100% /sysroot
```

On RHEL 6.3 the scrub directory is not removed, so it ends up filling the disk to 100%. This is a bug in RHEL 6 scrub:

http://code.google.com/p/diskscrub/issues/detail?id=9

which was fixed upstream:

http://code.google.com/p/diskscrub/source/detail?r=98d36b4836d741c7cb7664cbd21b3ae1c138881e

This fix needs to be backported to RHEL 6 scrub (NB: it's a bug in scrub, not in libguestfs).

Closing because this is NOTABUG in everything except RHEL 6.
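The fill/overwrite/delete technique that `scrub -X` (and therefore `scrub_freespace`) relies on can be sketched in plain shell. This is only a simplified illustration, not what scrub itself does: it writes zeroes rather than scrub's real overwrite patterns, and it caps the loop at a few small chunks so the demo does not actually fill the disk (the real tool loops until the filesystem returns ENOSPC). The directory and file names are made up for the demo.

```shell
#!/bin/sh
# Sketch of the fill/overwrite/delete approach behind scrub-freespace.
# Simplified: zeroes instead of scrub patterns, and a cap of 4 chunks
# instead of writing until the filesystem is full.
set -u
DIR=$(mktemp -d)    # stand-in for the "dir" argument to scrub-freespace

i=0
# Fill the directory with numbered files; the real tool would stop only
# when a write fails because the filesystem is full.
while [ "$i" -lt 4 ] &&
      dd if=/dev/zero of="$DIR/scrub.$i" bs=1M count=1 2>/dev/null; do
    i=$((i + 1))
done
sync                # push the written data through to the block device

ls "$DIR"           # the fill files exist while the scrub is in progress
rm -rf "$DIR"       # then everything is deleted, restoring free space
echo "wrote $i chunks"    # prints "wrote 4 chunks"
```

This also shows why a mid-scrub `ls` and `df` (as in the transcript above) can legitimately report fill files and a full disk: the cleanup only happens at the end.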
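The `du -sh test1.img` check works because freshly created test images are sparse: their apparent size exceeds their allocated size until data is genuinely written through to them, so allocation growth is evidence that the scrub patterns reached the device. A small sketch of that effect (the file is a temporary file created for the demo, not the image from the bug report; random data is used so filesystem compression cannot hide the allocation):

```shell
#!/bin/sh
# Why "du" on the backing image is a useful check: a sparse file only
# allocates blocks once real data is written into it.
img=$(mktemp)
truncate -s 10M "$img"                  # sparse: apparent size 10M, ~0 allocated
before=$(du -k "$img" | cut -f1)        # allocated KiB before writing
dd if=/dev/urandom of="$img" bs=1M count=10 conv=notrunc 2>/dev/null
sync
after=$(du -k "$img" | cut -f1)         # allocated KiB after: roughly 10240
echo "allocated before=${before}K after=${after}K"
rm -f "$img"
```

In the bug report the same reasoning applies to `test1.img`: `du -sh` reporting the full 99M after the scrub indicates the free-space pattern was written end to end.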