Bug 895897 - virt-format fails to format the same disk a second time with LVM enabled
Status: CLOSED NOTABUG
Product: Virtualization Tools
Classification: Community
Component: libguestfs
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Richard W.M. Jones
Blocks: 892271
Reported: 2013-01-16 03:58 EST by Richard W.M. Jones
Modified: 2013-01-21 03:52 EST (History)
Doc Type: Bug Fix
Clone Of: 892271
Last Closed: 2013-01-21 03:52:30 EST
Type: Bug
Attachments: None
Description Richard W.M. Jones 2013-01-16 03:58:06 EST
+++ This bug was initially created as a clone of Bug #892271 +++

Description of problem:

guestfs_wipefs fails when you try to wipe the same device a second time with LVM enabled. wipefs succeeds if there is only one attempt, and wiping the device repeatedly without LVM does not fail either.

The first run on a device always succeeds:
# virt-format -a virt_format.raw --filesystem=ext3 --lvm
Running it a second time on the same device always fails:
# virt-format -a virt_format.raw --filesystem=ext3 --lvm

virt-format exits at the following check in format.c:

336       if (have_wipefs && guestfs_wipefs (g, devices[i]) == -1)
337         exit (EXIT_FAILURE);



The detailed log follows:
libguestfs: trace: available "wipefs"
libguestfs: send_to_daemon: 60 bytes: 00 00 00 38 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 d8 | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x38
libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 d8 | 00 00 00 01 | 00 12 34 00 | ...
libguestfs: trace: available = 0
libguestfs: trace: list_devices
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 07 | 00 00 00 00 | ...
guestfsd: main_loop: proc 216 (available) took 0.00 seconds
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 7 (list_devices) took 0.00 seconds
libguestfs: recv_from_daemon: 56 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 07 | 00 00 00 01 | 00 12 34 01 | ...
libguestfs: trace: list_devices = ["/dev/sda"]
libguestfs: trace: wipefs "/dev/sda"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 01 32 | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x34
wipefs --help
wipefs -a /dev/sda
wipefs: error: /dev/sda: probing initialization failed: Device or resource busy
guestfsd: error: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy
guestfsd: main_loop: proc 306 (wipefs) took 0.00 seconds
libguestfs: recv_from_daemon: 128 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 01 32 | 00 00 00 01 | 00 12 34 02 | ...
libguestfs: trace: wipefs = -1 (error)
libguestfs: error: wipefs: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy
libguestfs: trace: close
libguestfs: closing guestfs handle 0x15ba820 (state 2)
libguestfs: trace: internal_autosync
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 01 1a | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x28
fsync /dev/sda
guestfsd: main_loop: proc 282 (internal_autosync) took 0.00 seconds
libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 01 1a | 00 00 00 01 | 00 12 34 03 | ...
libguestfs: trace: internal_autosync = 0
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsS9zRa8
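As an aside, the hex dumps in the trace above can be decoded by hand: libguestfs messages are XDR-encoded, big-endian, 32-bit fields. A minimal Python sketch (the field labels are my own, inferred from how the trace lines correlate with the daemon's "proc NNN" messages, not taken from the protocol definition):

```python
import struct

# First 16 bytes of the wipefs request from the trace above:
# 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 01 32
raw = bytes.fromhex("00000034" "2000f5f5" "00000004" "00000132")

# Four big-endian 32-bit fields: record length, program magic,
# protocol version, procedure number (labels inferred from the trace).
length, magic, vers, proc = struct.unpack(">IIII", raw)

print(hex(length))  # 0x34, matching "new request, len 0x34"
print(proc)         # 306, matching "proc 306 (wipefs)"
```

The same layout explains the other dumps: 0x d8 is proc 216 (available) and 0x07 is proc 7 (list_devices), both confirmed by the "took 0.00 seconds" lines.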

Version-Release number of selected component (if applicable):
libguestfs-1.20.1-4.el7.x86_64

How reproducible:
Always, with LVM enabled (--lvm).

Steps to Reproduce:
1. # virt-format -a virt_format.raw --filesystem=ext3 --lvm
2. # virt-format -a virt_format.raw --filesystem=ext3 --lvm
Actual results:
The second run fails with "wipefs: error: /dev/sda: probing initialization failed: Device or resource busy" and virt-format exits with an error.

Expected results:
The second run formats the disk successfully, as the first one did.

Additional info:
Comment 1 Richard W.M. Jones 2013-01-21 03:52:30 EST
This works fine for me on Fedora 18 and Rawhide.

I suspect it's a bug in the RHEL 7 wipefs program, i.e. a duplicate of bug 872831.
