Bug 895897

Summary: virt-format fails to format the same disk a second time with LVM enabled
Product: [Community] Virtualization Tools
Reporter: Richard W.M. Jones <rjones>
Component: libguestfs
Assignee: Richard W.M. Jones <rjones>
Status: CLOSED NOTABUG
Severity: medium
Priority: medium
Version: unspecified
CC: bfan, dyasny, leiwang, mbooth, moli, qguan, wshi
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Clone Of: 892271
Bug Blocks: 892271
Last Closed: 2013-01-21 08:52:30 UTC
Type: Bug

Description Richard W.M. Jones 2013-01-16 08:58:06 UTC
+++ This bug was initially created as a clone of Bug #892271 +++

Description of problem:

guestfs_wipefs fails when it is run against the same device a second time with LVM enabled. The first wipe always succeeds, and repeated wipes also succeed when LVM is not used.

The first run always succeeds:
#virt-format -a virt_format.raw --filesystem=ext3 --lvm
Running the same command on the same device a second time always fails:
#virt-format -a virt_format.raw --filesystem=ext3 --lvm

virt-format exits at this check in format.c:

336       if (have_wipefs && guestfs_wipefs (g, devices[i]) == -1)
337         exit (EXIT_FAILURE);
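
For context, a minimal standalone reproducer against the libguestfs C API would look roughly like the sketch below. This is not from the original report: it assumes virt_format.raw already exists and has been formatted once with --lvm, and it simply mirrors the failing guestfs_wipefs call above.

/* repro.c: call guestfs_wipefs on a disk that already carries LVM
 * metadata from a previous virt-format run.
 * Build (assumed): gcc repro.c -o repro -lguestfs
 */
#include <stdio.h>
#include <stdlib.h>
#include <guestfs.h>

int
main (void)
{
  guestfs_h *g = guestfs_create ();
  if (g == NULL)
    exit (EXIT_FAILURE);

  /* Attach the already-formatted raw image. */
  if (guestfs_add_drive_opts (g, "virt_format.raw",
                              GUESTFS_ADD_DRIVE_OPTS_FORMAT, "raw",
                              -1) == -1)
    exit (EXIT_FAILURE);

  if (guestfs_launch (g) == -1)
    exit (EXIT_FAILURE);

  /* On the affected build this is where "Device or resource busy"
     is reported, matching the log below.  The default libguestfs
     error handler prints the message to stderr. */
  if (guestfs_wipefs (g, "/dev/sda") == -1)
    exit (EXIT_FAILURE);

  guestfs_close (g);
  exit (EXIT_SUCCESS);
}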

Below is the detailed log:
libguestfs: trace: available "wipefs"
libguestfs: send_to_daemon: 60 bytes: 00 00 00 38 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 d8 | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x38
libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 d8 | 00 00 00 01 | 00 12 34 00 | ...
libguestfs: trace: available = 0
libguestfs: trace: list_devices
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 07 | 00 00 00 00 | ...
guestfsd: main_loop: proc 216 (available) took 0.00 seconds
guestfsd: main_loop: new request, len 0x28
guestfsd: main_loop: proc 7 (list_devices) took 0.00 seconds
libguestfs: recv_from_daemon: 56 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 07 | 00 00 00 01 | 00 12 34 01 | ...
libguestfs: trace: list_devices = ["/dev/sda"]
libguestfs: trace: wipefs "/dev/sda"
libguestfs: send_to_daemon: 56 bytes: 00 00 00 34 | 20 00 f5 f5 | 00 00 00 04 | 00 00 01 32 | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x34
wipefs --help
wipefs -a /dev/sda
wipefs: error: /dev/sda: probing initialization failed: Device or resource busy
guestfsd: error: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy
guestfsd: main_loop: proc 306 (wipefs) took 0.00 seconds
libguestfs: recv_from_daemon: 128 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 01 32 | 00 00 00 01 | 00 12 34 02 | ...
libguestfs: trace: wipefs = -1 (error)
libguestfs: error: wipefs: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy
libguestfs: trace: close
libguestfs: closing guestfs handle 0x15ba820 (state 2)
libguestfs: trace: internal_autosync
libguestfs: send_to_daemon: 44 bytes: 00 00 00 28 | 20 00 f5 f5 | 00 00 00 04 | 00 00 01 1a | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x28
fsync /dev/sda
guestfsd: main_loop: proc 282 (internal_autosync) took 0.00 seconds
libguestfs: recv_from_daemon: 40 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 01 1a | 00 00 00 01 | 00 12 34 03 | ...
libguestfs: trace: internal_autosync = 0
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsS9zRa8

Version-Release number of selected component (if applicable):
libguestfs-1.20.1-4.el7.x86_64

How reproducible:
Always, when LVM is enabled.

Steps to Reproduce:
1. #virt-format -a virt_format.raw --filesystem=ext3 --lvm
2. #virt-format -a virt_format.raw --filesystem=ext3 --lvm
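(Not part of the original report: the raw test image can be created beforehand with e.g. "qemu-img create -f raw virt_format.raw 1G"; the 1G size is illustrative.)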
Actual results:

The second virt-format run fails: guestfs_wipefs returns an error ("wipefs: error: /dev/sda: probing initialization failed: Device or resource busy") and virt-format exits with EXIT_FAILURE.

Expected results:

The second run reformats the disk successfully, just as it does when --lvm is not used.

Additional info:

Comment 1 Richard W.M. Jones 2013-01-21 08:52:30 UTC
This works fine for me on Fedora 18 and Rawhide.

I suspect it's a bug in the RHEL 7 wipefs program, i.e. a
duplicate of bug 872831.