Bug 708959 - dirty install fail at partition sometimes as race condition
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: ovirt-node
Version: 5.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: Joey Boggs
QA Contact: Virtualization Bugs
Keywords: Regression
Depends On:
Blocks:
 
Reported: 2011-05-30 05:35 EDT by Mohua Li
Modified: 2016-04-26 12:43 EDT
CC List: 10 users

See Also:
Fixed In Version: ovirt-node-1.0-60.el5
Doc Type: Bug Fix
Doc Text:
Previously, you could not reinstall Red Hat Enterprise Virtualization Hypervisor over an existing installation because of a bug in the installation script that would fail to remove all of the volume group data. The bug has been fixed and you can now reinstall Red Hat Enterprise Virtualization Hypervisor as expected.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-07-27 10:41:11 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
ovirt.log (12.84 KB, text/plain), 2011-05-30 05:35 EDT, Mohua Li
ovirt.log(auto) (9.26 KB, text/plain), 2011-05-30 05:39 EDT, Mohua Li
ovirt.log on rhev-hypervisor-5.7-20110616.0.el5 (29.67 KB, text/plain), 2011-06-16 22:36 EDT, Guohua Ouyang


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2011:1090 normal SHIPPED_LIVE Moderate: rhev-hypervisor security and bug fix update 2011-07-27 10:40:56 EDT

Description Mohua Li 2011-05-30 05:35:13 EDT
Description of problem:
A PXE automated reinstall (with an old rhev-hypervisor already installed) using

"storage_init=/dev/mapper/360*b1 storage_vol=::::: local_boot firstboot" 

fails at creating the PV; something keeps the device busy:


  Can't remove open logical volume "Config"
  Logical volume "Swap" successfully removed
  Logical volume "Root" successfully removed
  Logical volume "RootBackup" successfully removed
  No physical volume label read from /dev/mapper/3600a0b80005b10ca00008e254c7726b1
  Can't open /dev/mapper/3600a0b80005b10ca00008e254c7726b1 exclusively - not removing. Mounted filesystem?
May 30 09:12:35 Wiping old boot sector 1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.001669 seconds, 628 MB/s
May 30 09:12:35 Wiping secondary gpt header 1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 3.2e-05 seconds, 32 MB/s
May 30 09:12:35 Labeling Drive device-mapper: remove ioctl failed: Device or resource busy
May 30 09:12:35 Creating boot partition device-mapper: remove ioctl failed: Device or resource busy
May 30 09:12:36 Creating LVM partition device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
May 30 09:12:36 Toggling boot on device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
May 30 09:12:36 Toggling LVM on device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy

Model: Linux device-mapper (dm)
Disk /dev/mapper/3600a0b80005b10ca00008e254c7726b1: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      0.51kB  52.0MB  52.0MB  primary  ext2         boot 
 2      52.0MB  21.5GB  21.4GB  primary               lvm  

May 30 09:12:57 Creating physical volume 1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.002123 seconds, 494 MB/s
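
The log above implies roughly the following wipe-and-partition sequence. This is a hedged reconstruction for readability only, not the actual ovirt-node installer code; the device name comes from the log, but the exact dd/parted invocations and offsets are assumptions.

# Hedged reconstruction of the steps the log suggests -- not installer code.
DRIVE=/dev/mapper/3600a0b80005b10ca00008e254c7726b1

dd if=/dev/zero of="$DRIVE" bs=1M count=1            # "Wiping old boot sector"
dd if=/dev/zero of="$DRIVE" bs=1024 count=1 \
   seek=$(( $(blockdev --getsz "$DRIVE") / 2 - 1 ))  # "Wiping secondary gpt header"

parted -s "$DRIVE" mklabel msdos                     # "Labeling Drive"
parted -s "$DRIVE" mkpart primary ext2 1s 52MB       # "Creating boot partition"
parted -s "$DRIVE" mkpart primary 52MB 100%          # "Creating LVM partition"
parted -s "$DRIVE" set 1 boot on                     # "Toggling boot on"
parted -s "$DRIVE" set 2 lvm on                      # "Toggling LVM on"

pvcreate -ff -y "${DRIVE}p2"                         # "Creating physical volume"
# The report says the install fails around this step, because the
# partition's device-mapper node is still held open by something.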




[root@amd-1216-8-5 ~]# lvs

  No volume groups found
[root@amd-1216-8-5 ~]# pvs
 
[root@amd-1216-8-5 ~]# vgs
  No volume groups found
[root@amd-1216-8-5 ~]# ls /dev/mapper/*
/dev/mapper/3600a0b80005b10ca00008e254c7726b1    /dev/mapper/HostVG-Config  /dev/mapper/HostVG-Logging  /dev/mapper/live-osimg-min
/dev/mapper/3600a0b80005b10ca00008e254c7726b1p2  /dev/mapper/HostVG-Data    /dev/mapper/control         /dev/mapper/live-rw
[root@amd-1216-8-5 ~]# serivce multipathd status
-bash: serivce: command not found
[root@amd-1216-8-5 ~]# service multipathd status
multipathd is stopped
[root@amd-1216-8-5 ~]# multipath -ll
3600a0b80005b10ca00008e254c7726b1 dm-0 IBM,1726-4xx  FAStT
[size=20G][features=1 queue_if_no_path][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 3:0:0:0 sde 8:64  [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:2:0 sdc 8:32  [active][ghost]
 \_ 3:0:1:0 sdg 8:96  [active][ghost]
[root@amd-1216-8-5 ~]# multipath -F
3600a0b80005b10ca00008e254c7726b1: map in use


After some analysis, I believe this is what keeps it busy:

#lsof 

brcm_iscs  5602      root    3w      REG               0,19     385      18454 /var/log/brcm-iscsi.log
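
A hedged illustration of how this holder can be confirmed and released before retrying. Only the file, device, and service names come from this report; the commands below are a generic sketch, not the shipped fix.

# Illustration only -- not the shipped fix.
lsof +D /var/log                  # shows brcm_iscs holding /var/log/brcm-iscsi.log
fuser -vm /var/log                # same check, by mounted filesystem

# brcm_iscsiuio is started by the iscsid initscript (noted later in this
# report), so stopping that service releases the open log file.
service iscsid stop

# With the holder gone, the stale maps can be inspected and flushed.
dmsetup info -c                   # open counts should drop to 0
multipath -F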


I saw that this commit is in tag ovirt-node-1.0-58.el5:

commit 36445d34cddae3238b9fb14cdd84a729286bcb82
Author: Mike Burns <mburns@redhat.com>
Date:   Tue May 17 08:11:20 2011 -0400

    fix ovirt-node logrotate

    rhbz#633919

I don't know if this is the root cause.



Version-Release number of selected component (if applicable):
rhev-hypervisor-5.7.4

How reproducible:
always

Steps to Reproduce:
1. Auto install with the above parameters, with an old rhev-hypervisor already installed.
2.
3.
  
Actual results:
Fails at partitioning.

Expected results:


Additional info:
Comment 1 Mohua Li 2011-05-30 05:35:44 EDT
Created attachment 501752 [details]
ovirt.log
Comment 2 Mohua Li 2011-05-30 05:39:14 EDT
Created attachment 501757 [details]
ovirt.log(auto)
Comment 4 Alan Pevec 2011-05-31 05:52:09 EDT
(In reply to comment #0)
>     fix ovirt-node logrotate
>     rhbz#633919
> 
> don't know if this is the root cause

It is not. When claiming a regression, please test with an older version that doesn't have this patch.

/var/log/brcm-iscsi.log is opened by brcm_iscsiuio, which is started by the iscsid initscript, and unmount_logging_services doesn't seem to handle this; but there were no recent changes there, so 5.6 should behave the same.
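
For illustration, the kind of handling unmount_logging_services would need might look roughly like the sketch below. This is hedged guesswork, not the patch that was actually applied, and stop_log_holders is a hypothetical helper name.

# Hedged sketch only -- not the actual ovirt-node change.
stop_log_holders() {
    # Anything still holding files open on the filesystem backing /var/log
    # must be stopped before the logging LV can be unmounted.
    if fuser -m /var/log >/dev/null 2>&1; then
        # brcm_iscsiuio is started by the iscsid initscript, so stop it here.
        service iscsid stop
        # Report whatever is still holding /var/log open, if anything.
        fuser -vm /var/log
    fi
}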
Comment 15 Guohua Ouyang 2011-06-16 22:36:02 EDT
Created attachment 505170 [details]
ovirt.log on rhev-hypervisor-5.7-20110616.0.el5

Tested on rhev-hypervisor-5.7-20110616.0.el5: the dirty install still fails at partitioning, with the same error "Can't open /dev/mapper/SATA_WDC_WD3200AAKS-_WD-WMAV27854193p2 exclusively.  Mounted filesystem?"
Comment 16 Mike Burns 2011-06-17 08:03:04 EDT
(In reply to comment #15)
> Created attachment 505170 [details]
> ovirt.log on rhev-hypervisor-5.7-20110616.0.el5
> 
> Tested on rhev-hypervisor-5.7-20110616.0.el5, dirty install still failed at
> partition, it's the same error "Can't open
> /dev/mapper/SATA_WDC_WD3200AAKS-_WD-WMAV27854193p2 exclusively.  Mounted
> filesystem?"

This is because we didn't include the patch in this build.  It should be included in the next build.
Comment 19 Kate Grainger 2011-07-22 02:20:26 EDT
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Previously, you could not reinstall Red Hat Enterprise Virtualization Hypervisor over an existing installation because of a bug in the installation script that would fail to remove all of the volume group data. The bug has been fixed and you can now reinstall Red Hat Enterprise Virtualization Hypervisor as expected.
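
As a rough illustration of what removing all of the volume group data involves at reinstall time, here is a hedged sketch assuming the old HostVG from this report is still present; it is not the shipped installer code.

# Hedged illustration of a full teardown of the old volume group before
# reinstalling; names are taken from this report.
VG=HostVG
DRIVE=/dev/mapper/3600a0b80005b10ca00008e254c7726b1

vgchange -an "$VG"                 # deactivate every LV in the old VG
lvremove -ff "$VG"                 # remove Config, Swap, Root, RootBackup, ...
vgremove -ff "$VG"                 # drop the volume group itself
pvremove -ff -y "${DRIVE}p2"       # clear the PV label from the old partition
# Any leftover HostVG-* nodes under /dev/mapper may still need
# "dmsetup remove" once nothing holds them open.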
Comment 20 errata-xmlrpc 2011-07-27 10:41:11 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-1090.html
