Red Hat Bugzilla – Bug 708959
Dirty install sometimes fails at partitioning due to a race condition
Last modified: 2016-04-26 12:43:00 EDT
Description of problem:

PXE auto reinstall (with an old rhev-hypervisor already installed) using "storage_init=/dev/mapper/360*b1 storage_vol=::::: local_boot firstboot" fails at creating the physical volume, because something keeps the device busy:

Can't remove open logical volume "Config"
  Logical volume "Swap" successfully removed
  Logical volume "Root" successfully removed
  Logical volume "RootBackup" successfully removed
  No physical volume label read from /dev/mapper/3600a0b80005b10ca00008e254c7726b1
  Can't open /dev/mapper/3600a0b80005b10ca00008e254c7726b1 exclusively - not removing. Mounted filesystem?

May 30 09:12:35 Wiping old boot sector
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.001669 seconds, 628 MB/s
May 30 09:12:35 Wiping secondary gpt header
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 3.2e-05 seconds, 32 MB/s
May 30 09:12:35 Labeling Drive
device-mapper: remove ioctl failed: Device or resource busy
May 30 09:12:35 Creating boot partition
device-mapper: remove ioctl failed: Device or resource busy
May 30 09:12:36 Creating LVM partition
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
May 30 09:12:36 Toggling boot on
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
May 30 09:12:36 Toggling LVM on
device-mapper: remove ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy

Model: Linux device-mapper (dm)
Disk /dev/mapper/3600a0b80005b10ca00008e254c7726b1: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      0.51kB  52.0MB  52.0MB  primary  ext2         boot
 2      52.0MB  21.5GB  21.4GB  primary               lvm

May 30 09:12:57 Creating physical volume
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.002123 seconds, 494 MB/s

[root@amd-1216-8-5 ~]# lvs
  No volume groups found
[root@amd-1216-8-5 ~]# pvs
[root@amd-1216-8-5 ~]# vgs
  No volume groups found
[root@amd-1216-8-5 ~]# ls /dev/mapper/*
/dev/mapper/3600a0b80005b10ca00008e254c7726b1    /dev/mapper/HostVG-Config  /dev/mapper/HostVG-Logging  /dev/mapper/live-osimg-min
/dev/mapper/3600a0b80005b10ca00008e254c7726b1p2  /dev/mapper/HostVG-Data    /dev/mapper/control         /dev/mapper/live-rw
[root@amd-1216-8-5 ~]# service multipathd status
multipathd is stopped
[root@amd-1216-8-5 ~]# multipath -ll
3600a0b80005b10ca00008e254c7726b1 dm-0 IBM,1726-4xx FAStT
[size=20G][features=1 queue_if_no_path][hwhandler=1 rdac][rw]
\_ round-robin 0 [prio=200][active]
 \_ 0:0:0:0 sda 8:0  [active][ready]
 \_ 3:0:0:0 sde 8:64 [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 0:0:2:0 sdc 8:32 [active][ghost]
 \_ 3:0:1:0 sdg 8:96 [active][ghost]
[root@amd-1216-8-5 ~]# multipath -F
3600a0b80005b10ca00008e254c7726b1: map in use

After analysis, I believe this is what keeps the device busy:

# lsof
brcm_iscs 5602 root 3w REG 0,19 385 18454 /var/log/brcm-iscsi.log

I saw that this commit is in tag ovirt-node-1.0-58.el5:

commit 36445d34cddae3238b9fb14cdd84a729286bcb82
Author: Mike Burns <mburns@redhat.com>
Date:   Tue May 17 08:11:20 2011 -0400

    fix ovirt-node logrotate

    rhbz#633919

I don't know if this is the root cause.

Version-Release number of selected component (if applicable):
rhev-hypervisor-5.7.4

How reproducible:
Always

Steps to Reproduce:
1. Auto install with the parameters above, with another old rhev-hypervisor already installed.

Actual results:
Installation fails at partitioning.

Expected results:
Installation completes successfully.

Additional info:
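For reference, here is a minimal diagnostic sketch of the checks above, assuming (as on this host) that the logging LV is mounted on /var/log and that brcm_iscsiuio is the process holding it open; exact service names may differ per setup:

  # Find and release whatever keeps the logging LV (and thus the dm map) busy.
  fuser -vm /var/log                  # list processes with files open under /var/log
  service iscsid stop                 # iscsid is what starts brcm_iscsiuio
  killall brcm_iscsiuio 2>/dev/null   # make sure the daemon itself has exited
  multipath -F                        # flushing the multipath map should now succeed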
Created attachment 501752 [details]
ovirt.log
Created attachment 501757 [details]
ovirt.log(auto)
(In reply to comment #0)
> fix ovirt-node logrotate
> rhbz#633919
>
> don't know if this is the root cause

It is not; when claiming a regression, please test with the older version that doesn't have this patch.

/var/log/brcm-iscsi.log is opened by brcm_iscsiuio, which is started by the iscsid initscript, and unmount_logging_services doesn't seem to handle this. But there weren't any recent changes here; 5.6 should behave the same.
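For illustration, the kind of handling unmount_logging_services would need might look like the sketch below. This is not the actual ovirt-node code, and the service list is an assumption:

  # Illustrative sketch only -- not the real ovirt-node implementation.
  unmount_logging_services() {
      # Stop every service known to keep files open under /var/log,
      # including iscsid, which spawns brcm_iscsiuio.
      for svc in syslog iscsid; do
          service "$svc" stop >/dev/null 2>&1
      done
      # Kill anything that still holds the logging mount busy, then unmount.
      fuser -km /var/log 2>/dev/null
      umount /var/log
  }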
Created attachment 505170 [details]
ovirt.log on rhev-hypervisor-5.7-20110616.0.el5

Tested on rhev-hypervisor-5.7-20110616.0.el5; a dirty install still fails at partitioning with the same error: "Can't open /dev/mapper/SATA_WDC_WD3200AAKS-_WD-WMAV27854193p2 exclusively. Mounted filesystem?"
(In reply to comment #15)
> Created attachment 505170 [details]
> ovirt.log on rhev-hypervisor-5.7-20110616.0.el5
>
> Tested on rhev-hypervisor-5.7-20110616.0.el5; a dirty install still fails
> at partitioning with the same error: "Can't open
> /dev/mapper/SATA_WDC_WD3200AAKS-_WD-WMAV27854193p2 exclusively. Mounted
> filesystem?"

This is because we didn't include the patch in this build. It should be included in the next build.
http://git.virt.bos.redhat.com/git/?p=ovirt-node/.git;a=commit;h=70bf48cb4c5960e221f54187abe156f66bf6a343
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Previously, you could not reinstall Red Hat Enterprise Virtualization Hypervisor over an existing installation because of a bug in the installation script that failed to remove all of the volume group data. The bug has been fixed, and you can now reinstall Red Hat Enterprise Virtualization Hypervisor as expected.
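For illustration, the cleanup the fixed installation script has to perform on a dirty disk amounts to something like the sketch below. This is a hedged sketch, not the actual patch (the real change is in the commit linked above); the VG name (HostVG) and device path are taken from this bug's logs:

  # Sketch: tear down a previous install before repartitioning.
  vgchange -an HostVG        # deactivate the old logical volumes
  vgremove -ff HostVG        # force-remove the volume group and its LVs
  pvremove -ff /dev/mapper/3600a0b80005b10ca00008e254c7726b1p2    # wipe the PV label
  dmsetup remove 3600a0b80005b10ca00008e254c7726b1p2 2>/dev/null  # drop the stale dm node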
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-1090.html