Red Hat Bugzilla – Bug 1250637
[RHEV-H] RHEV-H 7.1 installation fails while firstboot=1 is set: "TransactionError: 'Transaction failed: Unable to save to file!'"
Last modified: 2016-02-10 14:17:29 EST
Created attachment 1059541 [details]
Description of problem:
RHEV-H 7.1 installation, while firstboot=1 is set, fails. The host had RHEV-H 6.7 installed before.
Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Hypervisor release 7.1 (20150803.0.el7ev)
The RHEV-H version that was on the host before the RHEV-H 7.1 installation:
Install RHEL 7.1 and then install the RHEV-H ISO.
Additional info: Console screenshots
rhev-hypervisor7-7.1-20150803.0 is an unsigned build, so I tested this issue with the latest signed build, rhev-hypervisor7-7.1-20150805.0.
I can't reproduce this issue.
Auto install of RHEV-H 7.1 with
BOOTIF=eth0 hostname=cshaotest.redhat.com storage_init=scsi:36090a038d0f721901d033566b2493f23 adminpw=OKr05SbCu3D3g storage_vol=:500:::: firstboot=1
Auto install with firstboot=1 succeeds.
> Description of problem:
> RHEV-H 7.1 installation, while firstboot=1 is set, fails. The host had
> RHEV-H 6.7 installed before.
I noticed that the host had RHEV-H 6.7 installed before, so I re-tested with this scenario, but I still can't reproduce this issue.
From the attachment, the error reads: there appears to already be an installation on another device: /dev/sda4; the installation cannot proceed until the device is removed.
So please try to remove the PV from the other device and try again.
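For reference, the old HostVG left behind by the previous RHEV-H 6.7 installation can be cleared from a rescue shell roughly like this (a hedged sketch, not an official procedure; /dev/sda4 and the HostVG name are taken from this report, so adjust them for your system, and note these commands are destructive):

```shell
# Deactivate any logical volumes in the leftover volume group
vgchange -an HostVG
# Remove the volume group and all of its logical volumes
vgremove -f HostVG
# Clear the LVM physical-volume label from the old partition
pvremove -ff /dev/sda4
# Optionally wipe any remaining filesystem/LVM signatures
wipefs -a /dev/sda4
```

After this the installer should no longer detect an existing installation on /dev/sda4.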
(In reply to shaochen from comment #2)
> > Description of problem:
> > RHEV-H 7.1 installation, while firstboot=1 is set, fails. The host had
> > RHEV-H 6.7 installed before.
> I noticed that the host had RHEV-H 6.7 installed before, so I re-test it
> again with this scenario, but I still can't reproduce this issue.
> From the attachment, the error info show as: there appears to already be an
> installation on another device: /dev/sda4, the installation cannot process
> until the device is removed.
> So please try to remove PV from another device and try again.
I tried setting it to another device (/dev/sdb, /dev/sdc) using the extra_boot_options. It didn't help.
I'm seeing this same issue, but only intermittently; it seems to work about half the time.
ERROR - ovirtfunctions - There appears to already be an installation on another device:
ERROR - ovirtfunctions - /dev/sda4
ERROR - ovirtfunctions - The installation cannot proceed until the device is removed
ERROR - ovirtfunctions - from the system of the HostVG volume group is removed
This is being installed from PXE.
MENU LABEL rhev-hypervisor7-7.1-20150911.0
append initrd=/images/rhev-hypervisor7-7.1-20150911.0/initrd.img root=live:/rhevh-7.1-20150911.0.el7ev.iso storage_init=/dev/sda rd.live.check crashkernel=256M rootflags=ro rd.dm=0 rootfstype=auto rd_NO_MULTIPATH rd.luks=0 elevator=deadline rhgb rd.md=0 quiet rd.live.image ro max_loop=256 storage_vol=4300:8192:4300:128:8192:-1 adminpw=<passwd_hash_here> rootpw=<passwd_hash_here> management_server=<address_here> rhevm_admin_password=<passwd_hash_here> rhn_activationkey=<key_here> rhn_url=https://<ip_here> rhn_ca_cert=https://<ip_here>pub/RHN-ORG-TRUSTED-SSL-CERT vlan=30: ntp=<ip_here>:<ip_here> dns=<ip_here> gateway=<ip_here> netmask=<mask_here> ip=<ip_here> hostname=<hostname_here> firstboot
Fixed bug tickets must have target milestone set prior to fixing them. Please set the correct milestone and move the bugs back to the previous status after this is corrected.
Jeff -- has the previous installation been wiped, either by manually destroying the VG or by booting rhev-hypervisor with "uninstall"? (Booting with "reinstall" will destroy the VG before trying to install, which should also clear this problem.)
If so, please open a new bug about the appropriate parameter (uninstall|reinstall) not wiping the LVM groups. If not, I'll close this as NOTABUG, since upgrading from EL6 to EL7 (or reinstalling EL7 over EL6 without wiping first) is not supported.
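For example, based on the PXE entry quoted above, adding "reinstall" to the append line should wipe the existing HostVG before installing (a sketch only; the elided options and placeholders from the original entry are kept as-is):

```
append initrd=/images/rhev-hypervisor7-7.1-20150911.0/initrd.img root=live:/rhevh-7.1-20150911.0.el7ev.iso reinstall storage_init=/dev/sda firstboot ...
```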
The previous installation was not wiped; this was a reinstall using the reinstall or firstboot flag. I misread this ticket as describing the same thing I was seeing. I'll open a new bug.