Bug 1136300 - [rhevh][el7] Can't install - Exception: RuntimError('Failed to partition/format',) appears at 33 % of installation
Keywords:
Status: CLOSED DUPLICATE of bug 1095081
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.5.0
Assignee: Ryan Barry
QA Contact: Virtualization Bugs
URL:
Whiteboard: node
Depends On:
Blocks: rhev35betablocker rhev35rcblocker rhev35gablocker
 
Reported: 2014-09-02 09:59 UTC by Jiri Belka
Modified: 2016-02-10 20:03 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-30 11:38:57 UTC
oVirt Team: Node


Attachments (Terms of Use)
screenshot (12.99 KB, application/octet-stream)
2014-09-02 09:59 UTC, Jiri Belka

Description Jiri Belka 2014-09-02 09:59:00 UTC
Created attachment 933673 [details]
screenshot

Description of problem:
I can't install RHEV-H; it always gets stuck at 33% of the installation and I see:

~~~
Exception: RuntimeError('Failed to partition/format',) appears at 33 % of installation
~~~

and systemd output also appears there: [ OK ] Created slice user-0.slice.

See screenshot.

This is especially annoying because no other information is given. Please consider showing more text output from the installer on another console (like tail -f ...). Also, please consider making a shell accessible on another console; currently there are only gettys there, which is useless.

It's true the disk was not cleaned, but I used the 'firstboot' kernel argument, which should make the installation proceed anyway.

My configuration of pxe:

~~~
LABEL rhevh-7.0-20140827.0.el7ev
MENU LABEL ^interactive - rhevh-7.0-20140827.0.el7ev
        KERNEL images/RHEVH/rhevh-7.0-20140827.0.el7ev/vmlinuz0
        APPEND rootflags=loop initrd=images/RHEVH/rhevh-7.0-20140827.0.el7ev/initrd0.img root=live:/rhevh-7.0-20140827.0.el7ev.iso rootfstype=auto ro rd.live.image rd.live.check rd.lvm=0 rd_NO_MULTIPATH rootflags=ro crashkernel=128M elevator=deadline install max_loop=256 rd.luks=0 rd.md=0 rd.dm=0 firstboot
~~~

Version-Release number of selected component (if applicable):
rhevh-7.0-20140827.0.el7ev

How reproducible:
???

Steps to Reproduce:
1. No real idea; just netboot via PXE with some MBR/partitions left on the disk (although I tried to `dd' over the first 100 MB from /dev/zero and it didn't help).
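The disk-cleaning attempt mentioned in step 1 can be sketched as follows. This is a hypothetical illustration, not from the report: it runs against a scratch disk image file so it is safe to execute; point DISK at a real device (e.g. /dev/sdX) only with extreme care, since the commands are destructive there.

~~~
# DISK is a placeholder standing in for the target disk.
DISK=./scratch-disk.img

# Create a 200 MB sparse image to stand in for the disk in this sketch.
truncate -s 200M "$DISK"

# Zero the first 100 MB, as attempted in the report; this clears the
# MBR and primary GPT but can miss signatures located further in.
dd if=/dev/zero of="$DISK" bs=1M count=100 conv=notrunc 2>/dev/null

# wipefs erases filesystem/LVM/RAID signatures wherever they sit on
# the device, which plain dd over the first megabytes can miss.
# Guarded so the sketch still runs where wipefs is not installed.
command -v wipefs >/dev/null 2>&1 && wipefs -a "$DISK"
~~~

wipefs is worth noting here because zeroing only the start of the disk leaves the backup GPT and mid-disk LVM metadata intact, which matches the reporter's observation that the dd attempt did not help.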

Actual results:
can't install

Expected results:
Installation should succeed; the installer should force-wipe the MBR, partitions, and anything else on the disk.

Additional info:

Comment 1 Fabian Deutsch 2014-09-02 16:24:24 UTC
Could it be that the disks you tried to install RHEV-H on contained a partition table or some filesystems?

Comment 2 Jiri Belka 2014-09-03 07:01:20 UTC
IIRC I tried both with a "dirty" disk and later with a cleaned disk (dd if=/dev/zero...). In any case, I used 'firstboot' as a kernel argument during boot, which should make the installer ignore anything on the disk.

Comment 3 Ying Cui 2014-09-03 07:33:52 UTC
Jiri, this bug appears to be a duplicate of Bug 1095081 - Reinstalling rhevh7.0 hang on disk partitioning and creating file system [POST].
You can re-check your issue when a new ovirt-node/rhevh build is produced this week.
You also need to boot the image with enforcing=0, because there are some SELinux
issues in such scratch builds, and please provide the audit.log as well.
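For reference, the suggestion above amounts to appending enforcing=0 to the PXE APPEND line shown in the description (abbreviated here; paths as in the original config):

~~~
APPEND rootflags=loop initrd=images/RHEVH/rhevh-7.0-20140827.0.el7ev/initrd0.img root=live:/rhevh-7.0-20140827.0.el7ev.iso ... firstboot enforcing=0
~~~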

Comment 5 Ryan Barry 2014-09-24 16:34:26 UTC
This strongly appears to be a duplicate of 1095081, the root cause of which is a behavior change in LVM, which now prompts for input if it thinks you're trying to create an LV that already exists.
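The LVM behavior change described above can be sketched as follows. This is a hypothetical illustration, not from the bug: the volume group name HostVG and LV name Data are placeholders, and the lvcreate call is guarded so the sketch does nothing on a machine without that volume group. Newer LVM detects a stale signature where the new LV would live and prompts "Wipe it? [y/n]", which hangs a non-interactive installer; --yes answers prompts automatically and --wipesignatures y wipes detected signatures without asking.

~~~
VG=HostVG   # placeholder volume group name, not taken from the report

if vgs "$VG" >/dev/null 2>&1; then
    # Non-interactive LV creation: no prompt even over stale signatures,
    # so it is safe to call from an unattended installer script.
    lvcreate --yes --wipesignatures y -L 1G -n Data "$VG"
else
    echo "volume group $VG not present; sketch only, nothing done"
fi
~~~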

Comment 6 Fabian Deutsch 2014-09-30 11:37:43 UTC
According to comment 5 this seems to be a dupe of bug 1095081. Moving this bug to ON_QA; a build will be named shortly.

Comment 7 Fabian Deutsch 2014-09-30 11:38:57 UTC

*** This bug has been marked as a duplicate of bug 1095081 ***

