Bug 1500811 - RHVH 4.1 latest RHEL 7.4 fails with multiple disks
Summary: RHVH 4.1 latest RHEL 7.4 fails with multiple disks
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: python-blivet
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Blivet Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks: ovirt-node-ng-43-el76-platform
 
Reported: 2017-10-11 14:18 UTC by ldomb
Modified: 2022-03-13 14:29 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-18 09:22:00 UTC
Target Upstream Version:
Embargoed:


Attachments
anaconda logs (731.00 KB, application/x-tar)
2017-10-11 14:18 UTC, ldomb

Description ldomb 2017-10-11 14:18:43 UTC
Created attachment 1337262 [details]
anaconda logs

Description of problem:
With the newest build, RHVH-4.1-20170925.0-RHVH-x86_64-dvd1.iso, the RHVH install fails when creating disks. I can use the same kickstart with the RHEL 7.3 release and it works perfectly.

Version-Release number of selected component (if applicable):


How reproducible:

Steps to Reproduce:
1. Install a 3-node HCI cluster with the boot partition NOT on the first disk
2. Reinstall the RHVH nodes with the image mentioned above, using a kickstart defining clearpart --all --initlabel (see the sketch after these steps)
3. Run the install
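
For illustration only, a minimal sketch of the storage section of such a kickstart; the thinp autopart type is an assumption based on RHVH defaults, not taken from the attached logs:

  # Wipe all targeted disks and re-initialize their disk labels
  clearpart --all --initlabel
  # Auto-partition with thin provisioning, as the RHVH install class expects
  autopart --type=thinp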

Actual results:
Installation fails

Expected results:
Installation should work

Additional info:

anaconda logs and storage logs in attachment. 

As a workaround, I can use the stage2 install of 7.3 with the squashfs image of 7.4, and the install works.
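
For illustration, one reading of that workaround as a boot entry (a hedged sketch; the server name and paths are hypothetical):

  # 7.3 boot kernel/initrd, pointed at the 7.4 installer tree
  KERNEL rhel73/vmlinuz
  APPEND initrd=rhel73/initrd.img inst.stage2=http://server.example.com/rhvh74/

Here inst.stage2= takes a URL to the directory containing LiveOS/squashfs.img.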

Comment 2 Ryan Barry 2017-10-11 14:23:08 UTC
Samantha -

From the logs, it looks like the autopart requests are doing something wrong here, but I wasn't able to reproduce.

This is not reproducible with Anaconda from 7.3. But on 7.4, despite the installclass, Anaconda is attempting to create:

/boot
PV
 - swap
thin PV
 - everything else

There are 2 problems:

* swap should be on the thin PV
* the 'normal' PV is taking the entire disk, which leaves no space.

We haven't seen this before, and don't have a reproducer (Laurent does), but it's pretty apparent from the logs.

Any ideas here?

Comment 3 Vratislav Podzimek 2017-10-27 08:07:41 UTC
Please attach the program.log from the installation.

Comment 5 Samantha N. Bueno 2018-02-01 14:56:59 UTC
Fortunately program.log is dumped in the anaconda-tb-* file. This looks relevant:
16:58:31,729 INFO program: Running... lvm lvcreate --thinpool rhvh_hosted-engine1/pool00 --size 79224m --poolmetadatasize 40 --chunksize 64 --config  devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] filter=["r|/sda$|","r|/sdb$|","r|/sdc$|"] } 
16:58:31,756 INFO program:   /usr/sbin/modprobe failed: 1
16:58:31,756 INFO program:   thin-pool: Required device-mapper target(s) not detected in your kernel.
16:58:31,756 INFO program:   Run `lvcreate --help' for more information.
16:58:31,756 DEBUG program: Return code: 3

The traceback itself looks like it may be an issue with our storage library, so I'm flipping components.
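
For reference, the missing target can be confirmed by hand from the installer shell; a minimal sketch, assuming modprobe and dmsetup are available in the runtime:

  # Try to load the thin-provisioning module, then list device-mapper targets
  modprobe dm-thin-pool
  dmsetup targets | grep thin-pool

If the grep comes back empty, lvcreate --thinpool fails exactly as in the log above.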

Comment 6 David Lehman 2018-02-01 17:34:20 UTC
Looks to me like Samantha nailed it. Something went wrong with the compose, causing the required kernel modules to not be available (or loaded).

Comment 7 Sandro Bonazzola 2018-10-10 15:27:37 UTC
Any update on this?
Any chance a fix can get into RHEL 7.6?

Comment 8 David Lehman 2018-10-15 13:17:35 UTC
I don't believe there is a problem in blivet. It looks like the runtime images do not contain the dm-thin kernel module. I don't know how the installation media is created, but I would take a look at that process to find the problem.
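
One hedged way to check the media for the module (paths follow the usual RHEL 7 tree layout and are illustrative):

  # Look for the thin-provisioning module in the installer initrd
  lsinitrd images/pxeboot/initrd.img | grep dm-thin
  # The stage2 squashfs wraps a rootfs image; extract and mount it to inspect
  unsquashfs -d /tmp/sq LiveOS/squashfs.img
  mount -o loop,ro /tmp/sq/LiveOS/rootfs.img /mnt
  find /mnt/usr/lib/modules -name 'dm-thin*'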

Comment 9 David Lehman 2018-11-12 15:23:21 UTC
See previous comment. I don't know what your process is for creating the installer runtime images, but it seems to be the problem.

Please reassign to an appropriate component, as I don't know the RHV/oVirt stack well enough to do so correctly.

Comment 10 Yuval Turgeman 2018-11-28 10:49:12 UTC
You mentioned the 7.3 kickstart - is this a PXE installation? If so, please make sure you're using the correct kernel+initrd combination. The modprobe error usually happens with wrong PXE configurations, when the kernel version and the version of the modules in the initrd do not match.
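
For illustration, a correct PXE entry takes the kernel, initrd, and stage2 from the same compose; a hedged pxelinux.cfg sketch with hypothetical paths:

  LABEL rhvh
    KERNEL rhvh/vmlinuz
    # initrd and stage2 must come from the same media as the kernel above
    APPEND initrd=rhvh/initrd.img inst.stage2=http://server.example.com/rhvh/

A stale initrd= left over from an older compose reproduces exactly this modprobe failure.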

Comment 11 Jaroslav Spanko 2018-12-13 13:47:57 UTC
Hi
Looks like this also affects the 4.2 image; we have the same error about device-mapper not being detected for one of our customers.
Ryan, have you seen it on 4.2 by chance?
Thanks !

Comment 12 Ryan Barry 2018-12-13 14:09:46 UTC
I haven't, but I moved to a different RHV team in September.

Which 4.2 version? The underlying version of RHEL is important.

Comment 13 Jaroslav Spanko 2018-12-13 14:50:42 UTC
Hi Ryan 
Sorry for the ping then; it's redhat-virtualization-host-4.2-20181026, so RHEL 7.6.
Thx

Comment 14 Yuval Turgeman 2018-12-13 15:35:15 UTC
Jaroslav, is your PXE environment set up correctly, with the kernel+initrd that match this version of RHVH?

Comment 15 Jaroslav Spanko 2018-12-18 09:10:05 UTC
Hi Yuval 
You were right; the customer fixed the PXE config and the installation was successful.
Thanks for the hint!

Comment 16 Yuval Turgeman 2018-12-18 09:22:00 UTC
Thanks for the update! Closing this according to comment 15.

