Bug 1435257 - [Docs][Install] Update advanced RHVH Install instructions
Summary: [Docs][Install] Update advanced RHVH Install instructions
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: Documentation
Version: 4.0.7
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: ovirt-4.1.5
Assignee: Tahlia Richardson
QA Contact: Byron Gravenorst
URL:
Whiteboard:
Duplicates: 1474066
Depends On:
Blocks:
 
Reported: 2017-03-23 13:27 UTC by Pavol Brilla
Modified: 2019-05-07 13:11 UTC
CC: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-29 05:45:24 UTC
oVirt Team: Docs
Target Upstream Version:
Embargoed:
rbarry: needinfo-


Attachments
RHV Host kickstart template (Foreman) (457 bytes, text/plain)
2017-07-05 08:40 UTC, Mark Keir
Actual kickstart (476 bytes, text/plain)
2017-07-06 04:35 UTC, Mark Keir
Anaconda log (10.08 KB, text/plain)
2017-07-06 04:36 UTC, Mark Keir

Description Pavol Brilla 2017-03-23 13:27:09 UTC
Description of problem:
Following instructions from documentation:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html-single/installation_guide/#Advanced_RHVH_Install

6.2.2. Automating Deployment using PXE and Kickstart

The only change to the example PXE configuration was the path to squashfs.img on the DVD:

liveimg --url=http://IP_of_tft_server/tftp/images/RHVH/4.0-20170307.1/LiveOS/squashfs.img

Install fails with:
There was an error running the kickstart script at line 11.  This is a fatal
error and installation will be aborted.  The details of this error are:
  
  /tmp/ks-script-GvwdxN: line 1: nodectl: command not found


Version-Release number of selected component (if applicable):
Documentation from access.redhat.com

How reproducible:
100%

Steps to Reproduce:
1. Follow the instructions in the docs.
2. Look for squash.img (the file name used in the docs) and find it.
3. Change the kickstart example from the docs and try to install RHVH.
4. The installation fails.

Actual results:
Installation failed

Expected results:
Installation should work, but that requires correct information in the documentation.

Additional info:
The ks.cfg attached to the DVD ISO shows another way to generate the squashfs with the RHVH image.

Comment 1 Ryan Barry 2017-03-23 13:38:26 UTC
The ks.cfg in the DVD is not generating the squashfs -- it's extracting it onto a ramdisk from a package which is available on real media (/Packages), but not over PXE.
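
For illustration, a minimal sketch of that kind of %pre extraction, assuming the install media is mounted at Anaconda's usual /run/install/repo (the package path and glob are placeholders, not the verbatim DVD script):

%pre
# Unpack the image-update RPM from the mounted install media onto the ramdisk.
# Mount point and package name are assumptions for illustration only.
cd /tmp
rpm2cpio /run/install/repo/Packages/redhat-virtualization-host-image-update-*.rpm | cpio -idm
%end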

I would suspect that something is wrong with your installation environment, or that the squashfs was not installed at all. Can you post your kickstart?

Comment 2 Ryan Barry 2017-04-03 15:10:05 UTC
Pavol, ping?

Comment 3 Sandro Bonazzola 2017-04-10 15:19:10 UTC
Closing as insufficient data; please reopen if you have the requested info.

Comment 4 Mark Keir 2017-07-05 08:40:16 UTC
Created attachment 1294498 [details]
RHV Host kickstart template (Foreman)

Comment 5 Mark Keir 2017-07-05 08:42:11 UTC
I can confirm this issue.

Red Hat staff, please find a recording of the install at https://drive.google.com/a/redhat.com/file/d/0B8WpqWTdKqS-OTV6WEg2ZTJCaGs/view?usp=sharing.

The installation source used was RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso loop back mounted and exposed via HTTP.

Comment 6 Ryan Barry 2017-07-05 14:47:51 UTC
Hi Mark -

It's extremely likely that the loopback mounted method is using Anaconda's squashfs.img, not the RHVH squashfs (which would require unpacking the RPM in %pre, similar to the scripts on the ISO).

In addition, installations of 4.0 must use "imgbase --init", not nodectl.
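
For illustration, the %post would then look something like this minimal sketch (comment 9 below uses the fuller "imgbase layout --init" subcommand form):

%post --erroronfail
# On RHVH 4.0, initialize the imgbased layout instead of calling nodectl.
imgbase layout --init
%end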

Comment 7 Mark Keir 2017-07-06 04:35:26 UTC
Created attachment 1294796 [details]
Actual kickstart

Comment 8 Mark Keir 2017-07-06 04:36:00 UTC
Created attachment 1294797 [details]
Anaconda log

Comment 9 Qin Yuan 2017-07-06 10:37:35 UTC
According to #c5 and #c7, this bug could be reproduced even with "imgbase layout --init" in the %post section.

According to #c6, I adjusted the ks file as:

%pre
# Unpack the image-update RPM from the HTTP share, locate the extracted
# squashfs, and symlink it to the fixed path used by liveimg below.
cd /tmp
rpm2cpio http://<IP>/rhvh/Packages/redhat-virtualization-host-image-update-4.0-20170307.1.el7_3.noarch.rpm|cpio -ivd
squashfs=$(find|grep squashfs|grep -v meta)
ln -s $squashfs /tmp/squashfs
%end

liveimg --url=file:///tmp/squashfs

clearpart --all
autopart --type=thinp
rootpw --plaintext ovirt
timezone --utc UTC
zerombr
text

reboot

%post --erroronfail
nodectl init
%end

Installation succeeded.

So the misleading part of that doc is the value of "liveimg --url"; the doc should add the %pre section and change "liveimg --url" as above.

Comment 10 Ryan Barry 2017-07-06 10:49:08 UTC
I would suggest instead that the installation instructions should include directions to extract the squashfs from the RPM separately, and include that in %pre if necessary.

The ISO does not need to be loopback mounted for this to work, as the squashfs can be kept elsewhere.
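
As a rough sketch of that approach (the RPM file name, extracted path, and served directory are placeholders), run once on whatever machine serves the installation files:

# Unpack the image-update RPM, then host only the extracted squashfs.
rpm2cpio redhat-virtualization-host-image-update-<version>.rpm | cpio -idm
# Locate the extracted image (mirrors the find from comment 9) and copy it
# to a directory that is already served over HTTP/FTP/NFS.
find . | grep squashfs | grep -v meta
cp <extracted_path>/squashfs.img /path/served/by/webserver/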

Comment 11 Lucy Bopf 2017-07-11 00:15:46 UTC
Moving to Documentation.

Ryan, can you confirm that the changes suggested by Qin in comment 9 are what we should document? If not, could you provide the steps you think should be added?

Comment 12 Ryan Barry 2017-07-11 00:32:52 UTC
I strongly disagree with using %pre in the KS outside of cdrom installs. Instead, the customer should extract the rhvh squashfs with rpm2cpio

Comment 13 Lucy Bopf 2017-07-11 01:49:26 UTC
(In reply to Ryan Barry from comment #12)
> I strongly disagree with using %pre in the KS outside of cdrom installs.
> Instead, the customer should extract the rhvh squashfs with rpm2cpio

Can you specify where in the existing procedure the customer should extract the squashfs using rpm2cpio? Before they create the kickstart? And then does the 'liveimg --url=URL/to/squashfs.img' path need to change?

Comment 14 Ryan Barry 2017-07-11 02:37:51 UTC
Ideally, while they host the stage2 and create the kickstart. The --url bit is ok as-is.

Using a %pre script will only work if the ISO is loopback mounted, and we expect that most customers will be using PXE without loopback, which is why I'm against it.

Comment 15 Qin Yuan 2017-07-12 06:22:20 UTC
If you put the redhat-virtualization-host-image-update*.rpm in a publicly available directory, such as one exposed via HTTP, then you can use the %pre script to extract the squashfs from that RPM. From my point of view, using a %pre script is not related to whether the ISO is loopback mounted or not; it's just like putting the squashfs in a publicly available directory and using it in the kickstart file as "liveimg --url=URL/to/squashfs.img".

I agree with the point that it's better to extract the squashfs from the RPM and put it in a publicly available directory in advance.

Comment 16 Lucy Bopf 2017-07-18 00:29:58 UTC
(In reply to Qin Yuan from comment #15)
> If you put the redhat-virtualization-host-image-update*.rpm in a publicly
> available directory, such as one exposed via HTTP, then you can use the %pre
> script to extract the squashfs from that RPM. From my point of view, using a
> %pre script is not related to whether the ISO is loopback mounted or not;
> it's just like putting the squashfs in a publicly available directory and
> using it in the kickstart file as "liveimg --url=URL/to/squashfs.img".
> 
> I agree with the point that it's better to extract the squashfs from the RPM
> and put it in a publicly available directory in advance.

Ryan, does this comment change your view, or can we proceed with just adding steps to extract squashfs? To where should it be extracted? I'm not sure what qualifies as a publicly available directory.

If you can provide an example command, that would be ideal.

Comment 17 Ryan Barry 2017-07-18 02:01:57 UTC
From my point of view, I suppose I'd prefer a direct extraction, but something like:

# yum install nginx
# cd /usr/share/nginx/html
# wget redhat-virtualization-host-image-update 
# rpm2cpio ...

^ use the nginx server for liveimg
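
Filled in with placeholder names (the mirror URL and RPM version are hypothetical, and the webroot follows the nginx default noted in comment 19), the sequence would be roughly:

# yum install nginx
# cd /usr/share/nginx/html
# wget http://<mirror>/redhat-virtualization-host-image-update-<version>.rpm
# rpm2cpio redhat-virtualization-host-image-update-<version>.rpm | cpio -idm
# systemctl start nginx

The kickstart's liveimg --url then points at the extracted squashfs under this webroot.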

Comment 18 Jiri Belka 2017-07-18 06:23:49 UTC
(In reply to Ryan Barry from comment #17)
> From my point of view, I suppose I'd prefer a direct extraction, but
> something like:
> 
> # yum install nginx
> # cd /usr/share/nginx/html
> # wget redhat-virtualization-host-image-update 
> # rpm2cpio ...
> 
> ^ use the nginx server for liveimg

IMO use of the %pre section is overkill. Just state how to extract the image and that it should be put "online" for download. Please do NOT state details of how to configure a web server, and never recommend putting user content into /usr, which is "owned" by RPMs. People who are interested in PXE should know the details of DHCP, TFTP, and fetching the rest of the bootable images via various protocols.

Comment 19 Ryan Barry 2017-07-18 08:49:18 UTC
Just as an aside, /usr/share/nginx/html is the default webroot for nginx, which is why it was suggested.

Comment 20 Lucy Bopf 2017-07-25 00:25:59 UTC
Jiri, Ryan, can you please confirm what you'd like us to document here?

I agree that asking users to install and configure a web server is out of scope. Is there some generic way we can advise users to proceed?

"Extract the image to a publicly accessible directory."?

Comment 21 Ryan Barry 2017-07-25 01:17:23 UTC
I completely agree that configuring any webserver is overkill.

I would probably say "a publicly accessible path" and link to the anaconda docs for liveimg, which will make the options (nfs, ftp, http, etc.) clear.
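
For example, with placeholder hosts and paths (see the Anaconda kickstart documentation for liveimg for the full list of supported protocols):

liveimg --url=http://<server>/rhvh/squashfs.img
liveimg --url=ftp://<server>/rhvh/squashfs.img
liveimg --url=file:///tmp/squashfs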

Comment 22 Lucy Bopf 2017-07-25 01:51:54 UTC
*** Bug 1474066 has been marked as a duplicate of this bug. ***

Comment 23 Lucy Bopf 2017-07-25 02:44:30 UTC
Thanks, Ryan.

Assigning to Tahlia for review. Tahlia, see comment 21 for final instructions.

Comment 34 Byron Gravenorst 2017-08-29 05:28:06 UTC
Reviewed and merged

