Description of problem:
Following the instructions from the documentation: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html-single/installation_guide/#Advanced_RHVH_Install 6.2.2. Automating Deployment using PXE and Kickstart. The only change to the example PXE kickstart was the path to squashfs.img on the DVD:

liveimg --url=http://IP_of_tftp_server/tftp/images/RHVH/4.0-20170307.1/LiveOS/squashfs.img

The install fails with:

There was an error running the kickstart script at line 11. This is a fatal error and installation will be aborted. The details of this error are:
/tmp/ks-script-GvwdxN: line 1: nodectl: command not found

Version-Release number of selected component (if applicable):
Documentation from access.redhat.com

How reproducible:
100%

Steps to Reproduce:
1. Follow the instructions in the docs.
2. Look for squash.img (the filename used in the docs) and find squashfs.img instead.
3. Change the kickstart example from the docs accordingly and try to install RHVH.
4. Installation fails.

Actual results:
Installation failed.

Expected results:
Installation should work, but for that we need correct information in the documentation.

Additional info:
The ks.cfg which is attached on the DVD ISO shows another way to generate the squashfs with the RHVH image.
The ks.cfg on the DVD is not generating the squashfs -- it's extracting it onto a ramdisk from a package which is available on real media (/Packages), but not over PXE. I would suspect that something is wrong with your installation environment, or that the squashfs was not installed at all. Can you post your kickstart?
Pavol, ping?
Closing with insufficient data, please reopen if you have the requested info
Created attachment 1294498 [details] RHV Host kickstart template (Foreman)
I can confirm this issue. Red Hat staff, please find a recording of the install at https://drive.google.com/a/redhat.com/file/d/0B8WpqWTdKqS-OTV6WEg2ZTJCaGs/view?usp=sharing. The installation source used was RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso, loopback mounted and exposed via HTTP.
Hi Mark - It's extremely likely that the loopback mounted method is using Anaconda's squashfs.img, not the RHVH squashfs (which would require unpacking the RPM in %pre, similar to the scripts on the ISO). In addition, installations of 4.0 must use "imgbase --init", not nodectl.
Created attachment 1294796 [details] Actual kickstart
Created attachment 1294797 [details] Anaconda log
According to #c5 and #c7, this bug could be reproduced even with "imgbase layout --init" in the %post section. According to #c6, I adjusted the ks file as:

%pre
cd /tmp
rpm2cpio http://<IP>/rhvh/Packages/redhat-virtualization-host-image-update-4.0-20170307.1.el7_3.noarch.rpm | cpio -ivd
squashfs=$(find | grep squashfs | grep -v meta)
ln -s $squashfs /tmp/squashfs
%end

liveimg --url=file:///tmp/squashfs
clearpart --all
autopart --type=thinp
rootpw --plaintext ovirt
timezone --utc UTC
zerombr
text
reboot

%post --erroronfail
nodectl init
%end

Installation succeeded. So, the misleading part of that doc is the value of "liveimg --url"; it should add a %pre section and change "liveimg --url" as above.
I would suggest instead that the installation instructions should include directions to extract the squashfs from the RPM separately, and include that in %pre if necessary. The ISO does not need to be loopback mounted for this to work, as the squashfs can be kept elsewhere.
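For illustration, the pre-extraction suggested above could be sketched as follows. This is only a sketch: the web root path, server name, and RPM filename are placeholders, and the exact squashfs filename inside the RPM should be checked rather than assumed.

```
# On the machine serving the installation artifacts (any directory
# reachable over HTTP/FTP/NFS works; /path/to/webroot is a placeholder):
cd /path/to/webroot/rhvh

# Copy the image-update RPM from the DVD's /Packages directory here,
# then unpack its payload; the squashfs is one of the files inside.
rpm2cpio redhat-virtualization-host-image-update-<version>.noarch.rpm | cpio -ivd

# Locate the squashfs (excluding the -meta variant) and give it a stable name:
ln -s "$(find . -name '*squashfs.img' | grep -v meta)" squashfs.img
```

The kickstart's liveimg line can then point at http://<server>/rhvh/squashfs.img with no %pre script required.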
Moving to Documentation. Ryan, can you confirm that the changes suggested by Qin in comment 9 are what we should document? If not, could you provide the steps you think should be added?
I strongly disagree with using %pre in the KS outside of cdrom installs. Instead, the customer should extract the RHVH squashfs with rpm2cpio.
(In reply to Ryan Barry from comment #12) > I strongly disagree with using %pre in the KS outside of cdrom installs. > Instead, the customer should extract the rhvh squashfs with rpm2cpio Can you specify where in the existing procedure the customer should extract the squashfs using rpm2cpio? Before they create the kickstart? And then does 'liveimg --url=URL/to/squashfs.img' path need to change?
Ideally, while they host the stage2 and create the kickstart. The --url bit is ok as-is. Using a %pre script will only work if the ISO is loopback mounted, and we expect that most customers will be using PXE without loopback, which is why I'm against
If you put the redhat-virtualization-host-image-update*.rpm in a publicly available directory, such as one exposed via HTTP, then you can use the %pre script to extract the squashfs from that RPM. From my point of view, using a %pre script is not related to whether the ISO is loopback mounted or not; it's just like putting the squashfs in a publicly available directory and using it in the kickstart file as "liveimg --url=URL/to/squashfs.img". I agree with the point that it's better to extract the squashfs from the RPM and put it in a publicly available directory in advance.
(In reply to Qin Yuan from comment #15) > If you put the redhat-virtualization-host-image-update*.rpm in a publicly > available directory, such as one exposed via HTTP, then you can use the %pre > script to extract the squashfs from that RPM. From my point of view, using a %pre > script is not related to whether the ISO is loopback mounted or not; it's > just like putting the squashfs in a publicly available directory and using it > in the kickstart file as "liveimg --url=URL/to/squashfs.img". > > I agree with the point that it's better to extract the squashfs from the RPM and put > it in a publicly available directory in advance. Ryan, does this comment change your view, or can we proceed with just adding steps to extract the squashfs? To where should it be extracted? I'm not sure what qualifies as a publicly available directory. If you can provide an example command, that would be ideal.
From my point of view, I suppose I'd prefer a direct extraction, but something like:

# yum install nginx
# cd /usr/share/nginx/html
# wget redhat-virtualization-host-image-update
# rpm2cpio ...

^ use the nginx server for liveimg
(In reply to Ryan Barry from comment #17)
> From my point of view, I suppose I'd prefer a direct extraction, but
> something like:
>
> # yum install nginx
> # cd /usr/share/nginx/html
> # wget redhat-virtualization-host-image-update
> # rpm2cpio ...
>
> ^ use the nginx server for liveimg

IMO use of a %pre section is overkill. Just state how to extract the image and that it should be put "online" for download. Please do NOT state details of how to configure a web server, and never recommend putting user content into /usr, which is "owned" by RPMs. People who are interested in PXE should know the details about DHCP, TFTP, and getting the rest of the bootable images via various protocols.
Just as an aside, /usr/share/nginx/html is the default webroot for nginx, which is why it was suggested
Jiri, Ryan, can you please confirm what you'd like us to document here? I agree that asking users to install and configure a web server is out of scope. Is there some generic way we can advise users to proceed? "Extract the image to a publicly accessible directory."?
I completely agree that configuring any webserver is overkill. I would probably say "a publicly accessible path" and link to the anaconda docs for liveimg, which will make the options (nfs, ftp, http, etc.) clear.
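Put together, the documented kickstart might then contain a fragment like the one below. The server name is illustrative; see the anaconda documentation for liveimg for the full list of supported URL schemes, and note that the %post command follows the working example from comment 9.

```
# Point liveimg at the pre-extracted squashfs on any publicly accessible
# path; http(s), ftp, nfs, and file URLs are accepted by liveimg.
liveimg --url=http://example.com/rhvh/squashfs.img

%post --erroronfail
nodectl init
%end
```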
*** Bug 1474066 has been marked as a duplicate of this bug. ***
Thanks, Ryan. Assigning to Tahlia for review. Tahlia, see comment 21 for final instructions.
Reviewed and merged
Now published at https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/installation_guide/advanced_rhvh_install