Description of problem:
The default temporary path offered by hosted-engine setup is /tmp. All recent Fedora releases and RHEL 7 use a ramdisk (tmpfs) as the backend for that path. Big files are supposed to go to /var/tmp, which resides on the standard on-disk filesystem.

Version-Release number of selected component (if applicable):
3.6 beta 5

How reproducible:
Always

Steps to Reproduce:
1. Install ovirt-engine-appliance and get to the step..
2. Specify the memory size for the appliance...
3. Press Enter

Actual results:
Not enough space in the temporary directory
Please specify a path to a temporary directory with at least 10 GB [/tmp]:

Expected results:
/var/tmp used as the default temporary directory

Additional info:
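The requested behavior can be sketched as follows. This is a minimal illustration, not the actual otopi/hosted-engine-setup plugin code; the function name and parameters are hypothetical. It prefers /var/tmp (disk-backed) over /tmp (often tmpfs on these distributions) and only offers a candidate with enough free space:

```python
import os
import shutil

def pick_default_tmpdir(candidates=("/var/tmp", "/tmp"),
                        required_bytes=10 * 1024 ** 3):
    """Illustrative sketch: return the first candidate directory with at
    least `required_bytes` free, preferring the disk-backed /var/tmp over
    the (usually tmpfs-backed) /tmp. Returns None if nothing qualifies."""
    for path in candidates:
        if os.path.isdir(path):
            # shutil.disk_usage reports the free space of the filesystem
            # backing `path`; for a tmpfs /tmp that is RAM, not disk.
            if shutil.disk_usage(path).free >= required_bytes:
                return path
    return None
```

With the default 10 GB requirement this would fall through to prompting the user, exactly as the setup does today when /tmp is too small.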
This is an automated message. oVirt 3.6.0 RC1 has been released. This bug has no target release and still has its target milestone set to 3.6.0-rc. Please review this bug and set the target milestone and release to one of the upcoming releases.
Target release should be set once a package build is known to fix an issue. Since this bug is not in the MODIFIED state, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
In oVirt, testing is done on a single release by default, therefore I'm removing the 4.0 flag. If you think this bug must be tested in 4.0 as well, please re-add the flag. Please note we might not have the testing resources to handle the 4.0 clone.
Works for me on these components:
mom-0.5.1-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.6.x86_64
rhevm-appliance-20150107.0-1.el7ev.noarch
libvirt-client-1.2.17-13.el7_2.2.x86_64
vdsm-4.17.18-0.el7ev.noarch
sanlock-3.2.4-2.el7_2.x86_64
ovirt-vmconsole-1.0.0-1.el7ev.noarch
ovirt-host-deploy-1.4.1-1.el7ev.noarch
ovirt-hosted-engine-ha-1.3.3.7-1.el7ev.noarch
ovirt-vmconsole-host-1.0.0-1.el7ev.noarch
ovirt-hosted-engine-setup-1.3.2.3-1.el7ev.noarch
ovirt-setup-lib-1.0.1-1.el7ev.noarch
Linux version 3.10.0-327.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu Oct 29 17:29:29 EDT 2015

I saw that /var/tmp/ was used during appliance image extraction.

# df -ah
Filesystem      Size  Used Avail Use% Mounted on
rootfs             -     -     -    - /
sysfs              0     0     0    - /sys
proc               0     0     0    - /proc
devtmpfs        3.9G     0  3.9G   0% /dev
securityfs         0     0     0    - /sys/kernel/security
tmpfs           3.9G     0  3.9G   0% /dev/shm
devpts             0     0     0    - /dev/pts
tmpfs           3.9G  105M  3.8G   3% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
cgroup             0     0     0    - /sys/fs/cgroup/systemd
pstore             0     0     0    - /sys/fs/pstore
cgroup             0     0     0    - /sys/fs/cgroup/perf_event
cgroup             0     0     0    - /sys/fs/cgroup/blkio
cgroup             0     0     0    - /sys/fs/cgroup/cpuset
cgroup             0     0     0    - /sys/fs/cgroup/memory
cgroup             0     0     0    - /sys/fs/cgroup/hugetlb
cgroup             0     0     0    - /sys/fs/cgroup/freezer
cgroup             0     0     0    - /sys/fs/cgroup/net_cls
cgroup             0     0     0    - /sys/fs/cgroup/devices
cgroup             0     0     0    - /sys/fs/cgroup/cpu,cpuacct
configfs           0     0     0    - /sys/kernel/config
/dev/sda3       145G  8.0G  130G   6% /
selinuxfs          0     0     0    - /sys/fs/selinux
systemd-1          -     -     -    - /proc/sys/fs/binfmt_misc
mqueue             0     0     0    - /dev/mqueue
hugetlbfs          0     0     0    - /dev/hugepages
debugfs            0     0     0    - /sys/kernel/debug
/dev/sda1       190M  132M   49M  74% /boot
tmpfs           783M     0  783M   0% /run/user/0
binfmt_misc        0     0     0    - /proc/sys/fs/binfmt_misc
10.35.64.11:/vol/RHEV/Virt/nsednev_upgrade_he_3_5_6_to_3_5_7_el_6_7_SD1 2.0T 1.9T 172G 92% /rhev/data-center/mnt/10.35.64.11:_vol_RHEV_Virt_nsednev__upgrade__he__3__5__6__to__3__5__7__el__6__7__SD1
/dev/loop1      2.0G  6.1M  1.9G   1% /rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmppd8gxF

# ls /var/tmp/
abrt
systemd-private-7d840b43399f4c9fb892b386f2d8c266-systemd-machined.service-eO0JJb
tmp97_WZq
yum-root-sW5c8t

# ls /var/tmp/yum-root-sW5c8t/
rhev-release-3.6.2-10-001.noarch.rpm

# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
          Continuing will configure this host for serving as hypervisor and create a VM where you have to install the engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:
          Configuration files: []
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160121200447-yi9gwv.log
          Version: otopi-1.4.0 (otopi-1.4.0-1.el7ev)
          It has been detected that this program is executed through an SSH connection without using screen.
          Continuing with the installation may lead to broken installation if the network connection fails.
          It is highly recommended to abort the installation and run it inside a screen session using command "screen".
          Do you want to continue anyway? (Yes, No)[No]: yes
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

          --== STORAGE CONFIGURATION ==--

          During customization use CTRL-D to abort.
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
          Please specify the full shared storage connection path to use (example: host:/path): 10.35.64.11:/vol/RHEV/Virt/nsednev_upgrade_he_3_5_6_to_3_5_7_el_6_7_SD1
[ INFO ] Installing on first host
          Please provide storage domain name. [hosted_storage]:
          Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
          Please enter local datacenter name [hosted_datacenter]:

          --== SYSTEM CONFIGURATION ==--

          --== NETWORK CONFIGURATION ==--

          Please indicate a nic to set ovirtmgmt bridge on: (enp4s0, enp6s0) [enp4s0]:
          iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
          Please indicate a pingable gateway IP address [10.35.64.254]:

          --== VM CONFIGURATION ==--

          Booting from cdrom on RHEL7 is ISO image based only, as cdrom passthrough is disabled (BZ760885)
          Please specify the device to boot the VM from (choose disk for the oVirt engine appliance) (cdrom, disk, pxe) [disk]: disk
[ INFO ] Detecting available oVirt engine appliances
          The following appliance have been found on your system:
          [1] - The RHEV-M Appliance image (OVA) - 20150107.0-1.el7ev
          [2] - Directly select an OVA file
          Please select an appliance (1, 2) [1]:
[ INFO ] Verifying its sha1sum
[ INFO ] Checking OVF archive content (could take a few minutes depending on archive size)
[ INFO ] Checking OVF XML content (could take a few minutes depending on archive size)
[WARNING] OVF does not contain a valid image description, using default.
          Would you like to use cloud-init to customize the appliance on the first boot (Yes, No)[Yes]?
          Would you like to generate on-fly a cloud-init no-cloud ISO image or do you have an existing one (Generate, Existing)[Generate]?
          Please provide the FQDN you would like to use for the engine appliance.
          Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine.
          Engine VM FQDN: (leave it empty to skip): []: nsednev-he-3.qa.lab.tlv.redhat.com
          Automatically execute engine-setup on the engine appliance on first boot (Yes, No)[Yes]?
          Automatically restart the engine VM as a monitored service after engine-setup (Yes, No)[Yes]?
          Please provide the domain name you would like to use for the engine appliance.
          Engine VM domain: [qa.lab.tlv.redhat.com]
          Enter root password that will be used for the engine appliance (leave it empty to skip):
          Confirm appliance root password:
          How should the engine VM network should be configured (DHCP, Static)[DHCP]?
          Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
          Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No]
          The following CPU types are supported by this host:
                 - model_Penryn: Intel Penryn Family
                 - model_Conroe: Intel Conroe Family
          Please specify the CPU type to be used by the VM [model_Penryn]:
          Please specify the number of virtual CPUs for the VM [Defaults to appliance OVF value: 2]:
          You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:2e:66:0f]: 00:16:3E:7C:CC:CC
          Please specify the memory size of the VM in MB [Defaults to maximum available: 6950]:
          Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:

          --== HOSTED ENGINE CONFIGURATION ==--

          Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]:
          Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
          Confirm 'admin@internal' user password:
          Please provide the name of the SMTP server through which we will send notifications [localhost]:
          Please provide the TCP port number of the SMTP server [25]:
          Please provide the email address from which notifications will be sent [root@localhost]:
          Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
[ INFO ] Stage: Setup validation

          --== CONFIGURATION PREVIEW ==--

          Bridge interface                    : enp4s0
          Engine FQDN                         : nsednev-he-3.qa.lab.tlv.redhat.com
          Bridge name                         : ovirtmgmt
          SSH daemon port                     : 22
          Firewall manager                    : iptables
          Gateway address                     : 10.35.64.254
          Host name for web application       : hosted_engine_1
          Host ID                             : 1
          Image size GB                       : 50
          GlusterFS Share Name                : hosted_engine_glusterfs
          GlusterFS Brick Provisioning        : False
          Storage connection                  : 10.35.64.11:/vol/RHEV/Virt/nsednev_upgrade_he_3_5_6_to_3_5_7_el_6_7_SD1
          Console type                        : vnc
          Memory size MB                      : 6950
          MAC address                         : 00:16:3E:7C:CC:CC
          Boot type                           : disk
          Number of CPUs                      : 2
          OVF archive (for disk boot)         : /usr/share/ovirt-engine-appliance/rhevm-appliance-20150107.0-1.el7ev.ova
          Restart engine VM after engine-setup: True
          CPU Type                            : model_Penryn

          Please confirm installation settings (Yes, No)[Yes]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Configuring the management bridge
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO ] Image successfully imported from OVF
[ INFO ] Destroying Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Creating VM
          You can now connect to the VM with the following command:
          /bin/remote-viewer vnc://localhost:5900
          Use temporary password "4618tRPi" to connect to vnc console.
          Please note that in order to use remote-viewer you need to be able to run graphical applications.
          This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
          Otherwise you can run the command from a terminal in your preferred desktop environment.
          If you cannot run graphical applications you can connect to the graphic console from another host or connect to the serial console using the following command:
          socat UNIX-CONNECT:/var/run/ovirt-vmconsole-console/10172798-1fb2-46ea-a3bc-c055ffb21d44.sock,user=ovirt-vmconsole STDIO,raw,echo=0,escape=1
          Please ensure that your Guest OS is properly configured to support serial console according to your distro documentation.
          Follow http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_the_old_way for more info.
          If you need to reboot the VM you will need to start it manually using the command:
          hosted-engine --vm-start
          You can then set a temporary password using the command:
          hosted-engine --add-console-password
[ INFO ] Running engine-setup on the appliance
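The df output above shows why /tmp is unsuitable here: it is tmpfs-backed, so the 10 GB staging space would come out of RAM. For verification, the tmpfs check can be done programmatically instead of eyeballing df. The following is an illustrative Linux-only sketch (not part of the setup tool) that resolves the filesystem type backing a path by picking the longest matching mount point from /proc/mounts:

```python
import os

def fs_type(path):
    """Return the filesystem type backing `path` (Linux only).
    Scans /proc/mounts and picks the longest mount point that is a
    prefix of the resolved path, so nested mounts are handled."""
    path = os.path.realpath(path)
    best, best_fs = "", None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            # /proc/mounts fields: device, mountpoint, fstype, options, ...
            _dev, mnt, fstype = line.split()[:3]
            if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                    and len(mnt) > len(best):
                best, best_fs = mnt, fstype
    return best_fs
```

On a host like the one above, `fs_type("/tmp")` would report "tmpfs" while `fs_type("/var/tmp")` would report the root filesystem's type, confirming that /var/tmp is the safer default.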