Bug 826523

Summary: Ovirt-engine setup failed: Failed to configure NFS share on this host
Product: [Retired] oVirt
Reporter: Joost Ringoot <joost.ringoot>
Component: ovirt-engine-installer
Assignee: Juan Hernández <juan.hernandez>
Status: CLOSED WORKSFORME
Severity: medium
Priority: unspecified
Version: unspecified
CC: acathrow, dyasny, iheim, mgoldboi, ykaul
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: integration
Doc Type: Bug Fix
Last Closed: 2012-06-14 14:09:47 UTC
Type: Bug
Attachments:
  the log mentioned in the error message (flags: none)
  enginecleanup (flags: none)
  enginesetup (flags: none)
  ovirt-setup transcript (flags: none)

Description Joost Ringoot 2012-05-30 12:20:53 UTC
Created attachment 587709 [details]
the log mentioned in the error message

Description of problem:
Failed to complete the engine-setup successfully

Version-Release number of selected component (if applicable):
[root@cal-26 ~]# yum info ovirt-engine-setup.noarch
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
Name        : ovirt-engine-setup
Arch        : noarch
Version     : 3.0.0.0001
Release     : 12.fc17
Size        : 303 k
Repo        : installed
From repo   : fedora
Summary     : Setup and upgrade scripts for Open Virtualization Manager
URL         : http://www.ovirt.org
License     : ASL 2.0
Description : Setup and upgrade scripts for Open Virtualization Manager.

[root@cal-26 ~]# 


How reproducible:

running "engine-setup"
Steps to Reproduce:
1.[root@cal-26 ~]# engine-cleanup
WARNING: Executing oVirt Engine cleanup utility.
This utility will wipe all existing data including configuration settings, certificates and database.
In addition, all existing DB connections will be closed.
Would you like to proceed? (yes|no): y

Stopping Engine Service...                               [ DONE ]
Stopping all connections to DB...                        [ DONE ]
Removing Database...                                     [ DONE ]
Removing CA...                                           [ DONE ]

Cleanup finished successfully!
Cleanup log available at /var/log/ovirt-engine/engine-cleanup_2012_05_30_14_00_57.log
DB Backup available at /usr/share/ovirt-engine/db-backups/tmp2HXiDg.sql
[root@cal-26 ~]# mv /etc/exports /etc/exports-30-5-2012
[root@cal-26 ~]# rmdir /mnt/sdb5-ovirt/ovirt_iso
[root@cal-26 ~]# engine-setup
Welcome to oVirt Engine setup utility
HTTP Port  [8080] : 
HTTPS Port  [8443] : 
Host fully qualified domain name, note that this name should be fully resolvable  [cal-26.oma.be] : 
Password for Administrator (admin@internal) :
Confirm password :
Database password (required for secure authentication with the locally created database) :
Confirm password :
Organization Name for the Certificate: 
String length is less than the minimum allowed: 1
Organization Name for the Certificate: KMI
The default storage type you will be using  ['NFS'| 'FC'| 'ISCSI']  [NFS] : 
Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no']  [yes] : 
Mount point path: /mnt/sdb5-ovirt/ovirt_iso
Display name for the ISO Domain: ovirt_iso
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /usr/share/ovirt-engine/conf/iptables.example
Configure iptables ? ['yes'| 'no']: yes

oVirt Engine will be installed using the following configuration:
=================================================================
http-port:                     8080
https-port:                    8443
host-fqdn:                     cal-26.oma.be
auth-pass:                     ********
db-pass:                       ********
org-name:                      KMI
default-dc-type:               NFS
nfs-mp:                        /mnt/sdb5-ovirt/ovirt_iso
iso-domain-name:               ovirt_iso
override-iptables:             yes
Proceed with the configuration listed above? (yes|no): yes

Installing:
Configuring oVirt-engine...                              [ DONE ]
Creating CA...                                           [ DONE ]
Editing JBoss Configuration...                           [ DONE ]
Setting Database Security...                             [ DONE ]
Creating Database...                                     [ DONE ]
Updating the Default Data Center Storage Type...         [ DONE ]
Editing oVirt Engine Configuration...                    [ DONE ]
Configuring the Default ISO Domain...                 [ ERROR ]
Failed to configure NFS share on this host
Please check log file /var/log/ovirt-engine/engine-setup_2012_05_30_14_01_54.log for more information
[root@cal-26 ~]# 

Actual results:

ERROR on "Configuring the Default ISO Domain... "


Expected results:

Successful setup.
**** Installation completed successfully ******

Additional info:

This was found in the attached file:
2012-05-30 14:04:36::DEBUG::common_utils::202::root:: stderr = Failed to issue method call: Unit var-run.mount failed to load: No such file or directory. See system logs and 'systemctl status var-run.mount' for details.

Nothing related found in the system logs.

This is the output of 'systemctl status var-run.mount'
[root@cal-26 ~]# systemctl status var-run.mount
var-run.mount - /var/run
	  Loaded: error (Reason: No such file or directory)
	  Active: inactive (dead)
	          start condition failed at Wed, 16 May 2012 09:59:29 +0200; 2 weeks and 0 days ago
	   Where: /var/run
	  CGroup: name=systemd:/system/var-run.mount

[root@cal-26 ~]#
[root@cal-26 ~]# systemctl enable var-run.mount
Failed to issue method call: No such file or directory
[root@cal-26 ~]# systemctl start var-run.mount
Failed to issue method call: Unit var-run.mount failed to load: No such file or directory. See system logs and 'systemctl status var-run.mount' for details.
[root@cal-26 ~]# 
[root@cal-26 ~]# yum search var-run.mount
Loaded plugins: langpacks, presto, refresh-packagekit
Warning: No matches found for: var-run.mount
No Matches found
[root@cal-26 ~]# yum search */var-run.mount
Loaded plugins: langpacks, presto, refresh-packagekit
Warning: No matches found for: */var-run.mount
No Matches found
[root@cal-26 ~]# yum search */*var-run.mount
Loaded plugins: langpacks, presto, refresh-packagekit
Warning: No matches found for: */*var-run.mount
No Matches found
[root@cal-26 ~]# 
[root@cal-26 ~]# yum provides */*var-run.mount
Loaded plugins: langpacks, presto, refresh-packagekit
google-chrome/filelists                                                                       | 1.1 kB     00:00     
updates/filelists_db                                                                          | 2.8 MB     00:00     
systemd-44-8.fc17.i686 : A System and Service Manager
Repo        : fedora
Matched from:
Filename    : /usr/lib/systemd/system/var-run.mount
Filename    : /usr/lib/systemd/system/local-fs.target.wants/var-run.mount



systemd-44-8.fc17.x86_64 : A System and Service Manager
Repo        : fedora
Matched from:
Filename    : /usr/lib/systemd/system/var-run.mount
Filename    : /usr/lib/systemd/system/local-fs.target.wants/var-run.mount



[root@cal-26 ~]# yum install systemd-44-8.fc17.x86_64
Loaded plugins: langpacks, presto, refresh-packagekit
Package matching systemd-44-8.fc17.x86_64 already installed. Checking for update.
Nothing to do
[root@cal-26 ~]# yum info systemd-44-8.fc17.x86_64
Loaded plugins: langpacks, presto, refresh-packagekit
Error: No matching Packages to list
[root@cal-26 ~]# yum info systemd
Loaded plugins: langpacks, presto, refresh-packagekit
Installed Packages
Name        : systemd
Arch        : x86_64
Version     : 44
Release     : 12.fc17
Size        : 3.5 M
Repo        : installed
From repo   : updates
Summary     : A System and Service Manager
URL         : http://www.freedesktop.org/wiki/Software/systemd
License     : GPLv2+
Description : systemd is a system and service manager for Linux, compatible with
            : SysV and LSB init scripts. systemd provides aggressive parallelization
            : capabilities, uses socket and D-Bus activation for starting services,
            : offers on-demand starting of daemons, keeps track of processes using
            : Linux cgroups, supports snapshotting and restoring of the system
            : state, maintains mount and automount points and implements an
            : elaborate transactional dependency-based service control logic. It can
            : work as a drop-in replacement for sysvinit.

Available Packages
Name        : systemd
Arch        : i686
Version     : 44
Release     : 12.fc17
Size        : 943 k
Repo        : updates
Summary     : A System and Service Manager
URL         : http://www.freedesktop.org/wiki/Software/systemd
License     : GPLv2+
Description : systemd is a system and service manager for Linux, compatible with
            : SysV and LSB init scripts. systemd provides aggressive parallelization
            : capabilities, uses socket and D-Bus activation for starting services,
            : offers on-demand starting of daemons, keeps track of processes using
            : Linux cgroups, supports snapshotting and restoring of the system
            : state, maintains mount and automount points and implements an
            : elaborate transactional dependency-based service control logic. It can
            : work as a drop-in replacement for sysvinit.

[root@cal-26 ~]#
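
Given that the unit file is owned by the installed systemd package yet fails to load, one recovery sketch (my assumption; it was not tried in this report) would be to restore the unit file from the package and make systemd re-read its units:

  # restore any missing files shipped by the systemd package
  yum reinstall systemd
  # re-read unit files and check the mount unit again
  systemctl daemon-reload
  systemctl status var-run.mount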

Comment 1 Gilboa Davara 2012-06-05 05:03:26 UTC
This is not ovirt related, it's systemd related.
I don't have ovirt installed but got a more-or-less dead system due to the var-run.mount problem.
All the services are being killed off by systemd, one by one; switching runlevels with init 1/3 up and down doesn't work.

Help?

- Gilboa

Comment 2 Gilboa Davara 2012-06-05 05:15:48 UTC
... After numerous attempts at fixing this issue (in the end, systemd even killed off my getty services), I was forced to reboot the machine. (At that point SSH was the only thing working on the machine; nfs, smb and dhcp were all gone.)
As far as I can see, post reboot, the issue is resolved.

However, as far as I know, the issue started when I remotely sent an init 3; init 5 to update the binary nVidia drivers.
X did start successfully, but var-run.mount failed to start, leading to the subsequent blow-out.

- Gilboa

Comment 3 Joost Ringoot 2012-06-08 07:56:17 UTC
It appears that the current ovirt on Fedora 17 is broken; I found this while doing a regular yum update.
I tried removing and reinstalling ovirt-engine. Removing was successful; see what happens on reinstall:
[root@cal-26 ~]# uname -a
Linux cal-26.oma.be 3.3.7-1.fc17.x86_64 #1 SMP Mon May 21 22:32:19 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@cal-26 ~]# cat /etc/*release
Fedora release 17 (Beefy Miracle)
NAME=Fedora
VERSION="17 (Beefy Miracle)"
ID=fedora
VERSION_ID=17
PRETTY_NAME="Fedora 17 (Beefy Miracle)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:17"
Fedora release 17 (Beefy Miracle)
Fedora release 17 (Beefy Miracle)
[root@cal-26 ~]# yum ugrade
Loaded plugins: langpacks, presto, refresh-packagekit
No such command: ugrade. Please use /bin/yum --help
[root@cal-26 ~]# yum update
Loaded plugins: langpacks, presto, refresh-packagekit
No Packages marked for Update
[root@cal-26 ~]# yum install ovirt-engine
Loaded plugins: langpacks, presto, refresh-packagekit
Resolving Dependencies
--> Running transaction check
---> Package ovirt-engine.noarch 0:3.0.0.0001-12.fc17 will be installed
--> Processing Dependency: ovirt-engine-tools-common = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-setup = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-restapi = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-notification-service = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-log-collector = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-iso-uploader = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-dbscripts = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-config = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: ovirt-engine-backend = 3.0.0.0001-12.fc17 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Processing Dependency: jboss-as >= 7.1.0-3 for package: ovirt-engine-3.0.0.0001-12.fc17.noarch
--> Running transaction check
---> Package jboss-as.noarch 0:7.1.1-3.fc17 will be installed
--> Processing Dependency: jboss-jaxrs-1.1-api for package: jboss-as-7.1.1-3.fc17.noarch
--> Processing Dependency: jboss-jaxr-1.0-api for package: jboss-as-7.1.1-3.fc17.noarch
--> Processing Dependency: hornetq for package: jboss-as-7.1.1-3.fc17.noarch
--> Processing Dependency: apache-scout for package: jboss-as-7.1.1-3.fc17.noarch
--> Processing Dependency: apache-juddi for package: jboss-as-7.1.1-3.fc17.noarch
---> Package ovirt-engine-backend.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-config.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-dbscripts.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-iso-uploader.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-log-collector.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-notification-service.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-restapi.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-setup.noarch 0:3.0.0.0001-12.fc17 will be installed
---> Package ovirt-engine-tools-common.noarch 0:3.0.0.0001-12.fc17 will be installed
--> Running transaction check
---> Package apache-scout.noarch 0:1.2.6-2.fc17 will be installed
--> Processing Dependency: wsdl4j for package: apache-scout-1.2.6-2.fc17.noarch
--> Processing Dependency: axis for package: apache-scout-1.2.6-2.fc17.noarch
--> Processing Dependency: apache-juddi for package: apache-scout-1.2.6-2.fc17.noarch
---> Package hornetq.x86_64 0:2.2.13-4.fc17 will be installed
--> Processing Dependency: jdepend for package: hornetq-2.2.13-4.fc17.x86_64
---> Package jboss-as.noarch 0:7.1.1-3.fc17 will be installed
--> Processing Dependency: apache-juddi for package: jboss-as-7.1.1-3.fc17.noarch
---> Package jboss-jaxr-1.0-api.noarch 0:1.0.2-1.fc17 will be installed
---> Package jboss-jaxrs-1.1-api.noarch 0:1.0.1-2.fc17 will be installed
--> Running transaction check
---> Package apache-scout.noarch 0:1.2.6-2.fc17 will be installed
--> Processing Dependency: apache-juddi for package: apache-scout-1.2.6-2.fc17.noarch
---> Package axis.noarch 0:1.4-14.fc17 will be installed
--> Processing Dependency: apache-commons-discovery for package: axis-1.4-14.fc17.noarch
---> Package jboss-as.noarch 0:7.1.1-3.fc17 will be installed
--> Processing Dependency: apache-juddi for package: jboss-as-7.1.1-3.fc17.noarch
---> Package jdepend.noarch 0:2.9.1-5.fc17 will be installed
---> Package wsdl4j.noarch 0:1.6.2-4.fc17 will be installed
--> Running transaction check
---> Package apache-commons-discovery.noarch 2:0.5-3.fc17 will be installed
---> Package apache-scout.noarch 0:1.2.6-2.fc17 will be installed
--> Processing Dependency: apache-juddi for package: apache-scout-1.2.6-2.fc17.noarch
---> Package jboss-as.noarch 0:7.1.1-3.fc17 will be installed
--> Processing Dependency: apache-juddi for package: jboss-as-7.1.1-3.fc17.noarch
--> Finished Dependency Resolution
Error: Package: jboss-as-7.1.1-3.fc17.noarch (updates)
           Requires: apache-juddi
Error: Package: apache-scout-1.2.6-2.fc17.noarch (updates)
           Requires: apache-juddi
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[root@cal-26 ~]#

Comment 4 Itamar Heim 2012-06-11 19:26:35 UTC
juan - iirc you replied on this on the list as an issue with the jboss version?

Comment 5 Joost Ringoot 2012-06-12 05:17:56 UTC
Hello Itamar,

I found one bug report on this: https://bugzilla.redhat.com/show_bug.cgi?id=830067
Marek Goldmann suggests using the testing repo to install apache-juddi as a workaround to get jboss fixed. I will try this when I am in the office.

Comment 6 Juan Hernández 2012-06-12 07:58:20 UTC
(In reply to comment #4)
> juan - iirc you replied on this on list as an issue with jboss verison?

Yes, the problem is that the latest jboss-as version 7.1.1-3 requires apache-juddi, but apache-juddi has not yet been moved to the Fedora stable repositories.

There are two possible workarounds:

1. Manually install the previous version of jboss-as before installing ovirt-engine:

  yum install jboss-as-7.1.1-2

2. Manually install the apache-juddi package from the testing repository:

  yum install apache-juddi --enablerepo=updates-testing
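
A minimal sketch combining workaround 2 with the ovirt-engine installation (the follow-up yum install is my assumption of the next step, not part of the original comment):

  # pull the missing dependency from the testing repository
  yum install apache-juddi --enablerepo=updates-testing
  # then retry the ovirt-engine installation from the stable repositories
  yum install ovirt-engine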

Comment 7 Joost Ringoot 2012-06-13 09:45:57 UTC
Created attachment 591421 [details]
enginecleanup

engine cleanup

Comment 8 Joost Ringoot 2012-06-13 09:47:10 UTC
Created attachment 591422 [details]
enginesetup

enginesetup

Comment 9 Joost Ringoot 2012-06-13 09:50:33 UTC
It appeared to work, but I could not get into the website with the password I chose during setup, so I did a reinstall; see below and the attached logs:
(I also removed the ovirt folder /mnt/ovirt and cleaned up /etc/exports)


[root@cal-26 ~]# engine-cleanup
WARNING: Executing oVirt Engine cleanup utility.
This utility will wipe all existing data including configuration settings, certificates and database.
In addition, all existing DB connections will be closed.
Would you like to proceed? (yes|no): yes

Stopping Engine Service...                               [ DONE ]
Stopping all connections to DB...                        [ DONE ]
Removing Database...                                     [ DONE ]
Removing CA...                                           [ DONE ]

Cleanup finished successfully!
Cleanup log available at /var/log/ovirt-engine/engine-cleanup_2012_06_13_11_36_17.log
DB Backup available at /usr/share/ovirt-engine/db-backups/tmpmneQTm.sql
[root@cal-26 ~]# cd /usr/share/ovirt-engine/db-backups/
[root@cal-26 db-backups]# ls -al
total 2232
drwxr-xr-x.  2 root root   4096 Jun 13 11:36 .
drwxr-xr-x. 17 root root   4096 Jun 12 11:24 ..
-rw-------.  1 root root 567179 May 30 14:01 tmp2HXiDg.sql
-rw-------.  1 root root 567185 Jun 12 11:27 tmpEmSF1j.sql
-rw-------.  1 root root 567187 May 30 13:49 tmpKv9Ir7.sql
-rw-------.  1 root root 567842 Jun 13 11:36 tmpmneQTm.sql
[root@cal-26 db-backups]# engine-setup 
Welcome to oVirt Engine setup utility
HTTP Port  [8080] : 
HTTPS Port  [8443] : 
Host fully qualified domain name, note that this name should be fully resolvable  [cal-26.oma.be] : 
Password for Administrator (admin@internal) :
Confirm password :
Database password (required for secure authentication with the locally created database) :
Confirm password :
Organization Name for the Certificate: RMI
The default storage type you will be using  ['NFS'| 'FC'| 'ISCSI']  [NFS] : 
Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no']  [yes] : 
Mount point path: /mnt/ovirt
ERROR: mount point already exists in /etc/exports
Mount point path: /mnt/ovirt
Display name for the ISO Domain: ovirt_iso
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /usr/share/ovirt-engine/conf/iptables.example
Configure iptables ? ['yes'| 'no']: yes

oVirt Engine will be installed using the following configuration:
=================================================================
http-port:                     8080
https-port:                    8443
host-fqdn:                     cal-26.oma.be
auth-pass:                     ********
db-pass:                       ********
org-name:                      RMI
default-dc-type:               NFS
nfs-mp:                        /mnt/ovirt
iso-domain-name:               ovirt_iso
override-iptables:             yes
Proceed with the configuration listed above? (yes|no): yes

Installing:
Configuring oVirt-engine...                              [ DONE ]
Creating CA...                                           [ DONE ]
Editing JBoss Configuration...                        [ ERROR ]
exceptions must be old-style classes or derived from BaseException, not str
Please check log file /var/log/ovirt-engine/engine-setup_2012_06_13_11_37_04.log for more information
[root@cal-26 db-backups]#
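
A possible cleanup sketch for the "mount point already exists in /etc/exports" error shown above (an assumption on my side, not something from the log; the /mnt/ovirt path is taken from the transcript):

  # back up the exports file, then drop the stale ISO domain line
  cp /etc/exports /etc/exports.bak
  sed -i '\|^/mnt/ovirt|d' /etc/exports
  # re-sync the kernel export table and re-run the installer
  exportfs -r
  engine-setup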

Comment 10 Joost Ringoot 2012-06-13 10:03:37 UTC
Setup was successful again, but I could not log in with the user admin and the chosen password on https://cal-26.oma.be:8443

Comment 11 Joost Ringoot 2012-06-13 10:06:58 UTC
Created attachment 591423 [details]
ovirt-setup transcript

ovirt-setup transcript: I chose a simple password to be sure I would have no issue logging in, but again it was not possible to log in to the mentioned website:
http://cal-26.oma.be:8080

HTTP Status 401 -

type Status report

message

description This request requires HTTP authentication ().
JBoss Web/7.0.0.SNAPSHOT

Comment 12 Juan Hernández 2012-06-13 10:24:14 UTC
Joost, the GUI is not part of the official Fedora packages yet (we need GWT for that). It is okay if you want to use only the REST API:

https://cal-26.oma.be:8443/api

There you should be able to log in with user "admin@internal" and the password you gave during installation.
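
For a quick command-line check of the API login (a sketch added for illustration; PASSWORD stands for the admin password chosen during engine-setup, and -k skips verification of the self-signed certificate created by the installer):

  curl -k -u 'admin@internal:PASSWORD' https://cal-26.oma.be:8443/api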

If you want to test the GUI I would recommend you to try the 3.1 alpha packages available here:

http://ovirt.org/releases/beta/fedora/17

Comment 13 Juan Hernández 2012-06-14 13:24:09 UTC
Joost, the apache-juddi package has already been pushed to the Fedora 17 stable repositories. Can we close this bug?
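
A short way to verify this from the affected host (a hedged sketch; commands assumed, not taken from the comment):

  # refresh the repository metadata and confirm apache-juddi is visible
  yum clean metadata
  yum list apache-juddi
  # then retry the installation that failed in comment 3
  yum install ovirt-engine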

Comment 14 Joost Ringoot 2012-06-14 14:09:47 UTC
Thanks a lot for the info Juan.
I have been occupied a bit with other things.

The username is thus indeed "admin@internal" and not "admin".

The API is really very limited; I will test
http://ovirt.org/releases/beta/fedora/17
as soon as possible.


Yes, you may close the bug.