Bug 1121561
| Summary: | vdsm 4.14.11 fails to start | ||
|---|---|---|---|
| Product: | [Retired] oVirt | Reporter: | Andrew Lau <andrew> |
| Component: | vdsm | Assignee: | Yaniv Bronhaim <ybronhei> |
| Status: | CLOSED DUPLICATE | QA Contact: | Gil Klein <gklein> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | ||
| Version: | unspecified | CC: | acathrow, andrew, bazulay, bugs, ecohen, gklein, iheim, lsurette, mgoldboi, movciari, oourfali, sbonazzo, yeylon |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | 3.4.3 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | infra | ||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-07-22 15:48:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1118689 | ||
I suggest an async release before 3.4.4 as soon as this is fixed.

Also, I find this a bit weird, since vdsm worked fine on an All-in-One setup; maybe we had libvirt already configured on our test systems. Could it be another case of bug 1120049? Would you verify by following https://bugzilla.redhat.com/show_bug.cgi?id=1120049#c12 ?

After adding it to ovirt-engine, it adds the host and fails to install, but the host can later be activated with "service vdsmd restart" (while running the latest version of vdsm). Previously I was running "hosted-engine --deploy" and it was failing with a vdsm connection timeout (which required a yum downgrade to get it running). So does adding the host to ovirt-engine do something different from "hosted-engine --deploy"? I have successfully been able to run VMs on the newly provisioned host with the latest version of vdsmd. I will need to re-provision and try again to verify your bug 1120049 case.

It doesn't look like an issue; it only tells you "Modules libvirt are not configured" (note: this is not a spelling mistake, it is meant to list the modules) ... (a lot of garbage) ... "please run 'vdsm-tool configure [module_name]'". This happens when you install vdsm manually (e.g. yum install vdsm) without using ovirt-host-deploy, which performs this vdsm-tool call for you. Can you specify the way you installed vdsm? Does it work after running "vdsm-tool configure"?

This was my summarised install flow:

    yum -y install ovirt-hosted-engine-setup
    # configure NICs and ovirtmgmt
    hosted-engine --deploy    .. failed - connection timed out
    yum downgrade vdsm*
    hosted-engine --deploy    -- successful

I did try running "vdsm-tool configure"; I do not have logs/exact results, but it failed in a similar fashion.

OK, this is something else that I didn't understand from your previous comments. Sandro, can you verify that "hosted-engine --deploy" runs "vdsm-tool configure --force" as part of installing and configuring vdsm on the host? And if it runs it and fails, where are the logs?

Yaniv, yes, "hosted-engine --deploy" calls "vdsm-tool configure --force" as part of installing and configuring vdsm on the host. The logs, including stdout and stderr, are in /var/log/ovirt-hosted-engine-setup/

The issue is similar to Bug 1114993 and was fixed as part of it; closing this bug as a duplicate. You can be sure it is the same issue when you see that the command "vdsm-tool passwd" fails to find /sbin/saslpasswd2. This command sets up the vdsm@ovirt user that allows communication between vdsm and libvirt. As a workaround, you can run it manually, "/usr/sbin/saslpasswd2 -p -a libvirt vdsm@ovirt", and then restart the vdsm service.

*** This bug has been marked as a duplicate of bug 1114993 ***

*** Bug 1116070 has been marked as a duplicate of this bug. ***
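For convenience, here is a minimal sketch of the no-downgrade recovery path suggested in the comments above. It assumes an EL6 host with the sysv vdsmd service; the password handling is whatever vdsm's own configuration uses, so treat this as illustrative rather than authoritative:

```sh
# Workaround quoted in the closing comment: recreate the vdsm@ovirt SASL user
# that lets vdsm authenticate to libvirt (normally set up by "vdsm-tool configure").
# Note: with -p, saslpasswd2 reads the password from stdin.
/usr/sbin/saslpasswd2 -p -a libvirt vdsm@ovirt

# Re-run the vdsm configurators and restart the service.
vdsm-tool configure --force
service vdsmd restart
service vdsmd status
```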
Description of problem:

The latest version of vdsm (4.14.11) found in ovirt-release 3.4.3 will not start:

    service vdsmd start
    vdsm: Running mkdirs
    vdsm: Running configure_coredump
    vdsm: Running configure_vdsm_logs
    vdsm: Running run_init_hooks
    vdsm: Running gencerts
    vdsm: Running check_is_configured
    libvirt is not configured for vdsm yet
    Modules libvirt are not configured
    Traceback (most recent call last):
      File "/usr/bin/vdsm-tool", line 145, in <module>
        sys.exit(main())
      File "/usr/bin/vdsm-tool", line 142, in main
        return tool_command[cmd]["command"](*args[1:])
      File "/usr/lib64/python2.6/site-packages/vdsm/tool/configurator.py", line 282, in isconfigured
        raise RuntimeError(msg)
    RuntimeError: One of the modules is not configured to work with VDSM.
    To configure the module use the following: 'vdsm-tool configure [module_name]'.
    If all modules are not configured try to use: 'vdsm-tool configure --force'
    (The force flag will stop the module's service and start it afterwards
    automatically to load the new configuration.)
    vdsm: stopped during execute check_is_configured task (task returned with error code 1).
    vdsm start [FAILED]

    service vdsmd status
    VDS daemon is not running, and its watchdog is running

The only log in /var/log/vdsm/ that appears to have any content is /var/log/vdsm/supervdsm.log; everything else is blank:

    MainThread::DEBUG::2014-07-19 18:55:34,793::supervdsmServer::424::SuperVdsm.Server::(main) Terminated normally
    MainThread::DEBUG::2014-07-19 18:55:38,033::netconfpersistence::134::root::(_getConfigs) Non-existing config set.
    MainThread::DEBUG::2014-07-19 18:55:38,034::netconfpersistence::134::root::(_getConfigs) Non-existing config set.
    MainThread::DEBUG::2014-07-19 18:55:38,058::supervdsmServer::384::SuperVdsm.Server::(main) Making sure I'm root - SuperVdsm
    MainThread::DEBUG::2014-07-19 18:55:38,059::supervdsmServer::393::SuperVdsm.Server::(main) Parsing cmd args
    MainThread::DEBUG::2014-07-19 18:55:38,059::supervdsmServer::396::SuperVdsm.Server::(main) Cleaning old socket /var/run/vdsm/svdsm.sock
    MainThread::DEBUG::2014-07-19 18:55:38,059::supervdsmServer::400::SuperVdsm.Server::(main) Setting up keep alive thread
    MainThread::DEBUG::2014-07-19 18:55:38,059::supervdsmServer::406::SuperVdsm.Server::(main) Creating remote object manager
    MainThread::DEBUG::2014-07-19 18:55:38,061::supervdsmServer::417::SuperVdsm.Server::(main) Started serving super vdsm object
    sourceRoute::DEBUG::2014-07-19 18:55:38,062::sourceRouteThread::56::root::(_subscribeToInotifyLoop) sourceRouteThread.subscribeToInotifyLoop started

Downgrading via yum is the only solution:

    yum downgrade vdsm*

Here are the package changes for reference:

    --> Running transaction check
    ---> Package vdsm.x86_64 0:4.14.9-0.el6 will be a downgrade
    ---> Package vdsm.x86_64 0:4.14.11-0.el6 will be erased
    ---> Package vdsm-cli.noarch 0:4.14.9-0.el6 will be a downgrade
    ---> Package vdsm-cli.noarch 0:4.14.11-0.el6 will be erased
    ---> Package vdsm-python.x86_64 0:4.14.9-0.el6 will be a downgrade
    ---> Package vdsm-python.x86_64 0:4.14.11-0.el6 will be erased
    ---> Package vdsm-python-zombiereaper.noarch 0:4.14.9-0.el6 will be a downgrade
    ---> Package vdsm-python-zombiereaper.noarch 0:4.14.11-0.el6 will be erased
    ---> Package vdsm-xmlrpc.noarch 0:4.14.9-0.el6 will be a downgrade
    ---> Package vdsm-xmlrpc.noarch 0:4.14.11-0.el6 will be erased

    service vdsmd start
    initctl: Job is already running: libvirtd
    vdsm: Running mkdirs
    vdsm: Running configure_coredump
    vdsm: Running configure_vdsm_logs
    vdsm: Running run_init_hooks
    vdsm: Running gencerts
    vdsm: Running check_is_configured
    libvirt is already configured for vdsm
    sanlock service is already configured
    vdsm: Running validate_configuration
    SUCCESS: ssl configured to true. No conflicts
    vdsm: Running prepare_transient_repository
    vdsm: Running syslog_available
    vdsm: Running nwfilter
    vdsm: Running dummybr
    vdsm: Running load_needed_modules
    vdsm: Running tune_system
    vdsm: Running test_space
    vdsm: Running test_lo
    vdsm: Running unified_network_persistence_upgrade
    vdsm: Running restore_nets
    vdsm: Running upgrade_300_nets
    Starting up vdsm daemon:
    vdsm start [ OK ]

    [root@ov-hv1-2a-08-23 ~]# service vdsmd status
    VDS daemon server is running
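For completeness, the reporter's downgrade workaround condensed into one sequence (package set and versions as in the transaction output above; once the fix from bug 1114993 is in place the downgrade should no longer be necessary):

```sh
# Roll vdsm back to the previous working build (4.14.9 in this report) and restart.
yum downgrade 'vdsm*'    # quoted so the shell does not expand the glob locally
service vdsmd start
service vdsmd status     # expected: "VDS daemon server is running"
```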