Bug 1066509
| Summary: | VDSM fails to start on fresh install of node | | |
|---|---|---|---|
| Product: | [Retired] oVirt | Reporter: | scott |
| Component: | ovirt-node | Assignee: | Douglas Schilling Landgraf <dougsland> |
| Status: | CLOSED DUPLICATE | QA Contact: | bugs <bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.4 | CC: | acathrow, bazulay, danken, dougsland, gklein, jboggs, mburns, mgoldboi, ovirt-bugs, ovirt-maint, yeylon |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | vdsm-4.14.2 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-02-19 02:28:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description

scott 2014-02-18 14:48:23 UTC

Hi Scott,

Thanks for the help. I found the problem, and it is a duplicate bug. Below are the steps that led me to it, along with a workaround. The next updated ovirt-node ISO should contain the fix.

```
# rpm -qa | grep -i vdsm
vdsm-python-zombiereaper-4.14.1-3.el6.noarch
vdsm-cli-4.14.1-3.el6.noarch
vdsm-xmlrpc-4.14.1-3.el6.noarch
vdsm-4.14.1-3.el6.x86_64
vdsm-reg-4.14.1-3.el6.noarch
ovirt-node-plugin-vdsm-0.1.1-9.el6.noarch
```

From /var/log/messages I saw:

```
Feb 19 02:10:33 localhost respawn: slave '/usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid' died too quickly, respawning slave
Feb 19 02:10:34 localhost python: error in accessing vdsm log file
```

Running `ls -la /var/log/vdsm`, I found vdsm.log and metadata.log owned by root:root (supervdsm.log is correctly root:root, but vdsm.log and metadata.log should not be). I changed them to the correct owners, vdsm:kvm, and restarted the service:

```
# chown vdsm:kvm vdsm.log metadata.log
# service vdsmd restart
```

The host is UP again.

Thanks!

*** This bug has been marked as a duplicate of bug 1055153 ***
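
For context on why the ownership matters: vdsmd runs as the vdsm user in the kvm group, so it cannot open root:root-owned log files for writing; the daemon then exits immediately, which produces the "died too quickly, respawning slave" loop seen above. Below is a minimal, illustrative shell sketch of the workaround as a pre-start check. The log paths and the vdsm:kvm ownership come from the comment above, but the script itself is hypothetical and is not part of vdsm or ovirt-node.

```sh
#!/bin/sh
# Illustrative pre-start check (not shipped with vdsm): make sure the logs
# the vdsm daemon writes are owned by vdsm:kvm before starting the service.
# supervdsm.log is skipped on purpose: supervdsm runs as root, so root:root
# ownership is correct there.
for f in /var/log/vdsm/vdsm.log /var/log/vdsm/metadata.log; do
    [ -e "$f" ] || continue
    owner=$(stat -c '%U:%G' "$f")        # prints e.g. "root:root"
    if [ "$owner" != "vdsm:kvm" ]; then
        echo "fixing ownership of $f (was $owner)"
        chown vdsm:kvm "$f"
    fi
done
service vdsmd restart
```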