Bug 1278715
| Field | Value |
|---|---|
| Summary | Failed to restart vdsmd service after reboot rhevh after satellite registration |
| Product | Red Hat Enterprise Virtualization Manager |
| Component | ovirt-node |
| Status | CLOSED DUPLICATE |
| Severity | high |
| Priority | unspecified |
| Version | 3.5.5 |
| Target Milestone | ovirt-3.6.1 |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | node |
| Reporter | Liushihui <shihliu> |
| Assignee | Fabian Deutsch <fdeutsch> |
| QA Contact | Liushihui <shihliu> |
| CC | cshao, cwu, ecohen, guillermokmo, gxing, hsun, huiwa, huzhao, leiwang, lsurette, sgao, shihliu, yaniwang, ycui |
| Doc Type | Bug Fix |
| Story Points | --- |
| Last Closed | 2015-11-09 15:16:49 UTC |
| Type | Bug |
| Regression | --- |
| oVirt Team | Node |
Description
Liushihui
2015-11-06 09:22:20 UTC
Created attachment 1090531 [details]
vdsm.log
Fabian Deutsch:

Does the node come up in RHEV-M? In vdsm.log I see:

```
Thread-20::DEBUG::2015-11-06 09:05:23,279::domainMonitor::201::Storage.DomainMonitorThread::(_monitorLoop) Unable to release the host id 1 for domain 55a2a114-13e2-413e-b886-fe25a092d75a
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 198, in _monitorLoop
  File "/usr/share/vdsm/storage/sd.py", line 480, in releaseHostId
  File "/usr/share/vdsm/storage/clusterlock.py", line 252, in releaseHostId
ReleaseHostIdFailure: Cannot release host id: (u'55a2a114-13e2-413e-b886-fe25a092d75a', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))
```

Thus it looks like a dupe of bug 1167074.

*** This bug has been marked as a duplicate of bug 1167074 ***

Liushihui:

(In reply to Fabian Deutsch from comment #2)
> Does the node come up in RHEV-M?

When this problem occurred, the RHEV-H node was down in RHEV-M.

A later commenter:

I have this problem when adding a host in oVirt:

```
[root@hosts1 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

Oct 29 22:16:56 hosts1.seinformatica.com.py systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Oct 29 22:16:56 hosts1.seinformatica.com.py systemd[1]: Job vdsmd.service/start failed with result 'dependency'.
Oct 29 22:17:22 hosts1.seinformatica.com.py systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Oct 29 22:17:22 hosts1.seinformatica.com.py systemd[1]: Job vdsmd.service/start failed with result 'dependency'.
Oct 29 22:25:59 hosts1.seinformatica.com.py systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Oct 29 22:25:59 hosts1.seinformatica.com.py systemd[1]: Job vdsmd.service/start failed with result 'dependency'.
```

How can I resolve this?

Liushihui:

Could you provide your RHEV-H and oVirt versions? Thanks.
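The "Dependency failed" lines above mean vdsmd itself never started: one of the units it requires failed first. A minimal diagnostic sketch for narrowing this down on a systemd host (the commands in the comments are the real diagnosis; the runnable part below only filters the journal lines quoted in this report, which are embedded here as sample data):

```shell
# On the affected host, the failing dependency can usually be found with:
#   systemctl list-dependencies vdsmd --plain   # units vdsmd requires/wants
#   systemctl --state=failed                    # any units currently failed
#   journalctl -u vdsmd -b --no-pager           # full log for the unit
#
# Illustration using the journal lines quoted in this bug as sample input:
log="Oct 29 22:16:56 hosts1 systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Oct 29 22:16:56 hosts1 systemd[1]: Job vdsmd.service/start failed with result 'dependency'."

# Count how many dependency failures the journal recorded:
echo "$log" | grep -c 'Dependency failed'
```

Note that `systemctl status vdsmd` alone does not say *which* dependency failed; checking `systemctl --state=failed` for units such as sanlock or libvirtd (illustrative names, the exact dependency set varies by vdsm version) is the usual next step.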
Matias Spaini:

I use CentOS 7, and oVirt version 4.0.

Liushihui:

(In reply to Matias Spaini from comment #7)
> I use centos 7, and the version of ovirt4.0

Since we have not seen this problem since this bug was resolved, could you report a new bug directly? Thanks.