Bug 844560
| Summary: | monitor sanlock service or restart it when needed | | |
|---|---|---|---|
| Product: | [Retired] oVirt | Reporter: | Royce Lv <lvroyce> |
| Component: | vdsm | Assignee: | Dan Kenigsberg <danken> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | unspecified | CC: | abaron, acathrow, amureini, bazulay, dyasny, fsimonce, iheim, mgoldboi, ykaul |
| Target Milestone: | --- | | |
| Target Release: | 3.3.4 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | storage | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-02-17 06:16:30 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Royce Lv
2012-07-31 05:03:39 UTC
(In reply to comment #0)

> Description of problem:
> When starting a VM, connecting to the sanlock daemon always fails. Sometimes the sanlock service is running but its main function has exited; sometimes the service is dead, and it has to be restarted manually.
> We need to monitor the service, or restart it with supervdsm when startVm fails for this reason.

The description is fairly vague. It would be great if you could present a reproducible scenario, or at least fill in a few of the suggested sections (for example, the versions of the components vdsm/libvirt/sanlock/etc. and some logs would be extremely helpful):

> Version-Release number of selected component (if applicable):
>
> How reproducible:
>
> Steps to Reproduce:
> 1.
> 2.
> 3.
>
> Actual results:
>
> Expected results:
>
> Additional info:
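The monitoring proposed in the report could look like the following minimal sketch. This is hypothetical illustration code, not actual vdsm or supervdsm code: it assumes a systemd-managed `sanlock` unit and uses a daemon-level probe (`sanlock client status`) rather than only `systemctl is-active`, since the report notes the unit can appear running while the daemon's main function has exited. The `runner` parameter is an invented seam for injecting a fake command runner.

```python
import subprocess

SERVICE = "sanlock"


def daemon_responsive(runner=subprocess.run):
    """Probe the sanlock daemon itself; checking only 'systemctl is-active'
    would miss the case where the unit is up but the main loop has exited."""
    try:
        res = runner(
            ["sanlock", "client", "status"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return res.returncode == 0
    except OSError:
        # Binary missing or not executable: treat as unresponsive.
        return False


def ensure_sanlock(runner=subprocess.run):
    """Restart the service when the daemon does not respond.

    Returns True if a restart was issued, False if the daemon was healthy.
    A caller such as a startVm error path could invoke this before retrying.
    """
    if daemon_responsive(runner=runner):
        return False
    runner(["systemctl", "restart", SERVICE])
    return True
```

A real implementation inside supervdsm would also need rate-limiting (to avoid restart loops) and logging of why the restart was triggered.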