Bug 977926

Summary: VDSM: Failed in CreateStoragePoolVDS when sanlock service is already running

Field | Value | Field | Value
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Eyal Edri <eedri>
Component: | vdsm | Assignee: | Federico Simoncelli <fsimonce>
Status: | CLOSED ERRATA | QA Contact: | Leonid Natapov <lnatapov>
Severity: | unspecified | Priority: | urgent
Version: | 3.3.0 | Target Release: | 3.3.0
Target Milestone: | --- | Keywords: | Regression, TestBlocker
Hardware: | Unspecified | OS: | Unspecified
Whiteboard: | storage | oVirt Team: | Storage
Fixed In Version: | is6 | Doc Type: | Bug Fix
Last Closed: | 2014-01-21 16:09:34 UTC | Type: | Bug
CC: | abaron, acanan, amureini, anil.dhingra, bazulay, fsimonce, higkoohk, iheim, jbiddle, jkt, lpeer, mgoldboi, scohen, yeylon | Story Points: | ---

Doc Text:

Previously, if sanlock was already running before vdsm was installed, the daemon kept running with the same privileges it had when the service started. This prevented some operations on file domains from being executed (sanlock lockspace add failure). Now, when vdsm is installed, sanlock is restarted so that the daemon can pick up the required privileges.
Description
Eyal Edri
2013-06-25 15:19:56 UTC
is7: added a 3.3.2 host with vdsm-4.10.2-24.0.el6ev.x86_64 to the is7 setup; the storage pool was successfully created.

I still get this error with:

oVirt Engine - 3.3.0-0.3.beta1.el6
RHEL - 6 - 4.el6.10
kernel - 2.6.32-358.14.2.el6.x86_64
KVM - 0.12.1.2-2.355.el6
libvirt - 1.1.1-1.el6
vdsm - 4.12.0-0.1.rc3.el6
SPICE - 0.12.0-12.el6
CPU - Intel SandyBridge Family - Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz

Created attachment 790874 [details]
The Error Page!

There is general info in the picture. The error message is still:
2013-08-27 18:01:22,929 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-2) Failed in CreateStoragePoolVDS method
2013-08-27 18:01:22,929 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-2) Error code AcquireHostIdFailure and error message VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('551319ca-ea70-4cd3-a1df-3bd2bac9c7fa', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
2013-08-27 18:01:22,932 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-2) Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('551319ca-ea70-4cd3-a1df-3bd2bac9c7fa', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
2013-08-27 18:01:22,933 ERROR [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-2) Command org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('551319ca-ea70-4cd3-a1df-3bd2bac9c7fa', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) (Failed with VDSM error AcquireHostIdFailure and code 661)
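The failure chain in the log above (sanlock rejects the "lockspace add" with EINVAL, which vdsm reports to the engine as AcquireHostIdFailure, code 661) can be sketched in Python. All class and function names below are hypothetical stand-ins for illustration, not vdsm's actual code:

```python
import errno

# Hypothetical stand-ins for the sanlock binding and vdsm error types;
# this only sketches how an EINVAL from "lockspace add" surfaces to
# the engine as AcquireHostIdFailure (error code 661).

class SanlockException(Exception):
    def __init__(self, errnum, message, strerror):
        super().__init__(errnum, message, strerror)
        self.errno = errnum

class AcquireHostIdFailure(Exception):
    code = 661  # vdsm error code reported to the engine

def add_lockspace(sd_uuid, host_id, lease_path):
    # Simulates a sanlock daemon running without the privileges it
    # needs on the lease file: it replies with "Invalid argument".
    raise SanlockException(errno.EINVAL,
                           "Sanlock lockspace add failure",
                           "Invalid argument")

def acquire_host_id(sd_uuid, host_id):
    # vdsm-style wrapper: translate the low-level sanlock failure into
    # the error the engine logs in CreateStoragePoolVDS.
    try:
        add_lockspace(sd_uuid, host_id, "/path/to/ids")
    except SanlockException as e:
        raise AcquireHostIdFailure(
            "Cannot acquire host id: (%r, %r)" % (sd_uuid, e)) from e

try:
    acquire_host_id("551319ca-ea70-4cd3-a1df-3bd2bac9c7fa", 1)
except AcquireHostIdFailure as e:
    print("error %d: %s" % (e.code, e))
```

The point of the sketch is that errno 22 (EINVAL) from the daemon is not a malformed request from vdsm: the running sanlock process simply lacks the privileges the lease operation needs.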
higkoo, please attach VDSM + engine logs.

Created attachment 790941 [details]
engine.log on server console
Last 10,000 lines of engine.log on the server.

Created attachment 790943 [details]
vdsm.log on node
Last 10,000 lines of the client vdsm.log. Thanks!
I restarted sanlock on both oVirt nodes in the cluster, attached the storage domain again, and it worked fine:

    [root@node1-3-3 ~]# service sanlock status
    sanlock (pid 11908 11906) is running...
    [root@node1-3-3 ~]# service sanlock restart
    Sending stop signal sanlock (11906):    [  OK  ]
    Waiting for sanlock (11906) to stop:    [  OK  ]
    Starting sanlock:                       [  OK  ]

This bug is currently attached to errata RHBA-2013:15291. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag. Otherwise, to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: what actions or circumstances cause this bug to present.
* Consequence: what happens when the bug presents.
* Fix: what was done to fix the bug.
* Result: what now happens when the actions or circumstances above occur. (NB: this is not the same as "the bug doesn't present anymore.")

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug. For further details on the Cause, Consequence, Fix, Result format, please refer to: https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes

Thanks in advance.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0040.html
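The restart workaround works because a Unix daemon keeps the supplementary groups it had when it started: if sanlock was already running when vdsm's installation added the sanlock account to new groups, the running process never sees them until it is restarted. A minimal sketch of that check (the group names here are illustrative, not vdsm's actual configuration):

```python
# Hedged sketch of the logic behind the fix: compare the groups the
# running daemon was started with against the groups the account is
# now configured with, and decide whether a restart is required.

def needs_restart(running_groups, configured_groups):
    """Return True if the running daemon is missing any group that the
    account is now configured with, i.e. a restart is required."""
    return not set(configured_groups) <= set(running_groups)

# Before restart: the daemon started with only its original group,
# so it cannot access leases owned by the newly configured groups.
print(needs_restart({"sanlock"}, {"sanlock", "qemu", "disk"}))   # True
# After restart: the process picked up the configured groups.
print(needs_restart({"sanlock", "qemu", "disk"},
                    {"sanlock", "qemu", "disk"}))                # False
```

This is why the shipped fix restarts sanlock during vdsm installation rather than changing any sanlock configuration: only a restart lets the daemon pick up the newly granted privileges.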