# Bug 875458
| Field | Value |
|---|---|
| Summary | create storage pool fails on Fedora 18 host: SanlockException(-203, 'Sanlock lockspace add failure', 'Sanlock exception') |
| Product | [Retired] oVirt |
| Component | vdsm |
| Reporter | Ohad Basan <obasan> |
| Assignee | Federico Simoncelli <fsimonce> |
| Status | CLOSED CURRENTRELEASE |
| Severity | high |
| Priority | urgent |
| Version | 3.1 RC |
| CC | abaron, acathrow, bazulay, dyasny, eedri, iheim, mgoldboi, teigland, ykaul |
| Target Milestone | --- |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Doc Type | Bug Fix |
| Story Points | --- |
| Last Closed | 2012-11-19 14:13:30 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | --- |
| Cloudforms Team | --- |
| Attachments | vdsm.log (attachment 642755) |
-203 is SANLK_WD_ERROR. Is wdmd running? Are there wdmd or sanlock errors in /var/log/messages?

On Fedora 18 this is not specific to sanlock; from what I can see, the method we currently use to set the sebool options is broken. In fact, other booleans are not set either:

    virt_use_nfs --> off
    virt_use_sanlock --> off
    sanlock_use_nfs --> off
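The checks described above can be run directly on the affected host. A minimal sketch, assuming a standard Fedora 18 layout (`getsebool` from the policycoreutils tools, wdmd and sanlock managed by systemd):

```shell
# Query the SELinux booleans that vdsm's %pre scriptlet should have enabled.
getsebool virt_use_nfs virt_use_sanlock sanlock_use_nfs

# -203 is SANLK_WD_ERROR: sanlock could not use the watchdog, so check
# that the watchdog multiplexing daemon (wdmd) is actually running.
systemctl status wdmd sanlock

# Scan the system log for wdmd/sanlock errors around the failure time.
grep -E 'wdmd|sanlock' /var/log/messages | tail -n 50
```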
The issue I'm hitting is:

    # semanage boolean -l | grep virt_use_nfs
    Traceback (most recent call last):
      File "/usr/sbin/semanage", line 25, in <module>
        import seobject
      File "/usr/lib64/python2.7/site-packages/seobject.py", line 30, in <module>
        import sepolgen.module as module
    ImportError: No module named sepolgen.module

(The failure occurs within the %pre scriptlet in the spec file.)
This should be fixed in:

    * Fri Nov 16 2012 Dan Walsh <dwalsh> - 2.1.12-34
    - Fix semanage to work without policycoreutils-devel installed

http://koji.fedoraproject.org/koji/buildinfo?buildID=366951
Can you try updating the policycoreutils package, re-triggering the vdsm %pre scriptlet (by updating the vdsm package), and checking whether the booleans are now set?

    virt_use_nfs --> on
    virt_use_sanlock --> on
    sanlock_use_nfs --> on
Thanks.
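The suggested recovery steps can be sketched as the following commands. This is a sketch under assumptions, not a confirmed procedure: `yum` is assumed as the package manager on Fedora 18, and `yum reinstall vdsm` is one way to re-run the %pre scriptlet if no newer vdsm build is available:

```shell
# Update to the fixed policycoreutils (>= 2.1.12-34) so semanage imports work.
yum -y update policycoreutils

# Re-run vdsm's %pre scriptlet by updating (or reinstalling) the package.
yum -y reinstall vdsm

# Confirm the booleans are now on.
getsebool virt_use_nfs virt_use_sanlock sanlock_use_nfs
```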
Federico, is your f18 instance fully updated? The problem has gone for me and I successfully connected a storage domain.

(In reply to comment #3)
> Federico, is your f18 instance fully updated?

Yes, but I hit the bug on policycoreutils-2.1.12-33 and was trying to figure out whether that could also have been your problem.

> the problem had gone for me and I successfully connected a storage domain

Good to know. Closing.
Created attachment 642755 [details]
vdsm.log

Description of problem:
Attaching a storage domain fails.

Version-Release number of selected component (if applicable):

How reproducible:
always

Steps to Reproduce:
1. Install ovirt on an f17 machine.
2. Attach an f18 host; create the bridge manually (if the networking daemon doesn't start, try disabling selinux) and reboot the host manually, because rebooting doesn't work.
3. Attach storage.

Actual results:
Storage fails to attach. vdsm.log:

    Thread-25376::DEBUG::2012-11-11 13:41:01,435::task::957::TaskManager.Task::(_decref) Task=`2c6bd14b-0795-4ccf-8155-59b2cf1a541b`::ref 0 aborting False
    Thread-25366::ERROR::2012-11-11 13:41:04,418::task::833::TaskManager.Task::(_setError) Task=`4268d372-c634-498c-82cf-8ed4ea3c5ef1`::Unexpected error
    Traceback (most recent call last):
      File "/usr/share/vdsm/storage/task.py", line 840, in _run
        return fn(*args, **kargs)
      File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
        res = f(*args, **kwargs)
      File "/usr/share/vdsm/storage/hsm.py", line 801, in createStoragePool
        return sp.StoragePool(spUUID, self.taskMng).create(poolName, masterDom, domList, masterVersion, safeLease)
      File "/usr/share/vdsm/storage/sp.py", line 569, in create
        self._acquireTemporaryClusterLock(msdUUID, safeLease)
      File "/usr/share/vdsm/storage/sp.py", line 510, in _acquireTemporaryClusterLock
        msd.acquireHostId(self.id)
      File "/usr/share/vdsm/storage/sd.py", line 426, in acquireHostId
        self._clusterLock.acquireHostId(hostId, async)
      File "/usr/share/vdsm/storage/safelease.py", line 175, in acquireHostId
        raise se.AcquireHostIdFailure(self._sdUUID, e)
    AcquireHostIdFailure: Cannot acquire host id: ('68d3a4af-ef8e-4842-b589-fe602636e692', SanlockException(-203, 'Sanlock lockspace add failure', 'Sanlock exception'))
    Thread-25366::DEBUG::2012-11-11 13:41:04,448::task::852::TaskManager.Task::(_run) Task=`4268d372-c634-498c-82cf-8ed4ea3c5ef1`::Task._run: 4268d372-c634-498c-82cf-8ed4ea3c5ef1 (None, 'cd697c32-2417-11e2-9a5e-001a4a231066', 'Default', '68d3a4af-ef8e-4842-b589-fe602636e692', ['68d3a4af-ef8e-4842-b589-fe602636e692'], 2, None, 5, 60, 10, 3) {} failed - stopping task
    Thread-25366::DEBUG::2012-11-11 13:41:04,448::task::1177::TaskManager.Task::(stop) Task=`4268d372-c634-498c-82cf-8ed4ea3c5ef1`::stopping in state preparing (force False)

Expected results:
Storage attaches successfully.

Additional info: