
Bug 847515

Summary: Not all required kernel modules are loaded for sanlock to run (creating an NFS storage pool fails)
Product: Red Hat Enterprise Linux 6
Reporter: Ilanit Stein <istein>
Component: ovirt-node
Assignee: Mike Burns <mburns>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 6.3
CC: abaron, acathrow, bazulay, bsarathy, chchen, cpelland, cshao, dyasny, fsimonce, gouyang, hadong, hateya, iheim, jbiddle, jboggs, leiwang, mburns, mjenner, oramraz, ovirt-maint, ycui, ykaul
Target Milestone: rc
Keywords: TestBlocker, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ovirt-node-2.5.0-3.el6
Doc Type: Bug Fix
Doc Text:
Previously, the sanlock service would not run because not all of its required kernel modules were loaded, and creating a storage pool failed as a result. The missing modules are now included, so creating storage pools no longer fails.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-28 16:38:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 847773    
Attachments:
Description          Flags
engine log           none
/var/log/messages    none

Description Ilanit Stein 2012-08-12 13:30:56 UTC
Description of problem:

On rhev-hypervisor6-6.3-20120808.0.rhev31.el6_3 (vdsm-4.9.6-27.0.el6_3.x86_64), the sanlock service is not running because not all required kernel modules are loaded. Creating a storage pool fails as a result.

On host:
=======

[root@white-vdse ~]# /etc/init.d/wdmd start
Loading the softdog kernel module: FATAL: Could not open '/lib/modules/2.6.32-279.5.1.el6.x86_64/kernel/drivers/watchdog/softdog.ko': No such file or directory
                                                           [FAILED]
Starting wdmd:                                             [  OK  ]
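The FATAL message above comes from wdmd's init script trying to load softdog.ko from the running kernel's module tree, which is missing on this image. A minimal sketch of that existence check (a hypothetical helper for diagnosis, not part of wdmd or vdsm):

```python
import os

def softdog_available(kernel_version, modules_root="/lib/modules"):
    """Return True if the softdog watchdog module file exists where
    the init script's modprobe would look for it."""
    path = os.path.join(modules_root, kernel_version,
                        "kernel/drivers/watchdog/softdog.ko")
    return os.path.isfile(path)

# On the affected image this returns False for the running kernel,
# because the entire watchdog/ driver directory was stripped.
```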

Error from vdsm.log:
===================

Thread-276::INFO::2012-08-12 11:05:02,613::safelease::160::SANLock::(acquireHostId) Acquiring host id for domain 95e87338-e784-42e1-a0ec-25901ba7c336 (id: 250)
Thread-276::ERROR::2012-08-12 11:05:02,614::task::853::TaskManager.Task::(_setError) Task=`c2be8ab3-b760-4654-a291-c1f8b179741c`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
  File "/usr/share/vdsm/storage/hsm.py", line 788, in createStoragePool
  File "/usr/share/vdsm/storage/sp.py", line 569, in create
  File "/usr/share/vdsm/storage/sp.py", line 510, in _acquireTemporaryClusterLock
  File "/usr/share/vdsm/storage/sd.py", line 415, in acquireHostId
  File "/usr/share/vdsm/storage/safelease.py", line 175, in acquireHostId
AcquireHostIdFailure: Cannot acquire host id: ('95e87338-e784-42e1-a0ec-25901ba7c336', SanlockException(2, 'Sanlock lockspace add failure', 'No such file or directory'))
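The first argument of the SanlockException is an errno value: 2 is ENOENT, which is where the trailing 'No such file or directory' string comes from. Sanlock cannot add the lockspace because its watchdog dependency is missing, and that errno bubbles up through vdsm's acquireHostId. A small sketch decoding it:

```python
import errno
import os

# SanlockException(2, 'Sanlock lockspace add failure', 'No such file or directory')
err = 2
assert err == errno.ENOENT   # errno 2 is ENOENT
print(os.strerror(err))      # -> No such file or directory
```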



How reproducible:
Always

Comment 1 Ilanit Stein 2012-08-12 13:32:31 UTC
Bug marked as a test blocker because it fails many test cases in the RHEV-H REST API test suite.

Comment 2 Ilanit Stein 2012-08-12 13:36:02 UTC
Created attachment 603775 [details]
engine log

Comment 4 Ilanit Stein 2012-08-12 13:58:19 UTC
Created attachment 603777 [details]
/var/log/messages

Comment 6 Ilanit Stein 2012-08-12 20:09:42 UTC
More info:
The whole 'watchdog' driver directory is missing:

[root@white-vdse ~]# ls  /lib/modules/2.6.32-279.5.1.el6.x86_64/kernel/drivers/
ata    cdrom  cpufreq  dma   firmware  infiniband  message  pci   uio  vhost
block  char   dca      edac  idle      md          net      scsi  usb  virtio

Comment 7 Mike Burns 2012-08-13 00:51:42 UTC
Upstream has already been fixed to include the watchdog directory:

http://gerrit.ovirt.org/#/c/6094/

Patch is already included in 6.4 stream.

Comment 8 cshao 2012-08-13 06:40:08 UTC
Test version:
rhev-hypervisor6-6.3-20120808.0.rhev31.el6_3
vdsm-4.9.6-27.0.el6_3.x86_64
rhevm-si13.2 (rhevm-3.1.0-11.el6ev.noarch)

Test result:
1. If the compatibility version is set to 3.0 when creating data centers, connecting NFS/iSCSI/FC storage succeeds.
2. If the compatibility version is set to 3.1, the same issue can be reproduced.

Comment 16 errata-xmlrpc 2013-02-28 16:38:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0556.html