Bug 740289
| Summary: | Failed to create storage domains in rhev-h | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Mike Burns <mburns> |
| Component: | ovirt-node | Assignee: | Mike Burns <mburns> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 6.2 | CC: | abaron, apevec, bazulay, cshao, gouyang, iheim, leiwang, mburns, moli, ovirt-maint, ycui, ykaul |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-node-2.0.2-0.10.gitee3b50c.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-12-06 19:28:18 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | vdsm.log, Patch, Follow-up Patch | | |
Description Mike Burns 2011-09-21 14:47:03 UTC
Created attachment 524214 [details]
vdsm.log
A couple of possibly helpful, but possibly not, comments:

- This occurred with vdsm builds from -96.1 up to the latest git (something around -104).
- It only seems to be a problem with creating storage domains. Adding a RHEV-H host to an existing datacenter with a storage domain already running works correctly.

I've played around with this some more and found a few things:

- /var/db needs to be made writable (bug 740406).
- Re-mounting / as rw (`mount -o remount,rw /`) and then restarting vdsmd allows storage domain creation to succeed.

Next test: clean iSCSI storage, fresh boot of RHEV-H, add an iSCSI storage domain. It failed with the same error.

Cleaned up the storage (vgremove, pvremove) and added it again; this failed with a different error (it cannot find the VG that I removed above):

```
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 876, in _run
  File "/usr/share/vdsm/storage/hsm.py", line 1199, in public_createStorageDomain
  File "/usr/share/vdsm/storage/sdf.py", line 60, in create
  File "/usr/share/vdsm/storage/blockSD.py", line 282, in create
  File "/usr/share/vdsm/storage/lvm.py", line 829, in getVGbyUUID
  File "/usr/share/vdsm/storage/lvm.py", line 154, in __getattr__
AttributeError: Failed reload: d2bd6b3e-cc04-489a-a42c-26dbcb8929b7
```

Cleaned up again, restarted vdsmd, and added the storage domain again -- success.

I did not see this issue with rhev-h-6.2-0.17.2, and when I re-ran the test today I also did not see it.

Testing steps:
1. Install RHEV-H.
2. Configure the network.
3. Drop to a shell and check multipath, vgs, pvs; make sure the LUN that RHEV-H is not installed on is not partitioned.
4. Register to RHEV-M.
5. Approve the host and add FC storage.
6. Adding the FC storage succeeds.

(In reply to comment #5)
> I did not see this issue during rhev-h-6.2-0.17.2

This is an issue with "pure" 6.2 RHEV-H builds; -0.17.2 is a "hybrid" build (6.1.z plus only libvirt/kvm from 6.2). The real reason is wrong LVM behaviour, as described in bug 740575.

*** This bug has been marked as a duplicate of bug 740575 ***

We need a workaround in ovirt-node to make this work. The suggestion is to wrap scsi_id so that it does s/ +/_/.

Actually, the latest workaround attempt in RHEV-H is to put this in multipath.conf:

```
defaults {
    getuid_callout "/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n"
}
```

--replace-whitespace does s/ +/_/ (a brief illustration follows below).

The patch will do three things:

1. Put the getuid_callout workaround in multipath.conf.
2. Drop the lvm.conf change that set verify_udev_operations = 1 (the default is 0).
3. Remove this workaround: `sed -i -e '/^ENV{DM_UDEV_DISABLE_DM_RULES_FLAG}/d' /lib/udev/rules.d/10-dm.rules`

A node built with these changes can successfully autoinstall and create storage domains. It did, however, uncover a couple of bugs in the TUI: disk selection was listing /dev/sda instead of /dev/mapper/<wwid>. A patch for this issue is WIP.
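As a side note on the getuid_callout workaround above: its net effect is only the whitespace handling in the device ID that scsi_id reports. A minimal illustration, with made-up example IDs (real values depend on the storage):

```bash
# Without the flag, a LUN's ID can contain literal spaces; those spaces are what
# the s/ +/_/ suggestion above is guarding against:
/lib/udev/scsi_id --whitelisted --device=/dev/sda
#   e.g.  ExampleVendor ExampleModel  SERIAL123    (hypothetical output)

# With --replace-whitespace, runs of whitespace become underscores (s/ +/_/):
/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
#   e.g.  ExampleVendor_ExampleModel_SERIAL123     (hypothetical output)
```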
Created attachment 525841 [details]
Patch
Patch for multipath.conf and removing the workarounds
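For orientation, the changes the patch describes amount to roughly the following on a node. This is only a sketch, not the attached patch itself; the lvm.conf edit in step 2 is an assumed way of reverting the earlier change.

```bash
# 1. getuid_callout workaround in multipath.conf (stanza quoted from the comments above).
#    Writing the file like this is illustrative only; it replaces any existing config.
cat > /etc/multipath.conf <<'EOF'
defaults {
    getuid_callout "/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n"
}
EOF

# 2. Drop the earlier lvm.conf change so verify_udev_operations falls back to its
#    default of 0 (assumed revert; deletes any explicit setting of the option).
sed -i -e '/verify_udev_operations/d' /etc/lvm/lvm.conf

# 3. Stop applying the old workaround that deleted the DM_UDEV_DISABLE_DM_RULES_FLAG
#    rule; i.e. the following line is simply no longer run, so the rule stays in place:
#      sed -i -e '/^ENV{DM_UDEV_DISABLE_DM_RULES_FLAG}/d' /lib/udev/rules.d/10-dm.rules
```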
Created attachment 525938 [details]
Follow-up Patch
Patch to clean up the previously mentioned TUI issues
(Collaborated on by Joey Boggs)
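As context for the TUI fix: disk selection should present the multipath node rather than the underlying SCSI device. A small, hypothetical shell check of that mapping (the device name is an example, and map names are assumed to be WWIDs, as the /dev/mapper/<wwid> wording above implies):

```bash
# Example underlying device; substitute the real one.
dev=/dev/sda

# WWID as reported by the same callout configured in multipath.conf above.
wwid=$(/lib/udev/scsi_id --whitelisted --replace-whitespace --device="$dev")

# The multipath node the TUI should list instead of the raw /dev/sdX path.
ls -l "/dev/mapper/$wwid"
```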
Testing:

- Install RHEV-H.
- Verify the TUI has the right devices (it should show multipath devices where appropriate).
- Verify the node booted to a multipath device.
- There should be no "falling back to direct device creation" errors (or similar) in the boot log.
- lvm.conf should not have verify_udev_operations set.
- Ensure the rule ENV{DM_UDEV_DISABLE_DM_RULES_FLAG} exists in 10-dm.rules.
- Create various storage domains using RHEV-H as the host that creates them.

(A rough script covering these checks is sketched at the end of this report.)

Verified this bug on RHEV-H 6.2-20111010.2.el6. Creating an FC storage domain and a software iSCSI domain succeeded, and there is no such error in vdsm.log. We cannot check this bug on a hardware iSCSI machine because it is blocked by bug #742433, so the status is changed to Verified; if the issue reproduces on a hardware iSCSI machine, I will reopen it.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1783.html
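As referenced in the testing checklist above, here is a rough, non-authoritative sketch of the manual checks on a booted node (standard RHEL 6 file locations assumed):

```bash
# lvm.conf should not have verify_udev_operations set.
grep -n 'verify_udev_operations' /etc/lvm/lvm.conf && echo "FAIL: option still set in lvm.conf"

# The DM_UDEV_DISABLE_DM_RULES_FLAG rule should still exist in 10-dm.rules.
grep -q 'ENV{DM_UDEV_DISABLE_DM_RULES_FLAG}' /lib/udev/rules.d/10-dm.rules \
  || echo "FAIL: rule missing from 10-dm.rules"

# Multipath maps should be present, and the node should be using mapper devices.
multipath -ll
mount | grep /dev/mapper

# No "falling back to direct device creation" (or similar) messages in the boot log.
grep -i 'falling back' /var/log/boot.log /var/log/messages 2>/dev/null
```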