Bug 1275381
Summary: Error while executing action New SAN Storage Domain: Cannot zero out volume
Product: [oVirt] ovirt-engine | Reporter: Devin Bougie <devin.bougie>
Component: General | Assignee: Ala Hino <ahino>
Status: CLOSED NOTABUG | QA Contact: Aharon Canan <acanan>
Severity: unspecified | Docs Contact:
Priority: unspecified
Version: 3.5.5 | CC: ahino, amureini, bugs, devin.bougie, tnisan, ybronhei, ylavi
Target Milestone: ovirt-3.6.1 | Flags: tnisan: ovirt-3.6.z?, tnisan: ovirt-4.0.0?, rule-engine: planning_ack?, rule-engine: devel_ack?, rule-engine: testing_ack?
Target Release: 3.6.1
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version: | Doc Type: Bug Fix
Doc Text:
Cause:
The vdsm user was defined in an external identity store and was not created by vdsm.
Consequence:
udev did not apply permissions to the storage domain paths; as a result, accessing those paths failed with a permission denied error.
Fix:
Let vdsm create the vdsm user locally, or manually reload the udev rules by running "udevadm control --reload".
Result:
Permissions are applied by udev and accessing the storage domain succeeds.
IMPORTANT:
The recommended configuration is to let vdsm create the vdsm user locally. Otherwise the results may be unexpected; in this bug, for example, permissions were not set appropriately.
Story Points: ---
Clone Of: | Environment:
Last Closed: 2015-11-10 06:37:07 UTC | Type: Bug
Regression: --- | Mount Type: ---
Documentation: --- | CRM:
Verified Versions: | Category: ---
oVirt Team: Storage | RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- | Target Upstream Version:
Embargoed:
Attachments:
Description  Devin Bougie  2015-10-26 17:46:04 UTC
Hi Devin,

Can you please attach full logs from the engine and the SPM host? Snippets are not always as effective.

Created attachment 1086913 [details]
engine.log
Created attachment 1086915 [details]
vdsm.log
Hi Tal,

engine.log and vdsm.log have now been attached. Please note that these are both running on the same host, although I get the same error if trying to create the iscsi domain from another. I tried adding the domain again today, if that helps narrow down where to look in the logs.

Just in case it helps, the storage domain is not listed in the ovirt-engine gui (web interface) or in ovirt-shell. However, when I try to add it again, it first says "The following LUNs are already in use: ...," but if I "Approve operation" I get the same "Cannot zero out volume" error. If I try to import, I can log into the target but it doesn't show any "Storage Name / Storage ID (VG Name)" to import.

Please let me know if there is anything more I can provide or anything else I can do to help.

Thanks,
Devin

Hi Devin,

Can you please run the following commands and send us the output?

$ ls -lhZ `realpath /dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/*`
(note the ` character, it is not '; copy & paste the command)
$ cat /usr/lib/udev/rules.d/12-vdsm-lvm.rules
$ ls /etc/udev/rules.d/
$ cat /etc/udev/rules.d/12-vdsm-lvm.rules (the file may not exist)

Thanks,
Ala

Hi Ala,

Here you go. Please let me know if the id in the first command may have changed (I keep testing different combinations of hosts, etc.).

Thanks!
Devin

[root@lnx84 ~]# ls -lhZ `realpath /dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/*`
brw------- root qemu ? /dev/dm-20
brw------- root qemu ? /dev/dm-21
brw-rw---- root sanlock ? /dev/dm-22
brw-rw---- root sanlock ? /dev/dm-23
brw------- root qemu ? /dev/dm-24

[root@lnx84 ~]# cat /usr/lib/udev/rules.d/12-vdsm-lvm.rules
#
# Copyright 2010 Red Hat, Inc. and/or its affiliates.
#
# Licensed to you under the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version. See the files README and
# LICENSE_GPL_v2 which accompany this distribution.
#
# Udev rules for LVM.
#
# These rules create symlinks for LVM logical volumes in
# /dev/VG directory (VG is an actual VG name). Some udev
# environment variables are set (they can be used in later
# rules as well):
#   DM_LV_NAME - logical volume name
#   DM_VG_NAME - volume group name
#   DM_LV_LAYER - logical volume layer (blank if not set)

# "add" event is processed on coldplug only, so we need "change", too.
ACTION!="add|change", GOTO="lvm_end"

# Volumes used as vdsm images
# WARNING: we cannot use OWNER, GROUP and MODE since using any of them will
# change the selinux label to the default, causing vms to pause after extending
# disks. https://bugzilla.redhat.com/1147910
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}", GOTO="lvm_end"

# Other volumes used by vdsm
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]_MERGE", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="_remove_me_[a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9]_[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="metadata", MODE:="0600", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="ids", MODE:="0660", OWNER:="vdsm", GROUP:="sanlock", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="inbox", MODE:="0600", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="outbox", MODE:="0600", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="leases", MODE:="0660", OWNER:="vdsm", GROUP:="sanlock", GOTO="lvm_end"

# FIXME: make special lvs vdsm-only readable, MODE doesn't work on rhel6
LABEL="lvm_end"

[root@lnx84 ~]# ls /etc/udev/rules.d/
12-ovirt-iosched.rules  60-ipath.rules  70-persistent-ipoib.rules

[root@lnx84 ~]# cat /etc/udev/rules.d/12-vdsm-lvm.rules
cat: /etc/udev/rules.d/12-vdsm-lvm.rules: No such file or directory

Oved,

Could you please check whether we have any infra issues here?

Thanks.

(In reply to Ala Hino from comment #7)
> Oved,
>
> Could you please check whether we have any infra issues here?
>
> Thanks.

To be more specific, it seems there is an installation issue. Yaniv, can you take a look?
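A note on the rules dumped above: the long bracket expressions all encode the same shape, an oVirt UUID in 8-4-4-4-12 lowercase-hex form, spelled out character by character because udev match patterns have no `{n}` repetition operator. As a hedged illustration (the helper function is hypothetical, not part of the rules file or of vdsm), the equivalent check in a shell can use an extended regular expression:

```shell
# Hypothetical helper showing the name shape the udev rules match.
# udev globs cannot express {8}, so the rules repeat [a-f0-9] literally;
# grep -E can state the same pattern compactly.
is_ovirt_uuid() {
    printf '%s\n' "$1" |
        grep -Eq '^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$'
}

# The VG from this report matches; the special LV names do not,
# which is why they get their own dedicated rules.
is_ovirt_uuid d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10 && echo "matches"
is_ovirt_uuid metadata || echo "no match"
```

This also explains why `metadata`, `ids`, `inbox`, `outbox`, and `leases` each need an explicit rule: they are fixed names, not UUIDs, so the generic image rule never fires for them.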
One more piece of info: there is a permission denied issue:

Thread-33390::DEBUG::2015-10-27 12:18:50,223::blockSD::535::Storage.Misc.excCmd::(create) /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct (cwd None)
Thread-33390::DEBUG::2015-10-27 12:18:50,235::blockSD::535::Storage.Misc.excCmd::(create) FAILED: <err> = "/usr/bin/dd: failed to open '/dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/metadata': Permission denied\n"; <rc> = 1

Hi Devin,

Could you please run the following commands:

$ cat /etc/passwd | grep vdsm
$ cat /etc/group | grep vdsm

Vdsm installs the vdsm-lvm.rules file (you have it in the spec); you should figure out how it is gone in this env ... the last commit in this area is http://gerrit.ovirt.org/30869 - check if something changed.

Restoring needinfo on Devin.

Hi Ala,

The vdsm user is in our LDAP domain, so it's not in the local /etc/passwd.

[root@lnx84 ~]# id vdsm
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),106(sanlock)
[root@lnx84 ~]# cat /etc/passwd | grep vdsm
[root@lnx84 ~]# cat /etc/group | grep vdsm
kvm:x:36:qemu,sanlock,vdsm
qemu:x:107:vdsm,sanlock
sanlock:x:106:sanlock,vdsm

As for vdsm-lvm.rules, I don't see any packages that provide this. Should I try creating one manually?

[root@lnx84 ~]# yum provides */vdsm-lvm.rules
Loaded plugins: langpacks, versionlock
No matches found

Thanks,
Devin

Sorry, to be clear: /usr/lib/udev/rules.d/12-vdsm-lvm.rules exists, but there is no /etc/udev/rules.d/12-vdsm-lvm.rules. A `yum provides */*vdsm-lvm.rules` doesn't show anything that provides an /etc/udev/rules.d/*vdsm-lvm.rules.

Thanks,
Devin

Hi Devin,

Could you please do the following for me?

1) We want to see the versions of all installed (not only vdsm-related) packages. Please run the following command and send the output:
   $ rpm -qa
2) We want to see udev logs when creating the storage domain:
   a. Enable udev debug logging by running:
      $ udevadm control --log-priority=debug
   b. Write a marker to the log before creating the storage domain. Please run:
      $ logger "BEFORE CREATING SCSI STORAGE DOMAIN"
   c. Create the storage domain
   d. Write a marker to the log to indicate the end of creating the storage domain:
      $ logger "AFTER CREATING SCSI STORAGE DOMAIN"

Please send us /var/log/messages.

Thank you,
Ala

Created attachment 1088660 [details]
installed packages
Here's the output of rpm -qa.
Thanks!
Devin
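The grep exchange above is the crux of the diagnosis: `id vdsm` resolved the user through sssd/LDAP while `grep vdsm /etc/passwd` came back empty, and only the local case is what vdsm's provisioning expects. A minimal sketch of that distinction (the function name is mine, not part of vdsm):

```shell
# Hypothetical helper: succeeds only if the named account exists in the
# local /etc/passwd, i.e. would resolve even without sssd/LDAP.
# Note that `id` and `getent passwd` also resolve LDAP-backed users,
# so they cannot tell the two cases apart -- the trap hit in this bug.
is_local_user() {
    grep -q "^$1:" /etc/passwd
}

if is_local_user vdsm; then
    echo "vdsm is a local user"
else
    echo "vdsm is not in /etc/passwd (LDAP-only or missing)"
fi
```

On the reporter's host the `else` branch fires, matching the empty grep output in the transcript.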
Created attachment 1088661 [details]
udev logs
And, here are the udev logs.
Thanks again,
Devin
Somehow I don't see udev logs. Could you please do the following? I apologize for asking so much.

1) Enable udev debug logs:
   $ udevadm control --log-priority=debug
2) Redirect journalctl to a temp file and send the file:
   $ journalctl -f > temp.log

If possible, start step #2 just before creating the domain and stop it immediately after the failure; this way I can focus on the logs related to creating the domain.

Thank you,
Ala

Created attachment 1089132 [details]
udev logs
Hi Ala,

I've updated the "udev logs" attachment, this time with the actual logs. Thanks for taking a look!

Devin

Thanks Devin. It seems the owner of a few directories is set to root rather than to vdsm.

For the sake of the test, could you please add a vdsm user rather than using the one in ldap? It should have the following info:
uid=36(vdsm) gid=36(kvm) groups=36(kvm),179(sanlock),107(qemu)

After adding the user, please try to create the storage domain again.

Thanks,
Ala

Unfortunately nothing changed after creating a local vdsm user:

------
[root@lnx84 ~]# grep vdsm /etc/passwd
vdsm:x:36:36::/home/vdsm:/bin/bash
[root@lnx84 ~]# grep vdsm /etc/group
kvm:x:36:qemu,sanlock,vdsm
qemu:x:107:vdsm,sanlock
sanlock:x:106:sanlock,vdsm,qemu,vdsm
[root@lnx84 ~]# id vdsm
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),106(sanlock)
------

Any more suggestions would be greatly appreciated.

Thanks,
Devin

One thing is obvious: something is wrong with udev's handling of the vdsm user. Could you please perform the following:

1) Reload the udev rules. Please run:
   $ udevadm control --reload
   Try to create the domain and see if it goes any better. If not ...
2) Restart the udev service:
   $ systemctl restart systemd-udevd.service
   Try again to create the domain.

Hi Ala,

After "udevadm control --reload", the error changed to "Error while executing action Attach Storage Domain: Internal block device read failure". However, the domain remained in the gui with a status of "Unattached," and I was eventually able to attach and activate it. Thank you!

Unless you have any other thoughts on how to avoid this in the future, I will remember to try "udevadm control --reload" the next time this happens.

Thanks again,
Devin

Good news! To help us better understand the issue, could you please elaborate more on your deployment - mainly, explain how you installed oVirt, how you set up your environment to work with ldap, when the vdsm user was created in ldap, etc.
In addition, could you please remove the local vdsm user, reload the udev rules, and see if you are still able to create domains or you get the original error again?

Thank you,
Ala

Hi Ala,

I installed ovirt following the quick start guide (install ovirt-release, install ovirt-engine, run engine-setup). This was on top of an EL7.1 host already bound to our AD domain and configured to use sssd for nss, pam, and autofs; our nsswitch.conf has "files sss" for passwd, shadow, group, initgroups, and automount. The vdsm user had been created in AD some time ago - before the OS was installed on any of our ovirt hosts.

It does look like vdsmd requires a local vdsm user at some point in the process, but it's not really consistent:
- I set up a clean host without vdsm in /etc/passwd, and all of the /dev/dm-* devices are owned by root. That host could not join the iSCSI storage domain.
- I manually changed the ownership of the appropriate devices to vdsm, and everything started working. However, after a "systemctl restart vdsmd", all of the dm- devices would change back to root.
- I created a local vdsm user, restarted vdsmd, and the device ownership changed to vdsm.
- I deleted the local vdsm user, restarted vdsmd, ran "udevadm control --reload", and rebooted. All of the devices continued to be owned by vdsm.

I'm sorry that's probably not very helpful. Please let me know if you have any more questions or requests.

Thanks,
Devin

Indeed, vdsm looks for a local user and, if not found, creates it; otherwise a local user is not created.

I will close this one as I don't think we have an issue here. I will also try to investigate deeper on an environment where the vdsm user is defined in ldap rather than locally created by vdsm, and see if this is supported.

Thank you,
Ala

(In reply to Ala Hino from comment #28)
> Indeed, vdsm looks for a local user and, if not found, creates it;
> otherwise a local user is not created.
>
> I will close this one as I don't think we have an issue here.
> I will also try to investigate deeper on an environment where we have vdsm defined in
> ldap rather than locally created by vdsm and see if this is supported.

Ala, let's document this behavior please.

Done
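The closing summary describes vdsm's provisioning behavior: look for the user, create it only if absent. A hedged sketch of that decision (the function and the echoed useradd flags are illustrative, not vdsm's actual packaging scriptlet; the uid/gid values are taken from the `id vdsm` output earlier in this report, and `echo` stands in for the privileged command so the sketch is safe to run):

```shell
# Illustrative sketch of "create the vdsm user only if it doesn't exist".
# Because getent also resolves LDAP-backed accounts, a pre-existing LDAP
# vdsm user makes this check succeed and skips local creation -- which is
# how this bug's configuration arose in the first place.
ensure_vdsm_user() {
    if getent passwd vdsm >/dev/null; then
        echo "vdsm user already exists; nothing to do"
    else
        echo "would run: useradd -u 36 -g kvm -G qemu,sanlock vdsm"
    fi
}

ensure_vdsm_user
```

On the reporter's hosts, the LDAP account satisfied the existence check, so no local user was created, and udev's ownership rules then failed to apply until "udevadm control --reload".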