Bug 1602776
| Summary: | Fail to activate FC SD with Permission denied error on metadata file | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | guy chen <guchen> |
| Component: | vdsm | Assignee: | Amit Bawer <abawer> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Avihai <aefrat> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.5 | CC: | dagur, guchen, lsurette, nsoffer, srevivo, tnisan, ycui |
| Target Milestone: | ovirt-4.4.1 | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-12-19 13:48:03 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1163890, 1544370 | | |
| Bug Blocks: | | | |
Description
guy chen
2018-07-18 13:10:36 UTC
+1 on my environment. Encountered the same issue with a Fibre Channel data store.

I had the same issue on one of the scale team hosts; it was fixed by rebooting the host. The error was bad permissions on the /dev/dm-N device used by the metadata volume. We have udev rules ensuring correct permissions, but maybe something in the particular way the host was removed caused this issue.

The host was re-provisioned while the old host object still existed on the engine. I did a reinstall without success, then removed and re-added the host.

Can you reproduce this when vdsm is removed like this?

1. Remove the host from RHVM.
2. Uninstall vdsm:

       service vdsmd stop
       service supervdsm stop
       yum remove -y vdsm* libvirt*

3. Add the host back to RHV.

Not clear why you are removing files manually. Packages should remove their own files. There is a good chance that this issue is caused by bug 1562369. Let's retest this once that bug is verified.

Oops, wrong bug - fixed to bug 1331978.

Reboot helped solve this issue.

What udev rules did we add?

    # udevadm info --query=all --name=/dev/dm-2
    P: /devices/virtual/block/dm-2
    N: dm-2
    L: 10
    S: disk/by-id/dm-name-3600a098038304437415d4b6a59676d43
    S: disk/by-id/dm-uuid-mpath-3600a098038304437415d4b6a59676d43
    S: disk/by-id/lvm-pv-uuid-9dbJgB-a11p-l9OE-cD5d-MJHL-D3Dx-WwdImo
    S: mapper/3600a098038304437415d4b6a59676d43
    E: DEVLINKS=/dev/disk/by-id/dm-name-3600a098038304437415d4b6a59676d43 /dev/disk/by-id/dm-uuid-mpath-3600a098038304437415d4b6a59676d43 /dev/disk/by-id/lvm-pv-uuid-9dbJgB-a11p-l9OE-cD5d-MJHL-D3Dx-WwdImo /dev/mapper/3600a098038304437415d4b6a59676d43
    E: DEVNAME=/dev/dm-2
    E: DEVPATH=/devices/virtual/block/dm-2
    E: DEVTYPE=disk
    E: DM_ACTIVATION=0
    E: DM_MULTIPATH_TIMESTAMP=1532874120
    E: DM_NAME=3600a098038304437415d4b6a59676d43
    E: DM_SUBSYSTEM_UDEV_FLAG0=1
    E: DM_SUSPENDED=0
    E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
    E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
    E: DM_UDEV_RULES_VSN=2
    E: DM_UUID=mpath-3600a098038304437415d4b6a59676d43
    E: ID_FS_TYPE=LVM2_member
    E: ID_FS_USAGE=raid
    E: ID_FS_UUID=9dbJgB-a11p-l9OE-cD5d-MJHL-D3Dx-WwdImo
    E: ID_FS_UUID_ENC=9dbJgB-a11p-l9OE-cD5d-MJHL-D3Dx-WwdImo
    E: ID_FS_VERSION=LVM2 001
    E: MAJOR=253
    E: MINOR=2
    E: MPATH_SBIN_PATH=/sbin
    E: SUBSYSTEM=block
    E: TAGS=:systemd:
    E: USEC_INITIALIZED=48071

Lowering priority, since there is an easy workaround and the use case is not clear.

Regarding udev rules, we install /usr/lib/udev/rules.d/12-vdsm-lvm.rules. These rules ensure that devices get the correct owner and group (vdsm:kvm) when they are added. It is possible that an existing device lost the owner:group while vdsm was removed manually, and when the host was added back, the permissions were not fixed since the device did not change. So this sounds like an edge case that may be possible to fix by triggering udev rules during installation, but I'm not sure it is worth the time. I suggest moving this to 4.3 for now.

Fixing the depends bug again.

No pending requests from QE.

This bug has not been marked as a blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 6 is still unanswered since 2018-07-29. I think we should close this bug since we don't have enough data to tell whether this is a real issue that may affect real use of the system.

Closed per comment #15
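As a closing illustration of the workaround discussed in the comments above (re-applying udev rules to existing devices instead of rebooting), here is a minimal shell sketch. It is an assumption-laden example, not a procedure recorded in this bug: the commented rule is only a rough approximation of the kind of OWNER/GROUP rule a file like 12-vdsm-lvm.rules could contain, and /dev/dm-2 is borrowed from the udevadm output above.

    # A udev rule of roughly this shape sets ownership when a device-mapper
    # device is added or changed (illustrative only; not the shipped rule file):
    #   ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-*", OWNER="vdsm", GROUP="kvm"

    # Check current ownership of the device backing the metadata volume;
    # on a healthy RHV host it should be vdsm:kvm.
    ls -l /dev/dm-2

    # Re-run udev rules against existing block devices so ownership is
    # reapplied without a reboot, then wait for the event queue to drain.
    udevadm trigger --subsystem-match=block --action=change
    udevadm settle

Note that udevadm trigger can only re-apply rules that are present on disk, so if the rule file itself was lost when vdsm was removed manually, the vdsm package would have to be reinstalled first.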