Bug 1655489 - [downstream clone - 4.2.8] RHVH enters emergency mode when updated to the latest version and rebooted twice
Summary: [downstream clone - 4.2.8] RHVH enters emergency mode when updated to the lat...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: imgbased
Version: 4.2.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.8
Target Release: ---
Assignee: Yuval Turgeman
QA Contact: Huijuan Zhao
Docs Contact: Rolfe Dlugy-Hegwer
URL:
Whiteboard:
Depends On: 1636028
Blocks:
 
Reported: 2018-12-03 09:46 UTC by RHV bug bot
Modified: 2020-01-15 21:04 UTC
CC: 24 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Presence of /etc/multipath/wwids with a local disk WWID from when local devices weren't blacklisted. Consequence: RHV-H enters emergency mode when updated to the latest version and rebooted twice. Fix: Removing /etc/multipath/wwids fixes the issue. To do that upon upgrade, imgbased has been changed to call vdsm-tool configure --force in the new layer, using SYSTEMD_IGNORE_CHROOT. Result: The issue has been fixed.
Clone Of: 1636028
Environment:
Last Closed: 2019-01-22 12:44:15 UTC
oVirt Team: Node
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3640361 0 None None None 2018-12-03 09:52:01 UTC
Red Hat Product Errata RHBA-2019:0116 0 None None None 2019-01-22 12:44:20 UTC
oVirt gerrit 95725 0 None MERGED Reconfigure vdsm in upgrade 2021-02-02 09:07:45 UTC
oVirt gerrit 95778 0 None MERGED Reconfigure vdsm in upgrade 2021-02-02 09:07:46 UTC

Description RHV bug bot 2018-12-03 09:46:27 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1636028 +++
======================================================================

Description of problem:

RHVH enters emergency mode when updated to the latest version and rebooted twice; the problem seems to be a missing boot partition.

Version-Release number of selected component (if applicable):
4.2.6.1-0.20180907.0

How reproducible:
Always

Steps to Reproduce:
1. Install RHVH 4.2 version 20180813.0
2. Upgrade to version 4.2.6.0-0.20180828.2 (next version) and reboot
3. Upgrade to version 20180907.0 (next and latest version) and reboot
4. Reboot again; the hypervisor will enter emergency mode

Actual results:
The hypervisor enters emergency mode.

Expected results:
The hypervisor boots normally.

Additional info:
All of the hypervisor RPM updates finished correctly, and the first time it booted into the latest version it worked as expected; the problem appears when the hypervisor is rebooted again.
The hypervisor enters emergency mode because it's not able to mount '/boot'; the boot partition is missing:
~~~
[root@rhvh-42-1 ~]# ls /dev/sda*
/dev/sda /dev/sda2
~~~

It does not appear in sysfs either:
~~~
ls /sys/block/sda/ -l
total 0
-r--r--r--. 1 root root 4096 Oct  4 11:41 alignment_offset
lrwxrwxrwx. 1 root root    0 Oct  4 11:41 bdi -> ../../../../../../../../virtual/bdi/8:0
-r--r--r--. 1 root root 4096 Oct  4 11:41 capability
-r--r--r--. 1 root root 4096 Oct  4 11:41 dev
lrwxrwxrwx. 1 root root    0 Oct  4 09:43 device -> ../../../1:0:0:0
-r--r--r--. 1 root root 4096 Oct  4 11:41 discard_alignment
-r--r--r--. 1 root root 4096 Oct  4 11:41 events
-r--r--r--. 1 root root 4096 Oct  4 11:41 events_async
-rw-r--r--. 1 root root 4096 Oct  4 11:41 events_poll_msecs
-r--r--r--. 1 root root 4096 Oct  4 11:41 ext_range
-r--r--r--. 1 root root 4096 Oct  4 11:41 hidden
drwxr-xr-x. 2 root root    0 Oct  4 09:43 holders
-r--r--r--. 1 root root 4096 Oct  4 11:41 inflight
drwxr-xr-x. 2 root root    0 Oct  4 11:41 integrity
drwxr-xr-x. 2 root root    0 Oct  4 11:41 power
drwxr-xr-x. 3 root root    0 Oct  4 09:43 queue
-r--r--r--. 1 root root 4096 Oct  4 11:41 range
-r--r--r--. 1 root root 4096 Oct  4 11:41 removable
-r--r--r--. 1 root root 4096 Oct  4 11:41 ro
drwxr-xr-x. 5 root root    0 Oct  4 09:43 sda2
-r--r--r--. 1 root root 4096 Oct  4 11:41 size
drwxr-xr-x. 2 root root    0 Oct  4 09:43 slaves
-r--r--r--. 1 root root 4096 Oct  4 11:41 stat
lrwxrwxrwx. 1 root root    0 Oct  4 09:43 subsystem -> ../../../../../../../../../class/block
drwxr-xr-x. 2 root root    0 Oct  4 11:41 trace
-rw-r--r--. 1 root root 4096 Oct  4 11:41 uevent
~~~

Multipath errors can be seen at boot time, but it is not clear whether they are related to the actual problem:
~~~
[     4.864707] device-mapper: multipath service-time: version 0.3.0 loaded
[     4.865292] device-mapper: table: 253:5: multipath: error getting device
~~~

Running 'partprobe /dev/sda' makes the device appear again, but it's still not possible to mount /boot.
~~~
[root@rhvh-42-1 ~]# ls /dev/sda*
/dev/sda /dev/sda1 /dev/sda2

[root@rhvh-42-1 ~]# mount /boot

[root@rhvh-42-1 ~]# mount | grep /boot

[root@rhvh-42-1 ~]# ls -l /boot/
total 72072
drwxr-xr-x. 3 root root     4096 Aug 28 16:37 boom
-rw-r--r--. 1 root root   147859 Aug 10 18:59 config-3.10.0-862.11.6.el7.x86_64
drwxr-xr-x. 3 root root     4096 Aug 28 16:35 efi
-rw-r--r--. 1 root root   192572 Apr  5  2016 elf-memtest86+-5.01
drwxr-xr-x. 2 root root     4096 Aug 28 16:47 extlinux
drwx------. 5 root root     4096 Aug 28 16:48 grub2
-rw-r--r--. 1 root root 62734804 Aug 28 16:52 initramfs-3.10.0-862.11.6.el7.x86_64.img
drwxr-xr-x. 3 root root     4096 Aug 28 16:37 loader
-rw-r--r--. 1 root root   190896 Apr  5  2016 memtest86+-5.01
-rw-r--r--. 1 root root   305158 Aug 10 19:01 symvers-3.10.0-862.11.6.el7.x86_64.gz
-rw-------. 1 root root  3414344 Aug 10 18:59 System.map-3.10.0-862.11.6.el7.x86_64
-rw-r--r--. 1 root root   357715 Jan 29  2018 tboot.gz
-rw-r--r--. 1 root root    13502 Jan 29  2018 tboot-syms
-rwxr-xr-x. 1 root root  6398256 Aug 10 18:59 vmlinuz-3.10.0-862.11.6.el7.x86_64
~~~

But it is possible to mount it at other mount points, and the correct content is shown:
~~~
[root@rhvh-42-1 ~]# mount /dev/sda1 /mnt
[root@rhvh-42-1 ~]# mount | grep /mnt
/dev/sda1 on /mnt type ext4 (rw,relatime,seclabel,data=ordered)

[root@rhvh-42-1 ~]# ls -l /mnt/
total 115856
drwxr-xr-x. 3 root root     4096 Aug 13 13:14 boom
drwxr-xr-x. 3 root root     4096 Aug 13 13:12 efi
-rw-r--r--. 1 root root   192572 Apr  5  2016 elf-memtest86+-5.01
drwxr-xr-x. 2 root root     4096 Aug 13 13:22 extlinux
drwx------. 5 root root     4096 Oct  3 17:37 grub2
-rw-------. 1 root root 62734205 Oct  4 10:14 initramfs-3.10.0-862.11.6.el7.x86_64.img
-rw-------. 1 root root 20592124 Oct  4 10:15 initramfs-3.10.0-862.11.6.el7.x86_64kdump.img
-rw-------. 1 root root 28093452 Aug 13 13:16 initramfs-3.10.0-862.el7.x86_64.img
drwxr-xr-x. 3 root root     4096 Aug 13 13:14 loader
drwx------. 2 root root    16384 Oct  3 16:15 lost+found
-rw-r--r--. 1 root root   190896 Apr  5  2016 memtest86+-5.01
drwxr-xr-x. 2 root root     4096 Oct  3 17:21 rhvh-4.2.6.0-0.20180828.0+1
drwxr-xr-x. 2 root root     4096 Oct  3 18:11 rhvh-4.2.6.1-0.20180907.0+1
-rw-r--r--. 1 root root   357715 Jan 29  2018 tboot.gz
-rw-r--r--. 1 root root    13502 Jan 29  2018 tboot-syms
-rwxr-xr-x. 1 root root  6398256 Oct  4 10:14 vmlinuz-3.10.0-862.11.6.el7.x86_64
~~~
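The recovery attempt above can be sketched as follows (a sketch only; the `DRY_RUN` guard and `run` helper are illustrative additions, and the device names match this report's example):

```shell
# Sketch: re-read the partition table so /dev/sda1 reappears, then mount
# the boot partition at an alternate mountpoint (as in the report, mounting
# via the /etc/fstab /boot entry may still silently fail).
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run partprobe /dev/sda
run mount /dev/sda1 /mnt
```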

(Originally by Miguel Martin Villamuelas)

Comment 1 RHV bug bot 2018-12-03 09:46:39 UTC
After digging a bit I figured out what seems to be the problem: multipath configuration changed from version 4.2.6.0-0.20180828.2 to version 4.2.6.1-0.20180907.0:

diff -uN ../old/multipath.conf multipath.conf
--- ../old/multipath.conf	2018-10-03 17:24:07.662000000 +0200
+++ multipath.conf	2018-10-03 17:41:53.550000000 +0200
@@ -1,4 +1,4 @@
-# VDSM REVISION 1.6
+# VDSM REVISION 1.7
 
 # This file is managed by vdsm.
 #
@@ -100,15 +100,6 @@
     max_fds                     4096
 }
 
-# Whitelist FCP and iSCSI devices.
-blacklist {
-        protocol ".*"
-}
-
-blacklist_exceptions {
-        protocol "(scsi:fcp|scsi:iscsi)"
-}
-
 # Remove devices entries when overrides section is available.
 devices {
     device {

Restoring the old configuration fixes the problem.

(Originally by Miguel Martin Villamuelas)

Comment 3 RHV bug bot 2018-12-03 09:46:47 UTC
Nir, any idea ?

(Originally by Yuval Turgeman)

Comment 4 RHV bug bot 2018-12-03 09:46:56 UTC
(In reply to Yuval Turgeman from comment #2)
The multipath configuration is fine; I don't see how it is related.

Maybe the initramfs contains the old configuration and should be regenerated with
the current configuration?

(Originally by Nir Soffer)

Comment 5 RHV bug bot 2018-12-03 09:47:03 UTC
Exactly,

If the configuration from the previous version is restored, it works because both multipath.conf files (the one in the initramfs and the one on the root filesystem) are the same. But in this case, both configurations filter out the non-iSCSI/non-FCP disks, and LVM chooses /dev/sda2 as the PV.

As explained in commit b0c1290263e139683f333da8d8387e6a5a6ddd02, this multipath configuration is wrong because we can have our root filesystem backed by a 'sas' or a 'cciss' multipath device. 

Therefore, the correct fix would be to rebuild the initramfs with the correct multipath configuration (removing the blacklist and the blacklist_exceptions).

Indeed, I can confirm that rebuilding the initramfs with the most recent configuration fixes the issue. I will write a KCS tomorrow and provide the workaround to the customer.

Thanks

(Originally by Miguel Martin Villamuelas)
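The workaround described above can be sketched roughly like this (a sketch only, assuming the standard /boot layout; `DRY_RUN` guards the privileged steps, and the exact dracut arguments on RHV-H may differ):

```shell
# Sketch: rebuild the host-only initramfs so it picks up the current
# /etc/multipath.conf, then reboot. DRY_RUN=1 (the default) only prints
# the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

kver=$(uname -r)
run dracut -f --hostonly "/boot/initramfs-${kver}.img" "$kver"
run reboot
```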

Comment 6 RHV bug bot 2018-12-03 09:47:11 UTC
imgbased actually rebuilds the initrd for the new layer during the update process; does this mean it's picking up the wrong multipath.conf?

(Originally by Yuval Turgeman)

Comment 7 RHV bug bot 2018-12-03 09:47:18 UTC
(In reply to Yuval Turgeman from comment #5)
> imgbased actually rebuilds the initrd for the new layer during during the
> update process, does this mean it's picking up the wrong multipath.conf ?

No, when the update process builds the new layer, the "multipath.conf" file is copied from the current layer.
But it turns out that once the hypervisor is rebooted to activate the new layer, the VDSM from the new layer updates the "multipath.conf" file to the new version (but it doesn't rebuild the "initramfs").
Maybe I am wrong, but I think we might have different "multipath.conf" versions in the "initramfs" and on the "rootfs" for all the hypervisor images created until now.

(Originally by Miguel Martin Villamuelas)

Comment 8 RHV bug bot 2018-12-03 09:47:26 UTC
According to this, vdsm needs to call dracut and re-generate the initrd after making changes to the multipath.conf file, right ?

(Originally by Yuval Turgeman)

Comment 9 RHV bug bot 2018-12-03 09:47:33 UTC
Doing it that way, we would need to reboot the hypervisor again to make sure the configuration is consistent, and 'vdsm' would need to be aware of whether it's running on an RHV-H or RHEL installation, because the initrd is generated differently in each case.

I believe it would be better to make the new multipath configuration available within the new image layer in some way, for example including it within the 'vdsm' package in a separate file (e.g. /usr/share/vdsm/multipath.conf or /usr/share/doc/vdsm-$version/multipath.conf.sample).

This way it would be possible to generate the correct initramfs on the upgrade.

(Originally by Miguel Martin Villamuelas)

Comment 10 RHV bug bot 2018-12-03 09:47:41 UTC
Moving to vdsm storage, seems like initrd needs to be regenerated when multipath is reconfigured.

(Originally by Sandro Bonazzola)

Comment 11 RHV bug bot 2018-12-03 09:47:48 UTC
We introduced multipath.conf changes for VDSM v4.20.39 in gerrit.ovirt.org/#/c/93967 but reverted them for VDSM v4.20.40 in gerrit.ovirt.org/#/c/94200, so basically this bug shouldn't impact the latest version, if I understand correctly.
However, I'm keeping it open to track the fact that after multipath.conf changes we need to regenerate the initrd, as Vojtech is working on another fix for bug 1622700 which will change multipath.conf.

(Originally by Tal Nisan)

Comment 12 RHV bug bot 2018-12-03 09:47:55 UTC
We are hitting this issue as well. I'm opening a case with support today, but it'd be great to see this fixed, as we have roughly 15 hypervisors to update soon.

(Originally by jtriplett)

Comment 13 RHV bug bot 2018-12-03 09:48:01 UTC
*** Bug 1644097 has been marked as a duplicate of this bug. ***

(Originally by Huijuan Zhao)

Comment 14 RHV bug bot 2018-12-03 09:48:08 UTC
(In reply to Miguel Martin from comment #8)
> Doing it that way we would need to reboot the hypervisor again to make sure
> the configuration is consistent and 'vdsm' would need to be aware if it's
> running on an RHV-H or RHEL installation because the initrd is generated
> differently on each case.
> 
> I believe it would be better to make the new multipath configuration
> available within the new image layer in some way, for example including it
> within the 'vdsm' package in a separate file (e.g.
> /usr/share/vdsm/multipath.conf or
> /usr/share/doc/vdsm-$version/multipath.conf.sample).
> 
> This way it would be possible to generate the correct initramfs on the
> upgrade.

vdsm generates this on the fly as part of configuration.

IMO, the best way to do this would simply be to move the initrd generation in RHVH until after vdsm is started

(Originally by Ryan Barry)

Comment 15 RHV bug bot 2018-12-03 09:48:15 UTC
AFAICT dracut runs after all config files are updated. The issue is that during the RHVH image build, VDSM configuration is not run at all. Normally, VDSM configuration runs as part of the posttrans rpm script. However, in the case of the host image, configuration runs as a one-off systemd service after the reboot [1], which is why the initramfs has outdated files that are later modified by vdsm config. As per the comments under BZ #1534197, my understanding is that this is a workaround because vdsm config for some reason cannot run in a chroot.

Anyway, IMHO this is not an issue in VDSM; this is an issue in imgbased, which defers config changes until after the reboot, resulting in obsolete config files in the initramfs.

[1] https://github.com/oVirt/imgbased/blob/master/data/imgbase-config-vdsm.service#L7

(Originally by Vojtech Juranek)

Comment 16 RHV bug bot 2018-12-03 09:48:23 UTC
Well, not exactly - 

1.  Anything that runs in the postinstall script of VDSM runs during the image build process of RHVH.

2.  It is true that in RHVH we defer the vdsm-tool configure to after the reboot, but IIRC vdsm-tool doesn't generate the initrd anyway, so it really has nothing to do with the vdsm config service in RHVH.

So actually I do think it is an issue with VDSM: if VDSM changes the multipath.conf on a running system and doesn't generate an initramfs afterwards, the next reboot could fail.

(Originally by Yuval Turgeman)

Comment 17 RHV bug bot 2018-12-03 09:48:31 UTC
(In reply to Jesse Triplett from comment #11)
> I'll be opening a case with RH today but we are hitting this issue as well.
> I'm opening up a case with support but it'd be great to see this fixed as we
> have roughly 15 hypervisors to update soon.

Same here (also RH customer).

(Originally by Bugzilla Daemon)

Comment 18 RHV bug bot 2018-12-03 09:48:38 UTC
> 1.  Anything that runs in the postinstall script of VDSM runs during the image
> build process of RHVH.

vdsm-tool config is obviously not run


> So actually I do think it is an issue with VDSM if VDSM changes the
> multipath.conf on a running system and doesn't generate an initramfs after, the
> next reboot could fail.

well, fair enough, will try to add initramfs regeneration directly into vdsm

(Originally by Vojtech Juranek)

Comment 20 RHV bug bot 2018-12-03 09:48:58 UTC
Even if vdsm-tool were run as part of the image build (and if it's part of %post, it is), it would only be aware of the multipath configuration inside lorax/livemedia-creator, which isn't sufficient in any case.

vdsm-tool config IS run after initial boot or upgrades in RHVH, and the multipath configuration in the initrd should be updated there.

(Originally by Ryan Barry)

Comment 21 RHV bug bot 2018-12-03 09:49:07 UTC
(In reply to Vojtech Juranek from comment #17)
> > 1.  Anything that runs in the postinstall script of VDSM runs during the
> > image build process of RHVH.
> 
> vdsm-tool config is obviously not run

We build RHVH by doing a 'yum install' of vdsm (and other packages) inside a VM, so %post must run, otherwise it's a bug.

(Originally by Yuval Turgeman)

Comment 22 RHV bug bot 2018-12-03 09:49:13 UTC
(In reply to Ryan Barry from comment #18)
> Even if vdsm-tool were run as part of the image build (and if it's part of
> %post, it is), it would only be aware of the multipath configuration inside
> lorax/livemedia-creator, which isn't sufficient in any case.
> 
> vdsm-tool config IS run after initial boot or upgrades in RHVH, and the
> multipath configuration in the initrd should be updated there.

If we do it after the reboot it might be too late. I think once the conf changes, either by vdsm or by the user, it must be followed immediately by dracut, letting imgbased handle the move of the new initrd to the right /boot/rhvh-$version dir on shutdown, which we already do.

(Originally by Yuval Turgeman)

Comment 24 RHV bug bot 2018-12-03 09:49:27 UTC
rpm -q --scripts vdsm

[...]

posttrans scriptlet (using /bin/sh):
if  [ -f "/var/lib/vdsm/upgraded_version" ]; then

    [...]

    if ! /usr/bin/vdsm-tool is-configured >/dev/null 2>&1; then
        /usr/bin/vdsm-tool configure --force >/dev/null 2>&1
    fi

I obviously don't understand all the details of how the image is built, but this doesn't run; otherwise I cannot imagine how this bug can happen (or it runs, but the changes are not reflected in the build, which would be a bug in imgbased).

> I think once the conf changes, either by vdsm or by the user, it must be 
> followed immediately by dracut, 

agree, VDSM should re-generate the initramfs after modification of multipath.conf

> and letting imgbased handle the move of the new initrd to the right 
> /boot/rhvh-$version

How is this handled? Will imgbased automatically pick up any changes in /boot/rhvh-*? This won't be handled by VDSM, right?

(Originally by Vojtech Juranek)

Comment 25 RHV bug bot 2018-12-03 09:49:35 UTC
vdsm-tool configure should not regenerate the initramfs.

If the user runs on a specific system that requires an update of the initramfs, for
example a system booting from SAN, or maybe ovirt-node, the user should regenerate
the initramfs and reboot to make sure that the update is correct.

From the vdsm side, printing a warning about the need to regenerate the initramfs is fine.

ovirt-node should be responsible for the flow of running vdsm-tool configure,
regenerating the initramfs if needed, and rebooting the host.

(Originally by Nir Soffer)

Comment 26 RHV bug bot 2018-12-03 09:49:42 UTC
Sorry, I'm not sure I'm following. I don't know what this script does, but it gets executed during the image build and during the upgrade process of RHVH. However, I'm not sure how this relates to the initrd, since vdsm-tool doesn't do anything with dracut anyway.

As for the second part, dracut installs the initrd to /boot/initramfs-$kernelver.img.  RHVH has a service that copies changes from this location to the appropriate /boot/rhvh-$version dir.  We do it this way because we want to be able to boot into the right layer with the right kernel/initrd.

Basically, if anyone touches multipath.conf and runs dracut -f (--hostonly...), the rest will be handled automatically.

(Originally by Yuval Turgeman)
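The copy step described above can be sketched roughly like this (illustrative only, not imgbased's actual code; the function name and path parameters are invented for the example):

```shell
# Hypothetical sketch: propagate a freshly generated initrd from /boot into
# the per-layer /boot/rhvh-$version directory, as the RHVH service does.
sync_layer_initrd() {
    local boot_dir="$1" layer_dir="$2" kver="$3"
    local src="$boot_dir/initramfs-$kver.img"
    local dst="$layer_dir/initramfs-$kver.img"
    # Copy when the layer copy is missing or older than the freshly built one.
    if [ ! -e "$dst" ] || [ "$src" -nt "$dst" ]; then
        cp "$src" "$dst"
    fi
}
```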

Comment 27 RHV bug bot 2018-12-03 09:49:49 UTC
(In reply to Vojtech Juranek from comment #22)
> I obviously don't understand all the details how image is build, but this
> doesn't run otherwise I cannot imagine how this bug can happen (or runs, but
> the changes are not reflected in the build, which would be a bug in imgbase)

It's built using livemedia-creator.

Imagine that you pass a kickstart to virt-install which basically includes:

%packages
ovirt-host

Then generate a squashfs from this.

That's imgbased (which handles only laying the squashfs down on an LV). If these changes are not reflected, then it's a problem in livemedia-creator somewhere.

But the snippet you referenced was changed in July, so vdsm-tool configure --force runs only after a new install or image update. Previously it ran on every boot. Since that file is not present during the image build, vdsm-tool won't be run as part of the image build (but that was functionally meaningless in any case, since the configuration ran only on a very minimal VM inside brew/koji).

> how this is handled? Will imgbase automatically pick up any changes in
> /boot/rhvh-* ? This won't be handled by VDSM, right?

On reboot, yes, these are moved to the correct place.

(In reply to Nir Soffer from comment #23)
> vdsm-tool configure should not regenerate initramfs.
> 
> If the user run on the specific system that requires update of initramfs,
> for 
> example system booting from SAN, or maybe ovirt-node, the user should
> regenerate
> iniramfs, and reboot to make sure that the update is correct.
> 
> From vdsm side, printing a warning about the need to regenerate initramfs is
> fine.
> 
> ovirt-node should be responsible for the flow running vdsm-tool configure, 
> regenerating initramfs if needed, and rebooting the host.

To be clear, ovirt-node since 4.0 is effectively a RHEL variant. It does not manage the entire system; only the LVs and bootloader configuration on image updates.

If an administrator (or vdsm-tool) modified multipath.conf on RHEL, the appropriate user/utility would be responsible for regenerating the initramfs, and rebooting if necessary.

Node runs dracut on every boot specifically to account for new installs where the multipath config on the image (and from anaconda) may not match what's on the system. However, if a 'breaking' change is made by some tool (say, vdsm-tool), Node does not monitor that. 

There could be a service which watches multipath.conf with inotify, but this runs the risk of regenerating with 'bad' changes if an administrator touches it by hand, which has happened despite a header indicating that VDSM manages this.

In general, if vdsm-tool changes multipath.conf, for a seamless experience (both on EL and Node), it should regenerate the initramfs, since it was the last 'owner'.

(Originally by Ryan Barry)

Comment 28 RHV bug bot 2018-12-03 09:49:58 UTC
(In reply to Ryan Barry from comment #25)

> Since that file is not present during the image build, vdsm-tool
> won't be run as part of the image build 

sorry, I got lost. Which file is not present, and why doesn't vdsm-tool run as part of the image build?

From other replies I got the feeling that the vdsm posttrans rpm script (which calls vdsm-tool [1]) should run during the image build.

Basically, this is the root cause of this bug - vdsm-tool doesn't seem to run during the image build (and therefore imgbased uses the multipath.conf from the previous version) and instead runs after reboot, which leads to different versions of multipath.conf in the initramfs and on the system. The simplest solution IMHO seems to be to run vdsm-tool during the image build.

I'm trying to understand whether not running vdsm-tool during the image build is intentional or a bug somewhere, and if intentional, what the reason is.

Thanks!

[1] https://github.com/oVirt/vdsm/blob/master/vdsm.spec.in#L889

(Originally by Vojtech Juranek)

Comment 29 RHV bug bot 2018-12-03 09:50:07 UTC
It is run (we don't change this), but not successfully, due to limitations inside the brew build environment which aren't under the control of the Node team. It would probably work with --force, but that's likely a bad thing to do as part of the RPM scripts, since it would configure sasl auth for anyone who installed it.

More importantly, even if it were run, the files would be useless. Imagine that you ran "vdsm-tool configure --force" inside a libvirt VM with a virtio disk, then copied that multipath.conf to physical hardware which relied on the changes. More specifically, Anaconda rewrites a couple of files (including multipath.conf) on installation anyway, so it doesn't make a practical difference whether this is run as part of the image build or not. It must be done again, or we risk a system which doesn't boot after install.

In the beginning, NGN didn't run vdsm-tool at all (relying on engine registration to do it), but changes to lvmlocal required it.

The file in /var/lib was added as a trigger in July after changes to vdsm-client. See:

https://bugzilla.redhat.com/show_bug.cgi?id=1594194

https://bugzilla.redhat.com/show_bug.cgi?id=1429288

Hopefully this gives historical background, but this isn't a Node bug. Again, let's say you configured a complete system in a VM, then converted the root filesystem to a squashfs and passed a kickstart to the RHEL/CentOS/Fedora installer which said:

liveimg --url=/path/to/squashfs

Many of your configuration changes to critical files (like multipath) would be gone when you booted. This is precisely what Node is. You can see that it must be done again after installing (when vdsm configuration runs again).

(Originally by Ryan Barry)

Comment 30 RHV bug bot 2018-12-03 09:50:15 UTC
Thanks for the explanation, Ryan. In general you are right; however, in the case of vdsm and multipath, this file is basically hardcoded in vdsm [1], so any changes done e.g. by Anaconda will be rewritten, and changes done by anything other than vdsm are not supported. Re-generating the initramfs from vdsm during the next reboot seems quite fragile to me, and also not user-friendly (possibly requiring another reboot to take effect).

If we provided something like vdsm-tool get-multipath-conf, which would write multipath.conf somewhere so that you could include it in the image during its build (or anything else which would suit you and allow you to include it in the image), would it be possible to do so?

[1] https://github.com/oVirt/vdsm/blob/master/lib/vdsm/tool/configurators/multipath.py#L74

(Originally by Vojtech Juranek)

Comment 31 RHV bug bot 2018-12-03 09:50:21 UTC
Vojtech, I think I understand the confusion - you're perhaps mixing up the image build process and the image update process. During the image build stage, the vdsm %post is run, but the multipath.conf is useless (see Ryan's comments).

During the update process we call dracut at the very end to pick up any conf changes from the running system into the new kernel that was just installed on the new layer (similar to new-kernel-pkg). What you want us to do is run vdsm's %post during the update process before we run dracut; that way we could pick up the multipath changes into the new initrd. Is that correct?

(Originally by Yuval Turgeman)

Comment 32 RHV bug bot 2018-12-03 09:50:29 UTC
(In reply to Yuval Turgeman from comment #29)
> Vojtech, I think I understand the confusion - you're mixing the image build
> process and image update process perhaps.  

Yuval, thanks for the clarification, and sorry for the confusion. I thought the process for build and update was the same, so I was interchanging these terms.


> What you want us to do is, run
> vdsm's %post during the update process before we run dracut, that way we
> could pick up the multipath changes into the new initrd.  Is that correct ?

yes, this is exactly what I was asking for (and sorry once again for the confusion)

(Originally by Vojtech Juranek)

Comment 33 RHV bug bot 2018-12-03 09:50:36 UTC
Well, it's actually a good point - we do run the post scripts for some packages that involve selinux; perhaps we could add some core packages as well, like vdsm (or simply run the post scripts for all packages). Ryan, what do you think?

(Originally by Yuval Turgeman)

Comment 34 RHV bug bot 2018-12-03 09:50:43 UTC
We definitely can do this. It was actually done for one subpackage here: 
https://gerrit.ovirt.org/#/c/92939/

Ultimately, though, this doesn't resolve the initial install problem. vdsm-tool can't be executed inside Anaconda's installroot IIRC (needs retesting on 7.6), so this will work when systems are upgraded, but may not be sufficient on new installs.

(Originally by Ryan Barry)

Comment 35 RHV bug bot 2018-12-03 09:50:50 UTC
I think we have multiple issues:

1. Boot fail because bad multipath conf blacklisting anything but "scsi:iscsi" and
   "scsi:fcp"

2. Bad multipath.conf was added to initramfs by node upgrade, because node is
   regenerating initramfs *before* "vdsm-tool configure" installs
   /etc/multipath.conf.

3. "vdsm-tool configure" run after the first boot, but initramfs was not
   regenerated, leaving old (bad) contents of /etc/multipath.conf.

4. vdsm installed /etc/multipath.conf with too strict blacklist. Was fixed
   in 4.2.6.1 async. Node builds including the broken build should not be 
   available to users to avoid this issue.

Issue 1 - This is the most interesting - how can a multipath blacklist break node
boot? Does it happen only to people booting from SAN, or also for people booting
from local disk?

Maybe node is installed on multipath device created on a local disk? If this is 
the case, node should be fixed to blacklist local devices in
/etc/multipath/conf.d/ovirt-node.conf, and install using /dev/xxx.

Issue 2 - if we cannot run "vdsm-tool configure" during installation, I'm not sure
there is a way to avoid this, since there may be need to regenerate initramfs
because of other changes, unrelated to "vdsm-tool configure".

Issue 3 - I don't think this should be fixed in "vdsm-tool configure", since node
(or the user) may want to control when and how initramfs is generated. vdsm-tool
should recommend to regenerate initramfs when multipath.conf was modified, and
node should use this info to regenerate initramfs.

After regenerating initramfs we should reboot. This flow should be part of node
setup, probably in the service running vdsm-tool, or maybe part of ovirt host
deploy.

(Originally by Nir Soffer)
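The per-node blacklist suggested for issue 1 could look roughly like the following fragment (a hypothetical sketch; the WWID is a placeholder for the local boot disk's actual WWID):

```
# /etc/multipath/conf.d/ovirt-node.conf -- hypothetical fragment
# The WWID below is a placeholder; use the WWID of the local boot disk.
blacklist {
    wwid "<local-disk-wwid>"
}
```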

Comment 36 RHV bug bot 2018-12-03 09:50:58 UTC
(In reply to Nir Soffer from comment #33)
> I think we have multiple issues:
> 
> 1. Boot fail because bad multipath conf blacklisting anything but
> "scsi:iscsi" and
>    "scsi:fcp"
> 
> 2. Bad multipath.conf was added to initramfs by node upgrade, because node is
>    regenerating initramfs *before* "vdsm-tool configure" installs
>    /etc/multipath.conf.
> 
> 3. "vdsm-tool configure" run after the first boot, but initramfs was not
>    regenerated, leaving old (bad) contents of /etc/multipath.conf.
> 
> 4. vdsm installed /etc/multipath.conf with too strict blacklist. Was fixed
>    in 4.2.6.1 async. Node builds including the broken build should not be 
>    available to users to avoid this issue.
> 
> Issue 1 - This is the most interesting - how multipath blacklist can break
> node boot? Does it happen only to people booting from SAN or also for people
> booting from local disk?
> 
> Maybe node is installed on multipath device created on a local disk? If this
> is 
> the case, node should be fixed to blacklist local devices in
> /etc/multipath/conf.d/ovirt-node.conf, and install using /dev/xxx.

Some (many, actually) local disks don't appear as sdX, and are addressed directly by LVM through /dev/mapper names.

This is crazy. Node has not maintained its own installer since 4.0. If a RHEL user required changes to multipath in order to continue booting once changes were made, they wouldn't be expected to maintain their own conf file, right? Why should Node be different?

We expect Anaconda to handle getting us to a running system, and after that, administration is more or less the same as RHEL except for updates.

> 
> Issue 2 - if we cannot run "vdsm-tool configure" during installation, I'm
> not sure
> there is a way to avoid this, since there may be need to regenerate initramfs
> because of other changes, unrelated to "vdsm-tool configure".

The initramfs is regenerated on every boot, but it is regenerated at boot time. If any other changes are made (such as by vdsm-tool configure), the initramfs should be regenerated by the tool/user which made the changes, not tracked by Node.

But Node upgrades keep all modified files in /etc, including multipath.conf. They are not overwritten by files from the new update (if the checksum differs from the generated image), so changes to multipath.conf are kept, and the new initramfs is generated twice -- once at the end of the Node upgrade, where dracut is invoked, and again after the initial boot into the new image, where dracut is invoked again.

> 
> Issue 3 - I don't think this should be fixed in "vdsm-tool configure", since
> node
> (or the user) may want to control when and how initramfs is generated.
> vdsm-tool
> should recommend to regenerate initramfs when multipath.conf was modified,
> and
> node should use this info to regenerate initramfs.
> 

50/50. If vdsm-tool recommends a regeneration on RHEL, the user may want to control it. However, there's already a separate code path in vdsm which handles the "Node" upgrade path, from the patch I linked previously. Since vdsm-tool configure is run there as part of a service, with no output visible other than the journal, and Node is not aware of whether or not multipath.conf changed (vdsm-tool presumably is), why would this not be handled in the same vdsm code path which handles schema changes?

> After regenerating initramfs we should reboot. This flow should be part of
> node
> setup, probably in the service running vdsm-tool, or maybe part of ovirt host
> deploy.

Why should we reboot? It's trivial to check whether the multipath.conf in the initrd matches the running system. Rebooting as part of engine registration (or double reboots as part of a Node upgrade) is a bad user experience.
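The check described above could be sketched roughly as follows. This is illustrative, not Node's actual code; the `lsinitrd` extraction (part of dracut) and the initramfs path are assumptions:

```shell
#!/bin/sh
# Illustrative sketch: decide whether the initramfs needs regenerating by
# comparing the system multipath.conf with the copy inside the initrd.
# conf_differs is the testable core; the rest is hypothetical wiring.

conf_differs() {
    # $1 = system copy, $2 = copy extracted from the initrd
    ! cmp -s "$1" "$2"
}

# Hypothetical usage: extract the packed copy with lsinitrd (from dracut)
# and rebuild the initramfs only when the two copies diverge.
check_and_rebuild() {
    img="/boot/initramfs-$(uname -r).img"
    lsinitrd -f etc/multipath.conf "$img" > /tmp/initrd-multipath.conf
    if conf_differs /etc/multipath.conf /tmp/initrd-multipath.conf; then
        dracut -f "$img" "$(uname -r)"
    fi
}
```

Under this scheme a reboot is only needed when the configurations actually diverge, which matches the objection to unconditional double reboots.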

(Originally by Ryan Barry)

Comment 37 RHV bug bot 2018-12-03 09:51:06 UTC
This bug was reported only when an upgrade from a previous version happened. Also, AFAICT the vdsm package runs `vdsm-tool configure` only during upgrade, when a previous version was installed (not during initial install); see [1].

So IMHO the solution for this bug is simply to include the vdsm %post scripts in the update process.

Whether vdsm should run vdsm-tool also during the initial install (i.e. whether the current behaviour is intentional), and how to re-generate the initramfs in that case, can be tracked in a separate bug.

What do you think?

PS. As for Nir's question of how multipath can break boot: it's caused by the udev multipath rules (they call partx, which removes /dev/sda1, the boot partition), but it's not completely clear to me what exactly triggers this (i.e. why it happens only in specific cases)

[1] https://github.com/oVirt/vdsm/blob/master/vdsm.spec.in#L867
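For context, the upgrade-only behaviour in [1] follows the standard rpm scriptlet convention: %post receives the number of package instances remaining after the transaction, so a value of 2 or more indicates an upgrade. A hedged sketch of that pattern (not the literal vdsm scriptlet):

```shell
#!/bin/sh
# Sketch of the rpm %post convention the vdsm spec file relies on.
# rpm passes the install count as $1: 1 = fresh install, >= 2 = upgrade.

is_upgrade() {
    [ "$1" -ge 2 ]
}

# Inside a %post scriptlet this would gate the reconfiguration step, e.g.:
#   if is_upgrade "$1"; then
#       vdsm-tool configure --force
#   fi
```

Since imgbased applies updates without running the package scriptlets, this upgrade branch never fires on Node, which is why the comment above proposes including the %post logic in the update process.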

(Originally by Vojtech Juranek)

Comment 38 RHV bug bot 2018-12-03 09:51:12 UTC
So it seems that the root cause is the presence of /etc/multipath/wwids containing a local disk WWID, which remained there from the time when local devices weren't blacklisted. While the devices were only blacklisted at runtime this wasn't an issue, but once they are blacklisted in the initramfs, the local device is ignored while the initramfs runs; later during boot, multipath tries to claim the device because the wwids file marks it as a multipath device (probably triggering the udev rules mentioned in my previous comment), which results in this issue.

Removing /etc/multipath/wwids fixes the issue (at least in my reproducer).
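A minimal sketch of that workaround, assuming the wwids file format where each WWID is stored on its own line wrapped in slashes (e.g. /3600.../); the device WWID shown in the usage comment is hypothetical:

```shell
#!/bin/sh
# Sketch: detect whether /etc/multipath/wwids still lists a given
# (now-blacklisted) local disk WWID, and drop the stale file if it does.
# Entries in the wwids file are stored one per line, wrapped in slashes.

has_stale_wwid() {
    # $1 = wwids file, $2 = WWID of the local disk
    grep -q "/$2/" "$1"
}

# Hypothetical usage (local_wwid obtained e.g. from multipath or scsi_id):
#   if has_stale_wwid /etc/multipath/wwids "$local_wwid"; then
#       rm -f /etc/multipath/wwids   # removing the file fixed the reproducer
#   fi
```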

So, this is the result of a couple of subsequent upgrades, and as mentioned in previous comments, including the vdsm %post scripts in the update process should fix the issue.

(Originally by Vojtech Juranek)

Comment 39 RHV bug bot 2018-12-03 09:51:23 UTC
Ryan, what is your opinion: will imgbased take care of the vdsm %post scripts during update?
Thanks

(Originally by Vojtech Juranek)

Comment 40 RHV bug bot 2018-12-03 09:51:31 UTC
No longer an imgbased maintainer, so this is up to Yuval

(Originally by Ryan Barry)

Comment 41 RHV bug bot 2018-12-03 09:51:38 UTC
Instead of running the entire %post, we just run `vdsm-tool configure --force` in the new layer, using SYSTEMD_IGNORE_CHROOT.
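As a rough illustration of that approach (the mount point and mount step below are hypothetical; the actual change is in the linked gerrit patches):

```shell
#!/bin/sh
# Sketch: run vdsm-tool inside the new layer's chroot during an imgbased
# upgrade. SYSTEMD_IGNORE_CHROOT=1 lets systemd-aware tooling proceed even
# though it is executing inside a chroot. reconfigure_cmd is a small
# helper that assembles the command line, so it can be checked in isolation.

reconfigure_cmd() {
    # $1 = mount point of the new layer (hypothetical, e.g. /tmp/newroot)
    echo "SYSTEMD_IGNORE_CHROOT=1 chroot $1 vdsm-tool configure --force"
}

# Hypothetical usage during the upgrade:
#   mount /dev/rhvh/<new-layer-lv> /tmp/newroot
#   eval "$(reconfigure_cmd /tmp/newroot)"
```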

(Originally by Yuval Turgeman)

Comment 42 RHV bug bot 2018-12-03 09:51:45 UTC
The issue is fixed in rhvh-4.2.8.0-0.20181127.0

Test versions:
Build1: redhat-virtualization-host-4.0-20170307.1
Build2: redhat-virtualization-host-4.2-20181121.0
Build3: redhat-virtualization-host-4.2-20181127.0


Steps to Reproduce:
1. Install redhat-virtualization-host-4.0-20170307.1
2. Upgrade RHVH from 4.1 to 4.2.7 (redhat-virtualization-host-4.2-20181121.0)
3. Reboot RHVH into 4.2.7, then upgrade again to 4.2.8 (redhat-virtualization-host-4.2-20181127.0)
4. Reboot RHVH; it enters 4.2.8 successfully. Run "#imgbase rollback"
5. Reboot RHVH; it enters 4.2.7 successfully. Run "#imgbase rollback" in rhvh-4.2.6
6. Reboot RHVH
7. Reboot RHVH several more times

Test results:
1. After step 6, RHVH enters rhvh-4.2.8 successfully
2. After step 7, RHVH enters rhvh-4.2.8 successfully every time


So this bug is fixed in redhat-virtualization-host-4.2-20181127.0. I will VERIFY this bug once the status moves to ON_QA.

(Originally by Huijuan Zhao)

Comment 44 Huijuan Zhao 2018-12-04 01:49:26 UTC
According to comment 42, move to VERIFIED.

Comment 49 errata-xmlrpc 2019-01-22 12:44:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0116
