Bug 1859792 - upgrading rhv-h fails silently ('mount: unknown filesystem type 'squashfs')
Summary: upgrading rhv-h fails silently ('mount: unknown filesystem type 'squashfs')
Keywords:
Status: CLOSED DUPLICATE of bug 1770893
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: redhat-virtualization-host
Version: 4.3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nir Levy
QA Contact: peyu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-23 02:41 UTC by Marcus West
Modified: 2020-07-30 07:31 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-30 07:31:52 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 5243561 0 None None None 2020-07-23 05:26:57 UTC

Description Marcus West 2020-07-23 02:41:33 UTC
Description of problem:

Updating rhv-h fails if the squashfs image can't be mounted (i.e. the squashfs kernel module has been blacklisted as a result of OS hardening). However, from the GUI it's not obvious that the process has failed. Determining the cause is difficult unless you run 'yum update' on the host or dig through the ansible logs on the manager.
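
A quick way to see this on the host itself (a minimal sketch, using only commands that appear later in this bug) is to compare the installed image-update package with what nodectl actually reports:

===
# The rpm database says the 4.3.10 image-update package is installed...
rpm -q redhat-virtualization-host-image-update
# ...but nodectl/imgbased only knows about the old 4.3.9 layers, so the image was never applied
nodectl info
===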


Version-Release number of selected component (if applicable):

ovirt-engine-4.3.9.4-11.el7.noarch
rhvh-4.3.9.2-0.20200324


How reproducible:

Always

Steps to Reproduce:
1. add "install squashfs /bin/true" under /etc/modprobe.d
2. initiate host upgrade (from the GUI, put host into maintenance and reboot)
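
For step 1, something like the following works (a minimal sketch; the file name 'disable-squashfs.conf' is an arbitrary example, any file under /etc/modprobe.d/ will do):

===
# Make 'modprobe squashfs' run /bin/true instead of loading the module,
# simulating the effect of OS hardening
echo "install squashfs /bin/true" > /etc/modprobe.d/disable-squashfs.conf

# Sanity check: modprobe "succeeds" but the module is never loaded
modprobe squashfs
lsmod | grep squashfs   # expect no output
===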


Actual results:

Host reboots without error and comes up fine, but it is still on the previous build. The icon indicating that an update is required is no longer shown.


Expected results:

Some sort of notification that the update has failed; the automatic reboot should also be disabled in that case.


Additional info:

Resulting package set after failed update:

# rpm -qa |grep virtualization |sort
redhat-release-virtualization-host-4.3.9-2.el7ev.x86_64
redhat-release-virtualization-host-content-4.3.9-2.el7ev.x86_64
redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch

Comment 1 Marcus West 2020-07-23 02:45:12 UTC
If I try to update via yum, we at least get some information:
--------------------------------------------------------------------------
...
Total download size: 713 M
Installed size: 713 M
Is this ok [y/d/N]: y
Downloading packages:
redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch.rpm                                                       | 713 MB  00:00:23     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch                                                               1/1 
mount: unknown filesystem type 'squashfs'
warning: %post(redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch) scriptlet failed, exit status 32
Non-fatal POSTIN scriptlet failure in rpm package redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch
Uploading Package Profile
Loaded plugins: product-id, subscription-manager, versionlock
  Verifying  : redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch                                                               1/1 

Installed:
  redhat-virtualization-host-image-update.noarch 0:4.3.10-20200615.0.el7_8                                                                              

Complete!
--------------------------------------------------------------------------

The return code is '0', however the new image is not there to be booted into - the host just boots into the previous one.

The workaround is to remove the blacklist for 'squashfs' (or manually insmod the module), and then reinstall the image-update package with yum (see the sketch below).
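
In shell terms this is roughly (a sketch; the blacklist file name is a hypothetical example and the module path may differ per kernel):

===
# Either remove the blacklist so modprobe works again...
rm /etc/modprobe.d/disable-squashfs.conf
modprobe squashfs

# ...or, with the blacklist still in place, bypass modprobe and insert the module directly
insmod /lib/modules/$(uname -r)/kernel/fs/squashfs/squashfs.ko

# Then re-run the failed %post scriptlet by reinstalling the image-update package
yum reinstall redhat-virtualization-host-image-update
===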

Comment 2 peyu 2020-07-23 03:29:07 UTC
Would you please upload the host log /var/log/imgbased.log?

Yes, when the host upgrade fails, the RHVM GUI still shows it as successful. This issue has already been resolved; please refer to Bug 1770893.

Comment 3 Marcus West 2020-07-23 04:59:08 UTC
Ah, thanks for that. I'll document a solution linking both BZs.

No /var/log/imgbased.log gets created, so I assume the scriptlet (which runs with 'set -e') exits right after the failed 'mount' command:

===
postinstall scriptlet (using /bin/sh):
# Some magic to ensure that imgbase from
# the new image is used for updates
set -e
export IMGBASED_IMAGE_UPDATE_RPM=$(lsof -p $PPID 2>/dev/null | grep image-update | awk '{print $9}')
export MNTDIR="$(mktemp -d)"
mount "/usr/share/redhat-virtualization-host/image//redhat-virtualization-host-4.3.10-20200615.0.el7_8.squashfs.img" "$MNTDIR"
mount "$MNTDIR"/LiveOS/rootfs.img "$MNTDIR"
export PYTHONPATH=$(find $MNTDIR/usr/lib/python* -name imgbased -type d -exec dirname {} \; | sort | tail -1):$PYTHONPATH
imgbase --debug update --format liveimg /usr/share/redhat-virtualization-host/image//redhat-virtualization-host-4.3.10-20200615.0.el7_8.squashfs.img >> /var/log/imgbased.log 2>&1
umount "$MNTDIR"
umount "$MNTDIR"
===
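
The same failure can be confirmed by hand by attempting the mount the scriptlet performs (a sketch; the image path is copied from the scriptlet above and changes per build):

===
MNTDIR="$(mktemp -d)"
# With squashfs blacklisted this fails with "unknown filesystem type 'squashfs'"
# and a non-zero exit status (32), which 'set -e' turns into a scriptlet abort
mount /usr/share/redhat-virtualization-host/image/redhat-virtualization-host-4.3.10-20200615.0.el7_8.squashfs.img "$MNTDIR"
echo "mount exit status: $?"
umount "$MNTDIR" 2>/dev/null
rmdir "$MNTDIR"
===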

From my test host after a failed update:

===
[root@beef2 ~]# lvs rhvh_beef1
  LV                          VG         Attr       LSize  Pool   Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  home                        rhvh_beef1 Vwi-aotz--  1.00g pool00                           4.79                                   
  pool00                      rhvh_beef1 twi-aotz-- 58.19g                                  10.16  1.90                            
  rhvh-4.3.9.2-0.20200324.0   rhvh_beef1 Vwi---tz-k 31.19g pool00 root                                                             
  rhvh-4.3.9.2-0.20200324.0+1 rhvh_beef1 Vwi-aotz-- 31.19g pool00 rhvh-4.3.9.2-0.20200324.0 11.85                                  
  root                        rhvh_beef1 Vri---tz-k 31.19g pool00                                                                  
  swap                        rhvh_beef1 -wi-ao---- <4.94g                                                                         
  tmp                         rhvh_beef1 Vwi-aotz--  1.00g pool00                           4.86                                   
  var                         rhvh_beef1 Vwi-aotz-- 15.00g pool00                           7.84                                   
  var_crash                   rhvh_beef1 Vwi-aotz-- 10.00g pool00                           2.86                                   
  var_log                     rhvh_beef1 Vwi-aotz--  8.00g pool00                           3.32                                   
  var_log_audit               rhvh_beef1 Vwi-aotz--  2.00g pool00                           4.84                                   
[root@beef2 ~]# nodectl info
layers: 
  rhvh-4.3.9.2-0.20200324.0: 
    rhvh-4.3.9.2-0.20200324.0+1
bootloader: 
  default: rhvh-4.3.9.2-0.20200324.0 (3.10.0-1127.el7.x86_64)
  entries: 
    rhvh-4.3.9.2-0.20200324.0 (3.10.0-1127.el7.x86_64): 
      index: 0
      title: rhvh-4.3.9.2-0.20200324.0 (3.10.0-1127.el7.x86_64)
      kernel: /boot/rhvh-4.3.9.2-0.20200324.0+1/vmlinuz-3.10.0-1127.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=rhvh_beef1/rhvh-4.3.9.2-0.20200324.0+1 rd.lvm.lv=rhvh_beef1/swap rhgb quiet LANG=en_AU.UTF-8 img.bootid=rhvh-4.3.9.2-0.20200324.0+1"
      initrd: /boot/rhvh-4.3.9.2-0.20200324.0+1/initramfs-3.10.0-1127.el7.x86_64.img
      root: /dev/rhvh_beef1/rhvh-4.3.9.2-0.20200324.0+1
current_layer: rhvh-4.3.9.2-0.20200324.0+1

Comment 7 Sandro Bonazzola 2020-07-30 07:31:52 UTC
Closing as a duplicate of bug #1770893. Also, users shouldn't break their systems by arbitrarily masking critical modules.

*** This bug has been marked as a duplicate of bug 1770893 ***

