Bug 1859792
Summary: | upgrading rhv-h fails silently ('mount: unknown filesystem type 'squashfs') | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Marcus West <mwest>
Component: | redhat-virtualization-host | Assignee: | Nir Levy <nlevy>
Status: | CLOSED DUPLICATE | QA Contact: | peyu
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 4.3.10 | CC: | cshao, lsurette, lsvaty, mavital, nlevy, peyu, qiyuan, sbonazzo, shlei, weiwang, yaniwang, ycui
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-07-30 07:31:52 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Node | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Marcus West
2020-07-23 02:41:33 UTC
If I try to update via yum, we get some information:

--------------------------------------------------------------------------
...
Total download size: 713 M
Installed size: 713 M
Is this ok [y/d/N]: y
Downloading packages:
redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch.rpm | 713 MB 00:00:23
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch   1/1
mount: unknown filesystem type 'squashfs'
warning: %post(redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch) scriptlet failed, exit status 32
Non-fatal POSTIN scriptlet failure in rpm package redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch
Uploading Package Profile
Loaded plugins: product-id, subscription-manager, versionlock
  Verifying  : redhat-virtualization-host-image-update-4.3.10-20200615.0.el7_8.noarch   1/1

Installed:
  redhat-virtualization-host-image-update.noarch 0:4.3.10-20200615.0.el7_8

Complete!
--------------------------------------------------------------------------

The return code is '0'; however, the new image is not there to be booted into - the host just boots into the previous one. The workaround is to remove the blacklist for 'squashfs' (or manually insmod the module), then yum reinstall.
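A minimal sketch of the workaround described above, assuming the module was blacklisted through a drop-in file under /etc/modprobe.d/ (the exact file name and blacklist mechanism vary per host, so the grep is only illustrative):

```bash
# Find where squashfs is blacklisted (location/file name is an assumption;
# it depends on how the blacklist was created on this particular host).
grep -r squashfs /etc/modprobe.d/

# Remove or comment out the offending blacklist line, then load the module
# manually so the %post scriptlet can mount the squashfs image.
modprobe squashfs

# Re-run the update so the %post scriptlet completes this time.
yum reinstall redhat-virtualization-host-image-update
```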
Would you please upload the host log /var/log/imgbased.log?

Yes, when the host upgrade fails, the RHVM GUI still shows it as successful. This issue has already been resolved; please refer to Bug 1770893.

Ah, thanks for that. I'll document a solution linking both BZs.

No /var/log/imgbased.log gets created, so I assume the script ends after the failed 'mount' command (the scriptlet runs with 'set -e', so the first failing mount aborts it before the imgbase command that writes the log is ever reached):

===
postinstall scriptlet (using /bin/sh):

# Some magic to ensure that imgbase from
# the new image is used for updates
set -e
export IMGBASED_IMAGE_UPDATE_RPM=$(lsof -p $PPID 2>/dev/null | grep image-update | awk '{print $9}')
export MNTDIR="$(mktemp -d)"
mount "/usr/share/redhat-virtualization-host/image//redhat-virtualization-host-4.3.10-20200615.0.el7_8.squashfs.img" "$MNTDIR"
mount "$MNTDIR"/LiveOS/rootfs.img "$MNTDIR"
export PYTHONPATH=$(find $MNTDIR/usr/lib/python* -name imgbased -type d -exec dirname {} \; | sort | tail -1):$PYTHONPATH
imgbase --debug update --format liveimg /usr/share/redhat-virtualization-host/image//redhat-virtualization-host-4.3.10-20200615.0.el7_8.squashfs.img >> /var/log/imgbased.log 2>&1
umount "$MNTDIR"
umount "$MNTDIR"
===

From my test host after a failed update:

===
[root@beef2 ~]# lvs rhvh_beef1
  LV                          VG         Attr       LSize  Pool   Origin                    Data%  Meta%  Move Log Cpy%Sync Convert
  home                        rhvh_beef1 Vwi-aotz--  1.00g pool00                            4.79
  pool00                      rhvh_beef1 twi-aotz-- 58.19g                                  10.16   1.90
  rhvh-4.3.9.2-0.20200324.0   rhvh_beef1 Vwi---tz-k 31.19g pool00 root
  rhvh-4.3.9.2-0.20200324.0+1 rhvh_beef1 Vwi-aotz-- 31.19g pool00 rhvh-4.3.9.2-0.20200324.0 11.85
  root                        rhvh_beef1 Vri---tz-k 31.19g pool00
  swap                        rhvh_beef1 -wi-ao---- <4.94g
  tmp                         rhvh_beef1 Vwi-aotz--  1.00g pool00                            4.86
  var                         rhvh_beef1 Vwi-aotz-- 15.00g pool00                            7.84
  var_crash                   rhvh_beef1 Vwi-aotz-- 10.00g pool00                            2.86
  var_log                     rhvh_beef1 Vwi-aotz--  8.00g pool00                            3.32
  var_log_audit               rhvh_beef1 Vwi-aotz--  2.00g pool00                            4.84

[root@beef2 ~]# nodectl info
layers:
  rhvh-4.3.9.2-0.20200324.0:
    rhvh-4.3.9.2-0.20200324.0+1
bootloader:
  default: rhvh-4.3.9.2-0.20200324.0 (3.10.0-1127.el7.x86_64)
  entries:
    rhvh-4.3.9.2-0.20200324.0 (3.10.0-1127.el7.x86_64):
      index: 0
      title: rhvh-4.3.9.2-0.20200324.0 (3.10.0-1127.el7.x86_64)
      kernel: /boot/rhvh-4.3.9.2-0.20200324.0+1/vmlinuz-3.10.0-1127.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=rhvh_beef1/rhvh-4.3.9.2-0.20200324.0+1 rd.lvm.lv=rhvh_beef1/swap rhgb quiet LANG=en_AU.UTF-8 img.bootid=rhvh-4.3.9.2-0.20200324.0+1"
      initrd: /boot/rhvh-4.3.9.2-0.20200324.0+1/initramfs-3.10.0-1127.el7.x86_64.img
      root: /dev/rhvh_beef1/rhvh-4.3.9.2-0.20200324.0+1
current_layer: rhvh-4.3.9.2-0.20200324.0+1
===

Closing as duplicate of bug #1770893. Also, users shouldn't break their system by masking critical modules randomly.

*** This bug has been marked as a duplicate of bug 1770893 ***
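For completeness, a minimal sketch of how to confirm that the reinstall actually produced a new layer, using the same tools shown in the output above (the volume group name rhvh_beef1 and the 4.3.x layer names are the ones from this report and will differ on other hosts):

```bash
# After the workaround and reinstall, the %post scriptlet should have run
# imgbase and written its log.
ls -l /var/log/imgbased.log

# A new layer should now be listed next to the old one, and the bootloader
# default entry should point at it.
nodectl info

# The new layer should also appear as a thin LV in the host's volume group.
lvs rhvh_beef1   # volume group name taken from this report; adjust per host
```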