Bug 1747410 - imgbased upgrade fails: Exception! mount: special device /dev/rhvh_name/rhvh-4.3.5.3-0.20190805.0+1 does not exist
Summary: imgbased upgrade fails: Exception! mount: special device /dev/rhvh_name/rhvh-...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: imgbased
Version: 4.3.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.3.7
Target Release: 4.3.7
Assignee: Yuval Turgeman
QA Contact: peyu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-30 11:06 UTC by Juan Orti
Modified: 2019-12-12 10:37 UTC
CC: 13 users

Fixed In Version: imgbased-1.1.11-0.1.el7ev
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-12-12 10:37:09 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4384181 0 None None None 2019-08-30 11:24:03 UTC
Red Hat Product Errata RHBA-2019:4231 0 None None None 2019-12-12 10:37:14 UTC
oVirt gerrit 103797 0 'None' MERGED osupdater: stop vdsm services before update 2021-02-09 09:00:12 UTC
oVirt gerrit 104023 0 'None' MERGED osupdater: stop vdsm services before update 2021-02-09 09:00:12 UTC
oVirt gerrit 104341 0 'None' MERGED osupdater: install update rpm if available 2021-02-09 09:00:12 UTC
oVirt gerrit 104343 0 'None' MERGED spec: export IMGBASED_IMAGE_UPDATE_RPM in %post 2021-02-09 09:00:12 UTC
oVirt gerrit 104348 0 'None' MERGED osupdater: install update rpm if available 2021-02-09 09:00:12 UTC
oVirt gerrit 104434 0 'None' MERGED spec: export IMGBASED_IMAGE_UPDATE_RPM in %post 2021-02-09 09:00:13 UTC
oVirt gerrit 104541 0 'None' MERGED osupdater: use the first image-update rpm 2021-02-09 09:00:13 UTC

Description Juan Orti 2019-08-30 11:06:03 UTC
Description of problem:
Upgrading RHVH fails on many hosts. When the upgrade is retried several times, it eventually installs successfully.

From: rhvh-4.2.7.3-0.20181026.0
To: rhvh-4.3.5.3-0.20190805

Version-Release number of selected component (if applicable):
redhat-virtualization-host-image-update-4.3.5-20190805.0.el7_7.noarch

How reproducible:
Most of the time. Retrying several times eventually succeeds.

Steps to Reproduce:
1. On a RHVH 4.2.7.3 host, run: yum update -y

Actual results:
Exception! mount: special device /dev/rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1 does not exist

Expected results:
The upgrade installs successfully on the first attempt.

Additional info:

yum.log:

Aug 30 11:22:40 Installed: redhat-virtualization-host-image-update-4.3.5-20190805.0.el7_7.noarch

imgbased.log:


2019-08-30 11:05:57,338 [DEBUG] (MainThread) Calling binary: (['lvchange', '--activate', 'y', u'rhvh_hostname/rhvh-4.3.5.3-0.20190805.0', '--ignoreactivationskip'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f390>}
2019-08-30 11:05:57,338 [DEBUG] (MainThread) Calling: (['lvchange', '--activate', 'y', u'rhvh_hostname/rhvh-4.3.5.3-0.20190805.0', '--ignoreactivationskip'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f390>}
2019-08-30 11:05:57,838 [DEBUG] (MainThread) Calling binary: (['lvchange', '--activate', 'y', u'rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1', '--ignoreactivationskip'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f390>}
2019-08-30 11:05:57,839 [DEBUG] (MainThread) Calling: (['lvchange', '--activate', 'y', u'rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1', '--ignoreactivationskip'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f390>}

2019-08-30 11:08:18,698 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f4b0>}
2019-08-30 11:08:18,698 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f4b0>}
2019-08-30 11:08:18,750 [DEBUG] (MainThread) Returned: rhvh_hostname
2019-08-30 11:08:18,751 [DEBUG] (MainThread) Calling binary: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_path', u'rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f4b0>}
2019-08-30 11:08:18,751 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_path', u'rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7faaae84f4b0>}
2019-08-30 11:08:18,806 [DEBUG] (MainThread) Returned: /dev/rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1
2019-08-30 11:08:18,807 [DEBUG] (MainThread) Calling binary: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {}
2019-08-30 11:08:18,807 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2019-08-30 11:08:18,817 [DEBUG] (MainThread) Returned: /tmp/mnt.M1VoE
2019-08-30 11:08:18,817 [DEBUG] (MainThread) Calling binary: (['mount', u'/dev/rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1', u'/tmp/mnt.M1VoE'],) {}
2019-08-30 11:08:18,817 [DEBUG] (MainThread) Calling: (['mount', u'/dev/rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1', u'/tmp/mnt.M1VoE'],) {'close_fds': True, 'stderr': -2}
2019-08-30 11:08:18,951 [DEBUG] (MainThread) Exception! mount: special device /dev/rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1 does not exist

2019-08-30 11:08:18,953 [ERROR] (MainThread) Failed to reinstall persisted RPMs
Traceback (most recent call last):
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/plugins/rpmpersistence.py", line 60, in on_os_upgraded
    reinstall_rpms(imgbase, new_lv, previous_layer_lv)
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/plugins/rpmpersistence.py", line 70, in reinstall_rpms
    with mounted(new_lv.path) as new_fs:
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/utils.py", line 261, in __enter__
    self.mp.mount()
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/utils.py", line 236, in mount
    self.run.call(cmd)
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/utils.py", line 407, in call
    stdout = call(*args, **kwargs)
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/utils.py", line 171, in call
    return subprocess.check_output(*args, **kwargs).strip()
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['mount', u'/dev/rhvh_hostname/rhvh-4.3.5.3-0.20190805.0+1', u'/tmp/mnt.M1VoE']' returned non-zero exit status 32
2019-08-30 11:08:18,956 [ERROR] (MainThread) Failed to update OS
Traceback (most recent call last):
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py", line 217, in migrate_boot
    adjust_mounts_and_boot(imgbase, new_lv, previous_layer_lv)
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py", line 1137, in adjust_mounts_and_boot
    imgbase.hooks.emit("os-upgraded", previous_lv.lv_name, new_lvm_name)
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.ve01HtTbea/usr/lib/python2.7/site-packages/imgbased/plugins/rpmpersistence.py", line 63, in on_os_upgraded
    raise RpmPersistenceError()
RpmPersistenceError
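The traceback shows that imgbased's mount wrapper hands the command straight to `subprocess.check_output`, so a device node that is missing at that instant fails the whole upgrade immediately (exit status 32 from mount). A minimal sketch of a retry wrapper that would tolerate a late-appearing device; the helper name, attempt count, and delay are hypothetical, and the merged patches took a different approach (stopping vdsm services before the update) rather than retrying:

```python
import subprocess
import time


def call_with_retries(cmd, attempts=3, delay=0.5):
    """Run cmd, retrying on non-zero exit.

    Hypothetical helper for illustration only; on the last attempt
    the CalledProcessError is re-raised to the caller, matching the
    behavior seen in the traceback above.
    """
    for attempt in range(attempts):
        try:
            return subprocess.check_output(cmd).decode().strip()
        except subprocess.CalledProcessError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)


print(call_with_retries(["echo", "ok"]))  # prints "ok"
```

A real workaround along these lines would also need to re-run `lvchange --activate y` between attempts, since the log shows the layer LV is activated well before the mount is tried.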


There are mounted leftovers in /tmp:

# mount 
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=32821076k,nr_inodes=8205269,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/rhvh_hostname-rhvh--4.2.7.3--0.20181026.0+1 on / type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=32050)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages1G type hugetlbfs (rw,relatime,seclabel,pagesize=1G)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
/dev/mapper/rhvh_hostname-home on /home type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/3600508b1001c5e53dd32346f1c0fc32d1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)
/dev/mapper/rhvh_hostname-tmp on /tmp type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
/dev/mapper/rhvh_hostname-var on /var type ext4 (rw,relatime,seclabel,discard,stripe=16,data=ordered)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=6577212k,mode=700)
/usr/share/redhat-virtualization-host/image/redhat-virtualization-host-4.3.5-20190805.0.el7_7.squashfs.img on /tmp/tmp.tgChgw1FYC type squashfs (ro,relatime,seclabel)
/tmp/tmp.tgChgw1FYC/LiveOS/rootfs.img on /tmp/tmp.tgChgw1FYC type ext4 (ro,relatime,seclabel,data=ordered)
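The mount output above also illustrates device-mapper's name mangling, which is worth knowing when comparing `/dev/VG/LV` paths from the log against `/dev/mapper` entries: the VG and LV names are joined with a single hyphen, and any hyphen inside either name is escaped by doubling it, so LV `rhvh-4.2.7.3-0.20181026.0+1` in VG `rhvh_hostname` appears as `rhvh_hostname-rhvh--4.2.7.3--0.20181026.0+1`. A small sketch of the mapping (the helper name is hypothetical):

```python
def dm_path(vg, lv):
    """Return the /dev/mapper path for a VG/LV pair.

    device-mapper joins VG and LV with a single '-' and escapes any
    '-' inside either name by doubling it.
    """
    esc = lambda s: s.replace("-", "--")
    return "/dev/mapper/{0}-{1}".format(esc(vg), esc(lv))


print(dm_path("rhvh_hostname", "rhvh-4.2.7.3-0.20181026.0+1"))
# prints "/dev/mapper/rhvh_hostname-rhvh--4.2.7.3--0.20181026.0+1"
```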


# nodectl info
layers: 
  rhvh-4.2.4.3-0.20180622.0: 
    rhvh-4.2.4.3-0.20180622.0+1
  rhvh-4.2.7.3-0.20181026.0: 
    rhvh-4.2.7.3-0.20181026.0+1
bootloader: 
  default: rhvh-4.2.7.3-0.20181026.0+1
  entries: 
    rhvh-4.2.7.3-0.20181026.0+1: 
      index: 0
      title: rhvh-4.2.7.3-0.20181026.0
      kernel: /boot/rhvh-4.2.7.3-0.20181026.0+1/vmlinuz-3.10.0-957.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=rhvh_hostname/swap rd.lvm.lv=rhvh_hostname/rhvh-4.2.7.3-0.20181026.0+1 rhgb quiet LANG=en_US.UTF-8 img.bootid=rhvh-4.2.7.3-0.20181026.0+1"
      initrd: /boot/rhvh-4.2.7.3-0.20181026.0+1/initramfs-3.10.0-957.el7.x86_64.img
      root: /dev/rhvh_hostname/rhvh-4.2.7.3-0.20181026.0+1
    rhvh-4.2.4.3-0.20180622.0+1: 
      index: 1
      title: rhvh-4.2.4.3-0.20180622.0
      kernel: /boot/rhvh-4.2.4.3-0.20180622.0+1/vmlinuz-3.10.0-862.6.3.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=rhvh_hostname/swap rd.lvm.lv=rhvh_hostname/rhvh-4.2.4.3-0.20180622.0+1 rhgb quiet LANG=en_US.UTF-8 img.bootid=rhvh-4.2.4.3-0.20180622.0+1"
      initrd: /boot/rhvh-4.2.4.3-0.20180622.0+1/initramfs-3.10.0-862.6.3.el7.x86_64.img
      root: /dev/rhvh_hostname/rhvh-4.2.4.3-0.20180622.0+1
current_layer: rhvh-4.2.7.3-0.20181026.0+1

Comment 20 peyu 2019-11-19 06:21:25 UTC
This bug is not 100% reproducible; it is difficult to reproduce when upgrading RHVH manually. However, it can often be reproduced by the upgrade automation job.
I have submitted several upgrade automation jobs based on redhat-virtualization-host-4.3.7-20191115.0.el7_7, and this bug no longer occurs.

So I will move the bug status to VERIFIED.

Comment 22 errata-xmlrpc 2019-12-12 10:37:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:4231

