Created attachment 1260242 [details]
4.0.7 related log about upgrade issue

Description of problem:
When an old-build RHVH 4.0 host is registered to RHEV-M 4.0, it cannot come up again after being upgraded to a new RHVH 4.0.7 build from the engine side.

Version-Release number of selected component (if applicable):
Before upgrade: rhvh-4.0-0.20160919.1
After upgrade: redhat-virtualization-host-4.0-20170302.0.x86_64 (new build)
imgbased-0.8.15-0.1.el7ev.noarch
kernel-3.10.0-514.10.2.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Install an old RHVH 4.0 build via the anaconda GUI
2. Reboot and set up a local repo
3. Register the host to RHEV-M 4.0 and upgrade it from the RHEV-M side
4. After the upgrade succeeds, reboot into the new system
5. Check the RHVH status on the RHEV-M side
6. Check the vdsm status:
   # service vdsmd status

Actual results:
After step 5, RHVH 4.0.7 cannot come up on the RHEV-M 4.0 side.
After step 6:
# service vdsmd status
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: start-limit) since Sun 2017-03-05 19:00:20 CST; 23min ago
  Process: 5023 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=1/FAILURE)
.........................................

Expected results:
After step 5, RHVH 4.0.7 comes up on the RHEV-M 4.0 side.

Additional info:
1. The same issue occurs when upgrading via "yum update".
2. RHVH 4.0.7 comes up on the RHEV-M side when it is installed directly.
Created attachment 1260243 [details] related picture about this issue
Created attachment 1260244 [details] another picture about vdsmd.service
This bug blocks the upgrade testing on the RHEV-M side, and the same issue happens when upgrading via "yum update", so I could not check the status on the RHEV-M side or run the related tests.
Can you please provide journalctl output?
Created attachment 1260262 [details] journalctl output
(In reply to Ryan Barry from comment #4)
> Can you please provide journalctl output?

Hi, I have uploaded attachment 1260262 [details] with the journalctl output; please check it. Thanks.
Thanks jianwu.

I'm not sure this is unexpected behavior, though it's not nice. We've carefully avoided adding a 'vdsm-tool configure --force' as part of NGN booting (which vintage RHV-H had).

The version of RHV-H used for upgrading was particularly old, but I'm not sure how vdsm handles this behind the scenes. The lvmlocal.conf from the old version did not have the necessary configuration. In fact, it appears to have matched the installed version exactly:

# mount
/dev/mapper/rhvh_dhcp--10--229-rhvh--4.0--0.20160919.0+1 on /tmp/a type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
# diff -u /tmp/a/etc/lvm/lvmlocal.conf /tmp/a/usr/share/factory/etc/lvm/lvmlocal.conf

In this case, we took the lvmlocal.conf from the new image, but this is also not what vdsm expects. The differences between /etc/lvm/lvmlocal.conf before and after "vdsm-tool configure --force" are significant.

I don't see anything in the relevant topic branch which looks like vdsm should keep/handle this on upgrades either, though I'm not very familiar with how vdsm handles upgrades. A quick glance at the specfile shows:

if ! %{_bindir}/vdsm-tool is-configured >/dev/null 2>&1; then
    %{_bindir}/vdsm-tool configure --force >/dev/null 2>&1
fi

I'd like to match this if possible.

Yuvalt: Can you please add a method in plugins/osupdater which matches this?
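For illustration, the specfile scriptlet above could be mirrored in Python roughly as follows. This is a minimal sketch only: the function name ensure_vdsm_configured and the injectable "runner" parameter are hypothetical, not the actual imgbased plugins/osupdater API.

```python
# Hypothetical sketch of an osupdater-style hook mirroring the vdsm
# specfile scriptlet: run "vdsm-tool configure --force" only when
# "vdsm-tool is-configured" reports the host as unconfigured.
# The function name and the injectable runner are illustrative.
import subprocess


def ensure_vdsm_configured(runner=subprocess.call):
    # "vdsm-tool is-configured" exits non-zero when vdsm still needs
    # configuration (e.g. after an upgrade replaced lvmlocal.conf).
    if runner(["vdsm-tool", "is-configured"]) != 0:
        # Force reconfiguration, matching the specfile logic.
        return runner(["vdsm-tool", "configure", "--force"]) == 0
    return True
```

Injecting the runner keeps the check testable without a real vdsm-tool on the system; in production the default subprocess.call would be used.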
Hi, I have verified this issue on redhat-virtualization-host-4.1-20170308.1.x86_64.

Version-Release number of selected component (if applicable):
Before upgrade: rhvh-4.0-0.20160919.1
After upgrade: redhat-virtualization-host-4.1-20170308.1.x86_64 (new build)
imgbased-0.9.17-0.1.el7ev

Test results:
The new RHVH build comes up again on the RHEVM 4.0 side.

So I think this bug is fixed in this build; I will change the status to VERIFIED.
Hi Yuval,

Can you please confirm that this doc text is technically accurate?

Previously, VDSM was not configured after upgrading Red Hat Virtualization Host (RHVH). As a result, RHVH could not run together with a Manager running an older version. In this release, vdsm-tool configure --force is run on boot, so VDSM is configured successfully and RHVH 4.1 can run next to a 4.0 Manager.
Hi Emma, sounds good, but I don't think it's a 4.1 thing; the bug was reported against RHVH 4.0.7.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:1114