Bug 1814979
| Field | Value |
|---|---|
| Summary | [4.3.9-0day] Vdsm lvm2 requirement is not effective, installing vdsm succeeds when required package is missing |
| Product | [oVirt] vdsm |
| Component | Core |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | unspecified |
| Version | 4.30.43 |
| Target Milestone | ovirt-4.3.9-1 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | vdsm-4.30.44-1.el7ev |
| Doc Type | If docs needed, set a value |
| Last Closed | 2020-04-06 06:53:17 UTC |
| Type | Bug |
| Keywords | Reopened |
| Reporter | Avihai <aefrat> |
| Assignee | Amit Bawer <abawer> |
| QA Contact | Lukas Svaty <lsvaty> |
| CC | amarchuk, bugs, lsvaty, michal.skrivanek, nsednev, nsoffer, vjuranek |
| oVirt Team | Storage |
Description (Avihai, 2020-03-19 08:54:36 UTC)
It seems that lvm on this host does not support the pvs command with locking_type=4. Which lvm version is used?

```
[root@storage-ge2-vdsm2 ~]# rpm -qa | grep lvm
lvm2-libs-2.02.186-7.el7.x86_64
libblockdev-lvm-2.18-5.el7.x86_64
llvm-private-7.0.1-1.el7.x86_64
udisks2-lvm2-2.8.4-1.el7.x86_64
lvm2-2.02.186-7.el7.x86_64
```

Looks like the fix for BZ #1809660 is not complete or does not work properly:

```
[root@storage-ge2-vdsm2 ~]# /sbin/lvm pvs --config 'global { locking_type=4 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 }' --select 'pv_name = /dev/mapper/360002ac0000000000000002400021f6b'
  Read-only locking type set. Write locks are prohibited.
  Recovery of standalone physical volumes failed.
  Cannot process standalone physical volumes
  Read-only locking type set. Write locks are prohibited.
  Recovery of standalone physical volumes failed.
  Cannot process standalone physical volumes
  Read-only locking type set. Write locks are prohibited.
  Recovery of standalone physical volumes failed.
  Cannot process standalone physical volumes
  PV                                            VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/360002ac0000000000000002400021f6b 4f68437d-4af1-4249-9a8c-3052031d9715 lvm2 a--  149,62g 149,62g
[root@storage-ge2-vdsm2 ~]# echo $?
5
[root@storage-ge2-vdsm2 ~]# /sbin/lvm pvs --config 'global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 }' --select 'pv_name = /dev/mapper/360002ac0000000000000002400021f6b'
  PV                                            VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/360002ac0000000000000002400021f6b 4f68437d-4af1-4249-9a8c-3052031d9715 lvm2 a--  149,62g 149,62g
[root@storage-ge2-vdsm2 ~]# echo $?
0
```

Marian,

(In reply to Vojtech Juranek from comment #2)
> [root@storage-ge2-vdsm2 ~]# rpm -qa | grep lvm
> lvm2-2.02.186-7.el7.x86_64

This is not the correct version. We need lvm2-2.02.186-7.el7_8.1. If we can install vdsm with this version, our requirement is not correct.

Somehow the intended dependency check for lvm2 >= 2.02.186-7.el7_8.1 doesn't work. But that doesn't change the fact that you should be running with a different package. Make sure you have lvm2-2.02.186-7.el7_8.1 and retest. Thanks.

(In reply to Michal Skrivanek from comment #5)
> Somehow the intended dependency check for lvm2 >= 2.02.186-7.el7_8.1 doesn't
> work. But that doesn't change the fact that you should be running with a
> different package. Make sure you have lvm2-2.02.186-7.el7_8.1 and retest.
> Thanks

Shir and I retested this scenario on two environments where we had seen the issue. After installing the correct lvm2 (lvm2-2.02.186-7.el7_8.1) manually, the issue was not seen. Closing the bug.

Manual deployment from RHEL 7.7 has failed over iSCSI:
```
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". HTTP response code is 400."}
```
Components used:

```
rhvm-appliance.x86_64  2:4.3-20200317.0.el7   @rhv-4.3.9
rhevm-appliance.noarch 1:4.0.20170307.0-1.el7ev
rhvm-appliance.noarch  2:4.2-20190416.1.el7   rhel-7-server-rhv-mgmt-agent-x86-rhv-4.3
lvm2-2.02.185-2.el7_7.2.x86_64
ovirt-hosted-engine-ha-2.3.6-1.el7ev.noarch
ovirt-hosted-engine-setup-2.3.13-1.el7ev.noarch
Linux 3.10.0-1062.18.1.el7.x86_64 #1 SMP Wed Feb 12 14:08:31 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.7 (Maipo)
```
The VDSM log looks just the same:

```
2020-03-19 16:23:14,298+0200 ERROR (jsonrpc/4) [storage.LVM] Reloading PVs failed: pvs=['/dev/mapper/20024f40058540497'] rc=5 out=[' ceQxOW-kUxY-JV3s-9UcN-8aBO-cfWA-casK6R|/dev/mapper/20024f40058540497|214345711616|10ad9cc0-f477-4fbf-82b0-f35989212785|Chh8Pw-4OSP-UdGu-av0a-8hgr-3cPn-c1DBEg|135266304|1597|0|2|214748364800|2'] err=['  Read-only locking type set. Write locks are prohibited.', '  Recovery of standalone physical volumes failed.', '  Cannot process standalone physical volumes', '  Read-only locking type set. Write locks are prohibited.', '  Recovery of standalone physical volumes failed.', '  Cannot process standalone physical volumes', '  Read-only locking type set. Write locks are prohibited.', '  Recovery of standalone physical volumes failed.', '  Cannot process standalone physical volumes'] (lvm:504)
2020-03-19 16:23:14,298+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH createStorageDomain error=Volume Group metadata isn't as expected: "reason=Expected one metadata pv in vg: 10ad9cc0-f477-4fbf-82b0-f35989212785, vg pvs: [Stub(name=u'/dev/mapper/20024f40058540497', stale=True)]" from=::ffff:192.168.1.167,41338, flow_id=79c44727, task_id=2948cc7c-818e-4fd5-a398-c47bf3d09aab (api:52)
2020-03-19 16:23:14,298+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task] (Task='2948cc7c-818e-4fd5-a398-c47bf3d09aab') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2626, in createStorageDomain
    max_hosts=max_hosts)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockSD.py", line 1146, in create
    device=lvm.getVgMetadataPv(vgName),
  File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 1771, in getVgMetadataPv
    (vgName, pvs))
UnexpectedVolumeGroupMetadata: Volume Group metadata isn't as expected: "reason=Expected one metadata pv in vg: 10ad9cc0-f477-4fbf-82b0-f35989212785, vg pvs: [Stub(name=u'/dev/mapper/20024f40058540497', stale=True)]"
2020-03-19 16:23:14,299+0200 INFO  (jsonrpc/4) [storage.TaskManager.Task] (Task='2948cc7c-818e-4fd5-a398-c47bf3d09aab') aborting: Task is aborted: 'Volume Group metadata isn\'t as expected: "reason=Expected one metadata pv in vg: 10ad9cc0-f477-4fbf-82b0-f35989212785, vg pvs: [Stub(name=u\'/dev/mapper/20024f40058540497\', stale=True)]"' - code 616 (task:1181)
2020-03-19 16:23:14,299+0200 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH createStorageDomain error=Volume Group metadata isn't as expected: "reason=Expected one metadata pv in vg: 10ad9cc0-f477-4fbf-82b0-f35989212785, vg pvs: [Stub(name=u'/dev/mapper/20024f40058540497', stale=True)]" (dispatcher:83)
2020-03-19 16:23:14,299+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 616) in 2.78 seconds (__init__:312)
```
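For context, the check that raises UnexpectedVolumeGroupMetadata above can be sketched as follows. This is an illustrative simplification, not vdsm's actual lvm.getVgMetadataPv; the PV record shape here is assumed. The point is that when the underlying pvs command fails (rc=5), only stale stub entries are cached, so no usable metadata PV is found and storage-domain creation aborts.

```python
from collections import namedtuple

# Hypothetical, simplified PV record; vdsm's real PV/Stub objects differ.
PV = namedtuple("PV", ["name", "vg_name", "stale"])

class UnexpectedVolumeGroupMetadata(Exception):
    pass

def get_vg_metadata_pv(vg_name, pvs):
    # vdsm expects exactly one usable PV carrying the VG metadata; a stale
    # stub (as in the traceback above) does not qualify.
    usable = [pv for pv in pvs if pv.vg_name == vg_name and not pv.stale]
    if len(usable) != 1:
        raise UnexpectedVolumeGroupMetadata(
            "Expected one metadata pv in vg: %s, vg pvs: %s" % (vg_name, pvs))
    return usable[0].name

# The stale stub from the traceback triggers the error:
vg = "10ad9cc0-f477-4fbf-82b0-f35989212785"
stub = PV("/dev/mapper/20024f40058540497", vg, stale=True)
try:
    get_vg_metadata_pv(vg, [stub])
except UnexpectedVolumeGroupMetadata as e:
    print(e)  # the "Expected one metadata pv" error from the log
```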
(In reply to Nikolai Sednev from comment #10)
> Manual deployment from RHEL 7.7 has failed over iSCSI:
> ...
> lvm2-2.02.185-2.el7_7.2.x86_64

Wrong lvm2 rpm version; you need lvm2-2.02.186-7.el7_8.1 for RHEL. If we don't care about versions, let's simplify the spec and remove all version requirements. If we do care about the requirement, it should be correct and we should fix this bug.

The issue is that the lvm2 requirement is specified without the epoch. This requirement should work:
```
Requires: lvm2 >= 7:2.02.186-7.el7_8.1
```
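The reason the epoch matters: RPM compares a versioned dependency by (epoch, version, release), and the epoch is decided first. lvm2 on RHEL 7 carries epoch 7, so a Requires written without an epoch is effectively compared as epoch 0 and is satisfied by any installed lvm2, however old. A minimal sketch of this comparison (a simplified version of rpm's algorithm, not its actual C implementation; it ignores tilde/caret ordering):

```python
import re

def rpmvercmp(a, b):
    # Simplified rpm segment comparison: split into numeric and alphabetic
    # runs, compare numeric runs as integers, alphabetic runs lexically.
    sa = re.findall(r"\d+|[A-Za-z]+", a)
    sb = re.findall(r"\d+|[A-Za-z]+", b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # numeric sorts newer than alpha
        if x != y:
            return 1 if x > y else -1
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

def evr_satisfies(installed, required):
    # A ">=" dependency: epoch decides first, then version, then release.
    for inst, req in zip(installed, required):
        if isinstance(inst, int):
            c = (inst > req) - (inst < req)
        else:
            c = rpmvercmp(inst, req)
        if c != 0:
            return c > 0
    return True  # exactly equal also satisfies ">="

installed = (7, "2.02.185", "2.el7_7.2")  # lvm2 on the failing host

# Requires: lvm2 >= 2.02.186-7.el7_8.1 (no epoch, compared as epoch 0):
print(evr_satisfies(installed, (0, "2.02.186", "7.el7_8.1")))  # True: the bug

# Requires: lvm2 >= 7:2.02.186-7.el7_8.1 (correct epoch):
print(evr_satisfies(installed, (7, "2.02.186", "7.el7_8.1")))  # False
```

With the epoch included, the epochs compare equal and the real version comparison (2.02.185 < 2.02.186) correctly rejects the outdated package.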
We may respin to pick up RHVH updates, but we're not waiting on this to get in.

Verified on:

```
vdsm-python-4.30.44-1.el7ev.noarch
vdsm-api-4.30.44-1.el7ev.noarch

[root@lynx25 ~]# rpm -qa | grep lvm2
lvm2-2.02.186-7.el7_8.1.x86_64
```

This bugzilla is included in the oVirt 4.3.9 release, published on March 20th 2020. Since the problem described in this bug report should be resolved in oVirt 4.3.9, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.