Bug 1810905

Summary: [blivet] Package libblockdev-nvdimm is missing in RHEL 8
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Gobinda Das <godas>
Component: rhhi
Assignee: Gobinda Das <godas>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Priority: unspecified
Version: rhhiv-1.8
CC: rhs-bugs
Target Release: RHHI-V 1.8
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Clones: 1810910 (view as bug list)
Last Closed: 2020-08-04 14:51:37 UTC
Type: Bug
Bug Depends On: 1810910
Bug Blocks: 1779977

Description Gobinda Das 2020-03-06 07:52:41 UTC
Description of problem:
The package libblockdev-nvdimm is missing in RHVH 4.4, which causes disk sync failures.
[root@tendrl25 ~]# rpm -qa | grep "blivet"
python3-blivet-3.1.0-19.el8.noarch
blivet-data-3.1.0-19.el8.noarch

[root@tendrl25 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.2


Full log:

[root@tendrl25 ~]# /usr/bin/python3
Python 3.6.8 (default, Dec  5 2019, 15:45:45)
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import blivet
 
** (process:857218): WARNING **: 16:35:31.051: failed to load module lvm: libbd_lvm.so.2: cannot open shared object file: No such file or directory
 
** (process:857218): WARNING **: 16:35:31.059: failed to load module mpath: libbd_mpath.so.2: cannot open shared object file: No such file or directory
 
** (process:857218): WARNING **: 16:35:31.059: failed to load module dm: libbd_dm.so.2: cannot open shared object file: No such file or directory
 
** (process:857218): WARNING **: 16:35:31.060: failed to load module nvdimm: libbd_nvdimm.so.2: cannot open shared object file: No such file or directory
>>> blivetEnv = blivet.Blivet()
>>> blivetEnv.reset()
 
** (process:857218): CRITICAL **: 16:35:49.913: The function 'bd_nvdimm_namespace_get_devname' called, but not implemented!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/blivet.py", line 161, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 413, in populate
    self._populate()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 461, in _populate
    self.handle_device(dev)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 261, in handle_device
    helper_class = self._get_device_helper(info)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 226, in _get_device_helper
    return get_device_helper(info)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/__init__.py", line 53, in get_device_helper
    return _six.next((h for h in _device_helpers if h.match(data)), None)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/__init__.py", line 53, in <genexpr>
    return _six.next((h for h in _device_helpers if h.match(data)), None)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/disk.py", line 228, in match
    udev.device_is_nvdimm_namespace(data))
  File "/usr/lib/python3.6/site-packages/blivet/udev.py", line 966, in device_is_nvdimm_namespace
    ninfo = blockdev.nvdimm_namespace_get_devname(devname)
GLib.Error: g-bd-init-error-quark: The function 'bd_nvdimm_namespace_get_devname' called, but not implemented! (1)
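
The traceback shows where this breaks: blivet's populate() probes each block device, and the disk helper's match() calls blockdev.nvdimm_namespace_get_devname() via udev.device_is_nvdimm_namespace(). With libblockdev-nvdimm missing, libblockdev only registers an unimplemented stub for that function, which raises the GLib.Error above. A minimal sketch to check which libblockdev plugins actually loaded, assuming the python3-blockdev GObject bindings (plugin names match the module warnings above):

import gi
gi.require_version("BlockDev", "2.0")
from gi.repository import BlockDev as bd

# Sketch: try to load all plugins (None = no specific requirements);
# missing ones produce the same "failed to load module ..." warnings
# seen in the log above.
bd.ensure_init(None, None)

# List the plugins that actually loaded. If 'nvdimm' is absent here,
# blivet's populate() will hit the unimplemented
# bd_nvdimm_namespace_get_devname stub on any NVDIMM-looking device.
print(bd.get_available_plugin_names())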



Comment 3 Gobinda Das 2020-04-19 06:44:08 UTC
Moving this to ON_QA, as the libblockdev-nvdimm package is included in the build as a workaround. The dependent bug is a blivet bug, and I don't see a blivet build that solves it, so that bug is still in NEW.

$ git tag --contains 3eecdcac1fe502740880478b05bc58c6948b4f03
v4.40.10
v4.40.11
v4.40.12
v4.40.13

So it's already in RHV-H.

Comment 4 SATHEESARAN 2020-04-29 14:30:32 UTC
Verified with RHVH 4.4 ISO - RHVH-4.4-20200417.0-RHVH-x86_64-dvd1.iso	
This ISO includes the package - libblockdev-plugins-all

[root@ ~]# imgbase w
You are on rhvh-4.4.0.18-0.20200417.0+1

[root@ ~]# rpm -qa | grep blockdev-plugin
libblockdev-plugins-all-2.19-12.el8.x86_64
[root@dhcp35-151 ~]# rpm -qa | grep blockdev
libblockdev-loop-2.19-12.el8.x86_64
libblockdev-dm-2.19-12.el8.x86_64
libblockdev-lvm-2.19-12.el8.x86_64
libblockdev-mpath-2.19-12.el8.x86_64
libblockdev-2.19-12.el8.x86_64
libblockdev-mdraid-2.19-12.el8.x86_64
libblockdev-plugins-all-2.19-12.el8.x86_64
python3-blockdev-2.19-12.el8.x86_64
libblockdev-crypto-2.19-12.el8.x86_64
libblockdev-fs-2.19-12.el8.x86_64
libblockdev-part-2.19-12.el8.x86_64
libblockdev-kbd-2.19-12.el8.x86_64
libblockdev-nvdimm-2.19-12.el8.x86_64
libblockdev-vdo-2.19-12.el8.x86_64
libblockdev-utils-2.19-12.el8.x86_64
libblockdev-swap-2.19-12.el8.x86_64
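
With the nvdimm plugin present, both the import-time warnings and the populate() failure should disappear. A quick re-check of the original reproducer, as a sketch (run as root via /usr/bin/python3, same as the log in the description):

import blivet   # no "failed to load module nvdimm" warning expected now

b = blivet.Blivet()
b.reset()       # previously raised GLib.Error from bd_nvdimm_namespace_get_devname

# Confirm the device tree was populated without the nvdimm stub firing.
print(len(b.devices), "devices discovered")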

Comment 6 errata-xmlrpc 2020-08-04 14:51:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314