Bug 1334448 - Installer changes required for package: ndctl
Summary: Installer changes required for package: ndctl
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: python-blivet
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Blivet Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks: 1271425 1275808
 
Reported: 2016-05-09 15:59 UTC by Eng Ops Maitai User
Modified: 2021-09-03 14:12 UTC
CC: 7 users

Fixed In Version: python-blivet-0.61.15.57-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 23:52:58 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2168 0 normal SHIPPED_LIVE python-blivet bug fix and enhancement update 2016-11-03 13:15:34 UTC

Comment 1 Brian Lane 2016-05-12 18:12:32 UTC
This is for NVDIMM support (https://git.kernel.org/cgit/linux/kernel/git/nvdimm/nvdimm.git/tree/Documentation/nvdimm/nvdimm.txt?h=libnvdimm-for-next)

Support for new devices needs to be added to python-blivet before Anaconda can take advantage of it.

Comment 2 David Cantrell 2016-07-01 13:55:14 UTC
This bug was opened because it was unknown whether or not we need the ndctl utility at install time in order to enumerate nvdimm devices.  Not sure how they show up to userspace.  Reassigning to blivet for further investigation.

Comment 4 David Lehman 2016-07-01 17:50:17 UTC
Jeff, can you offer some insight to what might be required to support these devices at OS installation time? I'm most interested in devices configured such that they can be used as disks, but I also need to know about the other configuration options, whether/how they're usable at installation time, how to tell the difference, &c. Of course, if this information is already written down somewhere a URL would suffice.

Comment 5 Jeff Moyer 2016-07-05 20:20:19 UTC
There are two types of persistent memory devices: pmem and block mode.  pmem devices appear in /dev as pmem# and can be used for direct access (DAX) from userspace.  block mode devices appear as /dev/ndblk#.# and are intended to be used with a block translation table (btt) to provide a legacy block device interface.  I'll address each type of device individually.  It is important to understand that some NVDIMMs can be configured to operate in one mode or the other, or even a combination of both modes.  Other NVDIMMs can only operate in the pmem mode.  Where available, configuration can be performed by the platform firmware (BIOS or UEFI) or via operating system utilities.  Thus, it is conceivable that the OS installer could provide an interface for dividing up the NVDIMMs into block mode and pmem mode regions.  The utilities required to perform this management are not yet part of the OS, but you can track their progress in bug 1270993.  Note that, right now, there are no NVDIMMs on the market that support block mode access.  We can emulate such systems if you need to test code before general hardware availability.
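
As a rough illustration of how an installer component might enumerate the namespaces described above, here is a minimal sketch. It assumes ndctl is present in the install environment, the function name list_nvdimm_namespaces is made up for this example, and the JSON field names ("dev", "mode", "blockdev") are taken from contemporary ndctl output and may differ between versions.

    import json
    import subprocess

    def list_nvdimm_namespaces():
        # Ask ndctl for all configured namespaces in JSON form.
        out = subprocess.check_output(["ndctl", "list", "-N"])
        if not out.strip():
            return []
        data = json.loads(out.decode("utf-8"))
        # Some ndctl versions print a bare object when there is a single result.
        namespaces = data if isinstance(data, list) else [data]
        for ns in namespaces:
            # Field names below are assumptions, not a stable interface.
            print("%s: mode=%s blockdev=/dev/%s" %
                  (ns.get("dev"), ns.get("mode"), ns.get("blockdev", "?")))
        return namespaces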

A block mode persistent memory device does not expose storage to the operating system via the system physical address space.  Instead, it provides windowed access (for example, writing a sector at a time), much like a regular block device.  There are a couple of reasons why you would want to configure a device in this manner, but I won't get into that here.  In order to provide power-fail write atomicity of a single sector, a block translation table should be used on top of the block mode device.  The btt can be configured using the ndctl utility.  If the installer is responsible for formatting a block mode persistent memory device, it is imperative that a btt be created on that device.  Once a btt is created on top of the block mode device, its device name will change from ndblk#.# to ndblk#.#s, where the 's' stands for 'sector' mode.
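
For reference, a sketch of creating a btt-backed, sector-mode namespace from code. This is not a tested recipe: the exact ndctl flags have changed between versions, and both the function name and the "region0" default are examples only.

    import subprocess

    def create_sector_namespace(region="region0"):
        # Reconfigure the region's namespace for sector (btt) mode so that the
        # resulting block device provides single-sector write atomicity.
        subprocess.check_call(["ndctl", "create-namespace",
                               "--region", region,
                               "--mode", "sector"])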

A pmem mode device exposes the persistent memory to the operating system via the system physical address space.  The pmem driver in the kernel will present the configured devices as block devices.  In order to use such devices for DAX, some configuration is required.  When an NVDIMM comes new from the factory, it will be in a "raw" mode.  This means that all of the persistent memory is available for use.  This mode should never be used, as it does not allow for DMA-ing into directly mapped pmem.  Instead, the pmem device should be formatted as a "memory" device.  A memory device sets aside storage for kernel data structures that are required for DMA.  The overhead is 64 bytes per 4k page of persistent memory, which equates to 16 GB per 1 TB.  For smaller persistent memory devices, it may make sense to store this kernel data in DRAM.  That would allow for the maximum storage space possible on the NVDIMMs.  However, for larger devices, the storage space required may exceed the amount of DRAM in the system, and so the kernel data structures must be stored on the NVDIMMs themselves.  Whether to store data on the NVDIMMs or in DRAM is configured using ndctl's "--map" option, where "memory" indicates the kernel data structures should live in DRAM, and "device" indicates that the NVDIMMs should be used.
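
A quick sanity check of the 64-bytes-per-4k-page figure quoted above (the 1 TB device size is just an example to show the arithmetic):

    PAGE_SIZE = 4096        # bytes of persistent memory per page
    STRUCT_PAGE = 64        # bytes of kernel metadata per page

    pmem_bytes = 1 << 40    # 1 TiB of persistent memory
    overhead = pmem_bytes // PAGE_SIZE * STRUCT_PAGE
    print(overhead // (1 << 30))   # -> 16, i.e. roughly 16 GB per 1 TB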

An NVDIMM device used for DAX would require that the file system be mounted with the "-o dax" mount option.  File systems that support DAX include both ext4 and xfs.  Note that the partitions must be aligned at least on 4k boundaries, and ideally at 2MB (or even 1GB) boundaries.  The larger alignment allows for more optimal memory mapping.  Ext4 will fail a mount if the alignment restrictions are not met.  XFS, on the other hand, will simply print a kernel warning.
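
As a sketch of what an alignment check plus a DAX mount could look like: the partition name "pmem0p1", the mount point "/mnt", and the helper name is_aligned are examples only; /sys/class/block/<dev>/start is expressed in 512-byte sectors.

    import subprocess

    def is_aligned(partition, boundary=2 * 1024 * 1024):
        # Partition start offset in bytes, derived from its start sector.
        with open("/sys/class/block/%s/start" % partition) as f:
            start = int(f.read()) * 512
        return start % boundary == 0

    if is_aligned("pmem0p1"):
        # ext4 refuses a dax mount on a misaligned device; xfs only warns.
        subprocess.check_call(["mount", "-o", "dax", "/dev/pmem0p1", "/mnt"])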

It is also possible to use a pmem device as a legacy block device.  To do this, instead of formatting the pmem device as a "memory" device, it can be formatted with a btt.  It is always recommended to place a btt on a pmem device that will be accessed using the legacy block mode.

You may find the following presentations helpful in understanding how to configure persistent memory.  Please also feel free to ask any specific questions you have.  Worst case, I can point you at the relevant documentation.  Best case, I can save you days of searching.

2016 Vault, Managing Persistent Memory:
http://events.linuxfoundation.org/sites/events/files/slides/Managing%20Persistent%20Memory_0.pdf

2016 Red Hat Summit, Persistent Memory in RHEL:
http://people.redhat.com/jmoyer/SS42192_Moyer.pdf

Linux Kernel Documentation (upstream):
Documentation/nvdimm/nvdimm.txt
Documentation/nvdimm/btt.txt

For RHEL 7.3, we will not support installing to or booting from persistent memory.  Our primary focus for 7.3 is making sure that using a pmem mode device as a disk (with a btt) works.  That will be the only mode officially supported for the time being.

Comment 6 Jeff Moyer 2016-07-05 20:23:57 UTC
I forgot to mention that ndctl has a library that you could link to.

Comment 7 Marek Hruscak 2016-07-21 12:25:51 UTC
Hi Jeffrey,
I see that you are quite involved in finding a solution for this request.
Do we own a machine that contains NVDIMMs? If not, will Intel ship some?
QA will need one in order to test the implementation. Without it, we could only do a basic SanityOnly check.

Comment 10 David Lehman 2016-09-06 16:32:42 UTC
https://github.com/rhinstaller/blivet/pull/501

Comment 12 Jakub Vavra 2016-09-16 07:26:50 UTC
I have checked the text installer in Snapshot 3 and it shows/uses the NVDIMM devices as drives.
The VNC install seemed to get stuck in an endless loop or freeze.
The automated install of this compose seems to be a bit unhealthy because it tries to use the NVDIMM as a drive to install the system and ends up with broken LVM later on:
Progress
Setting up the installation environment
.
Creating disklabel on /dev/pmem0s
.
Creating lvmpv on /dev/pmem0s1
...

On Snapshot 4 (RHEL-7.3-20160914.1) the VNC install was working fine, not showing the NVDIMM devices as drives. The automated install works as well, ignoring the NVDIMM devices.

Note: Used hp-dl380gen9-01.lab.eng.bos.redhat.com.

Comment 14 errata-xmlrpc 2016-11-03 23:52:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2168.html

