Bug 2101665 - "/dev/nvme0n1" is not remapped to "/dev/vda" (etc) in boot config files such as "/boot/grub2/device.map"
Summary: "/dev/nvme0n1" is not remapped to "/dev/vda" (etc) in boot config files such ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-v2v
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Laszlo Ersek
QA Contact: Vera
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-28 05:28 UTC by Vera
Modified: 2022-11-15 10:24 UTC
CC List: 9 users

Fixed In Version: virt-v2v-2.0.7-2.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:56:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
The diff between 2 VMs (left: original / right: after converting) (151.94 KB, image/png)
2022-06-28 05:28 UTC, Vera


Links
Red Hat Issue Tracker RHELPLAN-126428 (last updated 2022-06-28 05:30:40 UTC)
Red Hat Product Errata RHSA-2022:7968 (last updated 2022-11-15 09:56:24 UTC)

Description Vera 2022-06-28 05:28:39 UTC
Created attachment 1893086 [details]
The diff between 2 VMs (left: original / right: after converting)

Description of problem:
No NVMe disks are present in the VM after converting an ESX guest that is installed on an NVMe disk via virt-v2v.

Version-Release number of selected component (if applicable):
virt-v2v-2.0.6-2.el9.x86_64
qemu-img-7.0.0-6.el9.x86_64
libnbd-1.12.4-1.el9.x86_64
guestfs-tools-1.48.2-2.el9.x86_64
nbdkit-1.30.6-1.el9.x86_64
libguestfs-1.48.3-3.el9.x86_64
libvirt-8.4.0-3.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a VMware guest whose OS is installed on an NVMe disk.

2. Convert it from VMware to libvirt/RHV via vmx+ssh with virt-v2v (see bz#2070530):
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx -ip /v2v-ops/esx_data_pwd 
[   0.0] Setting up the source: -i vmx ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx
(root.75.219) Password: 
(root.75.219) Password: 
[   7.2] Opening the source
[  11.6] Inspecting the source
[  20.8] Checking for sufficient free disk space in the guest
[  20.8] Converting Red Hat Enterprise Linux 8.6 Beta (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 114.0] Mapping filesystem data to avoid copying unused and blank areas
[ 115.1] Closing the overlay
[ 115.2] Assigning disks to buses
[ 115.2] Checking if the guest needs BIOS or UEFI to boot
[ 115.2] Setting up the destination: -o libvirt
[ 117.4] Copying disk 1/1
█ 100% [****************************************]
[ 222.6] Creating output metadata
[ 222.7] Finishing off

3. Start the guest and check the checkpoints; see the attachment for details.

[root@localhost ~]# cat /boot/grub2/device.map 
# this device map was generated by anaconda
(hd0)      /dev/nvme0n1
[root@localhost ~]# nvme list
Node                  SN                   Model                                    Namespace Usage                      Format           FW Rev  
--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
[root@localhost ~]# 
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   14G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0   13G  0 part 
  ├─rhel-root 253:0    0 11.6G  0 lvm  /
  └─rhel-swap 253:1    0  1.4G  0 lvm  [SWAP]
[root@localhost ~]# 
[root@localhost ~]# ls /dev/nvme*
ls: cannot access '/dev/nvme*': No such file or directory
[root@localhost ~]# 
[root@localhost ~]# ls /dev/nv*
/dev/nvram
[root@localhost ~]# 
[root@localhost ~]# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2
[root@localhost ~]# 


Actual results:
No NVMe disks in the guest after conversion.

Expected results:
The NVMe disks in the converted guest should be kept the same as in the original VM.

Additional info:

Comment 1 Laszlo Ersek 2022-06-28 10:00:25 UTC
(In reply to Vera from comment #0)

> Expected results:
> The NVMe disks in the converted guest should be kept the same as in the
> original VM.

This expectation is wrong. Virt-v2v intends to make the converted guest bootable and functional enough for the sysadmin to log in and implement further customizations. Sticking precisely with the original hardware configuration has never been the intent. Currently all hard disks are mapped to virtio-blk, or -- if the guest OS is so old that it does not support virtio-blk -- IDE.

Please refer to <https://bugzilla.redhat.com/show_bug.cgi?id=2070530#c3>: one of the upstream commits it references is <https://github.com/libguestfs/virt-v2v/commit/75872bf282d7f2322110caca70963717b43806b1>, and that commit explicitly says,

> The devices are mapped to virtio-blk, so in the target the device name
> has to change from /dev/nvme0 to /dev/vda (etc.)

If your focus is on "/boot/grub2/device.map" instead, *that* could be considered a problem. Apparently virt-v2v does not replace "/dev/nvme0n1" with "/dev/vda" in it.

.... And I have a suspect for that, actually: in commit 75872bf282d7 ("input: -i vmx: Add support for NVMe devices", 2022-04-08), we missed extending the following lines:

    and rex_device_cciss = PCRE.compile "^/dev/(cciss/c\\d+d\\d+)(?:p(\\d+))?$"
    and rex_device = PCRE.compile "^/dev/([a-z]+)(\\d*)?$" in

Neither regular expression matches

  /dev/nvme0n1

In fact, the CCISS pattern is almost good -- we can reuse the partition suffix from it. We just need to insert an alternative into the device-name sub-expression:

  nvme\\d+n1

Proposed patch (untested):

diff --git a/convert/convert_linux.ml b/convert/convert_linux.ml
index 59d143bdda4b..bea1e6d5ecfd 100644
--- a/convert/convert_linux.ml
+++ b/convert/convert_linux.ml
@@ -1198,7 +1198,7 @@ let convert (g : G.guestfs) source inspect keep_serial_console _ =
 
     (* Map device names for each entry. *)
     let rex_resume = PCRE.compile "^resume=(/dev/[-a-z\\d/_]+)(.*)$"
-    and rex_device_cciss = PCRE.compile "^/dev/(cciss/c\\d+d\\d+)(?:p(\\d+))?$"
+    and rex_device_cciss_or_nvme = PCRE.compile "^/dev/(cciss/c\\d+d\\d+|nvme\\d+n1)(?:p(\\d+))?$"
     and rex_device = PCRE.compile "^/dev/([a-z]+)(\\d*)?$" in
 
     let rec replace_if_device path value =
@@ -1216,7 +1216,7 @@ let convert (g : G.guestfs) source inspect keep_serial_console _ =
           device
       in
 
-      if PCRE.matches rex_device_cciss value then (
+      if PCRE.matches rex_device_cciss_or_nvme value then (
         let device = PCRE.sub 1
         and part = try PCRE.sub 2 with Not_found -> "" in
         "/dev/" ^ replace device ^ part

Comment 2 Richard W.M. Jones 2022-06-28 10:13:43 UTC
Do you think it's better (for maintainability) to just add a new regular
expression there for matching nvme?  Anyway I agree with the analysis.

Comment 3 Laszlo Ersek 2022-07-05 13:29:53 UTC
Right, we can add a new regex too. In fact that was what I started to write, but then noticed it was mostly identical to the cciss one, and that the cciss one could be reused by inserting a small alternative.

Anyway... I've now tried to reproduce this, installing a RHEL-8.6 guest on ESXi -- and I find that "/boot/grub2/device.map" does not exist in the installed guest at all. Is that file firmware-specific perhaps? ESXi selected EFI automatically, I didn't change the firmware type.

Vera, did you use EFI or BIOS for the vmware guest? Thanks.

Comment 4 Laszlo Ersek 2022-07-05 13:44:23 UTC
... I've checked in a SeaBIOS RHEL9 guest that I've had lying around -- it does have "/boot/grub2/device.map" (and in fact it makes sense for that file not to exist in an EFI installation, as booting under EFI ought to have no use for such a "device map"). I'll reinstall the vmw guest.

Comment 5 Laszlo Ersek 2022-07-05 13:53:41 UTC
Confirmed, /boot/grub2/device.map (created by anaconda) is BIOS specific.

Comment 6 Laszlo Ersek 2022-07-05 18:00:54 UTC
(I'll post the patch tomorrow.)

Comment 7 Vera 2022-07-06 01:33:53 UTC
(In reply to Laszlo Ersek from comment #3)
> Right, we can add a new regex too. In fact that was what I started to write,
> but then noticed it was mostly identical to the cciss one, and that the
> cciss one could be reused by inserting a small alternative.
> 
> Anyway... I've now tried to reproduce this, installing a RHEL-8.6 guest on
> ESXi -- and I find that "/boot/grub2/device.map" does not exist in the
> installed guest at all. Is that file firmware-specific perhaps? ESXi
> selected EFI automatically, I didn't change the firmware type.
> 
> Vera, did you use EFI or BIOS for the vmware guest? Thanks.

Laszlo, right. The VMware guest uses BIOS.

Comment 8 Laszlo Ersek 2022-07-06 10:33:44 UTC
[v2v PATCH] convert/convert_linux: complete the remapping of NVMe devices
Message-Id: <20220706103215.5607-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-July/029408.html

Comment 9 Laszlo Ersek 2022-07-06 13:55:15 UTC
(In reply to Laszlo Ersek from comment #8)
> [v2v PATCH] convert/convert_linux: complete the remapping of NVMe devices
> Message-Id: <20220706103215.5607-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-July/029408.html

Upstream commit 4368b94ee172.

Comment 10 Vera 2022-07-18 08:19:44 UTC
Verified with the versions:
libguestfs-1.48.4-1.el9.x86_64
qemu-kvm-7.0.0-8.el9.x86_64
libnbd-1.12.5-1.el9.x86_64
virt-v2v-2.0.7-2.el9.x86_64
libvirt-8.5.0-2.el9.x86_64


Steps:
1. Prepare a VMware guest whose OS is installed on an NVMe disk.

2. Convert it from VMware to libvirt/RHV via vmx+ssh with virt-v2v:
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx -ip /v2v-ops/esx_data_pwd
[   0.0] Setting up the source: -i vmx ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx
(root.75.219) Password: 
(root.75.219) Password: 
[  12.8] Opening the source
[  17.8] Inspecting the source
[  35.6] Checking for sufficient free disk space in the guest
[  35.6] Converting Red Hat Enterprise Linux 8.6 Beta (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 178.1] Mapping filesystem data to avoid copying unused and blank areas
[ 179.8] Closing the overlay
[ 180.0] Assigning disks to buses
[ 180.0] Checking if the guest needs BIOS or UEFI to boot
[ 180.0] Setting up the destination: -o libvirt
[ 181.3] Copying disk 1/1
█ 100% [****************************************]
[ 288.1] Creating output metadata
[ 288.2] Finishing off

3. Start the guest and check the checkpoints; see the attachment for details.

# virsh start esx6.7-rhel8.6-nvme-disk 
Domain 'esx6.7-rhel8.6-nvme-disk' started

[root@localhost ~]# cat /boot/grub2/device.map
# this device map was generated by anaconda
(hd0)      /dev/vda
[root@localhost ~]# 
[root@localhost ~]# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2
[root@localhost ~]# 
[root@localhost ~]# ls /dev/nv*
/dev/nvram
[root@localhost ~]# nvme list
Node                  SN                   Model                                    Namespace Usage                      Format           FW Rev  
--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
[root@localhost ~]# 
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   14G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0   13G  0 part 
  ├─rhel-root 253:0    0 11.6G  0 lvm  /
  └─rhel-swap 253:1    0  1.4G  0 lvm  [SWAP]
[root@localhost ~]# 

Marking as Verified:Tested.

Comment 11 mxie@redhat.com 2022-07-19 06:17:45 UTC
Hi Laszlo,
  
   I have a question to confirm: I noticed that there are four NVMe devices under /dev in the original guest, but only three devices under /dev after the v2v conversion (for details, please check the attached screenshot). That is, '/dev/nvme0n1' is changed to '/dev/vda', '/dev/nvme0n1p1' to '/dev/vda1', and '/dev/nvme0n1p2' to '/dev/vda2', but '/dev/nvme0' has disappeared. Is that expected?

   Before:

   # ls /dev/nvme*
    /dev/nvme0  /dev/nvme0n1  /dev/nvme0n1p1  /dev/nvme0n1p2
   
   After:

   # ls /dev/vd*
    /dev/vda  /dev/vda1  /dev/vda2

Comment 12 Laszlo Ersek 2022-07-19 13:29:46 UTC
I had some vague memories here, and the following Server Fault discussion confirms them:

https://serverfault.com/questions/892134/why-is-there-both-character-device-and-block-device-for-nvme

/dev/nvme0     -- character device, for manipulating the whole NVMe controller
/dev/nvme0n1   -- namespace #1, block device ("whole disk")
/dev/nvme0n1p1 -- namespace #1 partition #1, block device ("partition")
/dev/nvme0n1p2 -- namespace #1 partition #2, block device ("partition")

/dev/nvme0 is irrelevant for virt-v2v; only the storage devices need to be converted. Plus, on the target side, we have no NVMe controller at all.
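
To make the split concrete, here is a hypothetical classifier (same ocaml-re assumption as the sketch in comment 1; not virt-v2v code) that sorts /dev/nvme* names into the three kinds of nodes; only the block devices have virtio-blk counterparts:

(* Hypothetical classifier for Linux NVMe device names, illustrating
   which nodes are block devices that device-name remapping touches. *)
let rex_nvme =
  Re.compile (Re.Pcre.re "^/dev/nvme(\\d+)(?:n(\\d+)(?:p(\\d+))?)?$")

let classify dev =
  match Re.exec_opt rex_nvme dev with
  | None -> "not an NVMe node"
  | Some g ->
    if Re.Group.test g 3 then "partition (block device) -> e.g. /dev/vda1"
    else if Re.Group.test g 2 then "namespace (block device) -> e.g. /dev/vda"
    else "controller (character device) -> no counterpart on the KVM side"

let () =
  List.iter
    (fun d -> Printf.printf "%-16s %s\n" d (classify d))
    [ "/dev/nvme0"; "/dev/nvme0n1"; "/dev/nvme0n1p1"; "/dev/nvme0n1p2" ]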

Comment 16 Vera 2022-08-03 07:22:00 UTC
Verified with the following versions:
libguestfs-1.48.4-1.el9.x86_64
qemu-kvm-7.0.0-9.el9.x86_64
libnbd-1.12.6-1.el9.x86_64
virt-v2v-2.0.7-4.el9.x86_64
libvirt-8.5.0-4.el9.x86_64


Steps:
1. Prepare a VMware guest whose OS is installed on an NVMe disk.

2. Convert it from VMware to libvirt/RHV via vmx+ssh with virt-v2v:
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx -ip /v2v-ops/esx_data_pwd -on esx6.7-rhel8.6-nvme-disk-1
[   0.1] Setting up the source: -i vmx ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx
(root.75.219) Password: 
(root.75.219) Password: 
[  11.6] Opening the source
[  19.4] Inspecting the source
[  37.0] Checking for sufficient free disk space in the guest
[  37.0] Converting Red Hat Enterprise Linux 8.6 Beta (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 178.3] Mapping filesystem data to avoid copying unused and blank areas
[ 179.9] Closing the overlay
[ 180.2] Assigning disks to buses
[ 180.2] Checking if the guest needs BIOS or UEFI to boot
[ 180.2] Setting up the destination: -o libvirt
[ 181.4] Copying disk 1/1
█ 100% [****************************************]
[ 290.5] Creating output metadata
[ 290.5] Finishing off


3. Start the guest and check the checkpoints; see the attachment for details.

# virsh start esx6.7-rhel8.6-nvme-disk-1
Domain 'esx6.7-rhel8.6-nvme-disk-1' started

[root@localhost ~]# cat /boot/grub2/device.map
# this device map was generated by anaconda
(hd0)      /dev/vda
[root@localhost ~]# 
[root@localhost ~]# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2

[root@localhost ~]# nvme list
Node                  SN                   Model                                    Namespace Usage                      Format           FW Rev  
--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
[root@localhost ~]# 
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   14G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0   13G  0 part 
  ├─rhel-root 253:0    0 11.6G  0 lvm  /
  └─rhel-swap 253:1    0  1.4G  0 lvm  [SWAP]
[root@localhost ~]# 

Moving to Verified.

Comment 18 errata-xmlrpc 2022-11-15 09:56:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt-v2v security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7968

