Bug 2101665
| Field | Value |
|---|---|
| Summary | "/dev/nvme0n1" is not remapped to "/dev/vda" (etc.) in boot config files such as "/boot/grub2/device.map" |
| Product | Red Hat Enterprise Linux 9 |
| Component | virt-v2v |
| Version | 9.1 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | medium |
| Reporter | Vera <vwu> |
| Assignee | Laszlo Ersek <lersek> |
| QA Contact | Vera <vwu> |
| CC | chhu, hongzliu, juzhou, lersek, mxie, rjones, tyan, tzheng, xiaodwan |
| Target Milestone | rc |
| Keywords | Triaged |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | virt-v2v-2.0.7-2.el9 |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Last Closed | 2022-11-15 09:56:15 UTC |
Description
Vera, 2022-06-28 05:28:39 UTC
(In reply to Vera from comment #0)
> Expected results:
> nvme disks in after-converting guest should be kept the same as the original
> VM ones.

This expectation is wrong. Virt-v2v intends to make the converted guest bootable and functional enough for the sysadmin to log in and implement further customizations. Sticking precisely with the original hardware configuration has never been the intent. Currently all hard disks are mapped to virtio-blk, or -- if the guest OS is so old that it does not support virtio-blk -- IDE.

Please refer to <https://bugzilla.redhat.com/show_bug.cgi?id=2070530#c3>: one of the upstream commits it references is <https://github.com/libguestfs/virt-v2v/commit/75872bf282d7f2322110caca70963717b43806b1>, and that commit explicitly says:

> The devices are mapped to virtio-blk, so in the target the device name
> has to change from /dev/nvme0 to /dev/vda (etc.)

If your focus is on "/boot/grub2/device.map" instead, *that* could be considered a problem: apparently virt-v2v does not replace "/dev/nvme0n1" with "/dev/vda" in it.

And I have a suspect for that, actually. In commit 75872bf282d7 ("input: -i vmx: Add support for NVMe devices", 2022-04-08), we missed extending the following lines:

```ocaml
and rex_device_cciss = PCRE.compile "^/dev/(cciss/c\\d+d\\d+)(?:p(\\d+))?$"
and rex_device = PCRE.compile "^/dev/([a-z]+)(\\d*)?$" in
```

Neither regular expression matches /dev/nvme0n1. In fact, the CCISS pattern is almost good -- we can reuse the partition suffix from it. We just need to insert an alternative pattern in the device name sub-expression:

```
nvme\\d+n1
```

Proposed patch (untested):

```diff
diff --git a/convert/convert_linux.ml b/convert/convert_linux.ml
index 59d143bdda4b..bea1e6d5ecfd 100644
--- a/convert/convert_linux.ml
+++ b/convert/convert_linux.ml
@@ -1198,7 +1198,7 @@ let convert (g : G.guestfs) source inspect keep_serial_console _ =
     (* Map device names for each entry.
      *)
     let rex_resume = PCRE.compile "^resume=(/dev/[-a-z\\d/_]+)(.*)$"
-    and rex_device_cciss = PCRE.compile "^/dev/(cciss/c\\d+d\\d+)(?:p(\\d+))?$"
+    and rex_device_cciss_or_nvme = PCRE.compile "^/dev/(cciss/c\\d+d\\d+|nvme\\d+n1)(?:p(\\d+))?$"
     and rex_device = PCRE.compile "^/dev/([a-z]+)(\\d*)?$" in

     let rec replace_if_device path value =
@@ -1216,7 +1216,7 @@ let convert (g : G.guestfs) source inspect keep_serial_console _ =
         device
       in

-      if PCRE.matches rex_device_cciss value then (
+      if PCRE.matches rex_device_cciss_or_nvme value then (
        let device = PCRE.sub 1
        and part = try PCRE.sub 2 with Not_found -> "" in
        "/dev/" ^ replace device ^ part
```

---

Do you think it's better (for maintainability) to just add a new regular expression there for matching nvme? Anyway, I agree with the analysis.

---

Right, we can add a new regex too. In fact that was what I started to write, but then noticed it was mostly identical to the CCISS one, and that the CCISS one could be reused by inserting a small alternative.

Anyway... I've now tried to reproduce this, installing a RHEL-8.6 guest on ESXi -- and I find that "/boot/grub2/device.map" does not exist in the installed guest at all. Is that file firmware-specific perhaps? ESXi selected EFI automatically; I didn't change the firmware type.

Vera, did you use EFI or BIOS for the vmware guest? Thanks.

---

I've checked in a SeaBIOS RHEL9 guest that I've had lying around -- it does have "/boot/grub2/device.map" (and in fact it makes sense for that file not to exist in an EFI installation, as booting under EFI ought to have no use for such a "device map"). I'll reinstall the vmw guest.

---

Confirmed: /boot/grub2/device.map (created by anaconda) is BIOS-specific. (I'll post the patch tomorrow.)

---

(In reply to Laszlo Ersek from comment #3)
> Right, we can add a new regex too. In fact that was what I started to write,
> but then noticed it was mostly identical to the ccis one, and that the ccis
> one could be reused by inserting a small alternative.
>
> Anyway... I've now tried to reproduce this, installing a RHEL-8.6 guest on
> ESXi -- and I find that "/boot/grub2/device.map" does not exist in the
> installed guest at all. Is that file firmware-specific perhaps? ESXi
> selected EFI automatically, I didn't change the firmware type.
>
> Vera, did you use EFI or BIOS for the vmware guest? Thanks.

Laszlo, right. The VMware guest uses BIOS.

---

[v2v PATCH] convert/convert_linux: complete the remapping of NVMe devices
Message-Id: <20220706103215.5607-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-July/029408.html

---

(In reply to Laszlo Ersek from comment #8)
> [v2v PATCH] convert/convert_linux: complete the remapping of NVMe devices
> Message-Id: <20220706103215.5607-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-July/029408.html

Upstream commit 4368b94ee172.

---

Verified with the versions:

libguestfs-1.48.4-1.el9.x86_64
qemu-kvm-7.0.0-8.el9.x86_64
libnbd-1.12.5-1.el9.x86_64
virt-v2v-2.0.7-2.el9.x86_64
libvirt-8.5.0-2.el9.x86_64

Steps:

1. Prepare a VMware guest whose OS is installed on an NVMe disk.

2. Convert it from VMware to libvirt/RHV via vmx+ssh with virt-v2v:

```
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx -ip /v2v-ops/esx_data_pwd
[   0.0] Setting up the source: -i vmx ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx
(root.75.219) Password:
(root.75.219) Password:
[  12.8] Opening the source
[  17.8] Inspecting the source
[  35.6] Checking for sufficient free disk space in the guest
[  35.6] Converting Red Hat Enterprise Linux 8.6 Beta (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 178.1] Mapping filesystem data to avoid copying unused and blank areas
[ 179.8] Closing the overlay
[ 180.0] Assigning disks to buses
[ 180.0] Checking if the guest needs BIOS or UEFI to boot
[ 180.0] Setting up the destination: -o libvirt
[ 181.3] Copying disk 1/1
█ 100% [****************************************]
[ 288.1] Creating output metadata
[ 288.2] Finishing off
```

3. Start the guest and check the checkpoints; please check the attachment for the details.

```
# virsh start esx6.7-rhel8.6-nvme-disk
Domain 'esx6.7-rhel8.6-nvme-disk' started

[root@localhost ~]# cat /boot/grub2/device.map
# this device map was generated by anaconda
(hd0)      /dev/vda
[root@localhost ~]# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2
[root@localhost ~]# ls /dev/nv*
/dev/nvram
[root@localhost ~]# nvme list
Node   SN   Model   Namespace   Usage   Format   FW Rev
-----  ---  ------  ----------  ------  -------  ------
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   14G  0 disk
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0   13G  0 part
  ├─rhel-root 253:0    0 11.6G  0 lvm  /
  └─rhel-swap 253:1    0  1.4G  0 lvm  [SWAP]
```

Marking as Verified:Tested.

---

Hi Laszlo, I have a question to confirm. I noticed that there are four nvme devices in the dev path of the original guest, but three devices in the dev path after the v2v conversion (for details please check the attached screenshot). That is, '/dev/nvme0n1' is changed to '/dev/vda', '/dev/nvme0n1p1' is changed to '/dev/vda1', and '/dev/nvme0n1p2' is changed to '/dev/vda2', but /dev/nvme0 has disappeared. Is that expected?

Before:

```
# ls /dev/nvme*
/dev/nvme0  /dev/nvme0n1  /dev/nvme0n1p1  /dev/nvme0n1p2
```

After:

```
# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2
```

---

I've had some vague memories here, and the following Server Fault discussion confirms them:

https://serverfault.com/questions/892134/why-is-there-both-character-device-and-block-device-for-nvme

- /dev/nvme0 -- character device, for manipulating the whole NVMe controller
- /dev/nvme0n1 -- namespace #1, block device ("whole disk")
- /dev/nvme0n1p1 -- namespace #1, partition #1, block device ("partition")
- /dev/nvme0n1p2 -- namespace #1, partition #2, block device ("partition")

/dev/nvme0 is irrelevant for virt-v2v; only the storage devices need to be converted. Plus, on the target side, we have no NVMe controller at all.

---

Verified with the following versions:

libguestfs-1.48.4-1.el9.x86_64
qemu-kvm-7.0.0-9.el9.x86_64
libnbd-1.12.6-1.el9.x86_64
virt-v2v-2.0.7-4.el9.x86_64
libvirt-8.5.0-4.el9.x86_64

Steps:

1. Prepare a VMware guest whose OS is installed on an NVMe disk.

2. Convert it from VMware to libvirt/RHV via vmx+ssh with virt-v2v:

```
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx -ip /v2v-ops/esx_data_pwd -on esx6.7-rhel8.6-nvme-disk-1
[   0.1] Setting up the source: -i vmx ssh://root.75.219/vmfs/volumes/esx6.7-function/esx6.7-rhel8.6-nvme-disk/esx6.7-rhel8.6-nvme-disk.vmx
(root.75.219) Password:
(root.75.219) Password:
[  11.6] Opening the source
[  19.4] Inspecting the source
[  37.0] Checking for sufficient free disk space in the guest
[  37.0] Converting Red Hat Enterprise Linux 8.6 Beta (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 178.3] Mapping filesystem data to avoid copying unused and blank areas
[ 179.9] Closing the overlay
[ 180.2] Assigning disks to buses
[ 180.2] Checking if the guest needs BIOS or UEFI to boot
[ 180.2] Setting up the destination: -o libvirt
[ 181.4] Copying disk 1/1
█ 100% [****************************************]
[ 290.5] Creating output metadata
[ 290.5] Finishing off
```

3. Start the guest and check the checkpoints; please check the attachment for the details.

```
# virsh start esx6.7-rhel8.6-nvme-disk-1
Domain 'esx6.7-rhel8.6-nvme-disk-1' started

[root@localhost ~]# cat /boot/grub2/device.map
# this device map was generated by anaconda
(hd0)      /dev/vda
[root@localhost ~]# ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2
[root@localhost ~]# nvme list
Node   SN   Model   Namespace   Usage   Format   FW Rev
-----  ---  ------  ----------  ------  -------  ------
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   14G  0 disk
├─vda1        252:1    0    1G  0 part /boot
└─vda2        252:2    0   13G  0 part
  ├─rhel-root 253:0    0 11.6G  0 lvm  /
  └─rhel-swap 253:1    0  1.4G  0 lvm  [SWAP]
```

Moving to Verified.

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Low: virt-v2v security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7968
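The regex gap analyzed above can be reproduced outside virt-v2v. Below is a minimal Python sketch (Python `re` standing in for the OCaml PCRE bindings used in convert_linux.ml); the `remap` helper and its `device_map` argument are hypothetical simplifications of virt-v2v's `replace_if_device` logic, added only to illustrate the effect of the extended pattern:

```python
import re

# The two patterns from convert/convert_linux.ml before the fix,
# translated from OCaml string literals to Python raw strings.
rex_device_cciss = re.compile(r"^/dev/(cciss/c\d+d\d+)(?:p(\d+))?$")
rex_device = re.compile(r"^/dev/([a-z]+)(\d*)?$")

# Neither pre-fix pattern matches an NVMe namespace block device:
# the digit in "nvme0" terminates the [a-z]+ run before "n1".
assert rex_device_cciss.match("/dev/nvme0n1") is None
assert rex_device.match("/dev/nvme0n1") is None

# The fix inserts the alternative nvme\d+n1 into the CCISS pattern,
# reusing its optional "p<digits>" partition suffix.
rex_device_cciss_or_nvme = re.compile(
    r"^/dev/(cciss/c\d+d\d+|nvme\d+n1)(?:p(\d+))?$")

def remap(path, device_map):
    """Hypothetical stand-in for replace_if_device: rewrite a source
    device path to its virtio-blk name, preserving the partition number."""
    m = rex_device_cciss_or_nvme.match(path)
    if m is None:
        return path                      # not a cciss/nvme device path
    device = m.group(1)
    part = m.group(2) or ""
    return "/dev/" + device_map.get(device, device) + part

# With the whole-disk mapping nvme0n1 -> vda, boot-config entries
# remap exactly as the bug report expects:
mapping = {"nvme0n1": "vda"}
assert remap("/dev/nvme0n1", mapping) == "/dev/vda"
assert remap("/dev/nvme0n1p1", mapping) == "/dev/vda1"
assert remap("/dev/nvme0n1p2", mapping) == "/dev/vda2"
```

Note that the partition suffix is captured separately and re-appended, which is why a single alternation inside the existing CCISS group suffices instead of a whole new regex.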