Bug 2124538 - virt-p2v can't detect soft iSCSI storage where the OS is installed
Summary: virt-p2v can't detect soft iSCSI storage where the OS is installed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-p2v
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Laszlo Ersek
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-09-06 12:53 UTC by Vera
Modified: 2022-09-28 02:57 UTC (History)
10 users

Fixed In Version: upstream 28d7ce8c9db9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-28 02:57:12 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments
p2v checking iscsi storage for guest rhel9.1 (281.78 KB, image/png)
2022-09-06 12:53 UTC, Vera


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-133310 0 None None None 2022-09-06 12:57:30 UTC

Description Vera 2022-09-06 12:53:18 UTC
Created attachment 1909792 [details]
p2v checking iscsi storage for guest rhel9.1

Description of problem:
virt-p2v can't detect soft iSCSI storage where the OS is installed

Version-Release number of selected component (if applicable):
virt-v2v-2.0.7-6.el9.x86_64
livecd-p2v-202207151220.iso

How reproducible:
100%

Steps to Reproduce:
1. Install the OS (tried with both RHEL 9.1 and Windows 2022, UEFI) on a 60G soft iSCSI storage LUN on a DELL 740 server with the integrated network card Broadcom Gigabit Ethernet BCM5720;

The OS (both RHEL 9.1 and Windows 2022) installed on the iSCSI storage can be booted successfully.

[root@vm-212-169 ~]# lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                          8:0    0  1.1T  0 disk 
sdc                          8:32   0   60G  0 disk 
├─sdc1                       8:33   0  600M  0 part /boot/efi
├─sdc2                       8:34   0    1G  0 part /boot
└─sdc3                       8:35   0 58.4G  0 part 
  ├─rhel_vm--212--169-swap 253:0    0    4G  0 lvm  [SWAP]
  ├─rhel_vm--212--169-home 253:1    0 17.8G  0 lvm  /home
  └─rhel_vm--212--169-root 253:2    0 36.6G  0 lvm  /
sr0                         11:0    1 1024M  0 rom  


2. Boot the server with virt-p2v iso, input virt-v2v server ip and connect to the server.

3. Check the information under "Fixed Hard Disks" and also in the XTerm; no iSCSI storage shows up.
Tried to log in to the iSCSI target and check again; still none.

Actual results:
iSCSI storage can't be detected by virt-p2v.

Expected results:
iSCSI storage can be detected successfully by virt-p2v.

Additional info:
Please check the attachment for the detailed iscsiadm commands and results.

Comment 4 Richard W.M. Jones 2022-09-06 14:39:38 UTC
Hi Vera, just because I'm not very clear what "soft iSCSI" means, is
the iSCSI a feature of the Broadcom BCM5720 card?  And what tool would
normally be used to enable & configure iSCSI in a RHEL guest?  (Presumably
not iscsiadm from the screenshot.)

Comment 5 Vera 2022-09-07 10:25:33 UTC
(In reply to Richard W.M. Jones from comment #4)
> Hi Vera, just because I'm not very clear what "soft iSCSI" means, is
> the iSCSI a feature of the Broadcom BCM5720 card? 

Hi rjones,

"Soft iSCSI" means that the iSCSI adapter is the integrated NIC, not an HBA card (hard iSCSI).

Our new server, a DELL PowerEdge 740 with the integrated Broadcom BCM5720 network card, supports iSCSI boot,
so we didn't purchase an HBA card from the vendor.

> And what tool would
> normally be used to enable & configure iSCSI in a RHEL guest?  (Presumably
> not iscsiadm from the screenshot.)

After it is configured/enabled on the server, we can use it like a normal disk.

That means during the RHEL installation we can use "Add disks" to discover/log in to the iSCSI storage with the initiator name/target IP,
and choose this disk to install the OS on.

However, I don't know whether the handling/connection methods for this type of iSCSI connection differ from the hard one.

Comment 6 Richard W.M. Jones 2022-09-07 16:09:05 UTC
Apologies, I have to triage bugs, so assigning to Laszlo.

Comment 7 Laszlo Ersek 2022-09-08 13:41:18 UTC
Hi Vera,

my understanding is the following, regarding the normal OS boot (or normal OS installation):

(1) You first enter the UEFI firmware setup on the machine.

(2) Using various dialogs there, you configure the iSCSI connection (target name, IP address, username, password, etc.).

(3) You boot Linux or Windows from the iSCSI target.

(4) The OS just booted can use the iSCSI disk.

Is this correct? Please confirm.

If the list of steps is correct, then I can describe a further piece of information. When your UEFI host firmware boots the OS (installer, or installed OS) from the iSCSI target, the host firmware also installs a new ACPI table that is called "iBFT". This ACPI table describes the iSCSI connection parameters for the OS, so that the OS can continue using the same iSCSI connection / target.

When you boot the P2V ISO on the physical machine, the host firmware does *not* create an iBFT ACPI table (because this is not an "iSCSI boot", so there is no "iSCSI Boot Firmware Table"). Therefore the OS will not know how to access the iSCSI target. Because iBFT is missing, at best the admin can supply the same set of information with iscsiadm. If that does not work, then I don't think this card can work well for P2V purposes.
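For reference, supplying the missing iBFT information by hand would look roughly like the following iscsiadm session. This is a sketch only: the portal address and the IQNs below are placeholders, not values from this bug, and the commands need a reachable target on the machine itself.

```shell
# Sketch: manually providing the iBFT-equivalent connection info with iscsiadm.
# 192.0.2.10:3260 and both IQNs are placeholder values.

# 1. Set the initiator name the target expects:
echo "InitiatorName=iqn.1994-05.com.redhat:example-initiator" \
    > /etc/iscsi/initiatorname.iscsi

# 2. Discover the targets offered by the portal:
iscsiadm --mode discovery --type sendtargets --portal 192.0.2.10:3260

# 3. Log in to the discovered target:
iscsiadm --mode node \
    --targetname iqn.1986-03.com.example:target0 \
    --portal 192.0.2.10:3260 --login

# 4. Confirm that a new /dev/sd* node appeared:
lsblk
```

(These commands are hardware-dependent and cannot run without a live iSCSI portal; they are shown as a command fragment, not a runnable script.)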

Please provide the following information:

(a) How did you install RHEL-9.1 on the iSCSI storage, using this computer?

For example, did you boot a Live image from a USB stick, and Anaconda allowed you to specify the iSCSI connection parameters?

(b) Please install RHEL-9.1 to /dev/sda (a 1TB local disk, if I understand correctly), and then boot RHEL-9.1 from /dev/sda.

After having booted the system like this, can you access the iSCSI target at all, with iscsiadm commands?

Basically what I'm trying to figure out is whether *without P2V*, RHEL-9.1 is capable of accessing the iSCSI target, when booted off of a *local* disk. I'm trying to see if RHEL-9.1 has any means to connect to the iSCSI target *without* the iBFT ACPI table from the host firmware. Thanks.

(You can also give me the login details for the machine, so I could take a look myself. However, I will need a locally installed OS (/dev/sda) for investigating; or at least a USB live image inserted in a USB slot (and the ability to remotely reboot the computer). Thanks.)

Comment 10 Laszlo Ersek 2022-09-09 11:21:37 UTC
Phew, this wasn't easy to crack open, but here goes.

There is a bug in Vera's configuration (I'll write a private comment about that first, due to the internal host names), and there is a bug (or limitation) in virt-p2v too. I'll write a public comment about that, second.

Comment 12 Laszlo Ersek 2022-09-09 12:05:53 UTC
Assuming you launched XTerm from the Connection dialog of virt-p2v, set a proper initiator name, issued the proper iscsiadm commands, and confirmed with "lsblk" (e.g.) that the iSCSI disk has been attached as another /dev/sd* device node, you will *still* not see the iSCSI device in the Target dialog of virt-p2v.

That's due to a genuine bug in p2v. The list of hard disks is collected early, and the first dialog (where you can launch XTerm, and run iscsiadm commands) is shown afterwards:

main()                       [main.c]
  set_config_defaults()      [main.c]
    find_all_disks()         [main.c]
  gui_conversion()           [gui.c]
    show_connection_dialog() [gui.c]

Therefore, the list of hard disks ("all_disks") is stale by the time we get to the Target dialog.

This is not a trivial problem. The XTerm window remains open even if we click Next and advance from Connection to Target, so we can't delay the fetching of all_disks until the XTerm window is closed. We also can't simply (re-)enumerate all_disks upon showing the Target dialog, because that would break the "--test-disk" option.

Basically virt-p2v has no built-in support for iSCSI block devices at the moment; that much is clear from both the UI and the data flow (= the fact that whatever we do in XTerm does not influence "all_disks" after startup). Currently we expect all disks to be available at virt-p2v launch.

So this is a feature request. We should think more on when it would be best to (re-)enumerate the hard disks.
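To illustrate the data-flow problem (this is a shell stand-in for illustration, not virt-p2v's actual C code in main.c), the startup-time enumeration can be sketched as follows; calling it once at launch and never again is exactly why a disk attached later via iscsiadm is missed, and re-running it when showing the Target dialog is the essence of a "refresh" action:

```shell
# Sketch: enumerate fixed disks by scanning a sysfs "block" directory,
# skipping optical/virtual device names, loosely mimicking what
# virt-p2v's find_all_disks() does once at startup.
find_all_disks() {
    sysfs_block="${1:-/sys/block}"
    for dev in "$sysfs_block"/*; do
        [ -e "$dev" ] || continue          # empty directory: glob stays literal
        name=$(basename "$dev")
        case "$name" in
            sr*|loop*|ram*|dm-*) continue ;;  # skip CD-ROM, loop, ramdisk, device-mapper
        esac
        echo "$name"
    done
}
```

Running this a second time after a successful iscsiadm login would pick up the freshly attached /dev/sd* node.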

Comment 13 Richard W.M. Jones 2022-09-09 12:34:40 UTC
Could we have a "refresh" (or reload or whatever) button on the second page?

Also I think we should document the correct way to use XTerm to set up
an iSCSI device, since it's clearly not obvious.  I don't think I would
have discovered by myself that just logging into the initiator with the
correct login is insufficient to find the filesystems.  But we can put all of
that knowledge into the manual.

(Also, this is why I prefer NBD over iSCSI.  Every time I have used or
interacted with iSCSI I have become more firm in my belief that it is insane.)

Comment 14 Laszlo Ersek 2022-09-11 10:04:56 UTC
FWIW the RHEL9 manual

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/configuring-an-iscsi-initiator_managing-storage-devices

still only says

> Find the iSCSI disk name and create a file system on this iSCSI disk:
>
> # grep "Attached SCSI" /var/log/messages
>
> # mkfs.ext4 /dev/disk_name

This is of course unusable programmatically, and not too easy to identify for a human either.

One step closer to that could be the following device node symlinks, created upon a successful iSCSI target scan:

lrwxrwxrwx. 1 root root  9 Sep 11 17:56 /dev/disk/by-path/ip-10.73.5.1:3260-iscsi-iqn.1986-03.com.ibm:2145.clusterv3700v2.node1-lun-0 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Sep 11 17:56 /dev/disk/by-path/ip-10.73.5.1:3260-iscsi-iqn.1986-03.com.ibm:2145.clusterv3700v2.node1-lun-0-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root 10 Sep 11 17:56 /dev/disk/by-path/ip-10.73.5.1:3260-iscsi-iqn.1986-03.com.ibm:2145.clusterv3700v2.node1-lun-0-part2 -> ../../sdb2
lrwxrwxrwx. 1 root root 10 Sep 11 17:56 /dev/disk/by-path/ip-10.73.5.1:3260-iscsi-iqn.1986-03.com.ibm:2145.clusterv3700v2.node1-lun-0-part3 -> ../../sdb3
lrwxrwxrwx. 1 root root 10 Sep 11 17:56 /dev/disk/by-path/ip-10.73.5.1:3260-iscsi-iqn.1986-03.com.ibm:2145.clusterv3700v2.node1-lun-0-part4 -> ../../sdb4

The symlink filenames contain the IP address and TCP port of the target, and the target's IQN. Furthermore, presumably for *each* LUN detected on the target, a whole-blockdev symlink and one symlink per partition exist.
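Such a by-path name can be picked apart mechanically. As a sketch, assuming the names always follow the ip-&lt;addr&gt;:&lt;port&gt;-iscsi-&lt;IQN&gt;-lun-&lt;n&gt;[-part&lt;m&gt;] pattern shown above (which I have not verified against the udev rules):

```shell
# Sketch: extract the target IP, TCP port, IQN, LUN, and (optional) partition
# number from a /dev/disk/by-path iSCSI symlink name of the form
#   ip-<addr>:<port>-iscsi-<IQN>-lun-<n>[-part<m>]
parse_iscsi_by_path() {
    basename "$1" | sed -n \
        's/^ip-\([^:]*\):\([0-9]*\)-iscsi-\(.*\)-lun-\([0-9]*\)\(-part\([0-9]*\)\)\{0,1\}$/ip=\1 port=\2 iqn=\3 lun=\4 part=\6/p'
}
```

For a whole-blockdev symlink (no -part suffix), the part= field simply comes out empty.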

I'll need to analyze the data flow around "all_disks"; in case we refresh it, I think some dependent data needs to be invalidated (or re-inited) too.

Comment 18 Laszlo Ersek 2022-09-16 08:14:13 UTC
Hi Vera,

can you please check these out:

* virt-p2v ISO:

- URL: http://lacos.interhost.hu/rhbz-2124538/livecd-p2v-202209160911-git-39710202acb6.iso
- sha256sum: c428558323cba2f9948a4037b8a57db578f6bfd77a1e32848b9aca97efecc617

* documentation:

- view live: http://lacos.interhost.hu/rhbz-2124538/docs/virt-p2v.1.html

- updates highlighted (openoffice document): http://lacos.interhost.hu/rhbz-2124538/docs/virt-p2v.1.html.odt

(I've noticed three typos in the documentation right now; I've corrected them locally, but I'm not regenerating / reuploading the above artifacts just for these.)

Thanks!

Comment 19 Vera 2022-09-19 00:48:34 UTC
Tried with the comment 18 build and the following component versions:

virt-v2v-2.0.7-6.el9.x86_64
libnbd-1.12.6-1.el9.x86_64
libguestfs-1.48.4-2.el9.x86_64
nbdkit-1.30.8-1.el9.x86_64

OS: Windows2022-x86_64

Steps:
1. Boot the host into the virt-p2v client with the comment 18 ISO
2. Open XTerm and detect the iSCSI storage following the "ACCESSING ISCSI DEVICES" section of the virt-p2v documentation
3. Input the conversion server info and test the connection
4. Check the conversion info interface; no iSCSI storage shows on the interface yet
5. Click the "Refresh disks" button at the bottom of the dialog; the iSCSI storage shows up under "Fixed hard disks"
6. Choose output to RHV and "Start conversion"
7. Check that no error messages show during conversion
8. Go to the RHV page, check the basic info of the VM, and import/start the VM

Results:
With the "Refresh disks" button, p2v can detect the iSCSI LUN.


Thanks Laszlo.

Comment 20 Laszlo Ersek 2022-09-19 13:35:50 UTC
Thanks for the test, Vera!

[p2v PATCH 00/15] recognize block device nodes (such as iSCSI /dev/sdX) added via XTerm
Message-Id: <20220919133511.18288-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-September/029891.html

Comment 21 Vera 2022-09-22 09:02:46 UTC
Hi Laszlo,

I tried the comment18 build on RHEL9.1-x86_64 UEFI with the versions:

virt-v2v-2.0.7-6.el9.x86_64
libnbd-1.12.6-1.el9.x86_64
libguestfs-1.48.4-2.el9.x86_64
nbdkit-1.30.8-1.el9.x86_64


Steps:
Scenario 1: To RHEV (Pass)
1. conversion succeeds without errors
2. the VM can be started into the OS

Scenario 2: To Libvirt (Fail)
1. conversion succeeds without errors
2. start the VM; the guest console turns black after entering user/password.
The issue is similar to bz2124193

Comment 22 Laszlo Ersek 2022-09-22 09:57:56 UTC
Hi Vera,

yes, I think you have hit bz2124193 when outputting to libvirt. The conversion causes the guest kernel to use the bochsdrmfb driver, and the VGA initialization problem affects both the edk2 guest driver and the Linux guest driver. (Gerd has fixed both, see <https://issues.redhat.com/browse/RHELX-58> and bug 2124193, respectively.)

Now, bug 2124193 is still in POST status and not MODIFIED status, so there's no official development kernel that you could use. But, you could grab an RPM from <https://bugzilla.redhat.com/show_bug.cgi?id=2124193#c22>, install such a kernel in the original RHEL-9.1 guest, and perform the conversion afterwards. Thanks!

Comment 23 Laszlo Ersek 2022-09-23 11:49:36 UTC
(In reply to Laszlo Ersek from comment #20)
> [p2v PATCH 00/15] recognize block device nodes (such as iSCSI /dev/sdX) added via XTerm
> Message-Id: <20220919133511.18288-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-September/029891.html

Merged upstream as commit range aa36551e515f..28d7ce8c9db9, with small updates to the "virt-p2v.pod" hunks in patch#14.

Comment 24 Laszlo Ersek 2022-09-23 12:26:32 UTC
ISO built at upstream commit 28d7ce8c9db9, sha256sum ecbf20f6d1d45e92a67bef64801852f62aa519b91fb79c5ac68a9d5a239edbc5:

http://lacos.interhost.hu/livecd-p2v-202209231416.iso

Comment 25 Vera 2022-09-26 12:03:31 UTC
Tested with comment24 iso and the versions:

virt-p2v-1.42.2
virt-v2v-2.0.7-6.el9.x86_64
libnbd-1.12.6-1.el9.x86_64
libguestfs-1.48.4-2.el9.x86_64
nbdkit-1.30.8-1.el9.x86_64

OS: RHEL9.1-x86_64 UEFI with comment22 rpms

Scenario: Convert to libvirt
1. conversion succeeds
2. the VM after conversion can be started, and the console opens without going black

Comment 26 Vera 2022-09-28 02:57:12 UTC
Based on comment 25, the bug has been fixed, so closing the bug.

