Bug 1662286 - [virt-p2v] Windows 2019 with Dynamic Disks, set to "offline" after p2v conversion to RHV
Summary: [virt-p2v] Windows 2019 with Dynamic Disks, set to "offline" after p2v conversion to RHV
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-p2v
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Assignee: Laszlo Ersek
QA Contact: tingting zheng
URL:
Whiteboard: P2V
Depends On:
Blocks: 1746622
 
Reported: 2018-12-27 10:10 UTC by liuzi
Modified: 2022-05-19 11:08 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-03 11:21:45 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Conversion log (1.44 MB, application/x-tar)
2018-12-27 10:10 UTC, liuzi
screenshot1 (99.30 KB, image/png)
2018-12-27 10:11 UTC, liuzi
screenshot2 (274.95 KB, image/png)
2018-12-27 10:12 UTC, liuzi

Description liuzi 2018-12-27 10:10:09 UTC
Created attachment 1517034
Conversion log

Description of problem:
The software RAID (a spanned volume on dynamic disks) is offline on a Windows 2019 MD host after virt-p2v conversion.

Version-Release number of selected component (if applicable):
virt-v2v-1.38.4-8.module+el8+2564+79985af8.x86_64
virt-p2v-1.38.2-5.el7.iso
qemu-kvm-2.12.0-51.module+el8+2608+a17c4bfe.x86_64
libvirt-4.5.0-16.module+el8+2586+bf759444.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a physical machine which has 3 disks and install Windows 2019.
   Both the C: volume and the System Reserved partition are on Disk 0.
2. Create a dynamic-disk spanned volume named RAID (R:) from Disk 1 and Disk 2 (see the diskpart sketch after this list):
 2.1 Right-click the unallocated Disk 1 that you want included in your RAID and select "New Spanned Volume".
 2.2 The New Spanned Volume wizard opens. Click Next, then select the disks you want included in your new volume (i.e. the software RAID).
 2.3 Assign the new volume a drive letter or mount point, such as R:.
 2.4 Name and format the volume, then click Next.
       File system: NTFS
       Allocation unit size: Default
       Volume Label: RAID
 2.5 Review all settings before the disks are formatted and the new volume is mounted.
3. Use virt-p2v to convert the host to RHV.
4. After the virt-p2v conversion, log in to the guest and check it.
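
For reference, step 2 can also be scripted instead of using the Disk Management wizard. A minimal diskpart sketch (illustrative only, not part of the original reproduction; it assumes the data disks are Disk 1 and Disk 2):

    rem Save as spanned.txt and run: diskpart /s spanned.txt
    rem Bring Disk 1 online, make it writable, and convert it to dynamic
    select disk 1
    online disk
    attributes disk clear readonly
    convert dynamic
    rem Same for Disk 2
    select disk 2
    online disk
    attributes disk clear readonly
    convert dynamic
    rem Create the spanned volume across both disks, format it, mount it as R:
    create volume spanned disk=1,2
    format fs=ntfs label=RAID quick
    assign letter=R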

Actual results:
After step 4, RAID (R:) does not show up in "Computer"; please refer to screenshot1.
In "Disk Management", Disk 1 and Disk 2 are offline; please refer to screenshot2.

Expected results:
All disks should be online and visible in "Computer" after conversion.

Additional info:
1. Reproducible on a Windows 2019 MD host with a RHEL 7 conversion server; the version-release numbers are as follows:
  virt-p2v-1.38.2-5.el7.iso
  virt-v2v-1.38.2-12.el7.x86_64
  libvirt-4.5.0-10.el7_6.3.x86_64
  qemu-kvm-rhev-2.12.0-19.el7_6.2.x86_64

2. Not reproducible on a Windows 10 guest with a RHEL 8 conversion server; the version-release numbers are as follows:
 virt-v2v-1.38.4-8.module+el8+2564+79985af8.x86_64
 virt-p2v-1.38.2-5.el7.iso
 qemu-kvm-2.12.0-51.module+el8+2608+a17c4bfe.x86_64
 libvirt-4.5.0-16.module+el8+2586+bf759444.x86_64

Comment 1 liuzi 2018-12-27 10:11:34 UTC
Created attachment 1517035
screenshot1

Comment 2 liuzi 2018-12-27 10:12:00 UTC
Created attachment 1517036
screenshot2

Comment 3 Richard W.M. Jones 2019-01-14 16:29:27 UTC
Yes - interesting.  3 disks.  The system disk is copied and converted
successfully.  The 2 remaining disks are copied, but for some reason
Windows doesn't see them on the target.  RHV metadata on the target
looks fine.

Comment 4 jason.vanvalkenburgh 2019-03-15 13:48:01 UTC
FYI, I have seen these symptoms with other versions of libguestfs and other OSes; they are a function of the conversion process, and not a bug but a Microsoft feature. For Windows Datacenter or Enterprise editions, the default SAN disk policy is to offline any non-boot disks the first time the OS sees them, unless you change the default policy and/or override the policy on them individually. This is due to safety concerns when working with "shared" SAN disks and clustered environments.

After running virt-v2v (and thus virt-p2v), at least when the disks move to a different bus type, Windows will see the "old" direct-attach disks as new SAN ones, since their driver/device path has changed. If onlining the disks in Disk Manager resolves the "missing" disks, then this is the probable root cause.

See here for some references and examples on how to script changing the policy: http://longwhiteclouds.com/2017/08/15/windows-script-for-marking-all-disks-online/. Changing the disk policy can be done as a prep step on the source side, or as a firstboot script with a reboot afterwards.
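
For illustration, the SAN policy can be inspected and changed with diskpart; a minimal sketch (from general Windows knowledge, not taken from the linked article):

    rem Show the current SAN policy; on Datacenter/Enterprise editions
    rem the default is "Offline Shared"
    san
    rem Online all newly detected disks automatically from now on
    san policy=OnlineAll

(Changing the policy only affects disks the OS detects afterwards; disks that are already offline still have to be onlined individually.)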

Comment 7 RHEL Program Management 2021-07-31 07:27:16 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 8 Richard W.M. Jones 2021-07-31 07:44:31 UTC
My apologies, this bug was closed by a broken process that we
do not have any control over.  Reopening.

Comment 10 Richard W.M. Jones 2021-08-24 20:05:42 UTC
It still needs investigation by someone.

Comment 11 Richard W.M. Jones 2022-04-26 13:34:15 UTC
Moving to RHEL 9.  It seems (comment 4) like this is expected behaviour, in which
case it's not a bug, but it still needs investigation.

Comment 12 Laszlo Ersek 2022-05-03 09:30:26 UTC
Hopefully this will reproduce using:

- just virt-v2v, that is, using a VM on the source side -- the disks
  should be attached via virtio-scsi, so that, when they become
  virtio-blk after conversion, the same symptom is triggered;

- libvirt output (I don't see how RHV should be relevant here).

Comment 13 Richard W.M. Jones 2022-05-03 09:49:44 UTC
(In reply to Laszlo Ersek from comment #12)
> Hopefully this will reproduce using:
> 
> - just virt-v2v, that is, using a VM on the source side -- the disks
>   should be attached via virtio-scsi, so that, when they become
>   virtio-blk after conversion, the same symptom is triggered;
> 
> - libvirt output (I don't see how RHV should be relevant here).

I think so.

Although it's probably not relevant here, this tool - written by v2v alumnus
Matt Booth - can open Dynamic Disks from Linux:

https://github.com/mdbooth/libldm

Comment 14 Laszlo Ersek 2022-05-03 09:59:19 UTC
FWIW my goal is to reproduce this symptom and then to close the BZ as NOTABUG. Most of comment 4 describes that this is intentional behavior on Windows' part, and the last part of comment 4 (the auto-onlining) is really not to my liking. First, we should avoid adding more firstboot scripts if possible; second, indiscriminately messing with the onlining policy looks like a disaster -- Microsoft has a safety feature and we override it without asking the user? *shudder*.

I'd rather let the user manually re-online the disks after conversion, or change the policy *before* conversion (manually, or using some kind of Windows scripting if they have a large set of VMs to convert). Virt-v2v intends to make the converted machine bootable; these are data disks, not boot disks.
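
For completeness, the per-disk re-onlining can itself be scripted, without touching the global policy. A diskpart sketch (illustrative only; it assumes the data disks are Disk 1 and Disk 2):

    rem Online only the chosen disks; the SAN policy itself is left alone
    select disk 1
    online disk
    attributes disk clear readonly
    select disk 2
    online disk
    attributes disk clear readonly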

Comment 15 Laszlo Ersek 2022-05-03 11:21:45 UTC
I reproduced the symptom with a Windows Server 2019 VM using three virtio-scsi disks (on the source (libvirt) side).

After manually onlining Disk 1 and Disk 2 in the converted domain, the array was automatically assembled, and the text file I had previously placed on R: (while still in the source domain) could be read back successfully.

Perhaps of relevance, I should note that even in the *source* domain, when I was initially creating the software RAID array, I had to manually online Disk 1 and Disk 2. In comment 0, point 2.1 says, "Right-click the unallocated Disk 1 that you want included in your RAID and select New Spanned Volume". However, the menu entry "New Spanned Volume" is greyed out (unavailable) until (a) the disks are manually onlined, and (b) the disks are manually "initialized" (= partitioned). So these steps are actually missing from the wizard walkthrough in comment 0. The point being: even on the source side, before conversion, any extra disks have to be onlined manually, and Disk Manager does explain that this requirement is due to local policy (if you hover over the Disk 1 and Disk 2 icons in Disk Manager with the mouse).

So I think there's nothing for virt-v2v to do here. Change the disk onlining policy on the source side, then convert; or convert, and then either manually online, or change the policy.

Comment 16 Laszlo Ersek 2022-05-03 11:23:51 UTC
(virt-v2v version used for comment 15: upstream 03c7f0561081)

Comment 17 Laszlo Ersek 2022-05-03 11:29:31 UTC
(In reply to Laszlo Ersek from comment #15)
> I reproduced the symptom with a Windows Server 2019 VM using three
> virtio-scsi disks (on the source (libvirt) side).

To clarify:

- manual onlining was required in both the original domain and the converted domain

- three virtio-scsi disks were used on the source side; virt-v2v converted each of those to virtio-blk (which is what triggers the policy on the target end)

I should have spelled my initial sentence as:

"I reproduced the symptom with a Windows Server 2019 VM, after conversion -- and the original VM, managed by libvirt, used three virtio-scsi disks".

