Bug 1355680 - Multipath disk should be recognized as /dev/mapper/* mode during dirty TUI install.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: rhev-hypervisor
Version: 3.5.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Douglas Schilling Landgraf
QA Contact: cshao
URL:
Whiteboard:
Duplicates: 1355682
Depends On:
Blocks: 1352452 1354596
 
Reported: 2016-07-12 08:44 UTC by cshao
Modified: 2022-04-16 08:51 UTC
CC: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-15 16:07:13 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments
multipath-disk (17.94 KB, image/png), 2016-07-12 08:44 UTC, cshao
all log info (47.10 KB, application/x-gzip), 2016-07-12 08:45 UTC, cshao
all_log_info (100.88 KB, application/x-gzip), 2016-07-14 14:38 UTC, cshao
auto-failed (12.52 KB, image/png), 2016-07-14 14:40 UTC, cshao
The previous RHVH (60.28 KB, image/png), 2016-07-14 14:41 UTC, cshao


Links
Red Hat Issue Tracker RHV-45705 (last updated 2022-04-16 08:51:03 UTC)

Description cshao 2016-07-12 08:44:29 UTC
Created attachment 1178809 [details]
multipath-disk

Description of problem:
The multipath disk should be recognized as a /dev/mapper/* device instead of /dev/sdX during a dirty TUI install.

Version-Release number of selected component (if applicable):
rhev-hypervisor6-6.8-20160707.3.iso 
ovirt-node-3.2.3-34.el6.noarch 


How reproducible:
100%

Steps to Reproduce:
1. Install RHVH from the ISO with the default kickstart on a multipath machine.
2. Reboot the host and reinstall vintage RHEV-H (with the firstboot parameter).
3. Focus on the "select disk" page.
4. Drop to shell and run: # multipath -ll

Actual results:
1. After step 3, the multipath disk is recognized as a /dev/sd* device.
2. After step 4, running # multipath -ll produces no output.

Expected results:
The multipath disk should be recognized as a /dev/mapper/* device instead of /dev/sdX during a dirty TUI install.

Additional info:
No such issue occurs during a clean install.
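
For reference, a few checks that can be run from the shell at the "select disk" page to tell whether multipath claimed the disk. This is a hedged sketch; the device and map names are illustrative, not taken from the attached logs:

# multipath -ll            # empty output means no multipath maps exist
# ls -l /dev/mapper/       # multipath maps appear here, named by WWID (e.g. 360a98...)
# lsblk -o NAME,TYPE,SIZE  # a multipath map shows TYPE "mpath"; raw paths show "disk"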

Comment 1 cshao 2016-07-12 08:45:04 UTC
Created attachment 1178810 [details]
all log info

Comment 2 Douglas Schilling Landgraf 2016-07-13 14:17:21 UTC
Hi shaochen,

Are you able to reproduce it with 3.6 RHEV-H as well? From my tests, I cannot reproduce it in 3.6.

Comment 3 Douglas Schilling Landgraf 2016-07-13 15:54:11 UTC
Additionally, please provide the output of the sequence below on the failed system:

# multipath -d

# multipath 

# multipath -ll
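
To clarify what each step reports, an annotated version of the same sequence (behavior per multipath(8), not taken from this system):

# multipath -d     # dry run: print the maps that would be created, without creating them
# multipath        # scan paths and create/update the multipath maps
# multipath -ll    # show the current live multipath topology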

Comment 4 Douglas Schilling Landgraf 2016-07-13 15:55:41 UTC
Some logs from my reproducer system:


# multipath -ll
#

# multipath -d

create: 1ATA_QEMU_HARDDISK_1234 undef ATA,QEMU HARDDISK
size=25G features='0' hwhandler='0' wp=undef
|-+- policy='round-robin 0' prio=1 status=undef
| `- 2:0:0:0 sda 8:0 undef ready running
`-+- policy='round-robin 0' prio=1 status=undef
  `- 3:0:0:0 sdb 8:16 undef ready running


# fdisk -l | grep Disk

Disk /dev/loop0: 0 MB, 4096 bytes
Disk identifier: 0x387e866f
Disk /dev/loop1: 0 MB, 471040 bytes
Disk identifier: 0x00000000
Disk /dev/loop2: 187 MB, 187412480 bytes
Disk identifier: 0xee03abf7
Disk /dev/loop3: 1610 MB, 1610612563 bytes
Disk identifier: 0x00000000
Disk /dev/loop4: 536 MB, 536870912 bytes
Disk identifier: 0x00000000
Disk /dev/sda: 26.8 GB, 26843545600 bytes
Disk identifier: 0x00032a33
Disk /dev/sdb: 26.8 GB, 26843545600 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/live-rw: 1610 MB, 1610612736 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/live-osimg-min: 1610 MB, 1610612736 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/onn-swap: 2686 MB, 2684354560 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/onn-root: 3049 MB, 3049259008 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/onn-ovirt--node--ng--4.0.0--0.20160629.0+1: 3049 MB, 3049259008 bytes
Disk identifier: 0x00000000


# lvs
  LV                                 VG   Attr       LSize  Pool   Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
  ovirt-node-ng-4.0.0-0.20160629.0   onn  Vwi---tz-k  2.84g pool00 root
  ovirt-node-ng-4.0.0-0.20160629.0+1 onn  Vwi-a-tz--  2.84g pool00 ovirt-node-ng-4.0.0-0.20160629.0 50.49
  pool00                             onn  twi-aotz-- 17.88g                                         9.56   5.49
  root                               onn  Vwi-a-tz--  2.84g pool00                                  50.49
  swap                               onn  -wi-a-----  2.50g
  var                                onn  Vwi-a-tz-- 15.00g pool00                                  1.68


# pvs 
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda2  onn  lvm2 a--u 24.00g 3.58g

# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  onn    1   6   0 wz--n- 24.00g 3.58g

Comment 5 Douglas Schilling Landgraf 2016-07-13 16:02:43 UTC
Talked with Benjamin Marzinski and the conclusion is that devices get activated before multipathd has a chance to create the multipath device.

Output from the multipath command, showing the device is in use:
Jul 13 14:51:02 | 1ATA_QEMU_HARDDISK_1234: ignoring map 
Jul 13 14:51:02 | 1ATA_QEMU_HARDDISK_1234: ignoring map 

After the LVs using /dev/sda2 are deactivated with the commands below, the installer works as expected:

# vgchange -an onn
# vgchange -ay onn
# multipath
# python -m ovirt.node.installer.__main__ --debug
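
For anyone debugging a similar state, a hedged way to see what is holding the path device (and hence why multipath prints "ignoring map"); /dev/sda2 comes from this reproducer and will differ on other hosts:

# ls /sys/class/block/sda2/holders/   # a dm-* entry means something (LVM here) already claimed the partition
# dmsetup ls --tree                   # shows which device-mapper devices sit on top of which block devices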

Comment 6 Douglas Schilling Landgraf 2016-07-13 21:20:23 UTC
Hi shaochen,

Please try the following kargs with rhev-hypervisor6-6.8-20160707.3.iso:

firstboot storage_init=<pv device> 

It should handle initialization of the device, wiping the disk and removing the existing PVs, LVs, and VGs from the previous RHV-H installation.

If possible, please also confirm that bz#1355682 is gone with this approach.

In my test environment, firstboot storage_init=/dev/sda or /dev/sda2 worked fine.

Thanks!

Comment 7 cshao 2016-07-14 14:37:12 UTC
(In reply to Douglas Schilling Landgraf from comment #6)
> Hi shaochen,
> 
> Please try in rhev-hypervisor6-6.8-20160707.3.iso the kargs:
> 
> firstboot storage_init=<pv device> 
> 
> It should handle initialization of the device, wiping the disk and removing
> the existing PVs, LVs, and VGs from the previous RHV-H installation.
> 
> If possible, please also confirm that bz#1355682 is gone with this approach.
> 
> In my test environment, firstboot storage_init=/dev/sda or /dev/sda2 worked
> fine.
> 
> Thanks!

Hi Douglas,

Still failed after the auto install.
It reports "Device specified in storage_init does not exist", but the device actually exists.

Please see the new attachment for more details.

# cat /proc/cmdline
initrd=/images/rhevh-6.8-20160707.3.el6ev-3.5/initrd0.img ksdevice=bootif rootflags=loop rootflags=ro liveimg RD_NO_LVM rd_NO_MULTIPATH crashkernel=128M rootfstype=auto check lang=  rd_NO_LUKS max_loop=256 rd_NO_MD quiet elevator=deadline rhgb install ro root=live:/rhevh-6.8-20160707.3.el6ev.iso rd_NO_DM  BOOTIF=01-08-9e-01-63-2c-b3 storage_init=/dev/mapper/360*6954 adminpw=OKr05SbCu3D3g firstboot BOOT_IMAGE=/images/rhevh-6.8-20160707.3.el6ev-3.5/vmlinuz0 
[root@dell-per515-01 admin]# 
[root@dell-per515-01 admin]# lvs
  Found duplicate PV O2zRod7UqU9qu1FpCWPFY61ajRDUE28j: using /dev/sdd2 not /dev/sdg2
  Using duplicate PV /dev/sdd2 without holders, ignoring /dev/sdg2
  WARNING: Device mismatch detected for rhel_bootp-73-75-36/swap which is accessing /dev/sdg2 instead of /dev/sdd2.
  WARNING: Device mismatch detected for rhel_bootp-73-75-36/home which is accessing /dev/sdg2 instead of /dev/sdd2.
  WARNING: Device mismatch detected for rhel_bootp-73-75-36/root which is accessing /dev/sdg2 instead of /dev/sdd2.
  LV   VG                  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home rhel_bootp-73-75-36 -wi-a----- 132.76g                                                    
  root rhel_bootp-73-75-36 -wi-a-----  50.00g                                                    
  swap rhel_bootp-73-75-36 -wi-a-----  15.69g                                                    
[root@dell-per515-01 admin]# 
[root@dell-per515-01 admin]# pvs
  Found duplicate PV O2zRod7UqU9qu1FpCWPFY61ajRDUE28j: using /dev/sdd2 not /dev/sdg2
  Using duplicate PV /dev/sdd2 without holders, ignoring /dev/sdg2
  WARNING: Device mismatch detected for rhel_bootp-73-75-36/swap which is accessing /dev/sdg2 instead of /dev/sdd2.
  WARNING: Device mismatch detected for rhel_bootp-73-75-36/home which is accessing /dev/sdg2 instead of /dev/sdd2.
  WARNING: Device mismatch detected for rhel_bootp-73-75-36/root which is accessing /dev/sdg2 instead of /dev/sdd2.
  PV         VG                  Fmt  Attr PSize   PFree 
  /dev/sdd2  rhel_bootp-73-75-36 lvm2 a--u 198.51g 64.00m
[root@dell-per515-01 admin]# 
[root@dell-per515-01 admin]# multipath -ll
360a9800050334c33424b41762d745551 dm-6 NETAPP,LUN
size=99G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 9:0:0:2 sdf 8:80  active ready  running
  `- 9:0:1:2 sdi 8:128 active ready  running
360a9800050334c33424b41762d736d45 dm-5 NETAPP,LUN
size=99G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 9:0:0:1 sde 8:64  active ready  running
  `- 9:0:1:1 sdh 8:112 active ready  running
[root@dell-per515-01 admin]# 
[root@dell-per515-01 admin]# ll /dev/mapper/
total 0
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 360a9800050334c33424b41762d736d45 -> ../dm-5
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 360a9800050334c33424b41762d736d45p1 -> ../dm-8
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 360a9800050334c33424b41762d745551 -> ../dm-6
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 360a9800050334c33424b41762d745551p1 -> ../dm-7
crw-rw----. 1 root root 10, 58 2016-07-14 21:27 control
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 live-osimg-min -> ../dm-1
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 live-rw -> ../dm-0
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 rhel_bootp--73--75--36-home -> ../dm-3
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 rhel_bootp--73--75--36-root -> ../dm-4
lrwxrwxrwx. 1 root root      7 2016-07-14 21:27 rhel_bootp--73--75--36-swap -> ../dm-2
[root@dell-per515-01 admin]#

Comment 8 cshao 2016-07-14 14:38:04 UTC
Created attachment 1179883 [details]
all_log_info

Comment 9 cshao 2016-07-14 14:40:34 UTC
Created attachment 1179884 [details]
auto-failed

Comment 10 cshao 2016-07-14 14:41:21 UTC
Created attachment 1179885 [details]
The previous RHVH

Comment 11 Douglas Schilling Landgraf 2016-07-14 16:09:10 UTC
Hi Shaochen,

> 
> Hi Douglas,
> 
> Still failed after the auto install.
> It reports "Device specified in storage_init does not exist", but the device
> actually exists.

> # cat /proc/cmdline
> initrd=/images/rhevh-6.8-20160707.3.el6ev-3.5/initrd0.img ksdevice=bootif 
> rootflags=loop rootflags=ro liveimg RD_NO_LVM rd_NO_MULTIPATH 
> crashkernel=128M rootfstype=auto check lang=  rd_NO_LUKS max_loop=256 
> rd_NO_MD quiet elevator=deadline rhgb install ro root=live:/rhevh-
> 6.8-20160707.3.el6ev.iso rd_NO_DM  BOOTIF=01-08-9e-01-63-2c-b3 
> storage_init=/dev/mapper/360*6954 adminpw=OKr05SbCu3D3g firstboot 
> BOOT_IMAGE=/images/rhevh-6.8-20160707.3.el6ev-3.5/vmlinuz0 

From my understanding, this report is about multipath not identifying the devices and therefore not providing "/dev/mapper" entries. So I would suggest not using
"storage_init=/dev/mapper/360*6954". From the logs, it seems your PV is /dev/sdd.

> [root@dell-per515-01 admin]# pvs
>   Found duplicate PV O2zRod7UqU9qu1FpCWPFY61ajRDUE28j: using /dev/sdd2 not
> /dev/sdg2
>   Using duplicate PV /dev/sdd2 without holders, ignoring /dev/sdg2
>   WARNING: Device mismatch detected for rhel_bootp-73-75-36/swap which is
> accessing /dev/sdg2 instead of /dev/sdd2.
>   WARNING: Device mismatch detected for rhel_bootp-73-75-36/home which is
> accessing /dev/sdg2 instead of /dev/sdd2.
>   WARNING: Device mismatch detected for rhel_bootp-73-75-36/root which is
> accessing /dev/sdg2 instead of /dev/sdd2.
>   PV         VG                  Fmt  Attr PSize   PFree 
>   /dev/sdd2  rhel_bootp-73-75-36 lvm2 a--u 198.51g 64.00m

So, based on the above, could you please try "firstboot storage_init=/dev/sdd"? It doesn't need to be an autoinstall; a manual install works as well.

If possible, please also confirm that bz#1355682 is gone with this approach.

Finally, could you please confirm this report doesn't happen in 3.6?

Thanks!

Comment 12 cshao 2016-07-15 03:56:10 UTC
> So, based on the above, could you please try "firstboot
> storage_init=/dev/sdd"? It doesn't need to be an autoinstall; a manual
> install works as well.
> 
Auto install with storage_init=/dev/sdd succeeds, but the original issue only occurs with a TUI install, so I can only treat this as a workaround :)

> If possible, please also confirm that bz#1355682 is gone with this
> approach.
Per the 1355682 description, that bug only affects dirty TUI installation, so I can't check whether the issue exists with this approach.

> Finally, could you please confirm this report doesn't happen in 3.6?
> 
I will check this ASAP.

> Thanks!

Comment 13 cshao 2016-07-15 05:28:53 UTC
The multipath issue is gone after appending "storage_init=/dev/sdd firstboot".
bz#1355682 is also gone.

NOTE:

"firstboot storage_init=/dev/sdd" will work in autoinstall or TUI install (without BOOTIF).

Thanks to dougsland for confirming :)

Comment 14 cshao 2016-07-15 06:20:06 UTC
> Finally, could you please confirm this report doesn't happen in 3.6?
> 
> Thanks!

Test version:
rhev-hypervisor7-7.2-20160711.0
ovirt-node-3.6.1-13.0.el7ev.noarch

3.6 works well without the workaround.

Comment 15 Douglas Schilling Landgraf 2016-07-15 15:53:10 UTC
*** Bug 1355682 has been marked as a duplicate of this bug. ***

Comment 16 Douglas Schilling Landgraf 2016-07-15 16:07:13 UTC
Hi shaochen,

Thanks for your feedback. I am closing this bug for now, as it doesn't happen in 3.6, and providing firstboot and storage_init=<pv device> as kargs is sufficient for 3.5. Additionally, it's suggested that storage be wiped before installing RHEV-H.

If you have additional questions, please let me know.

