Bug 1226305 - Can not install "Fedora 22" into existing LVM-on-RAID partitions
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: python-blivet
Version: 22
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 1178181
Depends On:
Blocks:
 
Reported: 2015-05-29 12:01 UTC by Oleg Samarin
Modified: 2016-07-19 14:22 UTC (History)
CC: 8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-07-19 14:22:33 UTC
Type: Bug
Embargoed:


Attachments
program.log from anaconda (35.37 KB, text/plain)
2015-05-29 12:04 UTC, Oleg Samarin
storage.log from anaconda (231.80 KB, text/plain)
2015-05-29 12:04 UTC, Oleg Samarin
journalctl output (317.85 KB, text/x-vhdl)
2015-05-29 12:06 UTC, Oleg Samarin
A patch that prevents destroying raid members in blivet (2.99 KB, patch)
2015-05-30 17:26 UTC, Oleg Samarin


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1141398 0 unspecified CLOSED anaconda does not see existing Fedora 21 install to LVM-on-RAID 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1160424 0 unspecified CLOSED MDRaidError: name_from_md_node(md126p1) failed 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1178181 0 unspecified CLOSED likely udev name and name yielded by pvs do not agree so lookup in pvInfo fails 2021-02-22 00:41:40 UTC

Internal Links: 1160424

Description Oleg Samarin 2015-05-29 12:01:40 UTC
Description of problem:

I have an Intel IMSM RAID-1, /dev/md126, assembled from two disks, /dev/sda and /dev/sdb. There are LVM volumes on top of /dev/md126p2, and a working Fedora 21 is installed in some of the logical volumes.

I can not install a fresh Fedora 22 into the existing LVM partitions.

When the live Fedora 22 installer exposes devices for picking as a destination, I select /dev/md126. I can see the existing Fedora 21 system in the custom disk layout screen, but there are no logical volumes on top of /dev/md126p2 to choose from.

How reproducible:

Always

Steps to Reproduce:
1. Create a BIOS IMSM RAID-1 from two disks
2. Install any previous version of Fedora onto the RAID volume with an LVM layout
3. Create some free logical volumes for the future Fedora 22 install
4. Boot from Fedora 22 live media
5. Install to hard drive
6. Select the RAID-1 volume as the destination and check the "custom disk layout" box

Actual results:

The custom layout screen shows the whole RAID-1 partitions but does not show the logical volumes on top of them.

Expected results:

The custom layout screen should expose the existing LVM logical volumes instead of the whole RAID-1 partition.

Additional info:

After some research I found the cause: LVM reports that the existing volumes belong to /dev/sdb2 instead of /dev/md126p2, so the layout screen does not show them, because /dev/sdb is not selected as a destination.

Before anaconda started, /dev/sdb2 did not exist at all.

[root@localhost ~]# ls -ld /dev/md* /dev/sd*
drwxr-xr-x. 2 root root      120 May 29 03:43 /dev/md
brw-rw----. 1 root disk   9, 126 May 29 03:43 /dev/md126
brw-rw----. 1 root disk 259,   0 May 29 03:43 /dev/md126p1
brw-rw----. 1 root disk 259,   1 May 29 03:43 /dev/md126p2
brw-rw----. 1 root disk   9, 127 May 29 03:43 /dev/md127
brw-rw----. 1 root disk   8,   0 May 29 03:43 /dev/sda
brw-rw----. 1 root disk   8,  16 May 29 03:43 /dev/sdb
brw-rw----. 1 root disk   8,  32 May 29 03:43 /dev/sdc
brw-rw----. 1 root disk   8,  33 May 29 03:43 /dev/sdc1
brw-rw----. 1 root disk   8,  34 May 29 03:43 /dev/sdc2
brw-rw----. 1 root disk   8,  48 May 29 03:43 /dev/sdd
brw-rw----. 1 root disk   8,  49 May 29 03:43 /dev/sdd1
brw-rw----. 1 root disk   8,  50 May 29 03:43 /dev/sdd2
brw-rw----. 1 root disk   8,  64 May 29 03:43 /dev/sde

But after anaconda starts, the extra partitions /dev/sda1, /dev/sda2, /dev/sdb1 and /dev/sdb2 appear.

[root@localhost ~]# ls -ld /dev/md* /dev/sd*
drwxr-xr-x. 2 root root      120 May 29 03:47 /dev/md
brw-rw----. 1 root disk   9, 126 May 29 03:47 /dev/md126
brw-rw----. 1 root disk 259,   2 May 29 03:47 /dev/md126p1
brw-rw----. 1 root disk 259,   3 May 29 03:47 /dev/md126p2
brw-rw----. 1 root disk   9, 127 May 29 03:46 /dev/md127
brw-rw----. 1 root disk   8,   0 May 29 03:47 /dev/sda
brw-rw----. 1 root disk   8,   1 May 29 03:47 /dev/sda1
brw-rw----. 1 root disk   8,   2 May 29 03:47 /dev/sda2
brw-rw----. 1 root disk   8,  16 May 29 03:47 /dev/sdb
brw-rw----. 1 root disk   8,  17 May 29 03:47 /dev/sdb1
brw-rw----. 1 root disk   8,  18 May 29 03:47 /dev/sdb2
brw-rw----. 1 root disk   8,  32 May 29 03:46 /dev/sdc
brw-rw----. 1 root disk   8,  33 May 29 03:46 /dev/sdc1
brw-rw----. 1 root disk   8,  34 May 29 03:46 /dev/sdc2
brw-rw----. 1 root disk   8,  48 May 29 03:47 /dev/sdd
brw-rw----. 1 root disk   8,  49 May 29 03:47 /dev/sdd1
brw-rw----. 1 root disk   8,  50 May 29 03:47 /dev/sdd2
brw-rw----. 1 root disk   8,  64 May 29 03:47 /dev/sde

This behavior seems wrong: the partitions /dev/sda1, /dev/sda2, /dev/sdb1 and /dev/sdb2 must not exist, because their disks are members of the /dev/md126 RAID. The arrival of the new partition nodes via udev forces LVM to switch from /dev/md126p2 to /dev/sdb2, because LVM always uses the most recently appeared device among duplicates.
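As an aside, a common way to keep LVM from ever preferring the raw member disks over the MD device is a global_filter in /etc/lvm/lvm.conf. This is my own suggestion, not something tried in this report, and the device patterns below are purely illustrative for a machine where /dev/sda and /dev/sdb are the RAID members:

```
# /etc/lvm/lvm.conf -- reject the raw RAID member disks so duplicate
# PVs on /dev/sd[ab]* can never shadow the ones on /dev/md126p2;
# accept everything else.
devices {
    global_filter = [ "r|^/dev/sd[ab].*|", "a|.*|" ]
}
```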

What does anaconda do to refresh devices that changes the contents of /dev/*?
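For reference, one kernel-level mechanism that makes partition nodes (re)appear is the BLKRRPART ioctl, which tools like partprobe and blockdev --rereadpt issue against a whole disk; a udev "change" event after closing the device has a similar effect. Whether anaconda triggers it directly or indirectly I can only guess, but a minimal sketch of the ioctl itself (the device path is illustrative and the call needs root) looks like:

```python
import fcntl
import os

# BLKRRPART = _IO(0x12, 95) from <linux/fs.h>: ask the kernel to
# re-read a disk's partition table, after which udev (re)creates the
# /dev/sdXN nodes -- even for disks that are MD RAID members.
BLKRRPART = (0x12 << 8) | 95

def reread_partition_table(path):
    """Force the kernel to rescan the partition table of a whole disk."""
    fd = os.open(path, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)

# Example (requires root and a real whole-disk device):
# reread_partition_table("/dev/sda")
```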

Comment 1 Oleg Samarin 2015-05-29 12:04:05 UTC
Created attachment 1032037 [details]
program.log from anaconda

Comment 2 Oleg Samarin 2015-05-29 12:04:53 UTC
Created attachment 1032038 [details]
storage.log from anaconda

Comment 3 Oleg Samarin 2015-05-29 12:06:08 UTC
Created attachment 1032039 [details]
journalctl output

Comment 4 Oleg Samarin 2015-05-30 09:12:20 UTC
This seems to be a blivet issue: Blivet.reset() changes the udev device tree so that /dev/sdb2 appears. Normally that node is removed by mdadm when it assembles the arrays, but Blivet.reset() apparently forces udev to scan /dev/sdb after mdadm has run, so the extra partitions remain; they are the cause of the wrong LVM-over-MD behavior.

Comment 5 Oleg Samarin 2015-05-30 09:15:53 UTC
When I run:

python
from blivet import Blivet
storage = Blivet()
storage.reset()

I get the same result: /dev/sda2 and /dev/sdb2 appear.

Comment 6 Oleg Samarin 2015-05-30 17:21:55 UTC
python-blivet uses python-parted to manipulate partitions.

python-parted does not seem to be mdadm-safe: the following code

from parted import Device
partedDevice = Device("/dev/sda")

forces /dev/sda1 and /dev/sda2 to appear, which is harmful for MD RAID member disks. So blivet should not call parted.Device() for RAID members.
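The "skip RAID members" guard can be approximated with a sysfs check: a disk that is an active member of an assembled MD array shows up under /sys/block/md*/slaves. A stdlib-only sketch under that assumption (is_md_member is a hypothetical helper, not real blivet API):

```python
import glob
import os

def is_md_member(disk_name):
    """Return True if the given disk (e.g. "sda") is an active member
    of any assembled MD array, judging by /sys/block/md*/slaves/."""
    for slave in glob.glob("/sys/block/md*/slaves/*"):
        if os.path.basename(slave) == disk_name:
            return True
    return False

# Hypothetical use inside a device scanner: only probe non-members,
# so parted never reopens a disk that belongs to a RAID array.
# if not is_md_member("sda"):
#     parted_device = parted.Device("/dev/sda")
```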

Comment 7 Oleg Samarin 2015-05-30 17:26:12 UTC
Created attachment 1032535 [details]
A patch that prevents destroying raid members in blivet

This patch solves the problem for me. After updating python-blivet I was able to install Fedora 22 into the existing LVM partitions on top of the BIOS RAID.

Comment 8 David Lehman 2015-06-23 13:52:29 UTC
*** Bug 1178181 has been marked as a duplicate of this bug. ***

Comment 9 David Lehman 2015-06-23 21:09:47 UTC
In Fedora 23 this will hopefully become a non-issue as we do not instantiate parted.Device for every block device -- only those that contain valid partition tables and are not members of a fwraid array, multipath, &c.

Comment 10 Fedora End Of Life 2016-07-19 14:22:33 UTC
Fedora 22 changed to end-of-life (EOL) status on 2016-07-19. Fedora 22 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

