Bug 1129390 - [RFE] support reuse of existing non-RHEL LVM VGs / mdadm RAIDs in the GUI installer
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: anaconda
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Anaconda Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-12 15:59 UTC by Jiri Jaburek
Modified: 2014-11-21 20:50 UTC (History)
CC: 1 user

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-21 20:50:18 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
gui installer screenshot, showing the advanced partitioning layout (93.44 KB, image/png)
2014-08-12 15:59 UTC, Jiri Jaburek
no flags Details
anaconda.log (20.47 KB, text/plain)
2014-08-13 10:17 UTC, Jiri Jaburek
no flags Details
anaconda-yum.conf (351 bytes, text/plain)
2014-08-13 10:18 UTC, Jiri Jaburek
no flags Details
ifcfg.log (1.17 KB, text/plain)
2014-08-13 10:18 UTC, Jiri Jaburek
no flags Details
program.log (48.83 KB, text/plain)
2014-08-13 10:18 UTC, Jiri Jaburek
no flags Details
storage.log (158.11 KB, text/plain)
2014-08-13 10:18 UTC, Jiri Jaburek
no flags Details
storage.state (36.00 KB, text/plain)
2014-08-13 10:19 UTC, Jiri Jaburek
no flags Details
syslog (100.01 KB, text/plain)
2014-08-13 10:19 UTC, Jiri Jaburek
no flags Details

Description Jiri Jaburek 2014-08-12 15:59:41 UTC
Created attachment 926155 [details]
gui installer screenshot, showing the advanced partitioning layout

Description of problem:

When I create empty mdadm-based software RAIDs or LVM Volume Groups outside of the GUI installer, I can't reuse them for system installation.

The reasoning behind this is to create a disk layout not supported by the GUI, such as a custom Physical Extent size for the LVM VG or various mdadm metadata options (e.g. bitmap settings). All of these are pretty much standard (advanced) operations supported by the tools, with full metadata compatibility.
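As one concrete illustration of the kind of option meant here, an internal write-intent bitmap with a non-default chunk size is an mdadm setting the GUI has no knob for. A hedged sketch (device names are placeholders; the command is printed rather than run, so nothing is touched):

```shell
# Print (not run) an mdadm invocation using bitmap options the GUI
# does not expose; /dev/sdb1 and /dev/sdc1 are placeholder devices.
echo mdadm --create /dev/md0 --metadata 1.2 --level 1 --raid-devices 2 \
  --bitmap=internal --bitmap-chunk=64M /dev/sdb1 /dev/sdc1
```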

See the attached screenshot.

Version-Release number of selected component (if applicable):
(the version used for the official RHEL-7.0 DVD)

Comment 2 Brian Lane 2014-08-13 00:30:04 UTC
You should be able to do this. Create them, then use the circular arrow button to rescan the system. If that isn't working, please include:

1. EXACT steps used to create these from the cmdline and actions taken in the GUI.

2. The logs from /tmp/*log attached here as *individual* text/plain attachments.

Comment 3 Jiri Jaburek 2014-08-13 10:15:42 UTC
I removed the raid/vg from a shell within anaconda (alt+f2), zeroed the metadata and started again, from the same shell, meaning the commands should be fully reproducible without an external system.
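The removal/zeroing steps mentioned above are not shown in the transcript; a minimal sketch of what they could look like, assuming the md0/testvg/vda names used below (commands are printed by default and only executed when RUN=1):

```shell
# Teardown sketch matching the device names below; prints the commands
# unless invoked with RUN=1, in which case they are piped to a shell.
teardown() {
  echo mdadm --stop /dev/md0               # deactivate the array
  echo mdadm --zero-superblock /dev/vda2   # wipe the md member metadata
  echo vgremove -f testvg                  # drop the VG (and any LVs)
  echo pvremove /dev/vda3                  # drop the PV label
  echo wipefs -a /dev/vda2 /dev/vda3       # clear any remaining signatures
}
if [ "${RUN:-0}" = 1 ]; then teardown | sh -x; else teardown; fi
```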

# create the partitions used (make sure to use an empty disk,
# eg. all zeroes, otherwise leftover metadata may be detected)

$ sfdisk -L /dev/vda <<EOF
# partition table of /dev/vda
unit: sectors

/dev/vda1 : start=     2048, size=  1048576, Id=83, bootable
/dev/vda2 : start=  1050624, size= 10485760, Id=83
/dev/vda3 : start= 11536384, size=  9435136, Id=8e
/dev/vda4 : start=        0, size=        0, Id= 0
EOF
Checking that no-one is using this disk right now ...
OK

Disk /dev/vda: 20805 cylinders, 16 heads, 63 sectors/track
Old situation:
Units: cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/vda1   *      2+   1042-   1041-    524288   83  Linux
/dev/vda2       1042+  11444-  10403-   5242880   83  Linux
/dev/vda3      11444+  20805-   9361-   4717568   8e  Linux LVM
/dev/vda4          0       -       0          0    0  Empty
New situation:
Units: sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/vda1   *      2048   1050623    1048576  83  Linux
/dev/vda2       1050624  11536383   10485760  83  Linux
/dev/vda3      11536384  20971519    9435136  8e  Linux LVM
/dev/vda4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

$ ls -l /dev/vda*
brw-rw----. 1 root disk 252, 0 Aug 13 09:34 /dev/vda
brw-rw----. 1 root disk 252, 1 Aug 13 09:34 /dev/vda1
brw-rw----. 1 root disk 252, 2 Aug 13 09:34 /dev/vda2
brw-rw----. 1 root disk 252, 3 Aug 13 09:34 /dev/vda3

# create the raid, use 'far' layout with 2 copies, see md(4)
# for details

$ mdadm --create --metadata 1.2 --level 10 --raid-devices 2 --layout f2 /dev/md0 /dev/vda2 missing
mdadm: array /dev/md0 started.

# create the volume group, use large PE size

$ vgcreate -s 128M testvg /dev/vda3
  Physical volume "/dev/vda3" successfully created
  Volume group "testvg" successfully created


Then switch to the GUI installer (alt+f6) and reload the storage configuration: the raid/lvm show up as Unknown/unusable devices, with only vda1 (a standard partition) being reusable.

It seems that the installer tries to detect/use devices on its own, based on their metadata, instead of reusing the existing block devices available on the system. This would explain the raid/lvm being detected even when both were inactive/stopped.

Bug 1129300 mentions that a degraded raid is unsupported, which would invalidate the raid use case for the time being, but the LVM VG should have been detected/reusable.

Comment 4 Jiri Jaburek 2014-08-13 10:17:13 UTC
Created attachment 926351 [details]
anaconda.log

Comment 5 Jiri Jaburek 2014-08-13 10:18:06 UTC
Created attachment 926352 [details]
anaconda-yum.conf

Comment 6 Jiri Jaburek 2014-08-13 10:18:19 UTC
Created attachment 926353 [details]
ifcfg.log

Comment 7 Jiri Jaburek 2014-08-13 10:18:37 UTC
Created attachment 926354 [details]
program.log

Comment 8 Jiri Jaburek 2014-08-13 10:18:52 UTC
Created attachment 926355 [details]
storage.log

Comment 9 Jiri Jaburek 2014-08-13 10:19:25 UTC
Created attachment 926356 [details]
storage.state

Comment 10 Jiri Jaburek 2014-08-13 10:19:36 UTC
Created attachment 926357 [details]
syslog

Comment 11 David Lehman 2014-08-13 13:40:21 UTC
What makes you say the preexisting devices are unusable?

Comment 12 Jiri Jaburek 2014-08-13 15:12:32 UTC
(In reply to David Lehman from comment #11)
> What makes you say the preexisting devices are unusable?

Please read comment #0 and see the attached screenshot (attachment 926155 [details]).

"Unusable" as in "can't be re-used for installation without prior removal".

I would expect the installer to be able to use the existing LVM VG (volume group) for creation of new LVs (logical volumes). In addition, I would also expect the installer to be able to format (create filesystem on) an existing raid block device (assemble the raid).

Comment 13 David Shea 2014-11-21 20:50:18 UTC
(In reply to Jiri Jaburek from comment #12)

> I would expect the installer to be able to use the existing LVM VG (volume
> group) for creation of new LVs (logical volumes). In addition, I would also
> expect the installer to be able to format (create filesystem on) an existing
> raid block device (assemble the raid).

You can. Just create a mount point; it will be added to one of the existing VGs with free space, and from there you can change which VG the mount point should ultimately belong to.

RHEL 7.1 removes some of the confusion by not showing empty volume groups as unknown devices.
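For unattended installs, the same reuse can be expressed in kickstart. A hedged sketch, assuming the testvg created in comment 3 already exists on disk:

```
# Kickstart fragment (sketch): reuse an existing VG instead of recreating it.
# --useexisting/--noformat tell anaconda to keep testvg's metadata intact.
volgroup testvg --useexisting --noformat
logvol / --vgname=testvg --name=root --fstype=xfs --size=4096
```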

