
Bug 1101322

Summary: RHEL7rc kickstart fails to use existing volgroup
Product: Red Hat Enterprise Linux 7
Reporter: Mikolaj Kucharski <mikolaj>
Component: anaconda
Assignee: mulhern <amulhern>
Status: CLOSED WORKSFORME
QA Contact: Release Test Team <release-test-team-automation>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 7.0
CC: amulhern, mbanas, mikolaj, pkotvan, wnefal+redhatbugzilla
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1172172 (view as bug list)
Environment:
Last Closed: 2014-12-09 14:07:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments (all with flags: none):
- Console output when Anaconda fails
- anaconda.log
- storage.log
- program.log

Description Mikolaj Kucharski 2014-05-26 21:53:09 UTC
Created attachment 899358 [details]
Console output when Anaconda fails

Description of problem: Kickstart installation fails with the message:

No preexisting VG with the name "vg0" was found.

when using existing LVM volumes.

Version-Release number of selected component (if applicable): anaconda 19.31.77-1 from rhel7rc (http://ftp.redhat.com/redhat/rhel/rc/7/)


How reproducible:

Partition the disks before Anaconda partitions them (for example in the %pre script of ks.cfg) and try to use:

clearpart --none
raid /boot --useexisting --device md1
raid swap --useexisting --device md2
raid / --useexisting --device md3
volgroup vg0 --useexisting
logvol /var --useexisting --vgname vg0 --name lv0
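
For context, a minimal sketch of how these directives might sit in a full kickstart, with the partitioning done up front in %pre (the %pre body here is only a placeholder, not the reporter's actual script):

```
%pre
# Partition the disks, assemble the md arrays, and create vg0/lv0
# here (parted/mdadm/pvcreate/vgcreate/lvcreate), before anaconda
# scans storage.
%end

clearpart --none
raid /boot --useexisting --device md1
raid swap --useexisting --device md2
raid / --useexisting --device md3
volgroup vg0 --useexisting
logvol /var --useexisting --vgname vg0 --name lv0
```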


Steps to Reproduce:

1. 4 HDDs in the system (two 500GB and two 3TB; see comment 4), all blank

2. Partition disks with parted in identical way:

mkpart primary 2048s 512MB
name 1 /boot
mkpart primary 512MB 2560MB
name 2 swap
mkpart primary 2560MB 4608MB
name 3 /
mkpart primary 4608MB -1
name 4 /var
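
Scripted, the parted steps above might look like the following sketch, which just prints the command lines for the four disks (sda–sdd assumed) so they can be reviewed, or piped into sh, before touching real hardware:

```shell
#!/bin/sh
# Emit one parted command line per partition per disk. Printing instead
# of executing keeps this reviewable; pipe the output into sh to apply.
print_parted_cmds() {
    for disk in sda sdb sdc sdd; do
        echo "parted -s /dev/$disk mkpart primary 2048s 512MB name 1 /boot"
        echo "parted -s /dev/$disk mkpart primary 512MB 2560MB name 2 swap"
        echo "parted -s /dev/$disk mkpart primary 2560MB 4608MB name 3 /"
        echo "parted -s /dev/$disk mkpart primary 4608MB 100% name 4 /var"
    done
}

print_parted_cmds
```

Note that `-1` as an end position works at the interactive parted prompt; with non-interactive `parted -s`, `100%` is the safer spelling for "to the end of the disk".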

3. Create software raids:

mdadm -C /dev/md1 -n 4 -l raid1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm -C /dev/md2 -n 4 -l raid10 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mdadm -C /dev/md3 -n 4 -l raid10 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
mdadm -C /dev/md4 -n 2 -l raid1 /dev/sda4 /dev/sdc4
mdadm -C /dev/md5 -n 2 -l raid1 /dev/sdb4 /dev/sdd4

4. Create LVM volume:

pvcreate /dev/md4 /dev/md5
vgcreate vg0 /dev/md4 /dev/md5
lvcreate -l 100%VG -n lv0 vg0
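
Before kicking off the installer it is worth sanity-checking that the arrays and the LVM stack came up as intended; something along these lines (requires root on the live system; exact output shapes vary):

```shell
cat /proc/mdstat                  # all five md arrays assembled and clean
pvs /dev/md4 /dev/md5             # both arrays listed as PVs
vgs vg0                           # vg0 present with 2 PVs
lvs vg0                           # lv0 occupying the whole VG
ls -l /dev/vg0/lv0 /dev/mapper/vg0-lv0
```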

Actual results: The above should create /dev/vg0/lv0, which I would like Anaconda to use, but instead the installation fails with the above-mentioned message.

Expected results: Installation should succeed.

Additional info: This looks like a repeat of bz#503681 and bz#504913 from Fedora.

Comment 1 Mikolaj Kucharski 2014-05-26 21:54:29 UTC
Created attachment 899359 [details]
anaconda.log

Comment 2 Mikolaj Kucharski 2014-05-26 21:55:20 UTC
Created attachment 899360 [details]
storage.log

Comment 3 Mikolaj Kucharski 2014-05-26 21:56:03 UTC
Created attachment 899361 [details]
program.log

Comment 4 Mikolaj Kucharski 2014-05-26 22:00:01 UTC
I'm trying to use an existing disk setup because of bz#1093144.

Also, it's 4 HDDs: 2x500GB and 2x3TB.

Comment 5 Mikolaj Kucharski 2014-05-26 22:04:56 UTC
The end result should be something like:

Filesystem           Size  Used Avail Use% Mounted on
/dev/md3             4.0G  1.2G  2.7G  31% /
/dev/md1             496M   80M  391M  17% /boot
/dev/mapper/vg0-lv0  1.4T 1021G  347G  75% /var

Filename                                Type            Size    Used    Priority
/dev/md2                                partition       4191224 40936   -1

Comment 7 mulhern 2014-08-29 22:23:18 UTC
The error is reported when pyanaconda.kickstart.VolGroupData.execute() fails to find the device on a getDeviceByName(self.vgname) call.

This chunk here is probably the relevant bit in the storage log:

08:41:48,395 INFO blivet: scanning md5 (/devices/virtual/block/md5)...
08:41:48,397 DEBUG blivet:            DeviceTree.getDeviceByName: name: md5 ;
08:41:48,398 DEBUG blivet:            DeviceTree.getDeviceByName returned None
08:41:48,400 DEBUG blivet:            DeviceTree.getDeviceByName: name: None ;
08:41:48,401 DEBUG blivet:            DeviceTree.getDeviceByName returned None
08:41:48,401 INFO blivet: md5 is an md device
08:41:48,403 DEBUG blivet:            DeviceTree.getDeviceByUuid returned None
08:41:48,404 DEBUG blivet:            DeviceTree.addUdevMDDevice: name: None ;
08:41:48,405 DEBUG blivet:             DeviceTree.getDeviceByName: name: sdb4 ;
08:41:48,407 DEBUG blivet:             DeviceTree.getDeviceByName returned existing 2857192MB partition sdb4 (46) with existing mdmember
08:41:48,408 DEBUG blivet:             DeviceTree.getDeviceByName: name: sdd4 ;
08:41:48,409 DEBUG blivet:             DeviceTree.getDeviceByName returned existing 2857192MB partition sdd4 (57) with existing mdmember
08:41:48,411 DEBUG blivet:             DeviceTree.getDeviceByName: name: None ;
08:41:48,412 DEBUG blivet:             DeviceTree.getDeviceByName returned None
08:41:48,414 DEBUG blivet: raw RAID 1 size == 2857192.50781
08:41:48,414 INFO blivet: Using 128MB superBlockSize
08:41:48,414 DEBUG blivet: non-existent RAID 1 size == 2857064.50781
08:41:48,414 DEBUG blivet:             DeviceTree.getDeviceByUuid returned existing 2857064MB mdarray 5 (47)
08:41:48,415 DEBUG blivet: no device or no media present
08:41:48,416 DEBUG blivet:           DeviceTree.getDeviceByName: name: vg0 ;
08:41:48,417 DEBUG blivet:           DeviceTree.getDeviceByName returned None
08:41:48,417 ERR blivet: failed to find vg 'vg0' after scanning pvs
08:41:48,417 DEBUG blivet: no device or no media present

and previously:

08:41:48,361 INFO blivet: scanning vg0-lv0 (/devices/virtual/block/dm-2)...
08:41:48,362 DEBUG blivet:          DeviceTree.getDeviceByName: name: vg0-lv0 ;
08:41:48,363 DEBUG blivet:          DeviceTree.getDeviceByName returned None
08:41:48,363 INFO blivet: vg0-lv0 is an lvm logical volume
08:41:48,364 DEBUG blivet:          DeviceTree.addUdevLVDevice: name: vg0-lv0 ;
08:41:48,366 DEBUG blivet:           DeviceTree.getDeviceByName: name: vg0 ;
08:41:48,367 DEBUG blivet:           DeviceTree.getDeviceByName returned None

It may be that allowing incomplete to be True when we are scanning for the device is what we want here.

Comment 8 David Lehman 2014-09-02 16:22:46 UTC
You can create md arrays with meaningful names these days, you know, e.g.:

mdadm -C /dev/md/boot -n 4 -l raid1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
                ^^^^^

If you do that, or even this:

mdadm -C /dev/md/1 -n 4 -l raid1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
                ^^

Things should work as expected. It is a peculiarity of mdadm that it does not create /dev/md/ symlinks when arrays are created using explicit old-style names.
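
Concretely, the two spellings differ like this (the "var" name is illustrative; device paths as in the reporter's setup):

```shell
# Old-style numeric name: no /dev/md/ symlink is created.
mdadm -C /dev/md4 -n 2 -l raid1 /dev/sda4 /dev/sdc4

# Named array: mdadm records the name in the metadata and creates a
# stable /dev/md/var symlink pointing at the kernel device node.
mdadm -C /dev/md/var -n 2 -l raid1 /dev/sda4 /dev/sdc4
ls -l /dev/md/                    # var -> ../mdXXX
```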

Comment 9 mulhern 2014-09-02 23:07:31 UTC
Well, the current storage log (comment #7) indicates that when scanning the volume group, the component array is looked up by its UUID and located 

08:41:48,414 DEBUG blivet:             DeviceTree.getDeviceByUuid returned existing 2857064MB mdarray 5 (47)

but then rejected because the device's mediaPresent attribute is False.

08:41:48,415 DEBUG blivet: no device or no media present

The most likely cause for the mediaPresent attribute to be False is that the device's status makes it unreadable, since the log shows that the device is believed to exist.

I've placed an updates img at http://mulhern.fedorapeople.org/1101322.img. Please run this against the latest RHEL7. It should expand the set of statuses which blivet believes allow it to actually read the device, and should also log some information about that status, and how it was obtained, to the storage log.

Please attach the storage log and also let me know the result of the test, i.e., whether the volume group was located or not. Please stick to old-style names, as in your original test, so that we can distinguish between what we think are the two causes of the problem.
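
For anyone following along: an updates image is normally applied by appending it to the installer's kernel command line at boot, e.g. (RHEL 7 anaconda syntax; adjust for your boot method):

```
inst.updates=http://mulhern.fedorapeople.org/1101322.img
```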

Comment 10 mulhern 2014-09-04 15:52:42 UTC
The related bug number ended up in the Fixed In Version field; it is now in the right place.

Comment 11 mulhern 2014-09-11 19:53:30 UTC
Not able to proceed without the additional info requested in comment 9.

Comment 12 Mikolaj Kucharski 2014-09-19 03:22:02 UTC
@mulhern, I needed the machine where I had the issue reported here, and reimaged it with Anaconda from Fedora 20. I don't have the original hardware to reproduce with, as it is in use now. I can try to recreate a mini version of that system as a virtual machine, but I don't have access to RHEL7. I can test with CentOS7 on KVM; is that okay?

Comment 13 mulhern 2014-09-19 15:53:38 UTC
Hmmm, I'm not certain that the updates.img I provided will work with the version of CentOS7 you'll have.

If it were just as convenient for you to test with Fedora 20 or 21 I could put together an updates.img for a version of that instead.

Let me know, thanks!

- mulhern

Comment 14 Mikolaj Kucharski 2014-12-08 00:38:41 UTC
As I don't have access to RHEL7 (nor to RHEL7rc), I've tested this again on CentOS7 and I cannot reproduce the issue.

However, I did test this on Fedora 19, 20, and the latest development snapshot of 21 (fedora/linux/development/21/x86_64), and I got very inconsistent results :/

Fedora 19 fails with "volgroup must be given a list of partitions"; however, I don't see the point in investigating that old release.

Fedora 20 works intermittently. When it fails, it is because of an unrelated issue, and I also don't think that is worth investigating.

Fedora 21 fails with:

storage configuration failed: The following problem occurred on line 34 of the kickstart file:

Size can not be decided on from kickstart nor obtained from device.

 
    29  clearpart --none
    30  raid /boot --useexisting --device /dev/md/boot
    31  raid swap --useexisting --device /dev/md/swap
    32  raid / --useexisting --device /dev/md/root
    33  volgroup vg0 --useexisting
    34  logvol /var --useexisting --vgname vg0 --name lv0


So, I don't know what I should do next. In general I'm interested in CentOS7, which seems to work. I'm not sure whether you would like to investigate the F21 issues. I'm happy to help, and I'm sorry I was so slow to respond in recent months.

Comment 15 Mikolaj Kucharski 2014-12-08 00:40:33 UTC
While testing I didn't use the 1101322.img provided by @mulhern.

Comment 16 Mikolaj Kucharski 2014-12-08 00:50:32 UTC
Also, as I mentioned earlier, I cannot test this on the original hardware, so I've set up qemu-kvm with 4x40GB drives to test it.

Comment 17 mulhern 2014-12-09 14:00:55 UTC
Old error was

No preexisting VG with the name "vg0" was found.

New error would be:

Volume group "vg0" given in volgroup command does not exist.

In this test, the volgroup error is no longer being raised, otherwise the logvol error would not appear. The question is: why the logvol error?

It may be that the logical volume itself was not located, leaving no way to set the size from it.

Any chance you could attach logs from f21 run?

Comment 18 mulhern 2014-12-09 14:07:25 UTC
Closing this as a RHEL7 bug, but opening it as an f21 bug, since the chances of confirming it on RHEL7 have dwindled to about zero and the original bug no longer seems to occur on f21.