Bug 1226244 - LVMError: Process reported exit code 1280: Volume group "swapvg" not found Cannot process volume group swapvg
Summary: LVMError: Process reported exit code 1280: Volume group "swapvg" not found ...
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 22
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: abrt_hash:5ee46375399486e02ebfa97ed24...
Duplicates: 1234824 1234994 1289038
Depends On:
Blocks:
 
Reported: 2015-05-29 09:48 UTC by v.ronnen
Modified: 2016-06-01 20:41 UTC
CC List: 25 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-08 14:57:23 UTC
Type: ---
Embargoed:


Attachments
File: anaconda-tb (648.69 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: anaconda.log (34.66 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: dnf.log (11.28 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: environ (492 bytes, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: lsblk_output (4.15 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: nmcli_dev_list (1.19 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: os_info (447 bytes, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: storage.log (235.70 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: syslog (108.69 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: ifcfg.log (5.54 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: packaging.log (1.13 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
File: program.log (73.02 KB, text/plain)
2015-05-29 09:48 UTC, v.ronnen
no flags Details
F22 Anaconda debugging logs (292.52 KB, application/octet-stream)
2015-07-26 01:08 UTC, Dennis W. Tokarski
no flags Details
let lvmetad activate lvm on just-activated md pv(s) (4.10 KB, patch)
2015-12-08 16:26 UTC, David Lehman
no flags Details | Diff

Description v.ronnen 2015-05-29 09:48:19 UTC
Description of problem:
Just running the net-based installer.
Fedora 22 Live Workstation crashes if you have RAID 1.
Fedora 22 net install crashes.
Fedora 22 has not been tested.
Fedora 22 has been released too soon.

Version-Release number of selected component:
anaconda-22.20.13-1

The following was filed automatically by anaconda:
anaconda 22.20.13-1 exception report
Traceback (most recent call first):
  File "/usr/lib64/python2.7/site-packages/gi/overrides/BlockDev.py", line 384, in wrapped
    raise transform[1](msg)
  File "/usr/lib/python2.7/site-packages/blivet/devices/lvm.py", line 628, in _setup
    blockdev.lvm.lvactivate(self.vg.name, self._name)
  File "/usr/lib/python2.7/site-packages/blivet/devices/storage.py", line 430, in setup
    self._setup(orig=orig)
  File "/usr/lib/python2.7/site-packages/blivet/deviceaction.py", line 661, in execute
    self.device.setup(orig=True)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 362, in processActions
    action.execute(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/blivet.py", line 162, in doIt
    self.devicetree.processActions(callbacks)
  File "/usr/lib/python2.7/site-packages/blivet/osinstall.py", line 1057, in turnOnFilesystems
    storage.doIt(callbacks)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/install.py", line 196, in doInstall
    turnOnFilesystems(storage, mountOnly=flags.flags.dirInstall, callbacks=callbacks_reg)
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 244, in run
    threading.Thread.run(self, *args, **kwargs)
LVMError: Process reported exit code 1280:   Volume group "swapvg" not found
  Cannot process volume group swapvg


Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python2  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-22-x86_64 rd.live.check quiet biosdevname=0 net.ifnames=0
dnf.rpm.log:    May 29 09:39:54 INFO --- logging initialized ---
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.0.4-301.fc22.x86_64
product:        Fedora
release:        Cannot get release name.
type:           anaconda
version:        22

Comment 1 v.ronnen 2015-05-29 09:48:24 UTC
Created attachment 1031788 [details]
File: anaconda-tb

Comment 2 v.ronnen 2015-05-29 09:48:26 UTC
Created attachment 1031789 [details]
File: anaconda.log

Comment 3 v.ronnen 2015-05-29 09:48:27 UTC
Created attachment 1031790 [details]
File: dnf.log

Comment 4 v.ronnen 2015-05-29 09:48:28 UTC
Created attachment 1031791 [details]
File: environ

Comment 5 v.ronnen 2015-05-29 09:48:30 UTC
Created attachment 1031792 [details]
File: lsblk_output

Comment 6 v.ronnen 2015-05-29 09:48:31 UTC
Created attachment 1031793 [details]
File: nmcli_dev_list

Comment 7 v.ronnen 2015-05-29 09:48:32 UTC
Created attachment 1031794 [details]
File: os_info

Comment 8 v.ronnen 2015-05-29 09:48:35 UTC
Created attachment 1031795 [details]
File: storage.log

Comment 9 v.ronnen 2015-05-29 09:48:37 UTC
Created attachment 1031796 [details]
File: syslog

Comment 10 v.ronnen 2015-05-29 09:48:38 UTC
Created attachment 1031797 [details]
File: ifcfg.log

Comment 11 v.ronnen 2015-05-29 09:48:40 UTC
Created attachment 1031798 [details]
File: packaging.log

Comment 12 v.ronnen 2015-05-29 09:48:42 UTC
Created attachment 1031799 [details]
File: program.log

Comment 13 v.ronnen 2015-06-14 12:22:31 UTC
This bug can be reproduced on a VirtualBox fc21 install with 2 disks in RAID 1,
so installation of fc22 is IMPOSSIBLE!
The installer has not reached pre-alpha quality.
So Fedora 22 was released too soon.

Comment 14 Brian Lane 2015-06-25 00:29:47 UTC
*** Bug 1234994 has been marked as a duplicate of this bug. ***

Comment 15 Brian Lane 2015-06-25 00:29:50 UTC
*** Bug 1234824 has been marked as a duplicate of this bug. ***

Comment 16 Bruno Thomsen 2015-07-07 17:42:39 UTC
Another user experienced a similar problem:

I wanted to create a user account before the root account.

cmdline:        /usr/bin/python2  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz0 initrd=initrd0.img root=live:CDLABEL=Fedora-Live-WS-x86_64-22-3 rootfstype=auto ro rd.live.image quiet  rhgb rd.luks=0 rd.md=0 rd.dm=0 
hashmarkername: anaconda
kernel:         4.0.4-301.fc22.x86_64
other involved packages: libblockdev-0.13-2.fc22.x86_64, python-libs-2.7.9-6.fc22.x86_64, python-blivet-1.0.9-1.fc22.noarch
package:        anaconda-core-22.20.13-1.fc22.x86_64
packaging.log:  
product:        Fedora
release:        Fedora release 22 (Twenty Two)
version:        22

Comment 17 Dennis W. Tokarski 2015-07-25 19:36:45 UTC
Another user experienced a similar problem:

Testing install of Fedora 22 server on a virtual machine. 

During manual partitioning, the installer shows an existing luks volume
and accepts a password to open same. Once opened, it discovers the contained volume
group and lvs (luks/vg/lvs all previously created using rescue mode from the same DVD)
and offers the lvs as available for creating file systems.

After assigning mount points and labels to the LVs, I click Done in the manual partitioner
and continue the installation upon returning to the main installer window. The installer
chews on this for a long time and then fails, complaining of an unknown error.
The logs suggest that it can't find the LVs.

From what the log shows, it looks like the install preparation phase isn't smart enough
to open the luks container, which it clearly does know about.

The situation may be complicated by the fact that the luks container itself sits on a
four element raid6 md device.

The set of four virtual disks used in this test has another raid6 md device with a successful
Fedora 22 server installation lacking the luks layer but otherwise created in exactly the
same way.

This all worked in Fedora 20, can't speak for 21 (haven't tried it).

I'll be happy to upload the images for the drives if that'll help, though it
will take a while to push through small-business grade dsl.

addons:         com_redhat_kdump
cmdline:        /usr/bin/python2  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-22-x86_64 quiet
dnf.rpm.log:    Jul 25 19:04:41 INFO --- logging initialized ---
hashmarkername: anaconda
kernel:         4.0.4-301.fc22.x86_64
package:        anaconda-22.20.13-1
product:        Fedora
release:        Cannot get release name.
version:        22

Comment 18 Dennis W. Tokarski 2015-07-25 19:56:25 UTC
For what it's worth, I'm not at all convinced the auto-bug-reporter is correct
in assuming my problem is really the same as bug 1226244 or 1234824. It looks
like the reporter chose not to upload my logs because of this.

As noted in my previous comment, I have a successful install from the
server DVD image with a volume group and logical volumes in a raid6
device. It's just when a luks layer is inserted that I have problems.

Unfortunately all my systems are so configured. So I have to agree
with v.ronnen in comment #13: fedora 22 is uninstallable in my use
case.

Any workaround or installer bug fix would be greatly appreciated.

I'll also be happy to rerun the install and manually collect whatever
logs you might find helpful.

Comment 19 Dennis W. Tokarski 2015-07-26 01:08:44 UTC
Created attachment 1056164 [details]
F22 Anaconda debugging logs

Just had another look at the VM where I tried this installation and
discovered the installer also offers a way to package the crash data
manually, so here it is.

Comment 20 Vratislav Podzimek 2015-07-28 08:42:13 UTC
This doesn't seem to me like a libblockdev bug: it is asked to change a VG which doesn't exist, and it correctly reports an error in such a case. It's blivet (python-blivet) that creates such a request. Reassigning.

Comment 21 Dennis W. Tokarski 2015-08-03 21:56:52 UTC
Ping?

Would it be helpful if I reposted this as a separate bug
in a new bug report?

Comment 22 David Lehman 2015-08-03 23:31:06 UTC
What I see is something along the lines of lvmetad not correctly accounting for the newly-activated md array pv01. The following is based on the logs from the original reporter:

(from program.log)
# blivet activates md array pv01 containing PV w/ UUID
# rGga9D-xtBx-CSyR-RWQK-Rsg5-kXjO-T8tjOY (this can be verified by
# checking the udev info for pv01 in storage.log)
11:45:37,334 INFO program: Running [77] mdadm --assemble /dev/md/pv01 --run --uuid=537040ff:6767d217:141295d2:3c768414 /dev/sda1 /dev/sdb1 ...
11:45:37,410 INFO program: stdout[77]: 
11:45:37,411 INFO program: stderr[77]: mdadm: /dev/md/pv01 has been started with 2 drives.

# Wait for udev and lvmetad to do whatever, twice.
11:45:37,411 INFO program: ...done [77] (exit code: 0)
11:45:37,412 INFO program: Running... udevadm settle --timeout=300
11:45:37,478 DEBUG program: Return code: 0
11:45:37,487 INFO program: Running... udevadm settle --timeout=300
11:45:37,506 DEBUG program: Return code: 0

# Try to activate one of the LVs from swapvg, whose lone PV is pv01
11:45:37,511 INFO program: Running [78] lvm lvchange -ay swapvg/swap --config= devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] filter=["r|/sdd1$|","r|/sdd2$|","r|/sdd3$|","r|/sdc1$|","r|/sdc$|"] }  ...
11:45:37,539 INFO program: stdout[78]: 
11:45:37,540 INFO program: stderr[78]:   Volume group "swapvg" not found
  Cannot process volume group swapvg
11:45:37,541 INFO program: ...done [78] (exit code: 1280)

# Run lsblk, notice that md127 is active and shows the UUID expected by
# VG 'swapvg'.
11:45:51,323 INFO program: Running... lsblk --perms --fs --bytes
11:45:51,344 INFO program: NAME                      SIZE OWNER GROUP MODE       NAME              FSTYPE            LABEL                      UUID                                   MOUNTPOINT
11:45:51,345 INFO program: sda               500107862016 root  disk  brw-rw---- sda
11:45:51,345 INFO program: |-sda1              8438939648 root  disk  brw-rw---- |-sda1            linux_raid_member localhost.localdomain:pv01 537040ff-6767-d217-1412-95d23c768414
11:45:51,346 INFO program: | `-md126           8434679808 root  disk  brw-rw---- | `-md126         LVM2_member                                  rGga9D-xtBx-CSyR-RWQK-Rsg5-kXjO-T8tjOY
11:45:51,346 INFO program: |-sda2               524288000 root  disk  brw-rw---- |-sda2            linux_raid_member localhost.localdomain:boot a112b238-f48e-6349-6a06-90332033a3b6
11:45:51,346 INFO program: `-sda3            491143561216 root  disk  brw-rw---- `-sda3            linux_raid_member localhost.localdomain:pv00 24bee081-1692-d1d3-8a1f-c3b3af089004
11:45:51,347 INFO program: `-md127         491009146880 root  disk  brw-rw----   `-md127         LVM2_member                                  Tzaftk-33km-YC62-AWjN-81fN-J6b4-tJqkbh
11:45:51,347 INFO program: `-homevg-home 423897333760 root  disk  brw-rw----     `-homevg-home ext4                                         67736429-c22f-4310-91d3-70531274dc9d
11:45:51,347 INFO program: sdb               500107862016 root  disk  brw-rw---- sdb
11:45:51,348 INFO program: |-sdb1              8438939648 root  disk  brw-rw---- |-sdb1            linux_raid_member localhost.localdomain:pv01 537040ff-6767-d217-1412-95d23c768414
11:45:51,348 INFO program: | `-md126           8434679808 root  disk  brw-rw---- | `-md126         LVM2_member                                  rGga9D-xtBx-CSyR-RWQK-Rsg5-kXjO-T8tjOY
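
A workaround consistent with the analysis above (and with the title of the patch later attached in comment 33) would be to tell lvmetad about the just-assembled md PV before asking for LV activation. This is only a sketch, run e.g. from a shell during installation, not a verified procedure; the device name /dev/md/pv01 is taken from program.log above:

# make lvmetad rescan the just-assembled array so it picks up the PV/VG metadata
pvscan --cache /dev/md/pv01

# the VG should now be known, and activation should succeed
vgs swapvg
lvchange -ay swapvg/swap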

Comment 23 Dennis W. Tokarski 2015-09-03 20:17:12 UTC
Ping again.

David's comment in #22 addresses the original poster's issue, but as
I noted earlier my problem goes a bit beyond this. I'm just here because
this is where the installer's auto-bug-reporting feature dropped me.

My install didn't seem to have a problem starting the raid array. Instead
it seemed unaware that the array might have a luks volume on it. See my
comments #17 and 18.

Since there doesn't seem to be much happening here that's helpful to me,
I have to ask again: should I resubmit this issue as a separate bug?

It would be great to try out Fedora 22, but it's not installable on my
systems.

Comment 24 Vratislav Podzimek 2015-09-04 08:12:09 UTC
So, Dennis, you have a RAID 6 array, LUKS on top of it and an LVM setup that uses that LUKS as its only PV, right? I'll try to deploy such a setup and debug the issue. It could be caused by the same problem as the original report here -- lvmetad not collecting all the information once an MD RAID is started or a LUKS device is unlocked/opened.
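
For reference, a minimal sketch of the layout being described here, with illustrative device, VG and LV names (not taken from Dennis' logs):

# four-member RAID 6 array
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde

# LUKS container on top of the array
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptpv

# LVM using the opened LUKS mapping as its only PV
pvcreate /dev/mapper/cryptpv
vgcreate testvg /dev/mapper/cryptpv
lvcreate -n root -L 20G testvg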

Comment 25 Vratislav Podzimek 2015-09-04 10:32:42 UTC
I can confirm I've been able to reproduce the issue with the setup described in comment #24. Also, Dennis is right that this is a different issue from the one tracked in this bug report. The good news is that it seems to be fixed in the F23/Rawhide installer, which works as expected with such a setup.

Comment 26 rpw8.aber 2015-09-09 20:25:59 UTC
Another user experienced a similar problem:

1) Used the command line tool mdadm to create a RAID 1 (intended for /boot) and a RAID 5, using default options (v1.2 superblock)
2) Used the command line tool cryptsetup to set up encryption on the RAID 5
3) Used command line tools to set up LVM (VG name aurora, LV name root)
4) Booted the server netinstall image (the KDE live CD didn't even show the RAID/LVM in the installer), selected expert partitioning, entered the password for the cryptsetup device and assigned the volumes to the appropriate mount points. The mount point correctly showed up as an encrypted LVM, and the installer listed the correct/expected reformatting of aurora-root as the pending actions
5) Picked hostname and software selection (LXDE desktop)
6) Commenced installation, resulting in the "unknown error" and failure. Skimming the error traceback suggests the volume group was not activated prior to trying to access the logical volume. 

addons:         com_redhat_kdump
cmdline:        /usr/bin/python2  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-22-x86_64 quiet
dnf.rpm.log:    Sep 09 19:58:34 INFO --- logging initialized ---
hashmarkername: anaconda
kernel:         4.0.4-301.fc22.x86_64
package:        anaconda-22.20.13-1
product:        Fedora
release:        Cannot get release name.
version:        22

Comment 27 Dennis W. Tokarski 2015-09-14 22:50:51 UTC
(In reply to Vratislav Podzimek from comment #24)
> So, Dennis, you have a RAID 6 array, LUKS on top of it and an LVM setup that
> uses that LUKS as its only PV, right? I'll try to deploy such a setup and debug
> the issue. It could be caused by the same problem as the original report
> here -- lvmetad not collecting all the information once an MD RAID is
> started or a LUKS device is unlocked/opened.

Yes, that's correct. I see in your next comment you've duplicated
the behavior, so thank you for that.

Comment 28 Dennis W. Tokarski 2015-09-14 22:58:35 UTC
(In reply to Vratislav Podzimek from comment #25)
> I can confirm I've been able to reproduce the issue with the setup described
> in comment #24. Also, Dennis is right that this is a different issue from the
> one tracked in this bug report. The good news is that it seems to be fixed in
> the F23/Rawhide installer, which works as expected with such a setup.

That's good news for F23.

What can I do to get a working F22 installation in this configuration?

Back in the day when there were such things as floppy disks it was
possible to use an alternate installer by providing an updated anaconda on
a floppy and booting with a command line option to start that one
instead of the copy on the install media.

Does that feature still exist, and might it be possible to use the
F23 installer or some part of it for F22?

The only other way I can see is to make a minimal F22 install on a VM
without luks, manually set up the desired luks configuration on the
target system using rescue mode, then rsync the successful install
to the target. There would have to be some manual fiddling on the
target to get the bootloader installed. A major nuisance all in all,
but it might work.

Simpler suggestions would be most welcome.

Comment 29 Vratislav Podzimek 2015-09-23 09:19:12 UTC
I tried to create an updates.img for F22, but unfortunately I wasn't able to identify the change that resolved this issue. The only other solution that comes to mind is to create a custom boot.iso with the F23 installer that believes it is an F22 installer. I could do that, but right now the utility for creating an F23 boot.iso fails for me because of a bug in the systemd that is in the F23 repos. Once this gets resolved (the fixed systemd should already be built), I can try to build that custom boot.iso.

Comment 30 Anders Blomdell 2015-12-08 11:34:45 UTC
Might be related to: https://bugzilla.redhat.com/show_bug.cgi?id=1289038

Comment 31 David Lehman 2015-12-08 14:57:23 UTC
The fix for the original problem landed in python-blivet-1.15-1 FWIW. It is not in F23, but will be in F24.

Comment 32 Anders Blomdell 2015-12-08 15:13:17 UTC
Care to post the relevant diff to make it possible to create a suitable updates.img/RHupdates?

Comment 33 David Lehman 2015-12-08 16:26:24 UTC
Created attachment 1103645 [details]
let lvmetad activate lvm on just-activated md pv(s)
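
For anyone who wants to try the patch on F22 media (as asked in comment 32): the usual route is an Anaconda updates.img, i.e. a gzipped cpio archive whose contents are overlaid onto the installer's root filesystem. A rough sketch, assuming the diff has been applied to a local copy of the python-blivet sources; which blivet file(s) actually need replacing depends on the attached diff:

# tree mirroring the paths inside the installer image
mkdir -p updates/usr/lib/python2.7/site-packages/blivet
cp path/to/patched/blivet/*.py updates/usr/lib/python2.7/site-packages/blivet/

# pack it into updates.img
cd updates
find . | cpio -c -o | gzip -9 > ../updates.img

# then boot the installer with, for example:
#   inst.updates=http://yourserver/updates.img
# or put updates.img on a USB stick and point inst.updates= at it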

Comment 34 Anders Blomdell 2015-12-08 16:57:13 UTC
Much obliged, thanks!

I will close https://bugzilla.redhat.com/show_bug.cgi?id=1289038

Comment 35 Anders Blomdell 2015-12-08 16:57:44 UTC
*** Bug 1289038 has been marked as a duplicate of this bug. ***

Comment 36 Dennis W. Tokarski 2015-12-29 17:46:05 UTC
I just wanted to finish here with a note of thanks to the people who worked
on resolving this, especially Vratislav in #29. That a bug in systemd was
what ultimately frustrated his efforts is sadly not a surprise.

Now that Fedora 23 has been out a while, I've had a chance to try it out
and have found that it actually is installable in my use case, so for me
problem solved.

Thanks again, all.

Comment 37 Vratislav Podzimek 2016-03-21 08:28:28 UTC
(In reply to Dennis W. Tokarski from comment #36)
> I just wanted to finish here with a note of thanks to the people who worked
> on resolving this, especially Vratislav in #29. That a bug in systemd was
> what ultimately frustrated his efforts is sadly not a surprise.
> 
> Now that Fedora 23 has been out a while, I've had a chance to try it out
> and have found that it actually is installable in my use case, so for me
> problem solved.
Great to hear that! Enjoy your new Fedora 24! ;)

Comment 38 Aaron Walker 2016-05-09 17:06:23 UTC
Another user experienced a similar problem:

Attempted to install Fedora Server 23 onto a Dell R410. Configured LVM with four 750 HDDs and an internal USB stick as the boot drive.
Most likely cause I can think of is LVM not liking / on 4 drives

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=/images/pxeboot/vmlinuz inst.stage2=hd:LABEL=FEDORA-S-23 quiet
dnf.rpm.log:    May 09 16:52:28 INFO --- logging initialized ---
hashmarkername: anaconda
kernel:         4.2.3-300.fc23.x86_64
package:        anaconda-23.19.10-1
product:        Fedora
release:        Cannot get release name.
version:        23

Comment 39 Bob Gustafson 2016-05-20 20:03:04 UTC
Another user experienced a similar problem:

I am in Anaconda, trying to set up Fedora 23 on an EFI MD RAID LVM system with two disks.
Anaconda accepted my two disks (already partitioned), but wanted to do its own partition scheme. I said yes, with the reclaim-space option checked.

I went through and accepted delete on all the partitions of both disks, but not the overall disks (I wanted to keep the partition tables I had already set up).

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda
cmdline_file:   BOOT_IMAGE=/images/pxeboot/vmlinuz inst.stage2=hd:LABEL=Fedora-WS-23-x86_64 quiet
hashmarkername: anaconda
kernel:         4.2.3-300.fc23.x86_64
package:        anaconda-23.19.10-1
product:        Fedora
release:        Cannot get release name.
version:        23

Comment 40 David Lehman 2016-05-20 20:11:00 UTC
Please attach the logs from /tmp so we can investigate.

Comment 41 Bob Gustafson 2016-05-23 05:39:16 UTC
My own partition tables were wrong: I put the 'boot' flag on the /boot partition (which seemed reasonable), but with GPT and EFI the 'boot' flag should go on the 'EFI System Partition' (sda1, sdb1, sdc1) instead.

I was finally able to get a RAID 1 (md0, md1) plus LVM setup with GPT and EFI running F23.

The process was similar to https://blog.voina.org/fedora-22-lvm-single-drive-to-lvm-raid-1-mirror-migration

which also benefited from https://community.spiceworks.com/how_to/340-lvm-single-drive-to-lvm-raid-1-mirror-migration

My process was to (steps 10-13 are condensed into a command sketch after this comment):
1) start with 3 (ultimately blank) disks (2x2TB, 1x1TB), (sda,sdb,sdc)

2) install a plain vanilla Fedora 23 on the smaller disk (sdc) (pressing Esc and using init.gpt to get a GPT partition table; I needed to boot from the EFI boot entry of the Fedora 23 network install CD).

3) duplicate the partition table of the F23 install on the sda,sdb disks (letting last (3rd) partition fill the available space)

4) construct /dev/md0 on the sdb2 partition (2TB disk) with a 'missing' drive.

5) add the sdc2 partition on the F23 disk to the /dev/md0 raid array - this quickly copies all of the good F23 /boot contents to sdb2 as the raid array synchronizes.

6) edit the /etc/default/grub and /etc/fstab to point to the new /dev/md0 = /boot partition.

7) mount the /dev/sdb1 to /boot/efi and dnf reinstall grub2-efi shim

8) then grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

9) then EFI reboot. Check 'mount | grep boot' for mount of /dev/md0 to /boot and /dev/sdb1 to /boot/efi

10) Then create /dev/md1 with /dev/sdb3 and 'missing'

11) vgextend -v fedora_hoho8 /dev/md1

12) pvmove -v /dev/sdc3 /dev/md1 (very slow process - a couple/3 hours)

13) vgreduce -v fedora_hoho8 /dev/sdc3, pvremove /dev/sdc3

14) edit /etc/default/grub and /etc/fstab to make sure they are pointing to the proper disks.

15) grub2-mkconfig again

16) reboot

17) add in sda2 to /dev/md0 and sda2 to /dev/md1 and wait for sync

18) reboot - voila

---------------------------
No, I don't have the logs any more ('blank' disks..)
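
Steps 10-13 above, condensed into a command sketch (same device and VG names as in the list; options are illustrative and worth checking against the linked guides before running anything):

# 10) second array, created degraded with one member 'missing'
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 missing

# 11)-13) make md1 a PV, grow the VG onto it, move the data off sdc3, then drop sdc3
pvcreate /dev/md1
vgextend -v fedora_hoho8 /dev/md1
pvmove -v /dev/sdc3 /dev/md1
vgreduce -v fedora_hoho8 /dev/sdc3
pvremove /dev/sdc3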

Comment 42 Bob Gustafson 2016-05-23 05:43:33 UTC
> 17) add in sda2 to /dev/md0 and sda2 to /dev/md1 and wait for sync

17) add in sda2 to /dev/md0 and sda3 to /dev/md1 and wait for sync

Comment 43 Marian Csontos 2016-05-23 08:55:22 UTC
Aaron, do you happen to have anaconda logs?

Is this reproducible?

How did you come to the conclusion "Most likely cause I can think of is LVM not liking / on 4 drives"? What is the storage layout? `lsblk` output would help a lot.

Comment 44 Aaron Walker 2016-05-23 14:05:36 UTC
Marian:
I tried doing it several times, but with no luck. Sadly I can't remember the exact steps in order to reproduce it again - I reformatted all of the drives and have since deployed the machine, so I am unable to use it again. I came to that conclusion because I was able to do it with / on 1 drive, but when I did it on all 4, it started complaining.

The storage layout was basically:
usb drive - /boot (or the UEFI boot, I can't remember)
1 - lvm - /
2 - lvm - /
3 - lvm - /
4 - lvm - /
Again, the machine is deployed, so I can't get access to it, otherwise I would.

Comment 45 Gerardo Rosales 2016-06-01 18:51:36 UTC
Another user experienced a similar problem:

Reinstalling the system after it crashed (Fedora 23 XFCE spin); after the hard reboot the system was unable to boot.

The content of /boot was deleted, I don't know how. I tried to reinstall grub2 on /boot using a live CD, without success.

I decided to reinstall the system (F23 LXDE spin), preserving my /home (fedora-home VG, 1TB).

There are 3 disks in the system, two of them for /home. Everything worked fine until I pressed the "Begin Installation" button; when the installer tried to create the user, it froze and the error window appeared.

I haven't tried again.

cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz0 initrd=initrd0.img root=live:CDLABEL=Fedora-Live-LXDE-x86_64-23-10 rootfstype=auto ro rd.live.image quiet  rhgb rd.luks=0 rd.md=0 rd.dm=0 
hashmarkername: anaconda
kernel:         4.2.3-300.fc23.x86_64
other involved packages: python3-blivet-1.12.8-1.fc23.noarch, libblockdev-1.1-2.fc23.x86_64, python3-libs-3.4.3-5.fc23.x86_64
package:        anaconda-core-23.19.10-1.fc23.x86_64
packaging.log:  
product:        Fedora
release:        Fedora release 23 (Twenty Three)
version:        23

