Bug 986412 - Anaconda fails to offer fs on LVM on custom raid1 as install destination
Status: CLOSED INSUFFICIENT_DATA
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 19
Hardware: x86_64 Linux
Priority: low    Severity: medium
Assigned To: David Lehman
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-07-19 13:23 EDT by Stephen Tweedie
Modified: 2013-08-22 13:55 EDT
CC List: 8 users

Doc Type: Bug Fix
Last Closed: 2013-08-22 13:55:50 EDT
Type: Bug

Attachments
Anaconda logs (136.24 KB, application/octet-stream), 2013-07-19 13:25 EDT, Stephen Tweedie

Description Stephen Tweedie 2013-07-19 13:23:15 EDT
Description of problem:

With the F19 GA live USB, I attempted to install to the following configuration:

/dev/sda: live USB
/dev/sdb: internal SSD
/dev/sdc: internal boot HD

with a custom /dev/md0 raid1 mirroring the SSD and HD (the HD member marked write-mostly with write-behind, so reads are served from the SSD alone), LVM on top of that custom raid1, and an ext4 filesystem on top of the LVM.
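
For reference, the array was created roughly along these lines (the exact --write-behind count isn't recorded here, so 256 is only an illustrative value):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --bitmap=internal --write-behind=256 \
        /dev/sdb3 --write-mostly /dev/sdc7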

I selected /dev/sdc (boot) and /dev/sdb for install, then entered custom partitioning.  Anaconda displayed the raid1 "0" (from md0) as a target in the "unused" storage list, but did not offer the LVM VG ("AsyncRaid") or the ext4 filesystem on its LV as valid targets, so it was not possible to install to this configuration.

I installed temporarily to a simpler LVM configuration and will copy the root fs over, but ideally we should allow installation to any valid existing LV.

Config:

Attempting to install root on /dev/AsyncRaid/root (AsyncRaid is a VG
composed of just /dev/md0; md0 has /dev/sdb3 and /dev/sdc7).

# cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb3[0] sdc7[1](W)
      104791936 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

# pvs
  PV         VG        Fmt  Attr PSize   PFree  
  /dev/md0   AsyncRaid lvm2 a--   99.93g  69.93g
  /dev/sdb5  herobrine lvm2 a--  375.93g 360.93g
  /dev/sdc8  herobrine lvm2 a--  338.45g 338.45g

# lvs
  LV   VG        Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  root AsyncRaid -wi-a---- 30.00g                                           
  swap herobrine -wi------ 15.00g
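
For completeness, the LVM stack on top of md0 was built along these lines (sizes match the lvs output above; the exact commands are from memory):

# pvcreate /dev/md0
# vgcreate AsyncRaid /dev/md0
# lvcreate -L 30G -n root AsyncRaid
# mkfs.ext4 /dev/AsyncRaid/root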
Comment 1 Stephen Tweedie 2013-07-19 13:25:36 EDT
Created attachment 775889 [details]
Anaconda logs

Containing
-rw-r--r-- root/root    110592 2013-07-18 23:27 bak/storage.state
-rw-r--r-- root/root   2097875 2013-07-18 23:27 bak/storage.log
-rw-r--r-- root/root    298570 2013-07-18 23:27 bak/program.log
-rw-r--r-- root/root    148719 2013-07-18 23:27 bak/anaconda.log
Comment 2 David Lehman 2013-07-19 13:34:15 EDT
Please try it again with the following appended to the liveinst command line:

 inst.updates=http://dlehman.fedorapeople.org/updates-986412.0.img

and attach the logs from before as well as /var/log/messages. Thanks.
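
For reference, launched from a terminal in the live session the full invocation would look something like:

 liveinst inst.updates=http://dlehman.fedorapeople.org/updates-986412.0.img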
Comment 3 Stephen Tweedie 2013-08-12 10:09:03 EDT
I tried this and am unable to reproduce the problem.  Were there any other fixes included in this image?

Unfortunately I cannot reproduce the exact same storage configuration as originally reported.  There should be only minor differences: I have installed Fedora into LVs on both the herobrine and AsyncRaid VGs, but the LVM and raid1 configuration is otherwise unchanged.  Now, however, the target filesystem (an ext4 fs) *is* shown as a potential target in the "unused" section, and if I create a new mount point I *am* offered the chance to use the AsyncRaid VG as the source for a new LV.

So either there is some subtle change that is affecting anaconda (e.g. the presence of an existing installed Fedora distribution in the LV is enough to persuade the UI to recognise that LV), or the bug has already been fixed.  I can't tell which at this point.

Thanks,
 Stephen
Comment 4 David Lehman 2013-08-22 13:55:50 EDT
If this happens again, feel free to reopen this bug. I'm not totally sure what happened -- I suspect your system got into some odd state that was not easy to recreate.
