Bug 1022811 - Selecting one disk from multi-disk VG displays no information about the VG
Status: CLOSED DUPLICATE of bug 1209223
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 20
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Anaconda Maintenance Team
QA Contact: Fedora Extras Quality Assurance
Whiteboard: RejectedBlocker RejectedFreezeException
Reported: 2013-10-24 02:08 EDT by Marian Csontos
Modified: 2015-04-15 10:15 EDT (History)
CC: 8 users

Doc Type: Bug Fix
Last Closed: 2015-04-15 10:15:36 EDT
Type: Bug

Description Marian Csontos 2013-10-24 02:08:24 EDT
Description of problem:
With, e.g., a striped LV over multiple disks, the anaconda disk selection screen shows nothing but disk names and size/free size (which, by the way, is sometimes wrong; I will try to find a reproducer). One may accidentally select a single disk holding a PV of a VG which spans multiple PVs. Anaconda should inform the user and let them choose whether to ignore the disk or use all disks in the VG.

When such a disk is used without the remaining disks in the VG, only a PV is shown, and the change summary does not inform the user that there is a VG which would get corrupted.

Setting severity to high, as this may result in data corruption.

Suggesting as a blocker as per:

> Custom partitioning
>
> When using the custom partitioning flow, the installer must be able to:
> 
> - Correctly interpret, and modify as described below, any disk with a valid ms-dos or gpt disk label and partition table containing ext4 partitions, LVM and/or btrfs volumes, and/or software RAID arrays at RAID levels 0, 1 and 5 containing ext4 partitions 

In my opinion the installer does not interpret the PV correctly.

Version-Release number of selected component (if applicable):
20.25.1-1

How reproducible:
100%

Steps to Reproduce:
0. prepare a multi-disk VG with a striped LV spanning the disks
1. select one disk from the VG as installation destination
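Step 0 can be sketched as follows (a sketch only: the device paths /dev/vdb and /dev/vdc and the names testvg/stripedlv are assumptions, and the commands require root and two spare disks):

```shell
# Sketch of step 0; /dev/vdb, /dev/vdc, testvg and stripedlv are
# assumed names. Requires root and two spare, unused disks.
pvcreate /dev/vdb /dev/vdc                # initialize both disks as PVs
vgcreate testvg /dev/vdb /dev/vdc         # one VG spanning both PVs
lvcreate -i 2 -L 4G -n stripedlv testvg   # -i 2 stripes the LV across both PVs
```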

Actual results:
Installation proceeds without informing the user that there is a VG on top of the PV.

Expected results:
The user should be informed about the incomplete VG and given a chance to fix the problem, either by ignoring the disk or by selecting all PVs in the VG.
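As an illustration, such a warning could be driven by LVM's own reporting. This is only a sketch, not anaconda code; the disk path /dev/vda is an assumed example, and the commands need the LVM tools (pvs/vgs) installed:

```shell
# Sketch: warn if a disk is a PV in a VG that spans more than one PV.
# /dev/vda is an assumed example path; requires the LVM tools.
disk=/dev/vda
vg=$(pvs --noheadings -o vg_name "$disk" 2>/dev/null | tr -d ' ')
if [ -n "$vg" ]; then
    pv_count=$(vgs --noheadings -o pv_count "$vg" 2>/dev/null | tr -d ' ')
    if [ "$pv_count" -gt 1 ]; then
        echo "WARNING: $disk belongs to VG $vg, which spans $pv_count PVs"
    fi
fi
```

On a system where the disk is not a PV (or LVM is absent), the script prints nothing and exits cleanly.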

Additional info:
Comment 1 Marian Csontos 2013-10-24 02:14:09 EDT
And the other requirement not met is the following Alpha blocker:

> Disk selection
> 
> The user must be able to select which of the disks connected to the system
> will be affected by the installation process.
Comment 2 Mike Ruckman 2013-10-24 13:38:50 EDT
Discussed in the 2013-10-24 Go/No-Go meeting [1]. Rejected as a blocker or freeze exception. While unfortunate, this bug was judged not to violate any of the F20 Beta release criteria. As any fix is in a sensitive area of the code base, fixes would not be considered past freeze unless they are blocking the release.

[1] http://meetbot.fedoraproject.org/meetbot/fedora-meeting-2/2013-10-24/
Comment 3 David Lehman 2013-10-24 14:00:43 EDT
Please provide logs and screenshots to clarify what is being reported.

Are you talking about striped lvs or a vg with striped md pvs?
Comment 4 Chris Murphy 2013-10-24 16:25:39 EDT
Two new qcow2, unpartitioned:
pvcreate /dev/vd[ab]
vgcreate vg1 /dev/vd[ab]
lvcreate vg1 -l 10000 -n hello
Run installer and pick only one disk.

Reclaim Space UI makes it clear this is an lvmpv device, but it's not clear there is another member disk that makes up this vg.

Custom partitioning is similar, but while it says "Available Space 0 B" it allows me to add a 10GB root LV to this vg1 without error. Once I'm at the hub, I still can't click Begin Installation.

So I'm uncertain what the exact steps are to reproduce data loss or corruption, even though the UI handling could be better (maybe disallow selection of any multi-device volume members?).
Comment 5 Marian Csontos 2013-10-25 07:41:52 EDT
(In reply to Chris Murphy from comment #4)
> Two new qcow2, unpartitioned:
> pvcreate /dev/vd[ab]
> vgcreate vg1 /dev/vd[ab]
> lvcreate vg1 -l 10000 -n hello
> Run installer and pick only one disk.
> 
> Reclaim Space UI makes it clear this is an lvmpv device, but it's not clear
> there is another member disk that makes up this vg.

Yes, that was my point. Now you are going to lose half of your data.

In a situation where multiple VGs are spread over multiple disks, it is unsafe and error-prone not to select all disks from a VG, and one cannot be sure the disks shown are really what one thinks they are (see Bug 1023426).

> 
> Custom partitioning is similar, but while it says "Available Space 0 B" it
> allows me to add a 10GB root LV to this vg1 without error. Once I'm at the
> hub, I still can't click Begin Installation.

It does not see free space in other VGs. It seems the math in the space calculation is not 100% consistent (see Bug 1023402).

I will be unavailable for the next few days, so it would be beneficial if someone gave this a hard look in the meantime.

> So I'm uncertain what the steps are, exactly, to reproduce data loss or
> corruption,

The case I commented on above ^

> even though the UI handling could be better (maybe disallow
> selection of any multiple device volume members?)

One must be able to select multiple device volume members.

As I said, the installer should warn the user and allow selecting either none or all of the disks (or force-selecting one, but only "at your own risk; do not come crying to the list!").
Comment 6 Marian Csontos 2013-10-25 07:44:09 EDT
(In reply to David Lehman from comment #3)
> Please provide logs and screenshots to clarify what is being reported.
> 
> Are you talking about striped lvs or a vg with striped md pvs?

Oh sorry, striped LVs.

But the same applies to any other LV spread over multiple PVs (e.g. a linear LV larger than a physical disk, spanning multiple PVs).
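For example, a linear LV that fills a two-PV VG exhibits the same problem (a sketch, assuming a VG named testvg on two disks as in comment 4; requires root and the LVM tools):

```shell
# Sketch: a linear (default) LV filling a two-PV VG; the VG name
# testvg is an assumption. lvs shows which devices back each segment.
lvcreate -l 100%FREE -n linearlv testvg
lvs -o lv_name,devices testvg
```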
Comment 7 David Shea 2015-04-15 10:15:36 EDT

*** This bug has been marked as a duplicate of bug 1209223 ***
