Bug 961138

Summary: red banner ("failed to add device") when trying to add a mount point after filling up a previous LVM VG which is restricted to certain disks, even though the new mount point should use other free disks
Product: Fedora
Reporter: Reartes Guillermo <rtguille>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Priority: unspecified
Version: 19
CC: anaconda-maint-list, dshea, g.kaviyarasu, jonathan, mkolman, rtguille, sbueno, vanmeeuwen+fedora
Hardware: x86_64   
OS: Linux   
Doc Type: Bug Fix
Last Closed: 2014-12-19 20:35:45 UTC
Type: Bug
Attachments:
* anaconda.log
* storage.log
* program.log
* screenshot: F19b RC4, custom partitioning, red banner after trying to add a /home (it would go into another VG)
* anaconda.log (previous screenshot)
* program.log (previous screenshot)
* storage.log

Description Reartes Guillermo 2013-05-08 21:54:52 UTC
Created attachment 745464 [details]
anaconda.log

Description of problem:

I want to create this with Custom Partitioning:

* vda and vdb --> VG: nodora_os_vg    --> MD RAID1  -->  LVs: swap, /boot and / 
* vdc and vdd --> VG: nodora_data_vg  --> MD RAID0  -->  LVs: /home   

But once I have used all the space in 'nodora_os_vg', anaconda does not permit the creation of any new mount point, so it is not possible to create 'nodora_data_vg'. The red banner appears.
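
For reference, the same target layout expressed as plain mdadm/LVM commands (only a sketch of what I am after; the device and partition names are assumptions, anaconda creates everything itself during the install):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vda1 /dev/vdb1
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/vdc1 /dev/vdd1
  pvcreate /dev/md0 /dev/md1
  vgcreate nodora_os_vg /dev/md0        # holds swap, /boot and /
  vgcreate nodora_data_vg /dev/md1      # holds /home
  lvcreate -L 768M -n swap nodora_os_vg
  lvcreate -L 512M -n boot nodora_os_vg
  lvcreate -l 100%FREE -n root nodora_os_vg
  lvcreate -l 100%FREE -n home nodora_data_vg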

Version-Release number of selected component (if applicable):
F19b TC3 (19.24-1)
   
Steps to reproduce:

0. Reach the Main Hub, leave defaults at the Welcome Screen. (Or set English for F19b TC3 if it is not the default.)

1. Leave defaults at all spokes, wait and go to Storage: Installation Destination.

2. Select the 4 disks (vda, vdb, vdc, vdd). On a previous boot, I did a 'wipefs -a' on all disks, so the disks do not contain any data.

3. In Installation Options, select 'I want to review/modify my disk partitions before continuing.' and leave the Partition Scheme set to 'lvm'.


4. Click '+' to add a mount point, type 'swap', leave the size value blank and click 'Add mount point'.

5. Make sure that the 'swap' item is selected (it should be) and click 'Modify' to modify the Volume Group. (Since I selected LVM for the partition scheme, the device type should already be LVM.) The 'Configure Volume Group' dialog will appear.

6. Change the [Volume Group] Name; I used 'nodora_os_vg'.

7. Restrict the VG to vda and vdb (I used the device serial numbers I set in virt-manager, but it would be good if the device names vda, vdb, etc. were also shown) by clicking and ctrl+clicking the first two disks. Make sure visually that no other disk is also selected.

8. Change the RAID Level to: 'RAID1 (Redundancy)' and click 'save'.
9. Click 'update settings'. 

Making room for the /boot LV on nodora_os_vg:

10. Click the 'swap' entry, change the size to 768 and click 'update settings'. This will make room for the other LVs.

11. Click '+' to add a mount point, type '/boot', leave the size value blank and click 'Add mount point'.

12. Make sure that the '/boot' item is selected, set the Device Type to LVM and click 'Modify' once the button shows up.

13. In the 'Configure Volume Group' dialog, restrict the VG to vda and vdb (I used the device serial numbers I set in virt-manager, but it would be good if the device names vda, vdb, etc. were also shown) by clicking and ctrl+clicking the first two disks. Make sure visually that no other disk is also selected.

14. The 'RAID Level' should say 'RAID1 (Redundancy)'. Click 'save' and then 'update settings'.

Making room for the / LV on nodora_os_vg:

15. Click the '/boot' entry, change the size to 512 and click 'update settings'. This will make room for the other LVs.

16. Click '+' to add a mount point, type '/', leave the size value blank and click 'Add mount point'.

17. Make sure that the '/' item is selected, set the Device Type to LVM and click 'Modify' once the button shows up.

18. In the 'Configure Volume Group' dialog, restrict the VG to vda and vdb (I used the device serial numbers I set in virt-manager, but it would be good if the device names vda, vdb, etc. were also shown) by clicking and ctrl+clicking the first two disks. Make sure visually that no other disk is also selected.

Now verify that disks vdc and vdd are not used: click the '4 storage devices selected' button.

Both vda and vdb have 969.23 KB free space. That is OK.
Both vdc and vdd have 14.99 GB free space. That is OK.

Now I will create the VG 'nodora_data_vg' and use it for /home:

19. Click '+' to add a mount point, type '/home', leave the size value blank and click 'Add mount point'.

OK, I cannot. The red banner says "Failed to add new device. Click for details.". Clicking it only reveals 'failed to add device'.

20. Click '+' to add a mount point, type '/home', set the size to 1 and click 'Add mount point'.

OK, I still cannot. The red banner says "Failed to add new device. Click for details.". Clicking it only reveals 'failed to add device'. This should have worked; I still have free disks and intend to use them. At this point, I cannot add anything more.
OK, time for a workaround?

WORKAROUND #1: (FAILS)

21. Select the '/' entry again and reduce its size from 13.708 GB to 13.000 GB.
I hope this will let me create a mount point, switch it to a different VG later, and then resize the / mount point back.

22. Click '+' to add a mount point, type '/home', leave the size value blank and click 'Add mount point'.
Now it works... 

23. Select the '/home' entry (the Device Type will be set to LVM and it will be using nodora_os_vg) and click 'Modify'.

24. In the 'Configure Volume Group' dialog, restrict the VG to vdc and vdd (I used the device serial numbers I set in virt-manager, but it would be good if the device names vdc, vdd, etc. were also shown) by clicking and ctrl+clicking the last two disks. Make sure visually that no other disk is also selected.

25. Set 'nodora_data_vg' as the name for the new VG. Also set it to 'RAID0 (Performance)', then click 'save' and 'update settings'.

Hmm, the '4 storage devices selected' button shows that vda and vdb now have 14.99 GB free space and vdc, vdd have 7.49 GB.
Hmm, custom partitioning shows all entries belonging to 'nodora_data_vg'... I don't like it. Expanding the volume group dropdown shows that 'nodora_os_vg' is gone... The workaround for step 19 (red banner) failed.

WORKAROUND #2: (FAILS)

I will create a new VG from some entry, then switch back to the previous VG and then create a new mount point, hoping that anaconda will put it in the newly created VG, which should still have space:

While /boot is pseudo-selected (it is not actually selected, but it is the one shown):
21. Select the Volume Group and select 'create a new vg...'.
22. Choose a new name, 'nodora_data_vg', and restrict it to vdc, vdd.
23. Set it to RAID0, then click 'save' and 'update settings'.
24. Change the VG for /boot back to 'nodora_os_vg'.

It changes back, but 'nodora_data_vg' disappeared into thin air.
Trying to add a mount point now also results in the red banner.
And 'nodora_os_vg' is now on vdc, vdd instead of vda, vdb.

Actual results:
Cannot add more mount points (of any kind) even though there are more empty disks.

Expected results:
Being able to add them (or a better error description, if it is a user error).

Comment 1 Reartes Guillermo 2013-05-08 21:55:19 UTC
Created attachment 745465 [details]
storage.log

Comment 2 Reartes Guillermo 2013-05-08 21:55:41 UTC
Created attachment 745466 [details]
program.log

Comment 3 Reartes Guillermo 2013-05-08 22:05:06 UTC
From: https://fedoraproject.org/wiki/Fedora_19_Beta_Release_Criteria#Custom_partitioning

Custom partitioning:

* Create mount points backed by ext4 partitions, LVM volumes or btrfs volumes, or 
software RAID arrays at RAID levels 0, 1 and 5 containing ext4 partitions 

Once one reaches step 19, it is not possible to create anything more, leaving all remaining disks unused.

One can still delete one of the offending mount points (most likely /) and try to work around it somehow. I am not sure what to propose at the moment.

I will try to find another workaround.

Comment 4 Reartes Guillermo 2013-05-08 22:37:16 UTC
I tried to work around it but failed:

0. Reach the Main Hub, leave defaults at the Welcome Screen. (Or set English for F19b TC3 if it is not the default.)
1. Leave defaults at all spokes, wait and go to Storage: Installation Destination.
2. Select the 4 disks (vda, vdb, vdc, vdd). On a previous boot, I did a 'wipefs -a' on all disks, so the disks do not contain any data.
3. In Installation Options, select 'I want to review/modify my disk partitions before continuing.' and leave the Partition Scheme set to 'standard partition'.


4. Click '+' to add a mount point, type '/foo1', leave the size value blank and click 'Add mount point'.
5. Make sure that the '/foo1' item is selected (it should be) and click 'configure selected mount point'.
6. In the 'configure selected mount point' dialog, select the two FIRST disks. Make sure the other disks are not selected.
7. 'Touch' the label field and click 'update settings'.

7. Click '+' to add a mount point, type '/foo2', leave the size value blank and click 'Add mount point'.
8. Select item '/foo2' and click 'configure selected mount point'.
9. In the 'configure selected mount point' dialog, select the two LAST disks. Make sure the other disks are not selected.
10. 'Touch' the label field and click 'update settings'.

Check the '4 storage devices selected' dialog and make sure that /foo1 is on vda and /foo2 is on vdc.
OK, now I will try to fool anaconda into making two VGs.

11. Select item '/foo1', change its 'device type' to 'lvm' and, in Volume Group, select 'Create a new volume group'.
12. Set the name to 'nodora_os_vg', restrict it to the first two disks (vda, vdb) and set the RAID level to RAID1.
13. Click 'save' and 'update settings'.

14. Select item '/foo2', change its 'device type' to 'lvm' and, in Volume Group, select 'Create a new volume group'.
15. Set the name to 'nodora_data_vg', restrict it to the last two disks (vdc, vdd) and set the RAID level to RAID1.
Note: I found it was set to RAID4 for some odd reason...
16. Click 'save' and 'update settings'.

17. The yellow banner appeared; it says "Device reconfiguration failed. click for details". Clicking reveals that "raid1 requires at least 2 disks".

18. Select item '/foo2' (AGAIN), change its 'device type' to 'lvm' and, in Volume Group, select 'Create a new volume group'.
15. Set the name to 'nodora_data_vg', restrict it to one of the two last disks (vdc) and set the RAID level to NONE.
Note: I found it was set to RAID1 for some odd reason... even though it had failed?...
16. Click 'save' and 'update settings'.

17. The yellow banner appeared; it says "Device reconfiguration failed. click for details". I clicked 'update settings' again and /foo2 became a 1 MB partition on vdc1.

So I am not able to create more than one VG: red banner, yellow banner.

Comment 5 Reartes Guillermo 2013-05-08 22:48:37 UTC
I also tried /foo1 and /foo2 each having a one-disk VG, but that also fails. Currently anaconda does not permit the creation of more than one VG.

I also tried to switch/morph from btrfs or RAID devices to LVM, but that failed with worse errors.

Comment 6 Reartes Guillermo 2013-05-29 23:23:53 UTC
Created attachment 754584 [details]
screenshot F19b RC4, custom partitioning, red banner after trying to add a /home (it would go into another VG)

In a previous test, I was able to create this:

vda, vdb ................> os_vg.......>  /boot, swap, /, /data0
vdc, vdd ................> home_vg.....>  /home

In that case, no mirroring was selected. It was possible to add more than one VG that time.

But I then tried to reproduce this bug report again and found that it does still happen. The screenshot shows the red banner after trying to add a /home (without specifying any size).

Unlike the previous test, in this one the os_vg RAID level is mirror.

Comment 7 Reartes Guillermo 2013-05-29 23:26:15 UTC
Created attachment 754585 [details]
anaconda.log (previous screenshot)

Comment 8 Reartes Guillermo 2013-05-29 23:26:56 UTC
Created attachment 754586 [details]
program.log (previous screenshot)

Comment 9 Reartes Guillermo 2013-05-29 23:27:26 UTC
Created attachment 754588 [details]
storage.log

Comment 10 Reartes Guillermo 2013-05-29 23:48:49 UTC
storage.log: 

20:19:34,065 DEBUG storage.ui: vg os_vg has 0MB free
20:19:34,066 DEBUG storage.ui: Adding os_vg-home/0MB to os_vg
20:19:34,066 INFO storage.ui: added lvmlv os_vg-home (id 41) to device tree
20:19:34,067 INFO storage.ui: registered action: [59] Create Device lvmlv os_vg-home (id 41)
20:19:34,067 DEBUG storage.ui: getFormat('None') returning DeviceFormat instance
20:19:34,068 INFO storage.ui: registered action: [60] Create Format ext4 filesystem mounted at /home on lvmlv os_vg-home (id 41)
20:19:34,068 INFO storage.ui: removed lvmlv os_vg-home (id 41) from device tree
20:19:34,070 DEBUG storage.ui:                  LVMVolumeGroupDevice.removeChild: kids: 4 ; name: os_vg ;
20:19:34,070 INFO storage.ui: registered action: [61] Destroy Device lvmlv os_vg-home (id 41)
20:19:34,071 ERR storage.ui: failed to configure device factory: failed to create device
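
Reading the excerpt, the device factory appears to try to place the new home LV into the existing os_vg (which it reports as having 0MB free) and fails to create it there, instead of putting it into a new VG on the still unused disks.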

Comment 11 Reartes Guillermo 2013-05-30 00:45:11 UTC
Found a workaround:

If the issue in this bug report is not fixed by final, at least the workaround should be documented.


* /boot
 --> add mount point, '/boot' and '512' for size
 --> change it to LVM, edit LVM 'Modify...'
 --> change the VG name to 'os_vg'
 --> restrict it to vda, vdb
 --> set it to RAID1 and return to Custom Partitioning.

* SWAP
 --> add mount point, 'swap' and '768' for size
 --> it should be automatically part of 'os_vg' VG

* /
 --> add mount point, '/' and do not specify size
 --> it should be automatically part of 'os_vg' VG

*WORKAROUND_BEGIN

* Select any of the previous mount points and reduce it to allow at least a 1 MB new mount point.

* /home (on the home_vg)
 --> add mount point, '/home' and '1' for size
 --> Select to create a new VG
 --> set the VG name to 'home_vg'
 --> restrict it to vdc, vdd
 --> set it to RAID1 and return to Custom Partitioning.
 --> do 'update settings'

* Select the previously selected mount point and add the 1 MB back.

*WORKAROUND_END

* Press 'Done' and install Fedora.

Installed Guest via the workaround:

# pvs
  PV         VG      Fmt  Attr PSize  PFree
  /dev/md125 home_vg lvm2 a--  14.64g 4.00m
  /dev/md127 os_vg   lvm2 a--  14.64g    0 

# vgs
  VG      #PV #LV #SN Attr   VSize  VFree
  home_vg   1   1   0 wz--n- 14.64g 4.00m
  os_vg     1   3   0 wz--n- 14.64g    0 

# lvs
  LV   VG      Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert
  home home_vg -wi-ao---  14.63g                                           
  boot os_vg   -wi-ao--- 512.00m                                           
  root os_vg   -wi-ao---  13.39g                                           
  swap os_vg   -wi-ao--- 768.00m 

# mdadm --detail /dev/md125
/dev/md125:
        Version : 1.2
  Creation Time : Wed May 29 20:54:49 2013
     Raid Level : raid1
     Array Size : 15350656 (14.64 GiB 15.72 GB)
  Used Dev Size : 15350656 (14.64 GiB 15.72 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed May 29 21:38:13 2013
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:pv02
           UUID : dacc4175:d3ce3fc0:598fc5f0:5458e57f
         Events : 476

    Number   Major   Minor   RaidDevice State
       0     252       33        0      active sync   /dev/vdc1
       1     252       49        1      active sync   /dev/vdd1

# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Wed May 29 20:55:04 2013
     Raid Level : raid1
     Array Size : 15350656 (14.64 GiB 15.72 GB)
  Used Dev Size : 15350656 (14.64 GiB 15.72 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed May 29 21:41:49 2013
          State : active, resyncing 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 96% complete

           Name : localhost.localdomain:pv01
           UUID : a4a456c1:cdf9a0b4:daf4669d:d4342fde
         Events : 526

    Number   Major   Minor   RaidDevice State
       0     252        1        0      active sync   /dev/vda1
       1     252       17        1      active sync   /dev/vdb1

Comment 12 David Shea 2014-12-08 21:53:14 UTC
Does this still occur with F21?

Comment 13 Red Hat Bugzilla 2023-09-14 01:44:02 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days