Bug 569469 - ValueError: Cannot remove non-leaf device 'vda5'
Summary: ValueError: Cannot remove non-leaf device 'vda5'
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: anaconda
Version: 14
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: David Lehman
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: anaconda_trace_hash:eb6ce36a1e86c06ee...
Depends On:
Blocks:
 
Reported: 2010-03-01 14:54 UTC by James Laska
Modified: 2013-09-02 06:45 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-08-16 22:40:24 UTC
Type: ---
Embargoed:


Attachments
Attached traceback automatically from anaconda. (321.73 KB, text/plain)
2010-03-01 14:54 UTC, James Laska
Attached traceback automatically from anaconda. (268.94 KB, text/plain)
2010-05-03 18:18 UTC, James Laska
Attached traceback automatically from anaconda. (3.24 MB, text/plain)
2011-02-02 02:55 UTC, Gahn Hye Nun

Description James Laska 2010-03-01 14:54:02 UTC
The following was filed automatically by anaconda:
anaconda 13.32 exception report
Traceback (most recent call first):
  File "/usr/lib/anaconda/storage/devicetree.py", line 716, in _removeDevice
    raise ValueError("Cannot remove non-leaf device '%s'" % dev.name)
  File "/usr/lib/anaconda/storage/devicetree.py", line 771, in registerAction
    self._removeDevice(action.device)
  File "/usr/lib/anaconda/storage/__init__.py", line 789, in destroyDevice
    self.devicetree.registerAction(action)
  File "/usr/lib/anaconda/partIntfHelpers.py", line 143, in doDeleteDevice
    storage.destroyDevice(device)
  File "/usr/lib/anaconda/iw/partition_gui.py", line 1290, in deleteCB
    device):
ValueError: Cannot remove non-leaf device 'vda5'
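
For context, the check that raises this error amounts to something like the sketch below (a simplified illustration; the attribute names are assumptions, not necessarily the exact anaconda storage code). A device can only be removed from the device tree while nothing else depends on it, so 'vda5' is rejected because some other device object still holds a reference to it.

    # Simplified sketch of the leaf check in devicetree.py; "kids" is an
    # assumed name for the dependent-device counter, not necessarily the
    # real attribute.
    def _removeDevice(self, dev):
        if dev.kids > 0:
            # another device (e.g. a partially constructed RAID array)
            # still references this partition, so it is not a leaf
            raise ValueError("Cannot remove non-leaf device '%s'" % dev.name)
        self._devices.remove(dev)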

Comment 1 James Laska 2010-03-01 14:54:05 UTC
Created attachment 397084 [details]
Attached traceback automatically from anaconda.

Comment 2 James Laska 2010-03-01 14:57:07 UTC
= Steps to reproduce =

1. Follow the test case instructions at https://fedoraproject.org/wiki/QA/TestCases/PartitioningUsrOnRaid5
2. Start with a virt-install guest using two 10G drives.

Drives were previously formatted for another RAID test case as follows ...

    sh-4.1# parted /dev/vda -s p
    Model: Virtio Block Device (virtblk)
    Disk /dev/vda: 10.7GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system     Flags
     1      32.3kB  524MB   524MB   primary  ext4            boot
     2      524MB   1598MB  1074MB  primary  linux-swap(v1)
     3      1598MB  10.7GB  9138MB  primary                  raid

    sh-4.1# parted /dev/vdb -s p
    Model: Virtio Block Device (virtblk)
    Disk /dev/vdb: 10.7GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start   End     Size    Type     File system     Flags
     1      32.3kB  1074MB  1074MB  primary  linux-swap(v1)
     2      1074MB  10.7GB  9663MB  primary                  raid


3. Reformat /boot and swap
4. Attempt to create a RAID5 device ... it should fail since there are only 2 RAID members
5. Go back and attempt to delete /dev/vda3

Comment 3 Adam Williamson 2010-04-23 16:15:16 UTC
Agreed at the 04/16 and 04/23 blocker review meetings that this is a blocker if it consistently crashes after you try to create an invalid RAID array and then go back and re-edit the layout. jlaska will check that.



-- 
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers

Comment 4 James Laska 2010-05-03 18:18:20 UTC
Created attachment 411083 [details]
Attached traceback automatically from anaconda.

Comment 5 James Laska 2010-05-03 18:25:39 UTC
Easy to reproduce using the instructions from comment #2:

1. First install system using a RAID1 configuration as follows

# Partition clearing information
clearpart --all --initlabel 
# Disk partitioning information
part /boot --asprimary --fstype="ext3" --size=500 --label=BOOT
part swap --fstype="swap" --recommended --label=SWAP
part /multi-stage --fstype="ext3" --size=100 --label=MULTISTAGE
part raid.01 --grow --size=2048
part raid.02 --grow --size=2048
raid / --device=0 --fstype="ext3" --level=RAID1 raid.01 raid.02

2. Next, initiate a new install, and choose 'custom' partitioning
3. Edit, and choose to reformat existing '/boot' and swap partitions
4. Delete existing /dev/md0, and attempt to create a RAID5 ... it should complain since there are only 2 RAID members (3 are needed for RAID5).
5. Attempt to delete existing RAID members (starting with vda3) 

The easy workaround for this is to *skip* step #4.

Comment 6 Adam Williamson 2010-05-03 19:49:48 UTC
The important question was whether this happens a) whenever you edit a RAID config, b) whenever you edit a *faulty* RAID config, or c) only with this specific RAID config...



-- 
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers

Comment 7 Adam Williamson 2010-05-04 22:52:06 UTC
james?

Comment 8 James Laska 2010-05-05 00:24:18 UTC
Sorry, I don't know the answer to those questions.  I was hoping the traceback would offer some insight.

(In reply to comment #6)
> The important question was whether this happens
>  a) whenever you edit a RAID config,

I haven't observed that this is the case.  I've managed to remove RAID members and edit RAID devices during other tests.

>  b) whenever you edit a *faulty* RAID config, or

Possibly, but not tested

>  c) only with this specific RAID config...

This is all I know at this point

Comment 9 Jesse Keating 2010-05-05 00:47:31 UTC
This seems pretty convoluted to reproduce; I'm not sure I'd consider it a blocker.

Comment 10 Adam Williamson 2010-05-05 01:08:05 UTC
if it only happens with this particular config, I'd agree not a blocker. if it happens every time you try to edit a faulty RAID config that's a bit more borderline, but honestly I wouldn't hate myself if we shipped with that (you can always reboot and NOT SCREW UP the next time...)

Comment 11 James Laska 2010-05-05 14:15:51 UTC
What I know ...
 * I'm able to edit existing RAID devices (changing their mountpoint and format)
 * I'm able to add and remove RAID devices (e.g. /dev/md0)
 * I'm able to add and remove RAID members
 * When attempting to re-partition a RAID1 system into a RAID5 system ...
   1. the installer correctly warns that at least 3 RAID members are needed 
   2. the installer fails while attempting to delete existing RAID members

This is certainly a valid partitioning scenario, albeit an unusual and likely uncommon one.  Since the operations I am performing are destructive to the disk anyway, we can recommend removing the RAID device first as a workaround.

I'm happy with documenting this issue and workaround on Common_F13_Bugs, and removing this issue from the list.  Objections?

Comment 12 Adam Williamson 2010-05-05 15:39:55 UTC
I agree. We can say that it's not a big infringement of the criterion because it's not impossible to actually use the layout you want; it's just a question of how you get there (it only fails if you make mistakes along the way).

Comment 13 James Laska 2010-05-05 16:12:47 UTC
Thanks Adam.  I've discussed this with release engineering (jkeating) and anaconda-devel (dlehman).  Both are comfortable that we've pinpointed the failure conditions in this bug, and it is safe to release with this problem documented on http://fedoraproject.org/wiki/Common_F13_bugs

I'm adding keyword:CommonBugs.  Either Adam or I will document this issue on the wiki prior to release.

Comment 14 Bug Zapper 2010-07-30 10:57:02 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 14 development cycle.
Changing version to '14'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 15 Gahn Hye Nun 2011-02-02 02:55:04 UTC
Created attachment 476511 [details]
Attached traceback automatically from anaconda.

Comment 16 Zdenek Wagner 2012-03-18 23:04:06 UTC
(In reply to comment #10)
> if it only happens with this particular config, I'd agree not a blocker. if it
> happens every time you try to edit a faulty RAID config that's a bit more
> borderline, but honestly I wouldn't hate myself if we shipped with that (you
> can always reboot and NOT SCREW UP the next time...)

It happened to me when installing Fedora 16. I proceeded in the following way:

1. Decided how much space I need for a partition (e.g. /boot) that must not be within LVM

2. Created RAID partitions on both disks using fixed size

3. Created RAID1 device

When finished with the non-LVM partitions I wanted to assign the rest to a RAID1 device with LVM. On both disks I created RAID partitions asking to fill the remaining space (both were of the same size), then created the RAID device, but anaconda told me that it cannot create an LVM physical volume on RAID1 because it must not be growable. When I tried to delete this RAID device, it crashed.

Next I tried without LVM. I created several RAID1 devices with fixed sizes. I wanted to mount the largest as /home, so again I asked to fill the remaining space on each disk and tried to create a RAID1 device mounted at /home and formatted as ext4, but anaconda complained that a RAID1 device cannot be growable. When trying to delete this RAID device, it crashed again.

The key point is that selecting "fill the remaining space" somehow sets a growable flag. If I ask for a fixed size, it works both with and without LVM, and partitions without this growable flag can be deleted without a crash.

Comment 17 David Lehman 2012-05-15 16:35:13 UTC
This error is caused by dangling object references left over when the checks for a valid member count or for growable members fail, causing an early exit from the MDRaidArrayDevice constructor. I have a patch that fixes the bug in my testing. It's probably too late for F17, but I'll see what the people controlling the release think about it.
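
The failure pattern described here can be illustrated with a small Python sketch (the class name comes from this comment and the traceback, but the method and attribute names are simplified assumptions, not the actual anaconda code): the constructor takes a child reference on each member partition before validating the request, so when validation fails and the constructor exits early, the members keep a dangling reference and are no longer leaf devices.

    class Member:
        # Stand-in for a member partition such as vda3.
        def __init__(self, name):
            self.name = name
            self.kids = 0            # number of devices that depend on this one

        def addChild(self):
            self.kids += 1

        def removeChild(self):
            self.kids -= 1


    class MDRaidArrayDevice:
        def __init__(self, level, members):
            self.members = []
            for member in members:
                member.addChild()    # reference taken before any validation
                self.members.append(member)

            if level == 5 and len(members) < 3:
                # Early exit without calling removeChild() on the members:
                # they keep the extra reference, the device tree still sees
                # them as non-leaf devices, and later deletion attempts raise
                # "Cannot remove non-leaf device".
                raise ValueError("RAID5 requires at least 3 members")

A fix along the lines described would either validate the member count before taking the references, or call removeChild() on the already-linked members before re-raising, so that a failed array creation leaves the device tree unchanged.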

Comment 18 Fedora End Of Life 2012-08-16 22:40:27 UTC
This message is a notice that Fedora 14 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 14. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained.  At this time, all open bugs with a Fedora 'version'
of '14' have been closed as WONTFIX.

(Please note: Our normal process is to give advance warning of this 
occurring, but we forgot to do that. A thousand apologies.)

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, feel free to reopen 
this bug and simply change the 'version' to a later Fedora version.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we were unable to fix it before Fedora 14 reached end of life. If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora, you are encouraged to click on 
"Clone This Bug" (top right of this page) and open it against that 
version of Fedora.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 19 Zdenek Wagner 2012-08-16 22:47:21 UTC
The bug was originally reported against Fedora 14, but the same thing happened to me in Fedora 16. The bug reporting tool automatically detected it as a duplicate and did not advance the version number.

