Bug 1614592 - ValueError: device is already in tree
Summary: ValueError: device is already in tree
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: python-blivet
Version: 28
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Blivet Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: abrt_hash:529906f98584104b2cee39641c8...
Depends On:
Blocks:
 
Reported: 2018-08-10 02:31 UTC by hardy.heroin
Modified: 2019-05-28 22:05 UTC
CC List: 13 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2019-05-28 22:05:39 UTC
Type: ---
Embargoed:


Attachments
File: anaconda-tb (1.63 MB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: anaconda.log (31.17 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: dbus.log (9.33 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: environ (695 bytes, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: journalctl (927.78 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: lsblk_output (4.37 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: nmcli_dev_list (1.92 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: os_info (556 bytes, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: program.log (115.18 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: storage.log (558.85 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin
File: ifcfg.log (5.76 KB, text/plain)
2018-08-10 02:31 UTC, hardy.heroin

Description hardy.heroin 2018-08-10 02:31:22 UTC
Description of problem:
I started the live installer, waited a few seconds, and this crash occurred; it is reproducible.

In an earlier bug report (there are many of them) I found a clue that the crash is caused by several devices sharing the same UUID, and I can confirm that is the case here.

For reference, this is the bug report that provided the clue:
https://bugzilla.redhat.com/show_bug.cgi?id=1472999
I note this live installer ships the following blivet package versions (per pip3 freeze):
blivet (3.0.0b1)
blivet-gui (2.1.8)
The error occurs at this line:
https://github.com/storaged-project/blivet/blob/3.0-release/blivet/devicetree.py#L158
The relevant code being:

        if newdev.uuid and newdev.uuid in [d.uuid for d in self._devices] and \
           not isinstance(newdev, NoDevice):
            raise ValueError("device is already in tree")

These are the relevant RAID1 devices. Here is the output of blkid:

[liveuser@localhost-live ~]$ blkid
# non-relevant entries omitted
/dev/sdc1: UUID="12c3222d-2959-51ba-1d8f-0b64164d4fed" UUID_SUB="f4da13e4-d837-2836-20ac-3779cee2456c" LABEL="myraidname" TYPE="linux_raid_member" PARTUUID="25899521-7fdd-4731-b15f-891c64b8b1c5"
/dev/sdd1: UUID="12c3222d-2959-51ba-1d8f-0b64164d4fed" UUID_SUB="23fdef09-9ddd-392c-d605-6966ddd07101" LABEL="myraidname" TYPE="linux_raid_member" PARTUUID="25899521-7fdd-4731-b15f-891c64b8b1c5"
/dev/sde: UUID="7956d543-45c7-4344-9816-8cfc8a622355" UUID_SUB="e80a93a0-4732-4856-a164-1e277aea68a7" TYPE="btrfs"
/dev/sdf: UUID="7956d543-45c7-4344-9816-8cfc8a622355" UUID_SUB="b992d41e-aa97-4a38-8dfc-44c7cd72d8ae" TYPE="btrfs"
/dev/md127p1: LABEL="bigraid" UUID="55d77587-4737-45c4-a72c-6b343acff43b" TYPE="ext4" PARTLABEL="raidhome" PARTUUID="a6e13a91-6f14-4f6d-9ae7-750b5780dafc"

I note that while the UUIDs of the RAID1 member devices are identical (not deliberately set by me, by the way), the UUID_SUB is always unique.
Perhaps blivet should key devices in the tree on that value instead?
As far as I understand, RAID member devices sharing the same UUID are not uncommon:
https://serverfault.com/a/205790
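
To make the suggestion concrete, here is a minimal, self-contained sketch (plain Python, not blivet's actual code or API; the member data is copied from the blkid output above). It shows why deduplicating on UUID alone rejects the second mirror leg, while a composite (UUID, UUID_SUB) key accepts both:

# Sketch only, not blivet's implementation. Member data taken from blkid above.
members = [
    {"name": "/dev/sdc1",
     "uuid": "12c3222d-2959-51ba-1d8f-0b64164d4fed",
     "uuid_sub": "f4da13e4-d837-2836-20ac-3779cee2456c"},
    {"name": "/dev/sdd1",
     "uuid": "12c3222d-2959-51ba-1d8f-0b64164d4fed",
     "uuid_sub": "23fdef09-9ddd-392c-d605-6966ddd07101"},
]

def add_devices(devices, key):
    """Add devices one by one, rejecting any device whose key is already present."""
    seen, tree = set(), []
    for dev in devices:
        k = key(dev)
        if k in seen:
            raise ValueError("device is already in tree")
        seen.add(k)
        tree.append(dev)
    return tree

# Keying on UUID alone reproduces the crash: /dev/sdd1 is rejected.
try:
    add_devices(members, key=lambda d: d["uuid"])
except ValueError as err:
    print("uuid only:", err)

# Keying on (UUID, UUID_SUB) accepts both mirror legs.
ok = add_devices(members, key=lambda d: (d["uuid"], d["uuid_sub"]))
print("uuid + uuid_sub:", [d["name"] for d in ok])

Whether UUID_SUB is a safe tree key for every device type is of course for the blivet maintainers to judge; the sketch only illustrates the collision.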

Here are all the related bug reports:
https://duckduckgo.com/?q=blivet+%22ValueError%3A+device+is+already+in+tree%22+site%3Aredhat.com&t=hj&ia=web

Version-Release number of selected component:
anaconda-core-28.22.10-1.fc28.x86_64

The following was filed automatically by anaconda:
anaconda 28.22.10 exception report
Traceback (most recent call first):
  File "/usr/lib/python3.6/site-packages/blivet/devicetree.py", line 158, in _add_device
    raise ValueError("device is already in tree")
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/partition.py", line 112, in run
    self._devicetree._add_device(device)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 263, in handle_device
    device = helper_class(self, info).run()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 122, in _add_slave_devices
    self.handle_device(slave_info)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/mdraid.py", line 55, in run
    self._devicetree._add_slave_devices(self.data)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 263, in handle_device
    device = helper_class(self, info).run()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/helpers/mdraid.py", line 216, in run
    self._devicetree.handle_device(array_info, update_orig_fmt=True)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 303, in handle_format
    helper_class(self, info, device).run()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 275, in handle_device
    self.handle_format(info, device)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 462, in _populate
    self.handle_device(dev)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/populator/populator.py", line 412, in populate
    self._populate()
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/blivet/blivet.py", line 161, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/pyanaconda/storage/osinstall.py", line 1670, in reset
    super().reset(cleanup_only=cleanup_only)
  File "/usr/lib/python3.6/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/pyanaconda/storage/osinstall.py", line 2193, in storage_initialize
    storage.reset()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.6/site-packages/pyanaconda/threading.py", line 291, in run
    threading.Thread.run(self)
ValueError: device is already in tree

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=/images/pxeboot/vmlinuz root=live:CDLABEL=Fedora-SciK-Live-28-1-1 rd.live.image quiet
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.16.3-301.fc28.x86_64
other involved packages: python3-libs-3.6.5-1.fc28.x86_64, python3-blivet-3.0.0-0.6.1.b1.fc28.noarch
product:        Fedora
release:        Fedora release 28 (Twenty Eight)
type:           anaconda
version:        28

Comment 1 hardy.heroin 2018-08-10 02:31:32 UTC
Created attachment 1474853
File: anaconda-tb

Comment 2 hardy.heroin 2018-08-10 02:31:34 UTC
Created attachment 1474854
File: anaconda.log

Comment 3 hardy.heroin 2018-08-10 02:31:36 UTC
Created attachment 1474855
File: dbus.log

Comment 4 hardy.heroin 2018-08-10 02:31:37 UTC
Created attachment 1474856
File: environ

Comment 5 hardy.heroin 2018-08-10 02:31:41 UTC
Created attachment 1474857
File: journalctl

Comment 6 hardy.heroin 2018-08-10 02:31:43 UTC
Created attachment 1474858
File: lsblk_output

Comment 7 hardy.heroin 2018-08-10 02:31:44 UTC
Created attachment 1474859
File: nmcli_dev_list

Comment 8 hardy.heroin 2018-08-10 02:31:46 UTC
Created attachment 1474860
File: os_info

Comment 9 hardy.heroin 2018-08-10 02:31:48 UTC
Created attachment 1474861
File: program.log

Comment 10 hardy.heroin 2018-08-10 02:31:51 UTC
Created attachment 1474862
File: storage.log

Comment 11 hardy.heroin 2018-08-10 02:31:53 UTC
Created attachment 1474863
File: ifcfg.log

Comment 12 hardy.heroin 2018-08-10 02:43:45 UTC
For completeness' sake (I will probably work around this issue by temporarily disconnecting half of the RAID1 devices), here is the mdadm output.
NB: I don't know why the same array shows up twice, as md127 and md127p1.

[root@localhost-live]# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sat Mar 17 19:18:49 2018
     Raid Level : raid1
     Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Aug  9 21:20:48 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myraidname
           UUID : 12c3222d:295951ba:1d8f0b64:164d4fed
         Events : 1229519

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       2       8       49        1      active sync   /dev/sdd1

[root@localhost-live]# mdadm -D /dev/md127p1
/dev/md127p1:
        Version : 1.2
  Creation Time : Sat Mar 17 19:18:49 2018
     Raid Level : raid1
     Array Size : 3906884608 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Aug  9 21:20:48 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myraidname
           UUID : 12c3222d:295951ba:1d8f0b64:164d4fed
         Events : 1229519

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       2       8       49        1      active sync   /dev/sdd1


The btrfs RAID1 devices do not show up as md devices:
/dev/sde: UUID="7956d543-45c7-4344-9816-8cfc8a622355" UUID_SUB="e80a93a0-4732-4856-a164-1e277aea68a7" TYPE="btrfs"
/dev/sdf: UUID="7956d543-45c7-4344-9816-8cfc8a622355" UUID_SUB="b992d41e-aa97-4a38-8dfc-44c7cd72d8ae" TYPE="btrfs"
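
As an aside on the md127 vs. md127p1 duplication: here is a small sketch (plain Python against sysfs; this reflects my assumption about the layout, not something taken from the attached logs) of how to check whether one of the two names is simply a partition of the other:

# Assumption: md127p1 is the first partition on md127, so both names describe the
# same underlying array. Sysfs exposes a 'partition' file only for partitions.
import os

def parent_block_device(name):
    """Return the parent block device of a partition, or None for a whole device."""
    path = os.path.realpath("/sys/class/block/%s" % name)
    if not os.path.exists(os.path.join(path, "partition")):
        return None  # not a partition
    return os.path.basename(os.path.dirname(path))

# Expected on a system like the one described here (assumption, not verified):
#   parent_block_device("md127p1") -> "md127"
#   parent_block_device("md127")   -> None

If that is the case here, mdadm -D reporting identical details for both names is expected, since mdadm reads the same array metadata in both cases.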

Comment 13 Jiri Konecny 2018-08-10 07:18:16 UTC
Based on the traceback and your description, this bug is better suited for our storage library; its maintainers should be able to help you better than we can.

Changing components.

Comment 14 Ben Cotton 2019-05-02 21:10:40 UTC
This message is a reminder that Fedora 28 is nearing its end of life.
On 2019-May-28 Fedora will stop maintaining and issuing updates for
Fedora 28. It is Fedora's policy to close all bug reports from releases
that are no longer maintained. At that time this bug will be closed as
EOL if it remains open with a Fedora 'version' of '28'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 28 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 15 Ben Cotton 2019-05-28 22:05:39 UTC
Fedora 28 changed to end-of-life (EOL) status on 2019-05-28. Fedora 28 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

