Description of problem:
Booted the Workstation DVD, then selected the "Install to disk" option. The failure occurs without doing anything else.

Version-Release number of selected component:
anaconda-core-29.24.7-1.fc29.x86_64

The following was filed automatically by anaconda:
anaconda 29.24.7 exception report
Traceback (most recent call first):
  File "/usr/lib/python3.7/site-packages/blivet/devicetree.py", line 158, in _add_device
    raise ValueError("device is already in tree")
  File "/usr/lib/python3.7/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/blivet/populator/helpers/partition.py", line 112, in run
    self._devicetree._add_device(device)
  File "/usr/lib/python3.7/site-packages/blivet/populator/populator.py", line 264, in handle_device
    device = helper_class(self, info).run()
  File "/usr/lib/python3.7/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/blivet/populator/populator.py", line 461, in _populate
    self.handle_device(dev)
  File "/usr/lib/python3.7/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/blivet/populator/populator.py", line 413, in populate
    self._populate()
  File "/usr/lib/python3.7/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/blivet/blivet.py", line 161, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)
  File "/usr/lib/python3.7/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.7/site-packages/pyanaconda/storage/osinstall.py", line 1738, in reset
    super().reset(cleanup_only=cleanup_only)
  File "/usr/lib/python3.7/site-packages/blivet/threads.py", line 53, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.7/site-packages/pyanaconda/storage/osinstall.py", line 2298, in storage_initialize
    storage.reset()
  File "/usr/lib64/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.7/site-packages/pyanaconda/threading.py", line 286, in run
    threading.Thread.run(self)
ValueError: device is already in tree

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python3 /sbin/anaconda --liveinst --method=livecd:/dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=/images/pxeboot/vmlinuz root=live:CDLABEL=Fedora-WS-Live-29-1-2 rd.live.image quiet
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.18.16-300.fc29.x86_64
other involved packages: python3-blivet-3.1.1-2.fc29.noarch, python3-libs-3.7.0-9.fc29.x86_64
product:        Fedora
release:        Fedora release 29 (Twenty Nine)
type:           anaconda
version:        29
Created attachment 1509896 [details] File: anaconda-tb
Created attachment 1509897 [details] File: anaconda.log
Created attachment 1509898 [details] File: dbus.log
Created attachment 1509899 [details] File: environ
Created attachment 1509900 [details] File: journalctl
Created attachment 1509901 [details] File: lsblk_output
Created attachment 1509902 [details] File: lvm.log
Created attachment 1509903 [details] File: nmcli_dev_list
Created attachment 1509904 [details] File: os_info
Created attachment 1509905 [details] File: program.log
Created attachment 1509906 [details] File: storage.log
Created attachment 1509907 [details] File: ifcfg.log
It seems to be an issue in the storage configuration library. Reassigning to blivet.
Further research indicated this issue may be due to an old RAID set. Two of the disks were once a shadowed (mirrored) pair, but they have not been used as such for more than a year, since the system was upgraded from Fedora 14 to CentOS 7 with no issues. Only one of the mirrored disks was used; the other was left unmounted. To test whether the split RAID set was causing the installation problem, I unplugged one of the disks. Installation then proceeded with no further issues. After completing the installation, I plugged the disk back in, rebooted, and everything works as expected. The second disk of the old RAID set is not used and is still not mounted. I will reformat it at some point to make sure no further problems arise. From the submitter's viewpoint, this issue can be considered resolved.
Hi Randy, this exception usually happens when blivet encounters a duplicate UUID. Blivet uses the UUID for device identification and gets confused when the same value is encountered twice. In your case, it seems that sdc and sdd were the culprits. The most common cause of duplicate UUIDs is cloning disks (e.g. for RAID), which would explain why temporarily disconnecting the disk helped. I am afraid we cannot do much in this case, although in newer versions of blivet this issue at least gets a better description. I am closing the bug, but feel free to reopen it if you have any questions. Have a nice day, Jan
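To illustrate the failure mode described above: this is not blivet's actual code, just a minimal sketch of why a tree that keys devices by UUID rejects the second member of a cloned pair. The device names and the UUID value below are hypothetical.

```python
# Minimal sketch (NOT blivet's real implementation) of the duplicate-UUID
# failure mode: the tree keys devices by UUID, so a second device carrying
# the same UUID looks like a re-add and is rejected during population.

class DeviceTree:
    def __init__(self):
        self._devices = {}  # maps uuid -> device name

    def _add_device(self, name, uuid):
        # Mirrors the check behind "device is already in tree"
        if uuid in self._devices:
            raise ValueError("device is already in tree")
        self._devices[uuid] = name


tree = DeviceTree()
# Hypothetical UUID; a mirrored/cloned partition on sdd reports the
# same value as the one on sdc, so the second add fails.
tree._add_device("sdc1", "0b56138b-6124-4ec4-a961-3a783cb60bd8")
try:
    tree._add_device("sdd1", "0b56138b-6124-4ec4-a961-3a783cb60bd8")
except ValueError as e:
    print(e)  # device is already in tree
```

On a real system, duplicates like this can be spotted by comparing the UUID column of `lsblk -o NAME,FSTYPE,UUID` (or the output of `blkid`) across disks.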