Bug 1400318 - pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
Summary: pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mappe...
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: python-blivet
Version: 29
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Blivet Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: abrt_hash:88cde4551878b87a6935c32588d...
Duplicates: 1406239 1410216 1414104 1421500 1423404 1443244 1467095 1574190 1574220
Depends On:
Blocks:
 
Reported: 2016-11-30 21:43 UTC by Mark
Modified: 2019-11-27 23:33 UTC
CC: 30 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-27 23:33:22 UTC
Type: ---


Attachments
  anaconda-tb (1.52 MB, text/plain) - 2016-11-30 21:43 UTC, Mark
  anaconda.log (88.85 KB, text/plain) - 2016-11-30 21:43 UTC, Mark
  environ (508 bytes, text/plain) - 2016-11-30 21:43 UTC, Mark
  journalctl (677.01 KB, text/plain) - 2016-11-30 21:43 UTC, Mark
  lsblk_output (5.01 KB, text/plain) - 2016-11-30 21:43 UTC, Mark
  nmcli_dev_list (1.58 KB, text/plain) - 2016-11-30 21:43 UTC, Mark
  os_info (518 bytes, text/plain) - 2016-11-30 21:43 UTC, Mark
  program.log (194.92 KB, text/plain) - 2016-11-30 21:43 UTC, Mark
  storage.log (536.60 KB, text/plain) - 2016-11-30 21:43 UTC, Mark
  ifcfg.log (10.65 KB, text/plain) - 2016-11-30 21:44 UTC, Mark
  Screenshot from 2016-11-30 21-39-50.png (122.52 KB, application/octet-stream) - 2016-11-30 21:44 UTC, Mark

Description Mark 2016-11-30 21:43:07 UTC
Description of problem:
At the first stage of installing to the hard drive from USB: after selecting the language and clicking 'Continue', the error arises.
The 7.5GB device is the USB drive. The error appears identical whether I create the USB drive with mediawriter or with
livecd-iso-to-disk.

Version-Release number of selected component:
anaconda-core-25.20.8-1.fc25.x86_64

The following was filed automatically by anaconda:
anaconda 25.20.8-1 exception report
Traceback (most recent call first):
  File "/usr/lib64/python3.5/site-packages/pyanaconda/packaging/livepayload.py", line 76, in setup
    raise PayloadInstallError("Unable to find osimg for %s" % self.data.method.partition)
  File "/usr/lib64/python3.5/site-packages/pyanaconda/packaging/__init__.py", line 1347, in _runThread
    payload.setup(storage, instClass)
  File "/usr/lib64/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib64/python3.5/site-packages/pyanaconda/threads.py", line 251, in run
    threading.Thread.run(self, *args, **kwargs)
pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base

Additional info:
addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:UUID=85efab60-7edb-4638-b99b-69eed8fd15ac rd.live.image quiet
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
product:        Fedora
release:        Fedora release 25 (Twenty Five)
type:           anaconda
version:        25
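
For context, a minimal sketch (not the actual pyanaconda source) of the failing check: the live payload looks up the device named in --method=livecd://... in the storage device tree and raises PayloadInstallError when the lookup comes back empty, for example because the device-tree scan aborted earlier (as comment 37 later shows).

# Minimal sketch only, not the pyanaconda implementation. `devicetree` is
# assumed to be a blivet-style device tree exposing get_device_by_path().

class PayloadInstallError(Exception):
    """Stand-in for pyanaconda.packaging.PayloadInstallError."""


def find_osimg(devicetree, method_partition):
    """Return the block device backing the live image, or raise if it is missing."""
    osimg = devicetree.get_device_by_path(method_partition)  # e.g. /dev/mapper/live-base
    if osimg is None:
        # The storage scan never registered live-base, so payload setup cannot continue.
        raise PayloadInstallError("Unable to find osimg for %s" % method_partition)
    return osimg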

Comment 1 Mark 2016-11-30 21:43:29 UTC
Created attachment 1226514 [details]
File: anaconda-tb

Comment 2 Mark 2016-11-30 21:43:32 UTC
Created attachment 1226515 [details]
File: anaconda.log

Comment 3 Mark 2016-11-30 21:43:33 UTC
Created attachment 1226516 [details]
File: environ

Comment 4 Mark 2016-11-30 21:43:43 UTC
Created attachment 1226517 [details]
File: journalctl

Comment 5 Mark 2016-11-30 21:43:45 UTC
Created attachment 1226518 [details]
File: lsblk_output

Comment 6 Mark 2016-11-30 21:43:46 UTC
Created attachment 1226519 [details]
File: nmcli_dev_list

Comment 7 Mark 2016-11-30 21:43:48 UTC
Created attachment 1226520 [details]
File: os_info

Comment 8 Mark 2016-11-30 21:43:51 UTC
Created attachment 1226521 [details]
File: program.log

Comment 9 Mark 2016-11-30 21:43:59 UTC
Created attachment 1226522 [details]
File: storage.log

Comment 10 Mark 2016-11-30 21:44:01 UTC
Created attachment 1226523 [details]
File: ifcfg.log

Comment 11 Mark 2016-11-30 21:44:04 UTC
Created attachment 1226524 [details]
File: Screenshot from 2016-11-30 21-39-50.png

Comment 12 Cameron 2016-12-02 03:45:27 UTC
Similar problem has been detected:

Selected Install to Hard Drive and the error occurred immediately. The USB image was created using Universal USB Installer.

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   initrd=initrd.img root=live:LABEL=UUI NULL=Fedora-WS-Live-25-1-3 rd.live.image quiet BOOT_IMAGE=vmlinuz 
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
package:        anaconda-core-25.20.8-1.fc25.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 25 (Twenty Five)
version:        25

Comment 13 Vinicius Toti 2016-12-20 03:19:16 UTC
*** Bug 1406239 has been marked as a duplicate of this bug. ***

Comment 14 Stuart R. DeGraaf 2017-01-04 19:17:58 UTC
*** Bug 1410216 has been marked as a duplicate of this bug. ***

Comment 15 Brian "netdragon" Bober 2017-01-08 20:38:55 UTC
Similar problem has been detected:

Booted with Live CD for first time. Chose to install to Hard Drive immediately.

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-WS-Live-25-1-3 rd.live.image rd.live.check quiet
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
package:        anaconda-core-25.20.8-1.fc25.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 25 (Twenty Five)
version:        25

Comment 16 Brian "netdragon" Bober 2017-01-09 00:59:02 UTC
Note: I was using the USB install. I noticed that when I run the hard drive installer, the CD drive spins up, which makes no sense because I booted off USB. Perhaps related?

I also determined that Fedora's disk tool can't delete partitions on the RAID hard drive. May be related. However, I know the hard drive works, since Windows 7 was previously on the machine.

So it may instead be a RAID driver issue... Will update when I find a solution.

Comment 17 Brian "netdragon" Bober 2017-01-09 04:33:47 UTC
Update: I am leaning towards Linux not liking the RAID setup. Strangely, after observing that the disk tool wasn't working, I noticed the drive was automatically mounted and the Windows files were visible. So I did a full wipe and re-initialization of the RAID controller just to ensure that it was a clean disk, and no dice. The disk tool is still not able to work with it, and the installer won't work.

Motherboard:
Asus M4A785TD-M https://www.asus.com/Motherboards/M4A785TDM_EVO/specifications/

The RAID is the SB710 Chipset RAID.

Comment 18 Brian "netdragon" Bober 2017-01-09 05:12:04 UTC
I was able to solve the issue by disabling the motherboard's RAID and switching the drives back to plain SATA, then using fdisk to wipe what was apparently Promise configuration information on the drive (not surprising that it caused issues), or at least was identified as such (I can't remember the exact terminology fdisk used).

The manual doesn't say anything beyond 'SB710 Chipset RAID', but I'm guessing that, at least when this motherboard was made in 2008, the SB710 chipset RAID used a Promise controller for what some people online term 'fake RAID', as opposed to software or hardware RAID.

I can't guarantee that this approach will solve everyone's issue, but it's worth a try. It may also be worth people here sharing what motherboard they have and checking its documentation to see what its RAID controller is.

Comment 19 Mark 2017-01-09 11:53:45 UTC
The workaround for me was to install F24 and then upgrade to F25; not entirely satisfactory, but at least it works. No RAID setup here (just a few different drives). Motherboard: ASRock N68-VS3 FX.

Comment 20 Rick Wagner 2017-01-14 00:38:09 UTC
Similar problem has been detected:

Booted the Fedora 25 KDE live image. Clicked "Install to Hard Drive". Got to the language selection page and just clicked "Forward" (the defaults of English/US keyboard were correct). Then the installer crashed.

I had run it a couple of times without completing. This time I added a second SSD and manually created RAID partitions on the drives via mdadm.

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-KDE-Live-25-1-3 rd.live.image quiet
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
package:        anaconda-core-25.20.8-1.fc25.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 25 (Twenty Five)
version:        25

Comment 21 neal.laugman 2017-01-17 18:50:32 UTC
*** Bug 1414104 has been marked as a duplicate of this bug. ***

Comment 22 Derek 2017-02-12 19:18:47 UTC
*** Bug 1421500 has been marked as a duplicate of this bug. ***

Comment 23 Bob Gustafson 2017-02-17 09:21:03 UTC
*** Bug 1423404 has been marked as a duplicate of this bug. ***

Comment 24 Bob Gustafson 2017-02-17 10:03:46 UTC
I have 3 hard disks on my system.
sda and sdb are a RAID array. Fedora 25 had been working for a few months on these two RAID1 disks.

I had a problem with my system: it could boot, but it did not get beyond a text command line that said something about recovery or maintenance. I tried that, but it was not useful.

Then I bought a new 1TB disk, mounted it in the box, and attempted to use both the netinstall and live CDs to install a new F25 on the new disk.

I continually got the error cited in this bug. I could not get further than the language selection panel (the selection was correct); then the error panel appeared.

=====

My solution for installing F25 was to disconnect the SATA data cables from the two RAID pair disks.

I now have only sda (the new 1TB disk), and the install works fine.

The rubber will hit the road when I reconnect the SATA data cables to the RAID pair and try to reboot. Will report results.

Comment 25 Bob Gustafson 2017-02-17 12:02:22 UTC
Ok, I connected the SATA data cables to the raid pair disks.

Then I adjusted the boot sequence in the BIOS so it would boot from the new 1TB disk and not from either of the two 500MB RAID disks (as they have a prior problem).
(It is helpful to have different-sized disks, because in the BIOS there is little else to distinguish one HDD from another.)

Boot up was fine.

In a terminal I can do 'cat /proc/mdstat' and I see the 3 RAID partitions (boot, swap, home) that I set up a few months ago. However, instead of md0, md1, md2 they are md125, md126, md127.

After creating a mount point /mnt/home, I can mount md125 (the boot partition of the RAID pair), but mounting md127 on /mnt/home gives problems:
  mount: unknown filesystem type 'LVM2_member'

I need to do a bit more reading to be able to access that RAID partition so I can offload the contents to the 1TB (rescue) disk. Any suggestions are welcome.

Comment 26 Bob Gustafson 2017-02-17 12:19:29 UTC
Hmm, looks pretty good.

lvs gives:

LV        VG         LSize
home      fedora   872.59g
root      fedora    50.00g
swap      fedora     7.92g
LogVol00  vg_hoho6 456.03g

Clearly the first 3 lines are the new logical volumes on the new disk.

Using the last line, I can mount the LVM volume as:

mount /dev/vg_hoho6/LogVol00  /mnt/home -o ro

Since I will be backing it up, the read-only flag is there for safety.

Comment 27 Bob Gustafson 2017-02-17 18:43:15 UTC
Looks good - it took about 2:21:22 to transfer 222g (just the home partition).

I'm rsyncing the root and /etc partition/directories. I have looked at some of the backed-up home files and everything looks good.

Comment 28 Bob Gustafson 2017-02-17 18:59:46 UTC
If you create a RAID array with an LVM partition, it is useful to pick a unique name for the volume group. If you need to recover in the future, keep in mind that the default volume group name created by anaconda is 'fedora'.

If you are in 'recovery mode' at a later date, having more than one 'fedora' volume group is not useful.

Comment 29 Paul Campbell 2017-04-18 20:50:26 UTC
*** Bug 1443244 has been marked as a duplicate of this bug. ***

Comment 30 Mike 2017-04-28 23:56:33 UTC
Similar problem has been detected:

Attempting to install Fedora 25

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-WS-Live-25-1-3 rd.live.image quiet
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
package:        anaconda-core-25.20.8-1.fc25.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 25 (Twenty Five)
version:        25

Comment 31 Mike 2017-04-28 23:59:12 UTC
Similar problem has been detected:

Installing Fedora 25

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-WS-Live-25-1-3 rd.live.image quiet
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
package:        anaconda-core-25.20.8-1.fc25.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 25 (Twenty Five)
version:        25

Comment 32 David W. Legg 2017-06-26 12:31:33 UTC
Similar problem has been detected:

Selected UK English and clicked on Continue in anaconda on HP Pavilion with SD card present in MMC slot.

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-KDE-Live-25-1-3 rd.live.image rd.live.check quiet
hashmarkername: anaconda
kernel:         4.8.6-300.fc25.x86_64
other involved packages: system-python-libs-3.5.2-4.fc25.x86_64
package:        anaconda-core-25.20.8-1.fc25.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.packaging.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 25 (Twenty Five)
version:        25

Comment 33 Fedora End Of Life 2017-11-16 19:07:10 UTC
This message is a reminder that Fedora 25 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 25. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '25'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 25 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 34 Fedora End Of Life 2017-12-12 10:07:08 UTC
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 35 Jiri Konecny 2018-05-03 07:43:16 UTC
*** Bug 1574190 has been marked as a duplicate of this bug. ***

Comment 36 Jiri Konecny 2018-05-03 07:43:29 UTC
*** Bug 1467095 has been marked as a duplicate of this bug. ***

Comment 37 Vendula Poncova 2018-05-03 11:46:37 UTC
The failure is caused by another exception, see anaconda.log:


21:25:23,992 CRIT anaconda: Traceback (most recent call last):

  File "/usr/lib64/python3.5/site-packages/pyanaconda/threads.py", line 251, in run
    threading.Thread.run(self, *args, **kwargs)

  File "/usr/lib64/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/osinstall.py", line 1175, in storage_initialize
    storage.reset()

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/blivet.py", line 271, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 451, in populate
    self._populate()

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 518, in _populate
    self.handle_device(dev)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 306, in handle_device
    device = helper_class(self, info).run()

  File "/usr/lib/python3.5/site-packages/blivet/populator/helpers/partition.py", line 111, in run
    self._devicetree._add_device(device)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/devicetree.py", line 148, in _add_device
    raise ValueError("device is already in tree")

ValueError: device is already in tree


It looks to be an issue in the storage library. Reassigning.
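
For illustration, a simplified sketch (not blivet's actual DeviceTree) of the guard that raises here: during populate(), each discovered block device is added to the tree, and a second device that blivet resolves to one already in the tree trips the check and aborts the whole storage scan, which in turn leaves live-base unresolved for the payload.

# Simplified sketch only, not blivet's implementation: shows how a duplicate
# identity turns into "ValueError: device is already in tree".

class SketchDeviceTree:
    def __init__(self):
        self._devices = []

    def _add_device(self, device):
        if device in self._devices:
            # A device resolving to one already in the tree is a hard stop.
            raise ValueError("device is already in tree")
        self._devices.append(device)

    def populate(self, discovered_devices):
        # Any duplicate aborts the scan before the remaining devices are added.
        for dev in discovered_devices:
            self._add_device(dev)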

Comment 38 Chris Burghart 2018-05-03 14:55:22 UTC
*** Bug 1574220 has been marked as a duplicate of this bug. ***

Comment 39 Alex G. 2018-07-17 23:09:59 UTC
Similar problem has been detected:

1. Run the Fedora live USB on a system with a crapload[1] of drives.
2. Click on the "Install to hard drive" icon.
3. Profit (or get extremely frustrated and throw the Fedora live USB through the window).

[1] # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0   1.7G  1 loop  
loop1         7:1    0   6.1G  1 loop  
├─live-rw   253:0    0   6.1G  0 dm    /
└─live-base 253:1    0   6.1G  1 dm    
loop2         7:2    0    32G  0 loop  
└─live-rw   253:0    0   6.1G  0 dm    /
sda           8:0    0 167.7G  0 disk  
├─sda1        8:1    0   600M  0 part  
├─sda2        8:2    0 127.1G  0 part  
├─sda3        8:3    0    30G  0 part
├─sda4        8:4    0     1K  0 part
└─sda5        8:5    0    10G  0 part
sdb           8:16   0 931.5G  0 disk
└─sdb1        8:17   0 931.5G  0 part
sdc           8:32   0 238.5G  0 disk
├─sdc1        8:33   0     1G  0 part
│ └─md127     9:127  0  1023M  0 raid1
└─sdc2        8:34   0 237.5G  0 part
  └─md126     9:126  0 237.4G  0 raid1
sdd           8:48   0 238.5G  0 disk
├─sdd1        8:49   0     1G  0 part
│ └─md127     9:127  0  1023M  0 raid1
└─sdd2        8:50   0 237.5G  0 part
  └─md126     9:126  0 237.4G  0 raid1
sde           8:64   0 465.8G  0 disk
sdf           8:80   0 465.8G  0 disk
sdg           8:96   0 298.1G  0 disk
sdh           8:112  0 238.5G  0 disk
├─sdh1        8:113  0     1G  0 part
│ └─md127     9:127  0  1023M  0 raid1
└─sdh2        8:114  0 237.5G  0 part
  └─md126     9:126  0 237.4G  0 raid1
sdi           8:128  0 931.5G  0 disk
└─sdi1        8:129  0 931.5G  0 part
sdj           8:144  0 931.5G  0 disk
└─sdj1        8:145  0 931.5G  0 part
sdk           8:160  1   1.9G  0 disk
├─sdk1        8:161  1   1.8G  0 part  /run/initramfs/live
├─sdk2        8:162  1   9.1M  0 part
└─sdk3        8:163  1  19.2M  0 part

addons:         com_redhat_kdump
cmdline:        /usr/bin/python3  /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-KDE-Live-28-1-1 rd.live.image quiet
hashmarkername: anaconda
kernel:         4.16.3-301.fc28.x86_64
other involved packages: python3-libs-3.6.5-1.fc28.x86_64
package:        anaconda-core-28.22.10-1.fc28.x86_64
packaging.log:  
product:        Fedora
reason:         pyanaconda.payload.PayloadInstallError: Unable to find osimg for /dev/mapper/live-base
release:        Fedora release 28 (Twenty Eight)
version:        28

Comment 40 Jan Kurik 2018-08-14 10:21:12 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 29 development cycle.
Changing version to '29'.

Comment 41 Jan 2019-10-10 13:00:51 UTC
Hi.

It seems that blivet encountered a duplicate device UUID, which should be unique.
Blivet uses the UUID for device identification, and in this case it thinks it encountered the same device twice when it shouldn't have.

The most common source of duplicate UUIDs is disk cloning. RAID devices tend to be created that way.

UUIDs can be seen using the 'lsblk' command.
After the error message shows up:

Press <Ctrl+Alt+F2> to get to a console.
Run: 'lsblk -o NAME,LABEL,UUID,PARTUUID'

Now check that no two devices in the list share the same UUID and/or PARTUUID value.
You can ignore all devices with the LABEL 'Anaconda'. Empty values are OK as well.
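
To script that check, here is a minimal sketch (not part of anaconda or blivet; it assumes lsblk's --pairs output format) that groups device names by UUID and PARTUUID and prints any value shared by more than one device:

import shlex
import subprocess
from collections import defaultdict

# Parse `lsblk --pairs` output (NAME="..." LABEL="..." UUID="..." PARTUUID="...").
out = subprocess.run(
    ["lsblk", "--pairs", "-o", "NAME,LABEL,UUID,PARTUUID"],
    check=True, stdout=subprocess.PIPE, universal_newlines=True,
).stdout

shared = defaultdict(list)
for line in out.splitlines():
    row = dict(pair.split("=", 1) for pair in shlex.split(line))
    if row.get("LABEL") == "Anaconda":   # live media, safe to ignore
        continue
    for key in ("UUID", "PARTUUID"):
        value = row.get(key, "")
        if value:
            shared[(key, value)].append(row["NAME"])

for (key, value), names in sorted(shared.items()):
    if len(names) > 1:
        print("%s %s is shared by: %s" % (key, value, ", ".join(names)))

Note that members of the same RAID array will legitimately report the same filesystem UUID (see comment 43), so shared PARTUUID values are the ones to focus on.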

Formatting/repartitioning the disk will change the UUID(s) to new values.
Since other applications may rely on the UUID, I strongly recommend doing that.

It is also possible to work around this issue by physically disconnecting one of the devices.

Sadly, we cannot do much in this case.

Newer blivet versions should produce a more descriptive error message and print additional information to the logs, so we can better address this kind of issue in the future.

Hope this helps,
Jan

Comment 42 Alex G. 2019-10-10 13:54:44 UTC
No! That doesn't solve it! Duplicate UUIDs happen with RAID arrays. If I tried to install Fedora on the machine pasted below, it would crash.

blivet can't handle what is otherwise a valid configuration. It's a bug!

$ lsblk -o NAME,LABEL,UUID,PARTUUID
NAME                         LABEL                          UUID                                   PARTUUID
sda                                                                                                
├─sda1                       localhost.localdomain:boot     f2f28ce5-f406-f55b-b6be-970809c31d97   e363a6a5-01
│ └─md127                    boot                           f2f55b35-2ec3-4461-83b7-edb5b06d768a   
└─sda2                                                                                             e363a6a5-02
sdb                                                                                                
└─sdb1                       g-prime.gtech:terraman         a79ef4a8-7451-a2d3-0e4f-3fa739645f9d   1a914d58-0c53-4f83-8d3b-6b7dab3361a5
  └─md125                    terrabutter                    1e5c06c7-73f6-4797-942d-4970ed9d9976   
sdc                                                                                                
└─sdc1                       g-prime.gtech:terraman         a79ef4a8-7451-a2d3-0e4f-3fa739645f9d   8c621433-01
  └─md125                    terrabutter                    1e5c06c7-73f6-4797-942d-4970ed9d9976   
sdd                                                                                                
├─sdd1                       localhost.localdomain:boot     f2f28ce5-f406-f55b-b6be-970809c31d97   7a4bc0c9-01
│ └─md127                    boot                           f2f55b35-2ec3-4461-83b7-edb5b06d768a   
└─sdd2                       nuclearis2-1.gtech:fedora-raid e9e22e66-7b5b-fbd0-901e-e4b6231eb50b   7a4bc0c9-02
  └─md126                                                   dYUlZI-g6Bb-GcUm-wAl4-G1RV-EgTB-ZjWiBK 
    ├─fedora-root            root                           5c72f268-997b-4fbc-933c-670154e67b55   
    ├─fedora-log             log                            1d09d587-3460-4162-a9fd-a32800d2df1a   
    └─fedora-home_corig                                                                            
      └─fedora-home          home                           b1e7654a-c5de-457a-886c-a147ae095af3   
sde                                                                                                
├─sde1                       localhost.localdomain:boot     f2f28ce5-f406-f55b-b6be-970809c31d97   7a7d0f42-01
│ └─md127                    boot                           f2f55b35-2ec3-4461-83b7-edb5b06d768a   
└─sde2                       nuclearis2-1.gtech:fedora-raid e9e22e66-7b5b-fbd0-901e-e4b6231eb50b   7a7d0f42-02
  └─md126                                                   dYUlZI-g6Bb-GcUm-wAl4-G1RV-EgTB-ZjWiBK 
    ├─fedora-root            root                           5c72f268-997b-4fbc-933c-670154e67b55   
    ├─fedora-log             log                            1d09d587-3460-4162-a9fd-a32800d2df1a   
    └─fedora-home_corig                                                                            
      └─fedora-home          home                           b1e7654a-c5de-457a-886c-a147ae095af3   
sdf                                                                                                
└─sdf1                       nuclearis2-1.gtech:124         508f4586-a2eb-4fb5-0db8-4acb81670458   8c626a04-3098-a045-a315-fb0c4e170cc4
  └─md124                                                   p4XXBG-WJSw-28Gz-mFG1-Qngm-CdpK-puWpHQ 
    └─cros-crap_corig                                                                              
      └─cros-crap            chromite                       3538b5aa-95cd-418b-9066-05d8f491defa   
sdg                                                                                                
└─sdg1                       nuclearis2-1.gtech:124         508f4586-a2eb-4fb5-0db8-4acb81670458   8c626a04-3098-a045-a315-fb0c4e170cc4
  └─md124                                                   p4XXBG-WJSw-28Gz-mFG1-Qngm-CdpK-puWpHQ 
    └─cros-crap_corig                                                                              
      └─cros-crap            chromite                       3538b5aa-95cd-418b-9066-05d8f491defa   
sdh                                                                                                
sdi                                                                                                
├─sdi1                       localhost.localdomain:boot     f2f28ce5-f406-f55b-b6be-970809c31d97   0e610c04-01
│ └─md127                    boot                           f2f55b35-2ec3-4461-83b7-edb5b06d768a   
└─sdi2                       nuclearis2-1.gtech:fedora-raid e9e22e66-7b5b-fbd0-901e-e4b6231eb50b   0e610c04-02
  └─md126                                                   dYUlZI-g6Bb-GcUm-wAl4-G1RV-EgTB-ZjWiBK 
    ├─fedora-root            root                           5c72f268-997b-4fbc-933c-670154e67b55   
    ├─fedora-log             log                            1d09d587-3460-4162-a9fd-a32800d2df1a   
    └─fedora-home_corig                                                                            
      └─fedora-home          home                           b1e7654a-c5de-457a-886c-a147ae095af3   
sdj                                                                                                
└─sdj1                                                      f155b8c5-f611-430b-96bd-e9f8633f5e61   1569945f-01
sdk                                                                                                
└─sdk1                                                      7fec3034-a957-4853-a0f1-3c340189c037   850b2f14-01
nvme0n1                                                                                            
├─nvme0n1p1                                                                                        0e610c04-01
├─nvme0n1p2                                                 eQqCf2-rORq-UT1i-UagA-AWMs-FbtM-2DZ9dv 0e610c04-02
│ ├─fedora-fedhomo--cachepool_cdata
│ │                                                                                                
│ │ └─fedora-home            home                           b1e7654a-c5de-457a-886c-a147ae095af3   
│ └─fedora-fedhomo--cachepool_cmeta
│                                                                                                  
│   └─fedora-home            home                           b1e7654a-c5de-457a-886c-a147ae095af3   
├─nvme0n1p3                                                 IdzZqG-X76F-XIsN-GQyY-sTSI-MVGp-Kf6dvE 0e610c04-03
│ ├─cros-cros_cachepool_cdata
│ │                                                                                                
│ │ └─cros-crap              chromite                       3538b5aa-95cd-418b-9066-05d8f491defa   
│ └─cros-cros_cachepool_cmeta
│                                                                                                  
│   └─cros-crap              chromite                       3538b5aa-95cd-418b-9066-05d8f491defa   
├─nvme0n1p4                                                                                        0e610c04-04
└─nvme0n1p5                                                                                        0e610c04-05

Comment 43 Jan 2019-10-15 12:41:27 UTC
Hi Alex,

I agree with you: duplicate UUIDs indeed happen when working with RAID arrays. The UUID column in the lsblk output contains "filesystem" UUIDs. When multiple RAID member devices back the same filesystem,
their UUIDs in the lsblk output have to be the same, since the underlying devices share that filesystem. There is no duplicate UUID problem in that case.
I was not entirely correct/clear about this. I apologize for that.

However, PARTUUID (the partition UUID) is a different story. Two different partitions sharing the same UUID violates RFC 4122, which specifies the uniqueness of UUIDs.
The partition UUID is also the one checked by blivet, and quite probably the one causing the trouble here.

Rather than risk messing up the wrong device, blivet raises the exception.
So yes, the configuration you have provided would cause this version of blivet (2.1.6) to crash: sdf1 and sdg1 share a partition UUID.
That configuration is invalid.

What I was trying to say is that a duplicate UUID is a problem that cannot be solved on the blivet side. There might already be data on the device that relies on its partition UUID.
Changing the UUID presents a serious data corruption risk. "Tolerating" an invalid configuration is not a good solution either.

What we can do is provide better information about the problem without anaconda crashing. That is already done in newer blivet versions (https://github.com/storaged-project/blivet/pull/679).
Another pull request was created to include the PARTUUID in future logs.
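
As a rough illustration only (not the code from the linked pull request), the improved behaviour amounts to naming the colliding devices and the shared identifier instead of failing with a bare ValueError:

# Rough illustration, not blivet's implementation or the linked PR.

class DuplicateDeviceError(Exception):
    """Hypothetical error type used only in this sketch."""


def add_device(tree, device_name, partuuid):
    """Add device_name to `tree` (a dict keyed by PARTUUID), refusing duplicates."""
    existing = tree.get(partuuid)
    if existing is not None:
        raise DuplicateDeviceError(
            "PARTUUID %s is shared by %s and %s; duplicate identifiers usually "
            "come from cloned disks and cannot be fixed automatically"
            % (partuuid, existing, device_name))
    tree[partuuid] = device_name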

Hope this helps,
Jan

Comment 44 Ben Cotton 2019-10-31 18:45:57 UTC
This message is a reminder that Fedora 29 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora 29 on 2019-11-26.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
Fedora 'version' of '29'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 29 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 45 Ben Cotton 2019-11-27 23:33:22 UTC
Fedora 29 changed to end-of-life (EOL) status on 2019-11-26. Fedora 29 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

