Bug 1472999
| Field | Value |
|---|---|
| Summary | ValueError: device is already in tree |
| Product | [Fedora] Fedora |
| Reporter | larsehauge |
| Component | python-blivet |
| Assignee | Blivet Maintenance Team <blivet-maint-list> |
| Status | CLOSED NOTABUG |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| Severity | unspecified |
| Priority | unspecified |
| Version | 26 |
| CC | anaconda-maint-list, blivet-maint-list, dlehman, dwt, g.kaviyarasu, jkonecny, jonathan, mkolman, rvykydal, sbueno, vanmeeuwen+fedora, vponcova, vtrefny |
| Keywords | Reopened |
| Hardware | x86_64 |
| OS | Unspecified |
| Whiteboard | abrt_hash:a6be6482f134a9b6db36de07b3a5b5328e542c4835164e9dd0100c5001244149;VARIANT_ID=workstation; |
| Doc Type | If docs needed, set a value |
| Last Closed | 2018-05-03 17:28:54 UTC |
Description
larsehauge
2017-07-19 19:44:21 UTC
Created attachment 1301305 [details]
File: anaconda-tb
Created attachment 1301306 [details]
File: anaconda.log
Created attachment 1301307 [details]
File: environ
Created attachment 1301308 [details]
File: journalctl
Created attachment 1301309 [details]
File: lsblk_output
Created attachment 1301310 [details]
File: nmcli_dev_list
Created attachment 1301311 [details]
File: os_info
Created attachment 1301312 [details]
File: program.log
Created attachment 1301313 [details]
File: storage.log
Created attachment 1301314 [details]
File: ifcfg.log
Solved it by disconnecting the other drives physically (disconnecting SATA). Was then able to install Fedora. Closing the bug report.

Similar problem has been detected:

Just start the installer for f26. Local dvd iso, netinstall, doesn't matter. No user interaction is required. When the first language selection window appears there will be a brief pause--just a few seconds--then the unexpected error message dialog pops up.

As usual since about f14, anaconda is puking on lvm-on-md. Some releases have worked, most have not, and on a few it's the presence of a luks layer in there somewhere which causes it to choke. I have lvm-on-luks-on-md (raid6). Strangely, when I very wisely tested this scenario on a vm it worked. Thus encouraged, I began the installation on real hardware and this is the result.

This exact bug seems to have been reported in July. When I can get logged into Bugzilla I'll look that up and put a pointer here.

addons: com_redhat_docker, com_redhat_kdump
cmdline: /usr/libexec/system-python /sbin/anaconda
cmdline_file: BOOT_IMAGE=vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora-S-dvd-x86_64-26 quiet
hashmarkername: anaconda
kernel: 4.11.8-300.fc26.x86_64
package: anaconda-26.21.11-1
product: Fedora
reason: ValueError: device is already in tree
release: Cannot get release name.
version: 26

Ah, I see the installer bug reporter dropped this into the existing bug report. This really should not have been closed. Having to rip open the system and physically disconnect random drives until the installer starts to work is >>NOT<< a solution.

Hello Dennis,

First, we didn't close the bug; the original reporter did. Second, in general it is not that easy to support everything users can create. There is a really big number of possible configurations and everything is still evolving, so please bear with us.

I'm reopening this issue and changing the component to the storage library.

Dennis, can you attach the logs from your failed installation?

Created attachment 1341336 [details]
Requested logs generated by anaconda crash reporter
This log set is one I had saved from the boot attempt just prior to the one which generated the auto-report. Nothing was changed between the two occasions and the observed behavior was identical.
Oh btw, this machine has two raid6 arrays of four drives each. The production set is 4x1TB and the other is 4x250GB. The latter is just used for as-needed bulk storage. The logs I just provided ran on that configuration.
I have since followed lars' inspiration and removed the SATA interface card for the bulk store array, leaving just the production array and a couple of DVD drives. Rebooting the installer produced the same result.
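Editor's note: a minimal sketch of how a stack like the one described here (LVM on LUKS on MD RAID6) and the identifiers involved can be inspected from the installer's shell or a rescue system. These are generic util-linux commands, not the exact calls anaconda or blivet make, and the column set assumes a reasonably recent lsblk:

    # Show the whole stack (md -> crypt -> lvm) together with the identifiers
    # that have to be unique: filesystem UUIDs, partition UUIDs, and each
    # drive's partition-table UUID (disk identifier).
    lsblk -o NAME,TYPE,SIZE,FSTYPE,UUID,PTUUID,PARTUUID

    # The same kind of information per device, as reported by blkid (run as root).
    blkid

Duplicate values in the PTUUID or PARTUUID columns are exactly the kind of conflict identified in the next comment.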
The root cause of the problem you are having, Dennis, is that sda and sdi both have the same "UUID" value of "aaaaaaaa", which leads to partitions sda1 and sdi1 both having the same UUID: "aaaaaaaa-1". There is a requirement that UUIDs actually be unique. I don't know off the top of my head how to change one of them to satisfy this requirement.

To clarify, I'm referring to the partition table UUID, on which the partition UUIDs are based. The original reporter apparently cloned the partition table from one drive across at least two others without modifying the duplicates so their UUIDs were unique.

OK, now that's just bizarre. Not the UUID uniqueness requirement, that's entirely reasonable. It's also new with f25, by the way, which explains why I've never run into this before. No, what was bugging me here is how my partition UUIDs came to be 0xaaaaaaaa in the first place. I could just--barely!--accept that I hit that lucky one-in-2^32 chance on one of them. But that and two others as well? No. Just no. I'm pretty sure what happened is this...

This is a multiboot system (was f18/16/14) and what's now the bulk store array used to be the production array. When the 4x1TB hardware arrived and it came time to install f20, I would have hooked up the new drives to the f18 system and tested them with badblocks. Probably at some point once it finished I started another round and then decided not to wait for it, interrupting it part way into the write-pattern-0xAA pass. Then I would have manually partitioned the first 1TB drive and cloned the remainders with "sfdisk -d /dev/sde | sfdisk /dev/sdf", then g and h. When this is done using f18 sfdisk, the label-id is not copied, at least not if the destination is already non-zero, and neither is a new one generated. So there I had three drives with all-0xaa UUIDs. And no installation I've ever done since has cared until now. Which is fine.

I made up a VM to test this scenario, tried the f26 install on it, and got the result reported here. So far so good. Then I used hexedit to give the last three drives unique UUIDs, after which the installation showed the first language selection display, blanked the screen, and did nothing more. On vc 1 at the bottom of the screen it said "Pane is dead".

Since editing the UUIDs didn't seem to cause any harm I tried this on the bare metal machine. There the install finished without issue, leaving me with a shiny new f26 system. No idea why that worked and the VM didn't.

So, my immediate problem here is solved now that you've identified that crucial clue. Thanks for that! There remains the mystery of why the similarly configured VM wouldn't install and what the different failure mode means. I will attach the VM logs in case you care to pursue that.

Created attachment 1344884 [details]
Archive of VM log files from failed f26 install described in previous post.

I've also prepared an archive containing these logs, a README, and the qcow2 images of the four drives used by the VM, as well as the xml machine definition. It's 6+ GB though. If you want to look at this I can push it up to a host with a faster connection than I've got. It'll take a while to make available. Let me know if you want it.
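Editor's note: a minimal sketch, assuming a current util-linux and gdisk, of how the duplicated partition-table UUIDs described above could be found and regenerated without resorting to hexedit. Device names are examples only; older sfdisk versions lack --disk-id and would need fdisk's expert menu (x, then i) instead:

    # List each disk's partition-table UUID (the MBR disk identifier here);
    # any value that appears more than once has to be made unique.
    lsblk -d -n -o NAME,PTUUID

    # On an MBR (dos) label, recent sfdisk can print or rewrite the 32-bit
    # disk identifier in place (the new id below is just an example):
    sfdisk --disk-id /dev/sdi              # print the current identifier
    sfdisk --disk-id /dev/sdi 0x5c3f9ad2   # write a new, unique identifier

    # On GPT disks, sgdisk can randomize the disk GUID and all partition GUIDs:
    sgdisk -G /dev/sdi

    # When cloning a partition table, stripping the label-id line from the
    # dump keeps the destination from inheriting the source's identifier
    # (the dump format of current sfdisk includes that line):
    sfdisk -d /dev/sde | grep -v '^label-id' | sfdisk /dev/sdf

After changing a disk identifier, the kernel typically needs to re-read the partition table (partprobe, or a reboot) before the new partition UUIDs are visible.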
This message is a reminder that Fedora 26 is nearing its end of life. Approximately four weeks from now Fedora will stop maintaining and issuing updates for Fedora 26. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '26'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 26 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.