abrt version: 2.0.5
reason: IOException: Partition(s) 3 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
time: Wed Aug 10 20:53:32 2011
:The following was filed automatically by anaconda:
:anaconda 16.14.3 exception report
:Traceback (most recent call first):
: File "/usr/lib64/python2.7/site-packages/parted/disk.py", line 213, in commit
: return self.__disk.commit()
: File "/usr/lib64/python2.7/site-packages/parted/decorators.py", line 32, in new
: ret = fn(*args, **kwds)
: File "/usr/lib64/python2.7/site-packages/pyanaconda/storage/formats/disklabel.py", line 263, in commit
: File "/usr/lib64/python2.7/site-packages/pyanaconda/storage/devices.py", line 1547, in _destroy
: File "/usr/lib64/python2.7/site-packages/pyanaconda/storage/devices.py", line 825, in destroy
: File "/usr/lib64/python2.7/site-packages/pyanaconda/storage/deviceaction.py", line 286, in execute
: File "/usr/lib64/python2.7/site-packages/pyanaconda/storage/devicetree.py", line 316, in processActions
: File "/usr/lib64/python2.7/site-packages/pyanaconda/storage/__init__.py", line 383, in doIt
: File "/usr/lib64/python2.7/site-packages/pyanaconda/packages.py", line 122, in turnOnFilesystems
: File "/usr/lib64/python2.7/site-packages/pyanaconda/dispatch.py", line 348, in dispatch
: self.dir = self.steps[self.step].target(self.anaconda)
: File "/usr/lib64/python2.7/site-packages/pyanaconda/dispatch.py", line 235, in go_forward
: File "/usr/lib64/python2.7/site-packages/pyanaconda/gui.py", line 1198, in nextClicked
:IOException: Partition(s) 3 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
It's an Alpha3-RC3 live CD. I had previously run anaconda in the same session (without rebooting), but it failed. When I attempt to re-run anaconda, it crashes while writing out the new partition table.
Please attach the complete /tmp/anaconda-tb-* to this bug report.
*** Bug 730862 has been marked as a duplicate of this bug. ***
*** Bug 731212 has been marked as a duplicate of this bug. ***
For my duplicate of this, bug 731212, it occurred when I was going from an already formatted drive with LVM to a non-LVM layout called for by the kickstart. I will try to repeat it; after the reboot, though, the install went ahead and completed.
(In reply to comment #5)
> For my duplicate of this 731212 it occurred when I was going from an already
> formatted drive with LVM to a non-LVM layout called for by the ks. Will try
> and repeat but the reboot went ahead and installed.
The kickstart partitioning looks fine to me. You are the only reporter not using live media, so yours may be a different issue.
I think the problem here, at least for the non-live case, is fedora-storage-init.target. It starts all the raid arrays, lvm volume groups, &c it can, which is undesirable in the installer environment. Now I just need to figure out how to keep it from running there.
Created attachment 915359 [details]
(This comment was longer than 65,535 characters and has been moved to an attachment by Red Hat Bugzilla).
Sorry, by mistake I pasted the contents of the anaconda-tb-G0g9Qb file inline instead of uploading it as an attachment.
(In reply to comment #8)
> Sorry, but due to a misoperation, I indeed attached the content of
> anaconda-tb-G0g9Qb file instead of uploading it as an attachment.
Why are you running anaconda directly? You should be running liveinst.
Yes, it is a reproduction of the issue.
Yes. When I first reported the issue a week ago, I initially ran anaconda out of curiosity (to discover its options). When that first anaconda process failed, I used liveinst --lang=zh_CN to get a Simplified Chinese installer interface.
In my case I ran from the live CD (386), did a 'yum install wipe', and wiped the disk before starting the installer. I did not mount the disk in any way; I just ran a quick wipe on /dev/sda.
After a reboot the install went through smoothly.
Can anyone attach a /tmp/anaconda-tb-* file from an occurrence of this that began by running liveinst?
My expectation is that this will be resolved, for most of you, in F16-Beta.TC1. The fix was a change to lorax which appeared in lorax-16.4.2-1.
Still 100% reproducible in the currently available beta (from fedoraproject.org homepage) Fedora-16-Beta-x86_64-Live-Desktop.iso.
1. Boot from the Live CD.
2. dd for a few seconds to zero out the beginning of the drive.
3. Run installer (Install to Harddrive).
4. Accept US English, Basic Storage...
resulting message: We could not detect partitions or filesystem on this disk.
5. Choose "Yes, discard any data" keeping the "apply my choice to all...." checked.
6. Name computer, set time zone, set password.
7. Installation type "Use All Space" keeping "Use LVM" checked.
8. Confirm partitioning options dialog by clicking on "Write changes to Disk"
If I reverse steps 1 and 2, the problem does not occur.
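Step 2 of the recipe above can be sketched as follows. This is a hedged example, not from the report itself: the target device /dev/sda and the 10 MiB count are assumptions, and because the command destroys the partition table, DRY_RUN=1 (the default here) only prints the command instead of running it.

```shell
# Zero out the start of the target disk, wiping the partition table and
# any RAID/LVM metadata near the beginning of the device.
# TARGET is a placeholder; double-check it before running for real.
TARGET=${TARGET:-/dev/sda}
DRY_RUN=${DRY_RUN:-1}            # set DRY_RUN=0 to actually wipe
wipe_cmd="dd if=/dev/zero of=$TARGET bs=1M count=10"
if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $wipe_cmd"
else
    $wipe_cmd && sync
fi
```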
What happens if you run partprobe after step 2?
So much for 100% reproducible. Yesterday it happened on each of 6 tries with the original sequence in #14, and was not reproducible on each of 6 tries with steps 1 and 2 reversed. Today, neither sequence reproduces the exception. Kooky. Bug in the electricity.
Strange and funny, because it was anaconda that found this bug ID when I gave it my Bugzilla login information to self-report the exception, which kept happening while I was testing installs (blowing away previous installations with dd each time). Doing seemingly the exact same thing today: nothing. Same hardware, same live CD.
*** Bug 751449 has been marked as a duplicate of this bug. ***
biosboot on 2 drives results in an error because the kernel thinks something is still using one of the partitions.
/boot on RAID does not work with grub2
Fedora Bugzappers volunteer triage team
Ok, more information on this. The problem isn't the biosboot partitions.
This error is reproducible if you re-install over the top of a pre-existing RAID partitioning scheme. The installer activates the arrays before it tries to re-partition things. This can be checked by looking at the output of cat /proc/mdstat.
A workaround is to use mdadm --zero-superblock /dev/sdX on the component partitions of the array to remove the metadata, then re-try the install.
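The workaround above can be sketched like this. The array name /dev/md127 and the member partitions are placeholders (read the real names from /proc/mdstat), and with DRY_RUN=1 (the default here) it only prints the commands it would run:

```shell
# Stop a stale md array and wipe the md metadata from its member
# partitions before re-trying the install.
ARRAY=${ARRAY:-/dev/md127}
MEMBERS=${MEMBERS:-/dev/sda3 /dev/sdb3}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run cat /proc/mdstat                  # see which arrays are active
run mdadm --stop "$ARRAY"             # deactivate the stale array
run mdadm --zero-superblock $MEMBERS  # erase md metadata on each member
```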
*** Bug 768110 has been marked as a duplicate of this bug. ***
Still happens in Fedora 17 Alpha when I try to install it from the HD and re-format the entire disk.
adjusting release based on comment #24
Actually, this happens to me when trying to do an install on btrfs subvolumes on an encrypted partition; the automatic bug reporting tool said this was a duplicate, despite the error being DeviceCreateError('could not find disk', 'f17') (or something like that).
clearpart --all --drives=sda --initlabel
part biosboot --fstype=biosboot --ondisk=sda --size=1
part /boot --asprimary --size=512 --fstype=ext4 --ondisk=sda
part swap --fstype=swap --ondisk=sda --size=4096
part btrfs.01 --asprimary --encrypted --passphrase="pass" --fstype=btrfs --ondisk=sda --size=1 --grow
btrfs none --data=0 --metadata=1 --label=f17 btrfs.01
btrfs / --subvol --name=root LABEL=f17
btrfs /home --subvol --name=home LABEL=f17
(In reply to comment #26)
> Actually, this happens to me when trying to do an install on btrfs subvolumes
> on an encrypted partition;
It seems you ran into http://fedoraproject.org/wiki/Common_F17_bugs#Installation_crashes_if_btrfs_chosen_as_format_for_any_target_partition.2C_or_any_btrfs-formatted_partition_is_already_present_on_a_target_disk
which would be bug 787341
Patrick, I had several anaconda errors while trying multiple installation methods with btrfs. If I remember correctly, the method I described caused the error which abrt suggested was already reported here. In any case, abrt (perhaps erroneously) added me automatically to this bug report on F17.
This should be resolved for the md case in anaconda-17.8-1 by commit 9cf40ae163610c2ef042d63.
Thanks David. I presume 17.8 will make it into beta and I'll retest then. If this can be put into an updates.img, I'll be happy to test before beta comes out.
ignore my last comment; F17 Alpha comes up stating 'anaconda 17.11'.
David, can you please clarify, I am now unsure what you meant by comment 29
(In reply to comment #31)
> ignore my last comment; F17 Alpha comes up stating 'anaconda 17.11'.
> David, can you please clarify, I am now unsure what you meant by comment 29
The case where this particular bug is caused by preexisting md devices being auto-started by udev/systemd should be handled better in anaconda-17.8.
The case in comment 28 is related to btrfs and is generally not relevant to this bug report AFAICT. If it included logs I might be able to verify this, but it does not.
Patrick, is that clear enough for you? I'm not sure what the source of your confusion is. Are you saying (without saying) that you are seeing this problem in F17 Alpha?
(In reply to comment #32)
> (In reply to comment #31)
> > ignore my last comment; F17 Alpha comes up stating 'anaconda 17.11'.
> > David, can you please clarify, I am now unsure what you meant by comment 29
> The case where this particular bug is caused by preexisting md devices being
> auto-started by udev/systemd should be handled better in anaconda-17.8.
OK, re-testing is for me to do then. I ran into what looked like autostarted md devices when doing repeated kickstarts of a KVM guest with 2 defined disks (I was trying to see how /boot on RAID1 was going in 17 Alpha).
Upon seeing comment 24 I made wrong assumptions and stopped my testing. Clearly my bad.
> Patrick, is that clear enough for you? I'm not sure what the source of your
> confusion is. Are you saying (without saying) that you are seeing this problem
> in F17 Alpha?
Yes, much clearer. It means I need to retest (dropping /boot on RAID1 from the test), as opposed to wait for F17 beta (which I had understood first). Thanks for the clarification.
Re-tested. I still cannot kickstart with the following partition setup twice in a row:
[start part of kickstart]
part biosboot --fstype=biosboot --size=1 --ondisk=vda
part /boot --fstype=ext4 --size=500 --ondisk=vda
part raid.bigonvda --grow --size=4096 --ondisk=vda
part raid.bigonvdb --grow --size=4096 --ondisk=vdb
raid pv.bigstripe --level=0 --device=md0 raid.bigonvda raid.bigonvdb
volgroup vg_bz729640 --pesize=32768 pv.bigstripe
logvol swap --name=lv_swap --vgname=vg_bz729640 --size=4224
logvol / --fstype=ext4 --name=lv_slash_f17_alpha --vgname=vg_bz729640 --size=10240
bootloader --location=mbr --timeout=5 --driveorder=vda,vdb --append="rhgb quiet"
The installation was started with
MENU LABEL Fedora 17 ^RAID x86_64 kickstart
This will WIPE all your DISKS
append initrd=images/F17-Alpha-x86_64/initrd.img ks=ftp://hp-microserver.internal.pcfe.net/pub/kickstart/F17-alpha-raid-x86_64-ks.cfg rd.luks=0 rd.md=0 rd.dm=0 ksdevice=bootif root=live:http://hp-microserver/pub/redhat/Fedora/linux/releases/test/17-Alpha/Fedora/x86_64/os/LiveOS/squashfs.img
rd.luks=0 rd.md=0 rd.dm=0 added because of bug 7887744
extract from virsh dumpxml of the used KVM guest
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
the error I get is
DeviceTreeError: MD RAID device md127 not in devicetree after scanning all slaves
I'll attach the full log momentarily
Created attachment 567724 [details]
anaconda log from test in comment 34
note to self: if full tarball requested, morn:/home/pcfe/tmp/abrt-upload-2012-03-05-19:24:18-773.tar.gz
That bug ("device ... not in devicetree after scanning all slaves") should be better handled in anaconda-17.12-1, which has not yet been built. It is caused by a somehow broken md array's presence on the system. At some point, something decided that starting broken/incomplete arrays was a good idea, and anaconda has had to play catch-up and decide how to deal with them. More systemd/udev love.
It seems that unmounting /mnt/sysimage/* manually after a failed anaconda session helped to avoid the issue. I found this when trying to upgrade an existing Fedora 16 to 17 by running anaconda in another installed Fedora 17 Beta-TC1.
(P.S. Just append "upgradeany enforcing=0" to the kernel parameters and run
su -c 'anaconda --repo=http://mirrors.sohu.com/fedora/development/17/x86_64/os/ --selinux --noipv6'
in gnome-terminal, and it seems to work.
I just want to use neither preupgrade nor a burned DVD.)
P.S.2: For F16->F17, additionally run
dracut --force --add convertfs
before actually upgrading.
(In reply to comment #37)
> It seems that unmounting /mnt/sysimage/* manually after a failed anaconda
> session helped to avoid the issue. I found it when trying to upgrade an
> existing Fedora 16 to 17 by running anaconda in another installed Fedora 17
> (ps.Just append "upgradeany enforcing=0" in kernel parameter and run
> su -c 'anaconda --repo=http://mirrors.sohu.com/fedora/development/17/x86_64/os/
> --selinux --noipv6'
> in gnome-terminal and it seems to be working.
> I just want to use neither preupgrade nor a burned DVD.)
You are running anaconda to upgrade the same system it is running from? This is not, never has been, and most likely never will be supported. If you try this and it breaks, you will be on your own to sort it out.
(In reply to comment #39)
> (In reply to comment #37)
> > It seems that unmounting /mnt/sysimage/* manually after a failed anaconda
> > session helped to avoid the issue. I found it when trying to upgrade an
> > existing Fedora 16 to 17 by running anaconda in another installed Fedora 17
> > Beta-TC1.
> > (ps.Just append "upgradeany enforcing=0" in kernel parameter and run
> > su -c 'anaconda --repo=http://mirrors.sohu.com/fedora/development/17/x86_64/os/
> > --selinux --noipv6'
> > in gnome-terminal and it seems to be working.
> > I just want to use neither preupgrade nor a burned DVD.)
> You are running anaconda to upgrade the same system it is running from? This is
> not, never has been and, most likely will never be supported. If you try this
> and it breaks you will be on your own to sort it out.
I'm not that foolish. I ran anaconda from another, separately installed Fedora 17 system to perform the upgrade from F16 to F17.
To be clear, this is caused by trying to use disks for the install that contain devices already in use somewhere on the system. That could be any of:
- mounted filesystems
- active swap space
- active md arrays
- active lvm
- active luks
Find what is mounted/active and deactivate it before starting the installer. It could be left over from previously failed install attempts on live media, or it could be systemd or udev or udisks automatically starting everything they can find. We're working on handling this automatically, but it's hard to keep up with all the various software that is bent on automatically starting everything it can.
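That deactivation checklist can be sketched as a short script. This is a hedged example, not part of the report: every device and mapping name is a placeholder, and with DRY_RUN=1 (the default here) it only prints the commands it would run.

```shell
# Deactivate anything that may be holding the target disks busy before
# starting the installer. Adjust all names for your own system.
DRY_RUN=${DRY_RUN:-1}            # set DRY_RUN=0 to actually run the commands

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run swapoff -a                    # active swap space
run umount -R /mnt/sysimage       # mounts left by a failed install attempt
run mdadm --stop --scan           # active md arrays
run vgchange -a n                 # active LVM volume groups
run cryptsetup close luks-example # an open LUKS mapping (placeholder name)
```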
My second try, being careful to check for mounts, succeeded. This is an experimental setup: an external disk box used by two different laptops, booted through USB. I had F17 Alpha working until this morning; it would not boot this AM, so I downloaded and burned the Beta. Updates are in progress.
The laptop in use has an old NVIDIA GeForce 520. It has been working with VESA and gnome3; now the nouveau driver seems to start, but gnome3 is not working.
Some of these reports seem to have something in common with bug 758159, "Loop devices not detachable/detached after umount", especially the cases where a live CD with cups was used.
So, this report got pretty confused and messy and then mostly died. It looks like two or three different bugs were fixed along the way, and the report is now dormant. Doesn't seem to be much point in keeping it open: let's close it. If anyone has a case which is still problematic with F19, it would probably be best filed as a new report. Thanks!