Created attachment 692881 [details]
Full log.

+++ This bug was initially created as a clone of Bug #863978 +++

Description of problem:

If you create a btrfs filesystem on a partition, mount and unmount it, then immediately create an ntfs-3g filesystem on the same partition and mount it, the second mount fails:

$ guestfish -N part <<EOF
mkfs btrfs /dev/sda1
mount /dev/sda1 /
umount /
mkfs ntfs /dev/sda1
mount /dev/sda1 /
EOF
*stdin*:5: libguestfs: error: mount: /dev/sda1 on / (options: ''): mount: wrong fs type, bad option, bad superblock on /dev/sda1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

I believe this is related to bug 863978, but it is sufficiently different that I have created a new BZ for it.

Attached is the full log.
Note that if you change ntfs -> ext2, it no longer fails, so there is some sort of bizarre interaction between btrfs and FUSE (used by ntfs-3g). If you change btrfs -> ext2, it also no longer fails, so the error is provoked by the preceding btrfs mkfs/mount.
Formatting btrfs volumes without erasing certain (or maybe all) superblocks seems to cause problems, and I wonder if this is related to that. In bug 889888, I found that even when using wipefs -a on a btrfs-formatted partition, while the signature was correctly killed, the partition could be resurrected into a zombie btrfs by formatting it as ext4. In my case, all tools including mount saw it as ext4, but btrfs-progs saw it as btrfs. In your case it seems that NTFS-formatting a btrfs-formatted partition is causing additional confusion.
(In reply to comment #2)
> Formatting btrfs volumes without erasing certain (or maybe all) superblocks
> seems to cause problems. I wonder if this is related to that. In bug 889888,
> I found that even when using wipefs -a on a btrfs formatted partition, while
> it was correctly killed it could be resurrected into a zombie btrfs by
> formatting it as ext4.
>
> In my case, all tools including mount saw it as ext4, but btrfs-progs saw it
> as btrfs. In your case it seems that NTFS formatting a btrfs formatted
> partition is causing additional confusion.

That is interesting, because inserting a 'wipefs' call before the second mkfs does indeed fix the problem. Here is my new test script:

guestfish <<EOF
sparse test1.img 1G
run
part-init /dev/sda mbr
part-add /dev/sda p 64 999999    # create /dev/sda1
part-add /dev/sda p 1000000 -64  # create /dev/sda2
mkfs btrfs /dev/sda1
mount /dev/sda1 /
umount /
wipefs /dev/sda1                 # See note
mkfs ntfs /dev/sda1
mount /dev/sda1 /
EOF

Note that I have changed it to create two partitions this time. That is so I can run wipefs on either partition, to prove that adding the wipefs call didn't just fix things by changing the timing. The script succeeds with 'wipefs /dev/sda1' and still fails with 'wipefs /dev/sda2'.

However, note that this is not a dupe of bug 889888. There is still a bug in btrfs, probably in the kernel's detection of the btrfs superblock magic. Other filesystems don't look at backup superblocks unless you explicitly tell them to.
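The "zombie" behaviour can be simulated without a real device or root access. The sketch below assumes the btrfs on-disk layout (primary superblock at 64 KiB, first backup copy at 64 MiB, magic string '_BHRfS_M' at byte 64 of each superblock) and works on a scratch image file rather than a partition: it wipes only the primary signature, which is roughly what wipefs does, and shows that the backup magic survives.

```shell
#!/bin/sh
set -e
img=$(mktemp)
truncate -s 65M "$img"    # sparse scratch "disk"

# Plant the btrfs magic at the primary (64 KiB + 64) and first
# backup (64 MiB + 64) superblock magic positions.
printf '_BHRfS_M' | dd of="$img" bs=1 seek=65600    conv=notrunc 2>/dev/null
printf '_BHRfS_M' | dd of="$img" bs=1 seek=67108928 conv=notrunc 2>/dev/null

# Wipe only the primary signature -- the part wipefs -a removes.
dd if=/dev/zero of="$img" bs=1 seek=65600 count=8 conv=notrunc 2>/dev/null

primary=$(dd if="$img" bs=1 skip=65600 count=8 2>/dev/null | od -An -tx1 | tr -d ' \n')
backup=$(dd if="$img" bs=1 skip=67108928 count=8 2>/dev/null)
echo "primary magic after wipe: $primary"   # all zero bytes
echo "backup magic after wipe:  $backup"    # still _BHRfS_M
rm -f "$img"
```

Because the backup copy is untouched, any tool that falls back to scanning backup superblocks can still identify the image as btrfs, which matches the resurrection seen in bug 889888.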
https://www.redhat.com/archives/libguestfs/2013-February/msg00001.html
http://www.spinics.net/lists/linux-btrfs/msg21197.html

If the first superblock's magic field is zeroed, btrfs considers the volume intentionally destroyed. If the first superblock merely appears damaged, it goes looking for the backup superblocks, and if one of those is valid it (apparently) considers the btrfs volume valid. So it seems mkfs.ntfs damages the btrfs superblock in a way that leaves it looking damaged rather than destroyed, which causes the confusion for mount.
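Given that rule, the robust way to retire a btrfs volume is to zero the primary magic outright rather than leave it merely damaged. A minimal sketch of that zeroing, simulated on a scratch file (the offset 65600 = 64 KiB + 64 is an assumption taken from the btrfs on-disk format; on a real disk you would target the partition device, and wipefs is the proper tool):

```shell
#!/bin/sh
set -e
img=$(mktemp)
truncate -s 1M "$img"

# Simulate a stale btrfs signature at the primary superblock position.
printf '_BHRfS_M' | dd of="$img" bs=1 seek=65600 conv=notrunc 2>/dev/null

# Zero the 8 magic bytes -- a zeroed magic means "intentionally
# destroyed", so the kernel will not go hunting for backup copies.
dd if=/dev/zero of="$img" bs=1 seek=65600 count=8 conv=notrunc 2>/dev/null

magic=$(dd if="$img" bs=1 skip=65600 count=8 2>/dev/null | od -An -tx1 | tr -d ' \n')
echo "magic after zeroing: $magic"
rm -f "$img"
```

An overwrite by another mkfs that leaves garbage (rather than zeroes) in those 8 bytes would instead trip the "damaged, try the backups" path described above.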
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle. Changing version to '19'. (As we did not run this process for some time, it may also affect pre-Fedora 19 development cycle bugs. We are very sorry. It will help us with cleanup during the Fedora 19 End Of Life. Thank you.) More information and the reason for this action is here: https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19
Do you need any input from btrfs? Seems like you just need to use wipefs.
(In reply to comment #7)
> Do you need any input from btrfs? Seems like you just need to use wipefs.

We have worked around this by calling wipefs before each time we use mkfs.
I would expect mkfs to wipe out previous signatures, so this is likely something that mkfs will need to fix one day.