Bug 851275 - Fedora doesn't support btrfs RAID1 volumes properly / missing "btrfs device scan" prior to mounting leads to "open_ctree failed" error
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: systemd
Version: 17
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: systemd-maint
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-08-23 16:18 UTC by Jaromír Cápík
Modified: 2016-02-01 01:57 UTC
CC: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-07 22:42:54 UTC
Type: Bug
Embargoed:



Description Jaromír Cápík 2012-08-23 16:18:36 UTC
Description of problem:
btrfs RAID1 volumes do not get assembled: mounting them by volume UUID, or by the
first of the two member device names, fails with an "open_ctree failed" message in
dmesg until the "btrfs device scan" command has been run (apparently by design).
This is likely why the boot drops to the emergency shell when such a btrfs volume is listed by UUID in the fstab.
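
For illustration, the manual workaround looks like this (the mountpoint is a placeholder, assuming a matching fstab entry exists):

# mount /mnt/whatever      (fails; dmesg shows "open_ctree failed")
# btrfs device scan
# mount /mnt/whatever      (now succeeds)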

The following links point to the same problem experienced on other Linux
distributions:

https://features.opensuse.org/313130
http://www.spinics.net/lists/linux-btrfs/msg02902.html

I experienced similar behaviour on Mageia 2 and reported a similar bug in the Mageia bug tracker (https://bugs.mageia.org/show_bug.cgi?id=7117).

Please fix initscripts to call "/sbin/btrfs device scan" before mounting
volumes from fstab, or do whatever else is needed so that btrfs RAID1 volumes mount properly (by volume UUID) during boot. A sketch of the requested hook follows.
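
A minimal sketch of what such a hook could look like (its exact placement in the boot scripts is an assumption, not a patch):

# run before the fstab entries are mounted
if [ -x /sbin/btrfs ]; then
    /sbin/btrfs device scan >/dev/null 2>&1
fi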

Version-Release number of selected component (if applicable):
initscripts-9.37-1.fc17

How reproducible:
always

Steps to Reproduce (example for sda4 and sdb4):

1. Create btrfs RAID1 volume
# mkfs.btrfs -m raid1 -d raid1 /dev/sda4 /dev/sdb4

2. Create a new mountpoint
# mkdir /mnt/whatever

3. Use blkid to identify the btrfs RAID1 volume (illustrative output is shown after the steps)
# blkid

4. Define the volume in the fstab
UUID=01234567-89ab-cdef-0123-456789abcdef /mnt/whatever btrfs relatime 0 0

5. Reboot
# reboot

6. Check if the device gets mounted
...
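
Illustrative blkid output for step 3 (all UUIDs are placeholders); note that both members report the same filesystem UUID and differ only in their per-device UUID_SUB:

/dev/sda4: UUID="01234567-89ab-cdef-0123-456789abcdef" UUID_SUB="11111111-2222-3333-4444-555555555555" TYPE="btrfs"
/dev/sdb4: UUID="01234567-89ab-cdef-0123-456789abcdef" UUID_SUB="66666666-7777-8888-9999-aaaaaaaaaaaa" TYPE="btrfs"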
  

Actual results:
The boot drops into the emergency shell. Surprisingly, mounting the volume manually by UUID from the emergency shell works even without "btrfs device scan", so the problem may be more complicated than just a missing scan call.


Expected results:
The volume gets mounted without the boot dropping to the emergency shell.

Additional notes:
If I define the volume entry in fstab and then comment it out, the subsequent reboot is successful. When I then uncomment the fstab entry in the successfully booted system and try to mount the volume via its mountpoint, the mount fails with "open_ctree failed". That is why I assume the "btrfs device scan" call is missing. Perhaps the scan is triggered based on the fstab content, but in that case I cannot explain why the boot fails and I am dropped to the emergency shell; maybe something else is missing as well?
This can be tricky and lead to confusion.

Comment 1 Lennart Poettering 2012-09-13 14:01:41 UTC
Hmm, so the general problem here is that nobody knows yet how we should assemble btrfs raid arrays the right way. How should they be listed in fstab? Who tells the kernel about raid components showing up? What to do about degraded arrays?
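
For reference, one stopgap that sidesteps the discovery question is btrfs's device= mount option, which lists the member devices explicitly in fstab (the UUID and device names below are illustrative):

UUID=01234567-89ab-cdef-0123-456789abcdef /mnt/whatever btrfs device=/dev/sda4,device=/dev/sdb4,relatime 0 0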

Comment 2 Kay Sievers 2012-09-13 16:18:38 UTC
The current idea is to let udev register all discovered btrfs members with
the kernel.

All members create the same /dev/disk/by-uuid/ link. The registration
call to the kernel reports whether all members of the filesystem are
present and it is ready to be mounted. If the registration signals
readiness, we will set SYSTEMD_READY=1 and only then will systemd try to
mount the device.

This might need significant changes in systemd and therefore take some time
until it can be provided.
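
For context, a sketch of the kind of udev rule this approach implies (close to the 64-btrfs.rules file that later shipped with systemd; treat the details here as illustrative):

SUBSYSTEM!="block", GOTO="btrfs_end"
ACTION=="remove", GOTO="btrfs_end"
ENV{ID_FS_TYPE}!="btrfs", GOTO="btrfs_end"

# let the kernel know about this btrfs member and ask if the fs is complete
IMPORT{builtin}="btrfs ready $devnode"

# an incomplete multi-device filesystem is not ready to be mounted yet
ENV{ID_BTRFS_READY}=="0", ENV{SYSTEMD_READY}="0"

LABEL="btrfs_end"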

Comment 4 Fedora Update System 2012-09-20 19:54:28 UTC
systemd-190-1.fc18 has been submitted as an update for Fedora 18.
https://admin.fedoraproject.org/updates/systemd-190-1.fc18

Comment 5 Fedora Update System 2012-09-22 06:35:42 UTC
Package systemd-191-2.fc18, rtkit-0.11-3.fc18:
* should fix your issue,
* was pushed to the Fedora 18 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing systemd-191-2.fc18 rtkit-0.11-3.fc18'
as soon as you are able to, then reboot.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-14581/rtkit-0.11-3.fc18,systemd-191-2.fc18
then log in and leave karma (feedback).

Comment 6 Fedora Update System 2012-09-28 00:16:07 UTC
Package glibc-2.16-17.fc18, systemd-192-1.fc18, selinux-policy-3.11.1-23.fc18, rtkit-0.11-3.fc18:
* should fix your issue,
* was pushed to the Fedora 18 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing glibc-2.16-17.fc18 systemd-192-1.fc18 selinux-policy-3.11.1-23.fc18 rtkit-0.11-3.fc18'
as soon as you are able to, then reboot.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-14581/selinux-policy-3.11.1-23.fc18,rtkit-0.11-3.fc18,systemd-192-1.fc18,glibc-2.16-17.fc18
then log in and leave karma (feedback).

Comment 7 Fedora Update System 2012-10-01 20:07:50 UTC
Package glibc-2.16-17.fc18, rtkit-0.11-3.fc18, systemd-193-1.fc18:
* should fix your issue,
* was pushed to the Fedora 18 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing glibc-2.16-17.fc18 rtkit-0.11-3.fc18 systemd-193-1.fc18'
as soon as you are able to, then reboot.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-14581/rtkit-0.11-3.fc18,systemd-193-1.fc18,glibc-2.16-17.fc18
then log in and leave karma (feedback).

