Bug 1829792 - kernel cannot mount XFS in read-only mode from a read-only backing device
Summary: kernel cannot mount XFS in read-only mode from a read-only backing device
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 32
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-30 11:47 UTC by Zbigniew Jędrzejewski-Szmek
Modified: 2021-02-13 17:51 UTC
CC List: 19 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-11 20:56:25 UTC
Type: Bug
Embargoed:



Description Zbigniew Jędrzejewski-Szmek 2020-04-30 11:47:15 UTC
1. Please describe the problem:
When attempting to mount an XFS file system with 'mount -r', the mount fails if the backing device is read-only and spews an oops-like warning into the logs:

XFS (dm-7): Mounting V5 Filesystem
------------[ cut here ]------------
generic_make_request: Trying to write to read-only block-device loop6p3 (partno 3)
WARNING: CPU: 3 PID: 905878 at block/blk-core.c:800 generic_make_request_checks+0xdd/0x5f0
Modules linked in: xfs vhost_net vhost tap ipt_REJECT nf_reject_ipv4 xt_conntrack xt_CHECKSUM ip6table_mangle ip6table_nat iptable_mangle ebtable_>
 bluetooth snd_soc_sst_dsp cfg80211 snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi snd_soc_core snd_hda_codec_hdmi snd_hda_codec_conexant >
 [last unloaded: ip_tables]
CPU: 3 PID: 905878 Comm: mount Tainted: G        W         5.6.4-300.fc32.x86_64 #1
Hardware name: LENOVO 20FB003RGE/20FB003RGE, BIOS N1FET64W (1.38 ) 07/25/2018
RIP: 0010:generic_make_request_checks+0xdd/0x5f0
Code: 2c 03 00 00 48 89 ef 48 8d 74 24 08 c6 05 50 76 35 01 01 e8 d5 5b 01 00 48 c7 c7 c8 82 3a 9f 48 89 c6 44 89 f2 e8 9a 10 c2 ff <0f> 0b 8b 45 >
RSP: 0018:ffffbf91c3b8b978 EFLAGS: 00010296
RAX: 0000000000000052 RBX: ffff9c844e3762a0 RCX: 0000000000000007
RDX: 00000000fffffff8 RSI: 0000000000000096 RDI: ffff9c845a599cc0
RBP: ffff9c8271cee2f0 R08: 000000000000234d R09: 0000000000000003
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000100000
R13: ffff9c8245bd9400 R14: 0000000000000003 R15: ffff9c8271cee2f0
FS:  00007f1f45325c80(0000) GS:ffff9c845a580000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe2bf340000 CR3: 0000000004e80003 CR4: 00000000003606e0
Call Trace:
 ? blkg_lookup_create+0x3f/0xa0
 generic_make_request+0x1a/0x2f0
 ? __map_bio+0x42/0x1a0
 __split_and_process_non_flush+0x182/0x1f0
 __split_and_process_bio+0xb4/0x210
 ? kmem_cache_alloc+0x168/0x220
 dm_process_bio+0x94/0x230
 ? generic_make_request_checks+0x300/0x5f0
 dm_make_request+0x3c/0x110
 generic_make_request+0xbb/0x2f0
 xfs_rw_bdev+0x175/0x200 [xfs]
 xlog_do_io+0x8a/0x130 [xfs]
 ? xlog_add_record+0x37/0xc0 [xfs]
 xlog_write_log_records+0x186/0x250 [xfs]
 xlog_find_tail+0x21b/0x340 [xfs]
 ? try_to_wake_up+0x26a/0x770
 xlog_recover+0x1c/0x140 [xfs]
 xfs_log_mount+0x156/0x2b0 [xfs]
 xfs_mountfs+0x431/0x8a0 [xfs]
 ? xfs_mru_cache_create+0x12d/0x180 [xfs]
 xfs_fc_fill_super+0x35f/0x590 [xfs]
 ? xfs_setup_devices+0x80/0x80 [xfs]
 get_tree_bdev+0x15c/0x250
 vfs_get_tree+0x25/0xb0
 do_mount+0x7b7/0xa90
 ? memdup_user+0x4e/0x90
 __x64_sys_mount+0x8e/0xd0
 do_syscall_64+0x5b/0xf0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f1f4557852e
Code: 48 8b 0d 6d 09 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 >
RSP: 002b:00007fff986f5fc8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f1f4569f204 RCX: 00007f1f4557852e
RDX: 000055d2d36988c0 RSI: 000055d2d3692c40 RDI: 000055d2d36938f0
RBP: 000055d2d36929b0 R08: 0000000000000000 R09: 00007f1f45639a40
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000055d2d36938f0 R14: 000055d2d36988c0 R15: 000055d2d36929b0
---[ end trace ffe2cc9b2145a3b7 ]---
blk_update_request: I/O error, dev loop6, sector 9007239 op 0x1:(WRITE) flags 0x5800 phys_seg 128 prio class 0
blk_update_request: I/O error, dev loop6, sector 9008263 op 0x1:(WRITE) flags 0x1800 phys_seg 128 prio class 0
blk_update_request: I/O error, dev loop6, sector 9009287 op 0x1:(WRITE) flags 0x5800 phys_seg 128 prio class 0
blk_update_request: I/O error, dev loop6, sector 9010311 op 0x1:(WRITE) flags 0x1800 phys_seg 128 prio class 0
XFS (dm-7): log recovery write I/O error at daddr 0x1057 len 4096 error -5
XFS (dm-7): failed to locate log tail
XFS (dm-7): log mount/recovery failed: error -5
XFS (dm-7): log mount failed
XFS (dm-7): Mounting V5 Filesystem
blk_update_request: I/O error, dev loop6, sector 9007239 op 0x1:(WRITE) flags 0x5800 phys_seg 128 prio class 0
blk_update_request: I/O error, dev loop6, sector 9008263 op 0x1:(WRITE) flags 0x1800 phys_seg 128 prio class 0
blk_update_request: I/O error, dev loop6, sector 9009287 op 0x1:(WRITE) flags 0x5800 phys_seg 128 prio class 0
blk_update_request: I/O error, dev loop6, sector 9010311 op 0x1:(WRITE) flags 0x1800 phys_seg 128 prio class 0
XFS (dm-7): log recovery write I/O error at daddr 0x1057 len 4096 error -5
XFS (dm-7): failed to locate log tail
XFS (dm-7): log mount/recovery failed: error -5
XFS (dm-7): log mount failed

2. What is the Version-Release number of the kernel:
5.6.4-300.fc32.x86_64

3. Did it work previously in Fedora? If so, what kernel version did the issue
   *first* appear?  Old kernels are available for download at
   https://koji.fedoraproject.org/koji/packageinfo?packageID=8 :
No idea.

4. Can you reproduce this issue? If so, please provide the steps to reproduce
   the issue below:

# download the image to the path losetup expects below
curl -L https://download.fedoraproject.org/pub/fedora/linux/releases/32/Server/aarch64/images/Fedora-Server-32-1.6.aarch64.raw.xz | xzcat > /var/tmp/Fedora-Server-32-1.6.aarch64.raw
sudo losetup -f -r --show -P /var/tmp/Fedora-Server-32-1.6.aarch64.raw   # attach read-only, scan partitions
sudo vgchange -ay fedora                                                 # activate the image's LVM volume group
sudo mount /dev/fedora/root -r -o x-mount.mkdir /tmp/root                # fails with the warning above

5. Does this problem occur with the latest Rawhide kernel? To install the
   Rawhide kernel, run ``sudo dnf install fedora-repos-rawhide`` followed by
   ``sudo dnf update --enablerepo=rawhide kernel``:
Didn't check that yet.

6. Are you running any modules not shipped directly with Fedora's kernel?:
No.

7. Please attach the kernel logs. You can get the complete kernel log
   for a boot with ``journalctl --no-hostname -k > dmesg.txt``. If the
   issue occurred on a previous boot, use the journalctl ``-b`` flag.
See above.

Comment 1 Zbigniew Jędrzejewski-Szmek 2020-04-30 11:52:23 UTC
To clarify: ideally, it would be possible to simply mount the file system read-only from a
read-only image, especially if it has been cleanly unmounted (which I assume is true in this
case, since this is an official Fedora image). If that is not possible, a reasonable error should be printed.

Comment 2 Eric Sandeen 2020-12-08 20:08:09 UTC
FWIW, you can do "mount -o ro,norecovery" for a read-only snapshot device with XFS on it.

xfs_freeze puts a dummy transaction in the log to force a log replay that clears out orphan inodes, but this makes a plain read-only mount on a read-only device fail because of the required recovery.

I'd like to get this fixed upstream, but for now that's the workaround.

(FWIW the kernel message is not an OOPS, it's just a very verbose warning from the block layer)
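For reference, a minimal sketch of the workaround applied to the reproduction steps from the description (device and path names are taken from there; untested):

sudo losetup -f -r --show -P /var/tmp/Fedora-Server-32-1.6.aarch64.raw
sudo vgchange -ay fedora
# norecovery skips log replay, so nothing attempts to write to the
# read-only device; the filesystem can only be mounted read-only this way
sudo mount -o ro,norecovery,x-mount.mkdir /dev/fedora/root /tmp/root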

Comment 3 Eric Sandeen 2021-02-11 20:56:25 UTC
Also, ideally, images such as https://download.fedoraproject.org/pub/fedora/linux/releases/32/Server/aarch64/images/Fedora-Server-32-1.6.aarch64.raw.xz would not be frozen snapshots, but properly unmounted and fully quiesced; that would also avoid this behavior.

Anyway, this isn't really a bug per se: XFS requires "mount -o ro,norecovery" on a read-only device with a dirty journal (which, somewhat surprisingly, is the case with a frozen image).
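For illustration, a minimal sketch of how freezing produces an image whose log needs recovery, per the mechanism described in comment 2 (the file names, size, and mount point are arbitrary assumptions; requires xfsprogs):

truncate -s 512M /var/tmp/test.img
mkfs.xfs -f -q /var/tmp/test.img
sudo mount -o loop /var/tmp/test.img /mnt
# freeze, as an image-build pipeline might before snapshotting; this leaves
# a dummy transaction in the log that demands replay on the next mount
sudo xfs_freeze -f /mnt
cp /var/tmp/test.img /var/tmp/frozen.img    # the "snapshot"
sudo xfs_freeze -u /mnt
sudo umount /mnt
# a plain ro mount of the frozen copy on a read-only loop device fails,
# because log recovery would have to write; norecovery avoids the writes
LOOP=$(sudo losetup -f -r --show /var/tmp/frozen.img)
sudo mount -r "$LOOP" /mnt                  # fails with I/O errors as above
sudo mount -o ro,norecovery "$LOOP" /mnt    # succeeds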

Comment 4 Zbigniew Jędrzejewski-Szmek 2021-02-12 07:45:46 UTC
I created https://pagure.io/pungi/issue/1495 against pungi.

Comment 5 Zbigniew Jędrzejewski-Szmek 2021-02-13 17:51:13 UTC
I filed https://github.com/redhat-imaging/imagefactory/issues/444 against imagefactory.

