Bug 1665248 - Kernel panic in msgr-worker while running with kernel 3.10.0-957.1.3.el7.x86_64
Summary: Kernel panic in msgr-worker while running with kernel 3.10.0-957.1.3.el7.x86_64
Keywords:
Status: CLOSED DUPLICATE of bug 1647460
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: 4.0
Assignee: Ilya Dryomov
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-10 19:29 UTC by rom
Modified: 2019-02-12 20:56 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-12 20:56:18 UTC
Embargoed:



Description rom 2019-01-10 19:29:24 UTC
After moving to RHEL 7.6 about a week ago, we suddenly started encountering kernel panics during writes performed via rbd.

[ 8941.026611] libceph: mon2 2.104.194.77:6789 session established

[ 8941.033738] libceph: client50241 fsid ea0df043-7b25-4447-a43d-e9b2af8fe069

[ 8941.053063] rbd: rbd0: capacity 1073741824 features 0x1

[ 8941.252456] XFS (rbd0): Mounting V5 Filesystem

[ 8941.362658] XFS (rbd0): Ending clean mount

[ 8947.674186] rbd: rbd1: capacity 1073741824 features 0x1

[ 8952.568957] rbd: rbd2: capacity 1073741824 features 0x1

[ 8953.147799] rbd2: p1

[ 8974.184516] XFS (rbd0): Unmounting Filesystem

[ 8974.192754] usercopy: kernel memory exposure attempt detected from ffff8d83b56f7000 (kmalloc-512) (1024 bytes)

[ 8974.203999] ------------[ cut here ]------------

[ 8974.209149] kernel BUG at mm/usercopy.c:72!

[ 8974.213817] invalid opcode: 0000 [#1] SMP 

[ 8974.218410] Modules linked in: ip6table_raw xt_physdev xt_CHECKSUM veth iptable_mangle iptable_raw ebtable_filter rbd libceph ebtables dns_resolver ip6table_filter ip6_tables fuse btrfs raid6_pq xor msdos fat xt_comment mlx4_en(OE) mlx4_core(OE) xt_multiport ipt_REJECT nf_reject_ipv4 nf_conntrack_netlink nfnetlink iptable_nat xt_addrtype iptable_filter xt_conntrack br_netfilter bridge stp llc xfs openvswitch nf_conntrack_ipv6 nf_nat_ipv6 nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_defrag_ipv6 nf_nat mlx5_core(OE) nf_conntrack mlxfw(OE) iTCO_wdt iTCO_vendor_support intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass pcspkr joydev sg lpc_ich i2c_i801 mei_me mei ioatdma ipmi_si ipmi_devintf ipmi_msghandler acpi_pad acpi_power_meter dm_multipath ip_tables ext4 mbcache jbd2 dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c sd_mod crc_t10dif crct10dif_generic crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel igb mgag200 ghash_clmulni_intel dca drm_kms_helper ahci aesni_intel syscopyarea lrw gf128mul sysfillrect glue_helper sysimgblt fb_sys_fops ablk_helper ttm cryptd libahci ptp drm mlx_compat(OE) pps_core i2c_algo_bit libata drm_panel_orientation_quirks devlink wmi scsi_transport_iscsi sunrpc dm_mirror dm_region_hash dm_log dm_mod [last unloaded: mlx4_core]

[ 8974.348153] CPU: 4 PID: 97983 Comm: msgr-worker-1 Kdump: loaded Tainted: G IOE ------------ 3.10.0-957.1.3.el7.x86_64 #1

[ 8974.361450] Hardware name: Intel Corporation S2600KP/S2600KP, BIOS SE5C610.86B.01.01.0005.101720141054 10/17/2014

[ 8974.372908] task: ffff8d8d676f1040 ti: ffff8d8cb8b70000 task.ti: ffff8d8cb8b70000

[ 8974.381263] RIP: 0010:[<ffffffffbd83e167>] [<ffffffffbd83e167>] __check_object_size+0x87/0x250

[ 8974.390991] RSP: 0018:ffff8d8cb8b73b98 EFLAGS: 00010246

[ 8974.396921] RAX: 0000000000000062 RBX: ffff8d83b56f7000 RCX: 0000000000000000

[ 8974.404886] RDX: 0000000000000000 RSI: ffff8d9378913898 RDI: ffff8d9378913898

[ 8974.412851] RBP: ffff8d8cb8b73bb8 R08: 0000000000000000 R09: 0000000000000000

[ 8974.420814] R10: 0000000000000e17 R11: ffff8d8cb8b73896 R12: 0000000000000400

[ 8974.428777] R13: 0000000000000001 R14: ffff8d83b56f7400 R15: 0000000000000400

[ 8974.436739] FS: 00007fe4c0dc3700(0000) GS:ffff8d9378900000(0000) knlGS:0000000000000000

[ 8974.445768] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033

[ 8974.452180] CR2: 00007f82a46f0018 CR3: 00000007867f8000 CR4: 00000000001607e0

[ 8974.460142] Call Trace:

[ 8974.462875] [<ffffffffbd98c0dd>] memcpy_toiovec+0x4d/0xb0

[ 8974.469015] [<ffffffffbdc2a858>] skb_copy_datagram_iovec+0x128/0x280

[ 8974.476193] [<ffffffffbdc9172a>] tcp_recvmsg+0x22a/0xb30

[ 8974.482218] [<ffffffffbdcc00e0>] inet_recvmsg+0x80/0xb0

[ 8974.488146] [<ffffffffbdc186ec>] sock_aio_read.part.9+0x14c/0x170

[ 8974.495043] [<ffffffffbdc18731>] sock_aio_read+0x21/0x30

[ 8974.501069] [<ffffffffbd840743>] do_sync_read+0x93/0xe0

[ 8974.507002] [<ffffffffbd841225>] vfs_read+0x145/0x170

[ 8974.512737] [<ffffffffbd84203f>] SyS_read+0x7f/0xf0

[ 8974.518277] [<ffffffffbdd74ddb>] system_call_fastpath+0x22/0x27

[ 8974.524978] Code: 45 d1 48 c7 c6 d4 b6 07 be 48 c7 c1 e0 4b 08 be 48 0f 45 f1 49 89 c0 4d 89 e1 48 89 d9 48 c7 c7 d0 1a 08 be 31 c0 e8 20 d5 51 00 <0f> 0b 0f 1f 80 00 00 00 00 48 c7 c0 00 00 60 bd 4c 39 f0 73 0d 

[ 8974.546748] RIP [<ffffffffbd83e167>] __check_object_size+0x87/0x250

[ 8974.553851] RSP <ffff8d8cb8b73b98>


kernel: 3.10.0-957.1.3.el7.x86_64


[root@stratonode1 ~]# modinfo libceph
filename: /lib/modules/3.10.0-957.1.3.el7.x86_64/kernel/net/ceph/libceph.ko.xz
license: GPL
description: Ceph core library
author: Patience Warnick <patience>
author: Yehuda Sadeh <yehuda.net>
author: Sage Weil <sage>
retpoline: Y
rhelversion: 7.6
srcversion: 4F8CE6AEFA99B11C267981D
depends: libcrc32c,dns_resolver
intree: Y
vermagic: 3.10.0-957.1.3.el7.x86_64 SMP mod_unload modversions 

[root@stratonode1 ~]# modinfo rbd
filename: /lib/modules/3.10.0-957.1.3.el7.x86_64/kernel/drivers/block/rbd.ko.xz
license: GPL
description: RADOS Block Device (RBD) driver
author: Jeff Garzik <jeff>
author: Yehuda Sadeh <yehuda.net>
author: Sage Weil <sage>
author: Alex Elder <elder>
retpoline: Y
rhelversion: 7.6
srcversion: 5386BBBD00C262C66CB81F5
depends: libceph
intree: Y
vermagic: 3.10.0-957.1.3.el7.x86_64 SMP mod_unload modversions 
parm: single_major:Use a single major number for all rbd devices (default: true) (bool)
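
For context on the oops above: the "usercopy: kernel memory exposure attempt detected ... (kmalloc-512) (1024 bytes)" message comes from the hardened usercopy checks that the 7.6 kernel runs on every copy from kernel memory to user space, which require the copy to stay within a single heap object. Below is a simplified sketch of that bounds check; the function name and details are illustrative and do not reproduce the exact RHEL source.

/*
 * Simplified sketch of the hardened-usercopy heap bounds check
 * (mm/usercopy.c plus the slab allocator helper).  Names and details
 * are illustrative, not the exact RHEL 7.6 code.
 */
static void sketch_check_heap_object(const void *ptr, unsigned long n)
{
	struct page *page = virt_to_head_page(ptr);
	struct kmem_cache *s;
	unsigned long offset;

	if (!PageSlab(page))
		return;			/* not a slab allocation, nothing to check */

	s = page->slab_cache;
	offset = (ptr - page_address(page)) % s->size;

	/*
	 * The copy must fit inside one allocated object.  In the oops above
	 * the source object comes from kmalloc-512 but the requested copy is
	 * 1024 bytes, so this test fails and the kernel hits the BUG at
	 * mm/usercopy.c.
	 */
	if (offset > s->object_size || n > s->object_size - offset)
		BUG();			/* "kernel memory exposure attempt detected" */
}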

Comment 1 Ilya Dryomov 2019-01-10 20:00:30 UTC
This has been fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1647460.  I'll try to expedite the backport to 7.6.z.

Comment 2 rom 2019-01-10 20:01:48 UTC
Nice!!! I don't have access to that bug. How can I see its content?

Comment 3 rom 2019-01-10 20:02:45 UTC
Also, any idea which kernel I have to go back to in the meantime? I cannot keep my system up and running with such frequent panics.

Comment 4 Ilya Dryomov 2019-01-10 20:07:28 UTC
Any 7.5 (i.e. 862) kernel.

Comment 5 rom 2019-01-10 20:13:41 UTC
Hmm, will I be able to run a 7.5 kernel on a 7.6 distro? Or should I revert the distro as well?
Can you elaborate on the root cause? Is there any way I can avoid it, short of not using rbd at all?

Comment 6 Ilya Dryomov 2019-01-10 20:31:39 UTC
This bug is specific to the co-location scenario.  If you avoid co-locating the kernel client on the OSD nodes (i.e. map your rbd devices on a separate node), you shouldn't see it.

The root cause is an unfortunate interaction between one of the new kernel hardening asserts in 7.6 and the way loopback works.  Nothing is actually wrong, but under certain circumstances the new assert gets triggered when the OSD attempts to receive packets from the kernel client over loopback.

I'm not sure about 7.5 kernel on 7.6 distro.  Reverting the distro is obviously more reliable, but if you can avoid co-location for now, you shouldn't have to.
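
To make the loopback interaction described above a bit more concrete: when the kernel client transmits slab-backed pages with sendpage, the network stack may merge neighbouring chunks that live on the same page into a single skb fragment. The sketch below is modelled on the skb_can_coalesce() test in include/linux/skbuff.h; it is an illustration of the mechanism inferred from the oops and the fix title, not the exact RHEL code. Two adjacent 512-byte slab objects satisfy this test and produce one 1024-byte fragment; when the co-located OSD receives that fragment over loopback, the copy in skb_copy_datagram_iovec() spans two kmalloc-512 objects and trips the hardening assert.

/*
 * Rough sketch of the fragment-coalescing test in the network stack,
 * modelled on skb_can_coalesce() from include/linux/skbuff.h.  If the
 * new chunk starts exactly where the previous fragment on the same
 * page ends, the two are merged into one larger fragment.
 */
static bool sketch_can_coalesce(struct sk_buff *skb, int i,
				const struct page *page, int off)
{
	if (i) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];

		/* Two adjacent kmalloc-512 objects on one slab page pass
		 * this check, yielding a single 1024-byte fragment. */
		return page == skb_frag_page(frag) &&
		       off == frag->page_offset + skb_frag_size(frag);
	}
	return false;
}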

Comment 7 rom 2019-01-10 20:36:18 UTC
Got it. Any ETA for the fix?

Comment 8 Ilya Dryomov 2019-01-11 09:27:59 UTC
Nothing definitive, watch for "libceph: fall back to sendmsg for slab pages" in 7.6 advisories.

BTW another workaround would be to temporarily switch to ext4 on top of your rbd devices instead of xfs.  Again, nothing is actually wrong, but it just so happens that one of the conditions needed to trigger this assert should never be true with ext4, so if your images are ephemeral or can be easily recreated with ext4, you can keep on co-locating.
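
For reference, here is a sketch of what the "fall back to sendmsg for slab pages" fix does in libceph's TCP send helper (net/ceph/messenger.c): slab-backed pages are kept out of the socket's sendpage path, so their fragments are never coalesced in a way that trips the receive-side check. This is an approximation based on the patch title quoted above; the actual patch may differ in detail.

/*
 * Sketch of the "fall back to sendmsg for slab pages" approach in
 * libceph's TCP send helper.  Approximate; the real patch may differ.
 */
static int sketch_tcp_sendpage(struct socket *sock, struct page *page,
			       int offset, size_t size, bool more)
{
	int flags = MSG_DONTWAIT | MSG_NOSIGNAL | (more ? MSG_MORE : MSG_EOR);
	ssize_t ret;

	/*
	 * Slab pages must not go through sendpage: the stack may coalesce
	 * neighbouring slab objects into one fragment, and the receiver's
	 * copy of that fragment then trips the hardened usercopy assert.
	 * Use the regular sendmsg-style path for such pages instead.
	 */
	if (page_count(page) >= 1 && !PageSlab(page))
		ret = sock->ops->sendpage(sock, page, offset, size, flags);
	else
		ret = sock_no_sendpage(sock, page, offset, size, flags);

	if (ret == -EAGAIN)
		ret = 0;		/* caller treats EAGAIN as "sent nothing" */
	return ret;
}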

Comment 9 Jason Dillaman 2019-02-12 20:56:18 UTC

*** This bug has been marked as a duplicate of bug 1647460 ***

