Bug 1902412 - [kernel-rt] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:3/1367837 observed with blktests nvme-tcp tests
Summary: [kernel-rt] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:3/1367837 observed with blktests nvme-tcp tests
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: kernel-rt
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Juri Lelli
QA Contact: Zhang Yi
URL:
Whiteboard:
Depends On: 1921591
Blocks: 1903942
 
Reported: 2020-11-28 16:39 UTC by Zhang Yi
Modified: 2021-11-09 18:33 UTC
CC List: 9 users

Fixed In Version: kernel-rt-4.18.0-307.rt7.88.el8
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-09 17:28:30 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: (none)


Links:
  System:       Red Hat Product Errata
  ID:           RHSA-2021:4140
  Private:      0
  Priority:     None
  Status:       None
  Summary:      None
  Last Updated: 2021-11-09 17:29:31 UTC

Description Zhang Yi 2020-11-28 16:39:20 UTC
Description of problem:
BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:3/1367837 observed with blktests nvme-tcp tests
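
For context, this splat comes from the CONFIG_DEBUG_PREEMPT sanity check:
calling smp_processor_id() while preemption is enabled is flagged because the
task may migrate to another CPU right after the value is read, making the
result stale. A minimal sketch of the offending pattern and the usual safe
alternatives (illustration only; the function names here are hypothetical,
not the actual nvme_tcp code):

#include <linux/smp.h>
#include <linux/printk.h>

static void racy_example(void)
{
        /* Triggers "BUG: using smp_processor_id() in preemptible" when
         * CONFIG_DEBUG_PREEMPT is set and the caller can be preempted.
         */
        int cpu = smp_processor_id();

        pr_info("running on CPU %d (may already be stale)\n", cpu);
}

static void safe_examples(void)
{
        int cpu;

        /* 1. Pin the task while the CPU number is actually used. */
        cpu = get_cpu();
        pr_info("pinned to CPU %d\n", cpu);
        put_cpu();

        /* 2. If the value is only a best-effort hint (as in the nvme-tcp
         *    inline-send heuristic), raw_smp_processor_id() reads it
         *    without the debug check.
         */
        cpu = raw_smp_processor_id();
        pr_info("CPU hint: %d\n", cpu);
}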

Version-Release number of selected component (if applicable):
4.18.0-252.rt7.17.el8.dt4.x86_64

How reproducible:
100%

Steps to Reproduce:
1. git clone https://github.com/yizhanglinux/blktests.git
2. nvme_trtype=tcp ./check nvme/003
3.

Actual results:


Expected results:




Additional info:
Found this issue with CKI jobs:
beaker job:
https://beaker.engineering.redhat.com/recipes/9114861
https://beaker.engineering.redhat.com/recipes/9114806

[ 8238.060525] run blktests nvme/003 at 2020-11-28 09:19:14 
[ 8238.102199] TECH PREVIEW: NVMe/TCP Target may not be fully supported. 
[ 8238.102199] Please review provided documentation for limitations. 
[ 8238.134232] TECH PREVIEW: NVMe/TCP may not be fully supported. 
[ 8238.134232] Please review provided documentation for limitations. 
[ 8238.172104] nvmet: adding nsid 1 to subsystem blktests-subsystem-1 
[ 8238.179856] nvmet_tcp: enabling port 0 (127.0.0.1:4420) 
[ 8238.189968] nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:8782c6e0-b72c-4f32-bb9d-d5cdf4ce9b8d. 
[ 8238.205482] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 127.0.0.1:4420 
[ 8238.205582] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:3/1367837 
[ 8238.205592] caller is nvme_tcp_submit_async_event+0x117/0x150 [nvme_tcp] 
[ 8238.205596] CPU: 34 PID: 1367837 Comm: kworker/u129:3 Kdump: loaded Tainted: G                 --------- -t - 4.18.0-252.rt7.17.el8.dt4.x86_64 #1 
[ 8238.205597] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8238.205606] Workqueue: nvme-wq nvme_async_event_work [nvme_core] 
[ 8238.205609] Call Trace: 
[ 8238.205619]  dump_stack+0x5c/0x80 
[ 8238.205625]  check_preemption_disabled.cold.0+0x46/0x51 
[ 8238.205629]  nvme_tcp_submit_async_event+0x117/0x150 [nvme_tcp] 
[ 8238.205634]  nvme_async_event_work+0x5d/0xc0 [nvme_core] 
[ 8238.205640]  process_one_work+0x18f/0x420 
[ 8238.205643]  worker_thread+0x30/0x370 
[ 8238.205645]  ? process_one_work+0x420/0x420 
[ 8238.205646]  kthread+0x15d/0x180 
[ 8238.205648]  ? __kthread_parkme+0xa0/0xa0 
[ 8238.205650]  ret_from_fork+0x22/0x40 
[ 8248.309843] nvme nvme0: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery" 
[ 8249.017813] run blktests nvme/004 at 2020-11-28 09:19:25 
[ 8249.046650] TECH PREVIEW: NVMe/TCP Target may not be fully supported. 
[ 8249.046650] Please review provided documentation for limitations. 
[ 8249.069776] TECH PREVIEW: NVMe/TCP may not be fully supported. 
[ 8249.069776] Please review provided documentation for limitations. 
[ 8249.114881] nvmet: adding nsid 1 to subsystem blktests-subsystem-1 
[ 8249.128409] nvmet_tcp: enabling port 0 (127.0.0.1:4420) 
[ 8249.138210] nvmet: creating controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:8782c6e0-b72c-4f32-bb9d-d5cdf4ce9b8d. 
[ 8249.152493] nvme nvme0: creating 64 I/O queues. 
[ 8249.166233] nvme nvme0: mapped 64/0/0 default/read/poll queues. 
[ 8249.185546] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr 127.0.0.1:4420 
[ 8249.185549] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:3/1367837 
[ 8249.185558] caller is nvme_tcp_submit_async_event+0x117/0x150 [nvme_tcp] 
[ 8249.185563] CPU: 32 PID: 1367837 Comm: kworker/u129:3 Kdump: loaded Tainted: G                 --------- -t - 4.18.0-252.rt7.17.el8.dt4.x86_64 #1 
[ 8249.185564] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8249.185574] Workqueue: nvme-wq nvme_async_event_work [nvme_core] 
[ 8249.185577] Call Trace: 
[ 8249.185587]  dump_stack+0x5c/0x80 
[ 8249.185593]  check_preemption_disabled.cold.0+0x46/0x51 
[ 8249.185597]  nvme_tcp_submit_async_event+0x117/0x150 [nvme_tcp] 
[ 8249.185603]  nvme_async_event_work+0x5d/0xc0 [nvme_core] 
[ 8249.185609]  process_one_work+0x18f/0x420 
[ 8249.185612]  worker_thread+0x30/0x370 
[ 8249.185614]  ? process_one_work+0x420/0x420 
[ 8249.185616]  kthread+0x15d/0x180 
[ 8249.185619]  ? __kthread_parkme+0xa0/0xa0 
[ 8249.185622]  ret_from_fork+0x22/0x40 
[ 8249.192638] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:1/1719672 
[ 8249.192646] caller is nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.192649] CPU: 34 PID: 1719672 Comm: kworker/u129:1 Kdump: loaded Tainted: G                 --------- -t - 4.18.0-252.rt7.17.el8.dt4.x86_64 #1 
[ 8249.192650] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8249.192659] Workqueue: nvme-wq nvme_scan_work [nvme_core] 
[ 8249.192660] Call Trace: 
[ 8249.192670]  dump_stack+0x5c/0x80 
[ 8249.192675]  check_preemption_disabled.cold.0+0x46/0x51 
[ 8249.192678]  nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.192682]  __blk_mq_try_issue_directly+0x128/0x1d0 
[ 8249.192684]  blk_mq_try_issue_directly+0x46/0xa0 
[ 8249.192686]  blk_mq_make_request+0x552/0x6f0 
[ 8249.192688]  ? blk_queue_enter+0x11d/0x220 
[ 8249.192693]  nvme_ns_head_make_request+0xd0/0x490 [nvme_core] 
[ 8249.192696]  generic_make_request+0xed/0x350 
[ 8249.192698]  submit_bio+0x3c/0x160 
[ 8249.192701]  submit_bh_wbc+0x16a/0x190 
[ 8249.192703]  ? buffer_io_error+0x40/0x40 
[ 8249.192705]  block_read_full_page+0x1e3/0x330 
[ 8249.192707]  ? check_disk_change+0x60/0x60 
[ 8249.192709]  ? migrate_enable+0x118/0x3a0 
[ 8249.192712]  do_read_cache_page+0x32f/0x780 
[ 8249.192714]  ? blkdev_writepages+0x10/0x10 
[ 8249.192718]  ? get_page_from_freelist+0x1581/0x1940 
[ 8249.192721]  read_part_sector+0x3b/0xe6 
[ 8249.192723]  read_lba+0x110/0x220 
[ 8249.192726]  ? kmem_cache_alloc_trace+0xcf/0x1d0 
[ 8249.192727]  efi_partition+0x1e5/0x6ea 
[ 8249.192731]  ? snprintf+0x49/0x60 
[ 8249.192733]  ? is_gpt_valid.part.6+0x430/0x430 
[ 8249.192734]  blk_add_partitions+0x164/0x3f0 
[ 8249.192735]  ? blk_drop_partitions+0x94/0xc0 
[ 8249.192737]  bdev_disk_changed+0x65/0xd0 
[ 8249.192740]  __blkdev_get+0x3cd/0x520 
[ 8249.192742]  blkdev_get+0xb0/0x190 
[ 8249.192745]  __device_add_disk+0x44f/0x520 
[ 8249.192749]  nvme_mpath_set_live+0x114/0x130 [nvme_core] 
[ 8249.192753]  nvme_mpath_add_disk+0xe7/0x100 [nvme_core] 
[ 8249.192757]  nvme_validate_ns+0x413/0x950 [nvme_core] 
[ 8249.192762]  ? __nvme_submit_sync_cmd+0xe5/0x1c0 [nvme_core] 
[ 8249.192764]  ? __switch_to_asm+0x41/0x70 
[ 8249.192769]  nvme_scan_work+0x151/0x300 [nvme_core] 
[ 8249.192771]  ? _raw_spin_unlock_irq+0x1d/0x50 
[ 8249.192774]  process_one_work+0x18f/0x420 
[ 8249.192776]  ? process_one_work+0x420/0x420 
[ 8249.192777]  worker_thread+0x30/0x370 
[ 8249.192778]  ? process_one_work+0x420/0x420 
[ 8249.192780]  kthread+0x15d/0x180 
[ 8249.192782]  ? __kthread_parkme+0xa0/0xa0 
[ 8249.192783]  ret_from_fork+0x22/0x40 
[ 8249.192913] BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u129:1/1719672 
[ 8249.192915] caller is nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.192917] CPU: 34 PID: 1719672 Comm: kworker/u129:1 Kdump: loaded Tainted: G                 --------- -t - 4.18.0-252.rt7.17.el8.dt4.x86_64 #1 
[ 8249.192917] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8249.192921] Workqueue: nvme-wq nvme_scan_work [nvme_core] 
[ 8249.192922] Call Trace: 
[ 8249.192924]  dump_stack+0x5c/0x80 
[ 8249.192926]  check_preemption_disabled.cold.0+0x46/0x51 
[ 8249.192928]  nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.192930]  __blk_mq_try_issue_directly+0x128/0x1d0 
[ 8249.192932]  blk_mq_try_issue_directly+0x46/0xa0 
[ 8249.192933]  blk_mq_make_request+0x552/0x6f0 
[ 8249.192935]  ? blk_queue_enter+0x11d/0x220 
[ 8249.192939]  nvme_ns_head_make_request+0xd0/0x490 [nvme_core] 
[ 8249.192941]  generic_make_request+0xed/0x350 
[ 8249.192942]  submit_bio+0x3c/0x160 
[ 8249.192944]  submit_bh_wbc+0x16a/0x190 
[ 8249.192946]  ? buffer_io_error+0x40/0x40 
[ 8249.192947]  block_read_full_page+0x1e3/0x330 
[ 8249.192948]  ? check_disk_change+0x60/0x60 
[ 8249.192949]  ? migrate_enable+0x118/0x3a0 
[ 8249.192951]  do_read_cache_page+0x32f/0x780 
[ 8249.192953]  ? blkdev_writepages+0x10/0x10 
[ 8249.192954]  ? blkdev_writepages+0x10/0x10 
[ 8249.192956]  ? put_partition+0x90/0x90 
[ 8249.192957]  read_part_sector+0x3b/0xe6 
[ 8249.192958]  amiga_partition+0x48/0x49c 
[ 8249.192960]  ? read_part_sector+0x3b/0xe6 
[ 8249.192961]  ? sun_partition+0x32/0x390 
[ 8249.192962]  ? sgi_partition+0x2c/0x290 
[ 8249.192964]  ? snprintf+0x49/0x60 
[ 8249.192965]  ? put_partition+0x90/0x90 
[ 8249.192966]  blk_add_partitions+0x164/0x3f0 
[ 8249.192967]  ? blk_drop_partitions+0x94/0xc0 
[ 8249.192969]  bdev_disk_changed+0x65/0xd0 
[ 8249.192970]  __blkdev_get+0x3cd/0x520 
[ 8249.192972]  blkdev_get+0xb0/0x190 
[ 8249.192974]  __device_add_disk+0x44f/0x520 
[ 8249.192978]  nvme_mpath_set_live+0x114/0x130 [nvme_core] 
[ 8249.192982]  nvme_mpath_add_disk+0xe7/0x100 [nvme_core] 
[ 8249.192986]  nvme_validate_ns+0x413/0x950 [nvme_core] 
[ 8249.192989]  ? __nvme_submit_sync_cmd+0xe5/0x1c0 [nvme_core] 
[ 8249.192991]  ? __switch_to_asm+0x41/0x70 
[ 8249.192995]  nvme_scan_work+0x151/0x300 [nvme_core] 
[ 8249.192997]  ? _raw_spin_unlock_irq+0x1d/0x50 
[ 8249.192999]  process_one_work+0x18f/0x420 
[ 8249.193001]  ? process_one_work+0x420/0x420 
[ 8249.193002]  worker_thread+0x30/0x370 
[ 8249.193003]  ? process_one_work+0x420/0x420 
[ 8249.193004]  kthread+0x15d/0x180 
[ 8249.193006]  ? __kthread_parkme+0xa0/0xa0 
[ 8249.193007]  ret_from_fork+0x22/0x40 
[ 8249.195611] BUG: using smp_processor_id() in preemptible [00000000] code: systemd-udevd/1739014 
[ 8249.195622] caller is nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.195626] CPU: 49 PID: 1739014 Comm: systemd-udevd Kdump: loaded Tainted: G                 --------- -t - 4.18.0-252.rt7.17.el8.dt4.x86_64 #1 
[ 8249.195627] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8249.195629] Call Trace: 
[ 8249.195641]  dump_stack+0x5c/0x80 
[ 8249.195648]  check_preemption_disabled.cold.0+0x46/0x51 
[ 8249.195651]  nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.195657]  __blk_mq_try_issue_directly+0x128/0x1d0 
[ 8249.195660]  blk_mq_request_issue_directly+0x4b/0x80 
[ 8249.195662]  blk_mq_try_issue_list_directly+0x5d/0xe0 
[ 8249.195665]  blk_mq_sched_insert_requests+0xb8/0x100 
[ 8249.195667]  blk_mq_flush_plug_list+0x123/0x240 
[ 8249.195672]  blk_flush_plug_list+0xd7/0x100 
[ 8249.195675]  blk_finish_plug+0x21/0x2e 
[ 8249.195680]  read_pages+0x82/0x1a0 
[ 8249.195683]  __do_page_cache_readahead+0x1c1/0x1e0 
[ 8249.195686]  force_page_cache_readahead+0x93/0xf0 
[ 8249.195689]  generic_file_buffered_read+0x746/0xb80 
[ 8249.195694]  ? preempt_count_add+0x49/0xa0 
[ 8249.195700]  ? __handle_mm_fault+0x808/0xb50 
[ 8249.195704]  new_sync_read+0x10f/0x150 
[ 8249.195709]  vfs_read+0x91/0x140 
[ 8249.195711]  ksys_read+0x52/0xc0 
[ 8249.195717]  do_syscall_64+0x87/0x1a0 
[ 8249.195721]  entry_SYSCALL_64_after_hwframe+0x65/0xca 
[ 8249.195724] RIP: 0033:0x7f05f52f68b2 
[ 8249.195726] Code: 97 20 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b6 0f 1f 80 00 00 00 00 f3 0f 1e fa 8b 05 96 db 20 00 85 c0 75 12 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 41 54 49 89 d4 55 48 89 
[ 8249.196204] RSP: 002b:00007ffec56f4c48 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 
[ 8249.196207] RAX: ffffffffffffffda RBX: 000055b151d6bed0 RCX: 00007f05f52f68b2 
[ 8249.196207] RDX: 0000000000000100 RSI: 000055b151d6bef8 RDI: 0000000000000011 
[ 8249.196208] RBP: 000055b151c39560 R08: 000055b151d6bed0 R09: 000055b151bc9010 
[ 8249.196209] R10: 00007f05f52dfba0 R11: 0000000000000246 R12: 0000000000000000 
[ 8249.196209] R13: 0000000000000100 R14: 000055b151c395b0 R15: 000055b151d6bee8 
[ 8249.196304] BUG: using smp_processor_id() in preemptible [00000000] code: systemd-udevd/1739014 
[ 8249.196307] caller is nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.196310] CPU: 49 PID: 1739014 Comm: systemd-udevd Kdump: loaded Tainted: G                 --------- -t - 4.18.0-252.rt7.17.el8.dt4.x86_64 #1 
[ 8249.196311] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8249.196638] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 03/09/2018 
[ 8249.196638] Call Trace: 
[ 8249.196641]  dump_stack+0x5c/0x80 
[ 8249.196644]  check_preemption_disabled.cold.0+0x46/0x51 
[ 8249.196649]  nvme_tcp_queue_rq+0x303/0x350 [nvme_tcp] 
[ 8249.196652]  __blk_mq_try_issue_directly+0x128/0x1d0 
[ 8249.196655]  blk_mq_request_issue_directly+0x4b/0x80 
[ 8249.196657]  blk_mq_try_issue_list_directly+0x5d/0xe0 
[ 8249.196660]  blk_mq_sched_insert_requests+0xb8/0x100 
[ 8249.196663]  blk_mq_flush_plug_list+0x123/0x240 
[ 8249.196666]  blk_flush_plug_list+0xd7/0x100 
[ 8249.196669]  blk_finish_plug+0x21/0x2e 
[ 8249.196672]  read_pages+0x82/0x1a0 
[ 8249.196676]  __do_page_cache_readahead+0x1c1/0x1e0 
[ 8249.196679]  force_page_cache_readahead+0x93/0xf0 
[ 8249.196681]  generic_file_buffered_read+0x746/0xb80 
[ 8249.196683]  ? preempt_count_add+0x49/0xa0 
[ 8249.196687]  ? file_check_and_advance_wb_err+0x130/0x130 
[ 8249.196692]  new
[ 8252.159454] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
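
Both reported callers (nvme_tcp_submit_async_event() and nvme_tcp_queue_rq())
appear to hit the check through the inlined nvme_tcp_queue_request() helper,
whose queue->io_cpu comparison is only a heuristic for deciding whether to
send inline or defer to the queue's io_work. A sketch of the kind of change
that silences this class of warning (an assumption based on the upstream
nvme-tcp approach, not necessarily the exact patch shipped in
kernel-rt-4.18.0-307.rt7.88.el8):

/*
 * Illustrative paraphrase: a stale CPU number only costs a missed
 * inline-send optimization, so raw_smp_processor_id() is sufficient
 * and avoids the CONFIG_DEBUG_PREEMPT warning.
 */
static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
                bool sync, bool last)
{
        struct nvme_tcp_queue *queue = req->queue;
        bool empty;

        empty = llist_add(&req->lentry, &queue->req_list) &&
                list_empty(&queue->send_list) && !queue->request;

        /* was: queue->io_cpu == smp_processor_id(), which warns when the
         * submitter runs in preemptible context (as in the traces above)
         */
        if (queue->io_cpu == raw_smp_processor_id() &&
            sync && empty && mutex_trylock(&queue->send_mutex)) {
                nvme_tcp_send_all(queue);
                mutex_unlock(&queue->send_mutex);
        } else if (last) {
                queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
        }
}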

Comment 24 errata-xmlrpc 2021-11-09 17:28:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: kernel-rt security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4140

