Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) will be migrated in late September on pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are migrated only if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Migrated Bugzilla bugs will be moved to status "CLOSED" with resolution "MIGRATED", and "MigratedToJIRA" will be added to "Keywords". The link to the successor Jira issue appears under "Links", has a small "two-footprint" icon next to it, and leads to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link also appears in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 1399219

Summary: dlm 'BUG: scheduling while atomic'
Product: Red Hat Enterprise Linux 7
Reporter: Roman Bednář <rbednar>
Component: dlm
Assignee: David Teigland <teigland>
Status: CLOSED DUPLICATE
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 7.3
CC: cluster-maint
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-28 15:24:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Roman Bednář 2016-11-28 15:05:34 UTC
Description of problem:
dlm sometimes crashes at random while LVM tests are running in a cluster.
No reproducer is known, since no consistent pattern precedes the issue.

Version-Release number of selected component (if applicable):
dlm-lib-4.0.6-1.el7.x86_64
dlm-4.0.6-1.el7.x86_64
kernel-3.10.0-514.el7.x86_64

How reproducible:
difficult to reproduce

Steps to Reproduce:
N/A

Console log:

[  286.852890] BUG: scheduling while atomic: kworker/u2:1/36/0x10000200 
[  286.853465] Modules linked in: dlm sd_mod crc_t10dif crct10dif_generic crct10dif_common sg iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi iptable_filter sctp dm_multipath ppdev pcspkr i2c_piix4 i2c_core i6300esb virtio_balloon parport_pc parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c ata_generic pata_acpi virtio_net virtio_blk ata_piix libata serio_raw virtio_pci virtio_ring virtio floppy dm_mirror dm_region_hash dm_log dm_mod 
[  286.857379] CPU: 0 PID: 36 Comm: kworker/u2:1 Not tainted 3.10.0-514.el7.x86_64 #1 
[  286.857983] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 
[  286.858442] Workqueue: dlm_send process_send_sockets [dlm] 
[  286.858961]  0000000000000000 000000002b9953ad ffff88003a1c3b90 ffffffff81685eac 
[  286.859598]  ffff88003a1c3ba0 ffffffff8167fb4a ffff88003a1c3c00 ffffffff8168b45e 
[  286.860210]  ffff88003a1c8fb0 ffff88003a1c3fd8 ffff88003a1c3fd8 ffff88003a1c3fd8 
[  286.860852] Call Trace: 
[  286.861055]  [<ffffffff81685eac>] dump_stack+0x19/0x1b 
[  286.861409]  [<ffffffff8167fb4a>] __schedule_bug+0x4d/0x5b 
[  286.861833]  [<ffffffff8168b45e>] __schedule+0x89e/0x990 
[  286.862284]  [<ffffffff810c1a66>] __cond_resched+0x26/0x30 
[  286.862732]  [<ffffffff8168b82a>] _cond_resched+0x3a/0x50 
[  286.863165]  [<ffffffff81557828>] lock_sock_nested+0x18/0x50 
[  286.863598]  [<ffffffffa03ca467>] add_sock+0x47/0xe0 [dlm] 
[  286.863996]  [<ffffffffa03cc051>] tcp_connect_to_sock+0x121/0x340 [dlm] 
[  286.864440]  [<ffffffff810ce35c>] ? dequeue_entity+0x11c/0x5d0 
[  286.864871]  [<ffffffff810cec2e>] ? dequeue_task_fair+0x41e/0x660 
[  286.865295]  [<ffffffff810cbcdc>] ? set_next_entity+0x3c/0xe0 
[  286.865695]  [<ffffffffa03cc481>] process_send_sockets+0x191/0x280 [dlm] 
[  286.866312]  [<ffffffff810a7f3b>] process_one_work+0x17b/0x470 
[  286.866783]  [<ffffffff810a8d76>] worker_thread+0x126/0x410 
[  286.867204]  [<ffffffff810a8c50>] ? rescuer_thread+0x460/0x460 
[  286.867672]  [<ffffffff810b052f>] kthread+0xcf/0xe0 
[  286.868043]  [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140 
[  286.868566]  [<ffffffff81696418>] ret_from_fork+0x58/0x90 
[  286.868977]  [<ffffffff810b0460>] ? kthread_create_on_node+0x140/0x140

Comment 1 Roman Bednář 2016-11-28 15:24:37 UTC

*** This bug has been marked as a duplicate of bug 1377391 ***