Bug 1303184 - kernel bug during removal of final snapshot of exclusively activated raid volume
Summary: kernel bug during removal of final snapshot of exclusively activated raid volume
Keywords:
Status: CLOSED DUPLICATE of bug 1220555
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-29 19:45 UTC by Corey Marthaler
Modified: 2016-01-29 19:50 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-29 19:50:30 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2016-01-29 19:45:07 UTC
Description of problem:

SCENARIO (raid10) - [snaphot_exclusive_raid]
Snapshot an exclusively activated raid
host-115.virt.lab.msp.redhat.com: lvcreate  --type raid10 -i 2 -n exclusive_origin -L 100M raid_sanity

Deactivate and then exclusively activate raid
host-115.virt.lab.msp.redhat.com: lvchange -an /dev/raid_sanity/exclusive_origin
host-115.virt.lab.msp.redhat.com: lvchange -aye /dev/raid_sanity/exclusive_origin

Taking multiple snapshots of exclusive raid
        1 2 3 4 5 
Removing snapshots of exclusive raid
        1 2 3 4 5 
        couldn't remove volume rsnap_5

BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
IP: [<ffffffff8109ed7b>] worker_thread+0x13b/0x2a0
PGD 0 
Oops: 0002 [#1] SMP 
last sysfs file: /sys/devices/virtual/block/dm-2/uevent
CPU 0 
Modules linked in: dm_snapshot dm_bufio iptable_filter ip_tables autofs4 dm_raid raid10 raid1 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx sg sd_mod crc_t10dif be2iscsi iscsi_boot_sysfs bnx2i cnic uio cxgb4i iw_cxgb4 cxgb4 cxgb3i libcxgbi iw_cxgb3 cxgb3 mdio ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi dm_multipath microcode i6300esb virtio_balloon virtio_net i2c_piix4 i2c_core ext4 jbd2 mbcache virtio_blk virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]

Pid: 29, comm: md_misc/0 Not tainted 2.6.32-604.el6.x86_64 #1 Red Hat KVM
RIP: 0010:[<ffffffff8109ed7b>]  [<ffffffff8109ed7b>] worker_thread+0x13b/0x2a0
RSP: 0018:ffff88003ea6be40  EFLAGS: 00010046
RAX: ffff88003cfa13f8 RBX: ffff88000221a340 RCX: 0000000000000000
RDX: ffff88003cfa13f0 RSI: 0000000000000000 RDI: ffff88000221a340
RBP: ffff88003ea6bee0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88003ea5eab0
R13: 0000000000000000 R14: ffff88003ea6bfd8 R15: ffff88000221a348
FS:  0000000000000000(0000) GS:ffff880002200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 0000000000000008 CR3: 000000003b79e000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process md_misc/0 (pid: 29, threadinfo ffff88003ea68000, task ffff88003ea5eab0)
Stack:
 0000000000000000 0000000000000000 ffff88003ea6be60 ffff88003ea5f128
<d> ffff88003ea5eab0 ffff88003ea5eab0 ffff88003ea5eab0 ffff88000221a358
<d> 0000000000000000 ffff88003ea5eab0 ffffffff810a5ab0 ffff88003ea6be98
Call Trace:
 [<ffffffff810a5ab0>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff8109ec40>] ? worker_thread+0x0/0x2a0
 [<ffffffff810a561e>] kthread+0x9e/0xc0
 [<ffffffff8109ec40>] ? worker_thread+0x0/0x2a0
 [<ffffffff8100c28a>] child_rip+0xa/0x20
 [<ffffffff810a5580>] ? kthread+0x0/0xc0
 [<ffffffff8100c280>] ? child_rip+0x0/0x20
Code: 44 8b 05 51 08 b4 00 48 83 ea 08 4c 8b 63 40 4c 8b 6a 18 45 85 c0 0f 85 de 00 00 00 48 8b 43 08 48 89 53 30 48 8b 30 48 8b 48 08 <48> 89 4e 08 48 89 31 48 89 00 48 89 40 08 c7 03 00 00 00 00 fb 
RIP  [<ffffffff8109ed7b>] worker_thread+0x13b/0x2a0
 RSP <ffff88003ea6be40>
CR2: 0000000000000008
---[ end trace 6f5093bce0d99787 ]---
Kernel panic - not syncing: Fatal exception
Pid: 29, comm: md_misc/0 Tainted: G      D    -- ------------    2.6.32-604.el6.x86_64 #1
Call Trace:
 [<ffffffff81542000>] ? panic+0xa7/0x179
 [<ffffffff81546e24>] ? oops_end+0xe4/0x100
 [<ffffffff810508cb>] ? no_context+0xfb/0x260
 [<ffffffff81045f28>] ? pvclock_clocksource_read+0x58/0xd0
 [<ffffffff81050b55>] ? __bad_area_nosemaphore+0x125/0x1e0
 [<ffffffff81063c5e>] ? account_entity_enqueue+0x7e/0x90
 [<ffffffff81050c23>] ? bad_area_nosemaphore+0x13/0x20
 [<ffffffff8105131c>] ? __do_page_fault+0x30c/0x500
 [<ffffffff81073f24>] ? enqueue_task_fair+0x64/0x100
 [<ffffffff8105ec8c>] ? check_preempt_curr+0x7c/0x90
 [<ffffffff8106b5ae>] ? try_to_wake_up+0x24e/0x3e0
 [<ffffffff8106b752>] ? default_wake_function+0x12/0x20
 [<ffffffff81548d8e>] ? do_page_fault+0x3e/0xa0
 [<ffffffff815460f5>] ? page_fault+0x25/0x30
 [<ffffffff8109ed7b>] ? worker_thread+0x13b/0x2a0
 [<ffffffff810a5ab0>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff8109ec40>] ? worker_thread+0x0/0x2a0
 [<ffffffff810a561e>] ? kthread+0x9e/0xc0
 [<ffffffff8109ec40>] ? worker_thread+0x0/0x2a0
 [<ffffffff8100c28a>] ? child_rip+0xa/0x20
 [<ffffffff810a5580>] ? kthread+0x0/0xc0
 [<ffffffff8100c280>] ? child_rip+0x0/0x20


Version-Release number of selected component (if applicable):
2.6.32-604.el6.x86_64

lvm2-2.02.140-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
lvm2-libs-2.02.140-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
lvm2-cluster-2.02.140-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
udev-147-2.66.el6    BUILT: Mon Jan 18 02:42:20 CST 2016
device-mapper-1.02.114-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-libs-1.02.114-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-event-1.02.114-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-event-libs-1.02.114-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-persistent-data-0.6.0-1.el6    BUILT: Wed Jan 20 11:23:29 CST 2016
cmirror-2.02.140-3.el6    BUILT: Thu Jan 21 05:40:10 CST 2016


How reproducible:
Often

Comment 1 Corey Marthaler 2016-01-29 19:50:30 UTC
This should be fixed in kernel-2.6.32-608.el6

*** This bug has been marked as a duplicate of bug 1220555 ***

