Bug 1283491 - kernel BUG at drivers/md/md.c:317
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Heinz Mauelshagen
QA Contact: Fedora Extras Quality Assurance
Depends On:
Blocks:
Reported: 2015-11-19 02:18 EST by Marian Csontos
Modified: 2016-05-17 10:37 EDT (History)
CC: 9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-17 10:37:15 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
A patch for this bug (6.00 KB, patch)
2015-11-25 12:24 EST, Mikulas Patocka
no flags

Description Marian Csontos 2015-11-19 02:18:44 EST
Description of problem:
kernel BUG at drivers/md/md.c:317 while running `lvchange --refresh VG/LV` after a PV was disabled and enabled.

Version-Release number of selected component (if applicable):
Fedora 24 (Rawhide)
kernel 4.4.0-0.rc1.git0.1.fc24.x86_64 on an x86_64

How reproducible:
100%

Steps to Reproduce:
Kernel oops while running the LVM2 test suite, cases shell/lvchange-raid.sh and shell/lvchange-raid10.sh.
It fails when running `lvchange --refresh VG/LV` after a PV was disabled and re-enabled.


[23986.759639] md: mdX: data-check done.
[23986.946151] md: requested-resync of RAID array mdX
[23986.948057] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[23986.950021] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for requested-resync.
[23986.952468] md: using 128k window, over a total of 2048k.
[23986.981663] md: mdX: requested-resync done.
[23987.057175] md: data-check of RAID array mdX
[23987.059019] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[23987.060983] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
[23987.063385] md: using 128k window, over a total of 2048k.
[23987.091150] md: mdX: data-check done.
[23987.782706] md/raid1:mdX: Disk failure on dm-9, disabling device.
[23987.782706] md/raid1:mdX: Operation continuing on 1 devices.
[23988.187427] device-mapper: raid: Faulty raid1 device #1 has readable super block.  Attempting to revive it.
[23988.190138] ------------[ cut here ]------------
[23988.191068] kernel BUG at drivers/md/md.c:317!
[23988.191068] invalid opcode: 0000 [#1] SMP
[23988.191068] Modules linked in: raid1 raid10 dm_raid raid456 async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq dm_cache_cleaner dm_cache_smq dm_cache_mq dm_cache dm_delay dm_thin_pool dm_persistent_data dm_bio_prison libcrc32c loop ppdev 8139too joydev serio_raw virtio_balloon acpi_cpufreq tpm_tis 8139cp parport_pc parport tpm mii i2c_piix4 cirrus drm_kms_helper virtio_blk ttm drm virtio_pci ata_generic virtio_ring virtio pata_acpi
[23988.191068] CPU: 0 PID: 25202 Comm: lvm Not tainted 4.4.0-0.rc0.git9.1.fc24.x86_64 #1
[23988.191068] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
[23988.191068] task: ffff88003b4c0000 ti: ffff88003c7e4000 task.ti: ffff88003c7e4000
[23988.191068] RIP: 0010:[<ffffffff8169fef7>]  [<ffffffff8169fef7>] mddev_suspend+0xd7/0xe0
[23988.191068] RSP: 0018:ffff88003c7e7b88  EFLAGS: 00010202
[23988.191068] RAX: 0000000000000001 RBX: 0000000000000000 RCX: 0000000000000001
[23988.191068] RDX: 0000000000000041 RSI: ffff88003c3d19f0 RDI: ffff88003c3d1010
[23988.191068] RBP: ffff88003c7e7bc8 R08: 0000000000000002 R09: 0000000000000001
[23988.191068] R10: 0000000000000268 R11: 0000000000000001 R12: 0000000000000001
[23988.191068] R13: ffff88003c3d1010 R14: ffff88003c3d19f0 R15: ffff88003bddd000
[23988.191068] FS:  00007f326c2c5840(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
[23988.191068] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[23988.191068] CR2: 00007f462aa2e1f0 CR3: 000000003cb54000 CR4: 00000000000006f0
[23988.191068] Stack:
[23988.191068]  ffff88003c7e7b98 ffffffff8111b8c9 ffff88003c7e7c00 ffffffff811e0a46
[23988.191068]  0000000000000018 00000000bd09410d 0000000000000000 0000000000000001
[23988.191068]  ffff88003c7e7c00 ffffffffa020a719 ffff88003c3d1000 0000000000000001
[23988.191068] Call Trace:
[23988.191068]  [<ffffffff8111b8c9>] ? vprintk_default+0x29/0x40
[23988.191068]  [<ffffffff811e0a46>] ? printk+0x57/0x73
[23988.191068]  [<ffffffffa020a719>] raid1_add_disk+0xd9/0x1e0 [raid1]
[23988.191068]  [<ffffffffa01f3dc1>] raid_resume+0x1c1/0x210 [dm_raid]
[23988.191068]  [<ffffffff816bcd04>] dm_table_resume_targets+0x94/0xe0
[23988.191068]  [<ffffffff816ba39c>] dm_resume+0x10c/0x140
[23988.191068]  [<ffffffff816bf130>] ? table_load+0x350/0x350
[23988.191068]  [<ffffffff816bf336>] dev_suspend+0x206/0x280
[23988.191068]  [<ffffffff816bf130>] ? table_load+0x350/0x350
[23988.191068]  [<ffffffff816bfbc2>] ctl_ioctl+0x282/0x550
[23988.191068]  [<ffffffff810e5777>] ? sched_clock_local+0x17/0x80
[23988.191068]  [<ffffffff816bfea3>] dm_ctl_ioctl+0x13/0x20
[23988.191068]  [<ffffffff81288e61>] do_vfs_ioctl+0x2f1/0x550
[23988.191068]  [<ffffffff8110156f>] ? percpu_up_read+0x2f/0x60
[23988.191068]  [<ffffffff81276710>] ? __sb_end_write+0x20/0x30
[23988.191068]  [<ffffffff811251dd>] ? debug_lockdep_rcu_enabled+0x1d/0x20
[23988.191068]  [<ffffffff81289139>] SyS_ioctl+0x79/0x90
[23988.191068]  [<ffffffff81873af2>] entry_SYSCALL_64_fastpath+0x12/0x76
[23988.191068] Code: df ff 90 98 00 00 00 48 8d bb a0 04 00 00 e8 e1 fd a8 ff 48 8b 45 e8 65 48 33 04 25 28 00 00 00 75 0b 48 83 c4 30 5b 41 5c 5d c3 <0f> 0b e8 92 b4 a0 ff 66 90 0f 1f 44 00 00 55 48 89 e5 41 54 53
[23988.191068] RIP  [<ffffffff8169fef7>] mddev_suspend+0xd7/0xe0
[23988.191068]  RSP <ffff88003c7e7b88>
[23988.302941] ---[ end trace a28a7d2419f5410d ]---


[25797.319341] lvm             D ffff88001410fb90 13952 25278  25263 0x00000000
[25797.321297]  ffff88001410fb90 ffff88001410fbb8 0000000000000000 ffff88003b423100
[25797.323331]  ffff88003c200000 ffff880014110000 0000000000000246 ffff88003c7371b8
[25797.325762]  ffff88003c200000 00000000ffffffff ffff88001410fba8 ffffffff8186d06c
[25797.327852] Call Trace:
[25797.329269]  [<ffffffff8186d06c>] schedule+0x3c/0x90
[25797.330965]  [<ffffffff8186d445>] schedule_preempt_disabled+0x15/0x20
[25797.332838]  [<ffffffff8186fe40>] mutex_lock_nested+0x190/0x400
[25797.334924]  [<ffffffff816b8783>] ? __dm_destroy+0xc3/0x2c0
[25797.336849]  [<ffffffff816b8783>] ? __dm_destroy+0xc3/0x2c0
[25797.338582]  [<ffffffff816b8783>] __dm_destroy+0xc3/0x2c0
[25797.340292]  [<ffffffff816ba43d>] ? dm_kobject_uevent+0x6d/0xa0
[25797.342062]  [<ffffffff816b9d93>] dm_destroy+0x13/0x20
[25797.343733]  [<ffffffff816bf4bd>] dev_remove+0x10d/0x170
[25797.345573]  [<ffffffff816bf3b0>] ? dev_suspend+0x280/0x280
[25797.347649]  [<ffffffff816bfbc2>] ctl_ioctl+0x282/0x550
[25797.349270]  [<ffffffff816bfea3>] dm_ctl_ioctl+0x13/0x20
[25797.351009]  [<ffffffff81288e61>] do_vfs_ioctl+0x2f1/0x550
[25797.352973]  [<ffffffff8110156f>] ? percpu_up_read+0x2f/0x60
[25797.354827]  [<ffffffff81276710>] ? __sb_end_write+0x20/0x30
[25797.356826]  [<ffffffff81274173>] ? vfs_write+0x163/0x1a0
[25797.358870]  [<ffffffff81289139>] SyS_ioctl+0x79/0x90
[25797.360578]  [<ffffffff81873af2>] entry_SYSCALL_64_fastpath+0x12/0x76
Comment 1 Marian Csontos 2015-11-19 03:13:41 EST
Heinz, are you able to reproduce this? If not, what other information do you need?

Though this is failing in md, let's start with dm.

strace | tail:

17:40:32.583843 write(3, "libdm-deptree.c:2728   Suppressed LVMTEST23827vg-LV1_rmeta_0 (253:66) identical table reload.\n", 94) = 94
17:40:32.583938 write(3, "libdm-deptree.c:2693   Loading LVMTEST23827vg-LV1 table (253:70)\n", 65) = 65
17:40:32.584015 write(3, "libdm-deptree.c:2637   Adding target to (253:70): 0 4096 raid raid1 3 0 region_size 1024 2 253:66 253:67 253:68 253:69\n", 119) = 119
17:40:32.584094 write(3, "ioctl/libdm-iface.c:1838   dm table   (253:70) OF   [16384] (*1)\n", 65) = 65
17:40:32.584161 ioctl(5, DM_TABLE_STATUS, 0x55ec50bbf5c0) = 0
17:40:32.584389 write(3, "libdm-deptree.c:2728   Suppressed LVMTEST23827vg-LV1 (253:70) identical table reload.\n", 86) = 86
17:40:32.584483 write(3, "libdm-deptree.c:1347   Resuming LVMTEST23827vg-LV1_rimage_1 (253:69)\n", 69) = 69
17:40:32.584575 open("/dev/urandom", O_RDONLY) = 14
17:40:32.584656 read(14, "\244\202", 2) = 2
17:40:32.584943 semget(0xd4d82a4, 1, IPC_CREAT|IPC_EXCL|0600) = 29917188
17:40:32.585021 write(3, "libdm-common.c:2338   Udev cookie 0xd4d82a4 (semid 29917188) created\n", 69) = 69
17:40:32.585088 semctl(29917188, 0, SETVAL, 0x1) = 0
17:40:32.585149 semctl(29917188, 0, GETVAL, 0x7f65b2a2fdfa) = 1
17:40:32.585222 write(3, "libdm-common.c:2358   Udev cookie 0xd4d82a4 (semid 29917188) incremented to 1\n", 78) = 78
17:40:32.585319 close(14)               = 0
17:40:32.585541 semop(29917188, {{0, 1, 0}}, 1) = 0
17:40:32.585619 semctl(29917188, 0, GETVAL, 0x7f65b2a2fd97) = 2
17:40:32.585683 write(3, "libdm-common.c:2230   Udev cookie 0xd4d82a4 (semid 29917188) incremented to 2\n", 78) = 78
17:40:32.585756 write(3, "libdm-common.c:2480   Udev cookie 0xd4d82a4 (semid 29917188) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES         (0xe)\n", 176) = 176
17:40:32.586053 write(3, "ioctl/libdm-iface.c:1838   dm resume   (253:69) NF   [16384] (*1)\n", 66) = 66
17:40:32.586124 ioctl(5, DM_DEV_SUSPEND, 0x55ec50bbf5c0) = 0
17:40:32.589359 write(3, "libdm-common.c:1427   LVMTEST23827vg-LV1_rimage_1: Stacking NODE_ADD (253,69) 0:6 0660 [verify_udev]\n", 101) = 101
17:40:32.590361 write(3, "libdm-common.c:1437   LVMTEST23827vg-LV1_rimage_1: Stacking NODE_READ_AHEAD 256 (flags=1)\n", 90) = 90
17:40:32.590441 write(3, "libdm-common.c:230   Suspended device counter reduced to 4\n", 59) = 59
17:40:32.590686 write(3, "libdm-deptree.c:1347   Resuming LVMTEST23827vg-LV1_rmeta_1 (253:68)\n", 68) = 68
17:40:32.590764 semget(0xd4d82a4, 1, 0) = 29917188
17:40:32.590837 semop(29917188, {{0, 1, 0}}, 1) = 0
17:40:32.590905 semctl(29917188, 0, GETVAL, 0x7f65b2a2fd97) = 3
17:40:32.591179 write(3, "libdm-common.c:2230   Udev cookie 0xd4d82a4 (semid 29917188) incremented to 3\n", 78) = 78
17:40:32.591359 write(3, "libdm-common.c:2480   Udev cookie 0xd4d82a4 (semid 29917188) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES         (0xe)\n", 176) = 176
17:40:32.592368 write(3, "ioctl/libdm-iface.c:1838   dm resume   (253:68) NF   [16384] (*1)\n", 66) = 66
17:40:32.592442 ioctl(5, DM_DEV_SUSPEND, 0x55ec50bbf5c0) = 0
17:40:32.592545 write(3, "libdm-common.c:1427   LVMTEST23827vg-LV1_rmeta_1: Stacking NODE_ADD (253,68) 0:6 0660 [verify_udev]\n", 100) = 100
17:40:32.592623 write(3, "libdm-common.c:1437   LVMTEST23827vg-LV1_rmeta_1: Stacking NODE_READ_AHEAD 256 (flags=1)\n", 89) = 89
17:40:32.592855 write(3, "libdm-common.c:230   Suspended device counter reduced to 3\n", 59) = 59
17:40:32.592933 write(3, "libdm-deptree.c:1347   Resuming LVMTEST23827vg-LV1_rimage_0 (253:67)\n", 69) = 69
17:40:32.593004 semget(0xd4d82a4, 1, 0) = 29917188
17:40:32.593064 semop(29917188, {{0, 1, 0}}, 1) = 0
17:40:32.593809 semctl(29917188, 0, GETVAL, 0x7f65b2a2fd97) = 4
17:40:32.593883 write(3, "libdm-common.c:2230   Udev cookie 0xd4d82a4 (semid 29917188) incremented to 4\n", 78) = 78
17:40:32.593957 write(3, "libdm-common.c:2480   Udev cookie 0xd4d82a4 (semid 29917188) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES         (0xe)\n", 176) = 176
17:40:32.594033 write(3, "ioctl/libdm-iface.c:1838   dm resume   (253:67) NF   [16384] (*1)\n", 66) = 66
17:40:32.594100 ioctl(5, DM_DEV_SUSPEND, 0x55ec50bbf5c0) = 0
17:40:32.594373 write(3, "libdm-common.c:1427   LVMTEST23827vg-LV1_rimage_0: Stacking NODE_ADD (253,67) 0:6 0660 [verify_udev]\n", 101) = 101
17:40:32.595328 write(3, "libdm-common.c:1437   LVMTEST23827vg-LV1_rimage_0: Stacking NODE_READ_AHEAD 256 (flags=1)\n", 90) = 90
17:40:32.595409 write(3, "libdm-common.c:230   Suspended device counter reduced to 2\n", 59) = 59
17:40:32.595483 write(3, "libdm-deptree.c:1347   Resuming LVMTEST23827vg-LV1_rmeta_0 (253:66)\n", 68) = 68
17:40:32.595582 semget(0xd4d82a4, 1, 0) = 29917188
17:40:32.595885 semop(29917188, {{0, 1, 0}}, 1) = 0
17:40:32.596356 semctl(29917188, 0, GETVAL, 0x7f65b2a2fd97) = 5
17:40:32.596425 write(3, "libdm-common.c:2230   Udev cookie 0xd4d82a4 (semid 29917188) incremented to 5\n", 78) = 78
17:40:32.596499 write(3, "libdm-common.c:2480   Udev cookie 0xd4d82a4 (semid 29917188) assigned to RESUME task(5) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES         (0xe)\n", 176) = 176
17:40:32.596596 write(3, "ioctl/libdm-iface.c:1838   dm resume   (253:66) NF   [16384] (*1)\n", 66) = 66
17:40:32.597325 ioctl(5, DM_DEV_SUSPEND, 0x55ec50bbf5c0) = 0
17:40:32.597583 write(3, "libdm-common.c:1427   LVMTEST23827vg-LV1_rmeta_0: Stacking NODE_ADD (253,66) 0:6 0660 [verify_udev]\n", 100) = 100
17:40:32.597664 write(3, "libdm-common.c:1437   LVMTEST23827vg-LV1_rmeta_0: Stacking NODE_READ_AHEAD 256 (flags=1)\n", 89) = 89
17:40:32.597736 write(3, "libdm-common.c:230   Suspended device counter reduced to 1\n", 59) = 59
17:40:32.597826 write(3, "libdm-deptree.c:1347   Resuming LVMTEST23827vg-LV1 (253:70)\n", 60) = 60
17:40:32.598497 semget(0xd4d82a4, 1, 0) = 29917188
17:40:32.598577 semop(29917188, {{0, 1, 0}}, 1) = 0
17:40:32.599336 semctl(29917188, 0, GETVAL, 0x7f65b2a2fd97) = 6
17:40:32.599425 write(3, "libdm-common.c:2230   Udev cookie 0xd4d82a4 (semid 29917188) incremented to 6\n", 78) = 78
17:40:32.599516 write(3, "libdm-common.c:2480   Udev cookie 0xd4d82a4 (semid 29917188) assigned to RESUME task(5) with flags         (0x0)\n", 113) = 113
17:40:32.599776 write(3, "ioctl/libdm-iface.c:1838   dm resume   (253:70) NF   [16384] (*1)\n", 66) = 66
17:40:32.599862 ioctl(5, DM_DEV_SUSPEND <unfinished ...>
17:40:32.732179 +++ killed by SIGSEGV +++
Comment 3 Mikulas Patocka 2015-11-24 11:34:45 EST
I'm also seeing this error on 4.4-rc; the LVM test shell/lvchange-raid.sh causes it.
Comment 4 Mikulas Patocka 2015-11-25 12:24 EST
Created attachment 1098924 [details]
A patch for this bug

See also: https://www.redhat.com/archives/dm-devel/2015-November/msg00228.html
Comment 5 Alexey Mochkin 2015-11-26 02:37:23 EST
Could you explain to me: is `mdadm --create --level raid10` nested?
Also, if I have a scheme with raid10 → lvm → raid1, is this nested md or not?
Comment 6 Alasdair Kergon 2015-11-26 08:31:35 EST
"This fails when running `lvchange --refresh VG/LV` after a PV was disabled and enabled."

Can we break this down into the sequence of steps that is leading to this problem, and how the patch resolves it?
Comment 7 Alasdair Kergon 2015-11-26 08:33:30 EST
(In reply to Alexey Mochkin from comment #5)
> Could you explain to me: is `mdadm --create --level raid10` nested?
> Also, if I have a scheme with raid10 → lvm → raid1, is this nested md or not?

The "nesting" of suspends referred to is internal nesting within the kernel - there is an attempt to suspend md to update the integrity profile while it is already suspended.
Comment 8 Alexey Mochkin 2015-11-26 08:37:27 EST
(In reply to Alasdair Kergon from comment #7)
> (In reply to Alexey Mochkin from comment #5)
> > Could you explain to me: is `mdadm --create --level raid10` nested?
> > Also, if I have a scheme with raid10 → lvm → raid1, is this nested md or not?
> 
> The "nesting" of suspends referred to is internal nesting within the kernel
> - there is an attempt to suspend md to update the integrity profile while it
> is already suspended.

I already read that in the bug description, but I can't tell whether the case I described in my comment is "nested" or not.
Comment 9 Mikulas Patocka 2015-11-26 09:12:15 EST
> I already read that in the bug description, but I can't tell whether the case
> I described in my comment is "nested" or not.

The first case isn't nested; the second case is. But it has nothing to do with the "nesting" that caused this bug.
Comment 10 Heinz Mauelshagen 2016-02-29 10:57:16 EST
Marian,

I presume this is no longer reproducible with recent kernels?
