Bug 1537033 - [RHEL7.6] BUG_ON(s->uptodate < disks - 1) condition encountered in RAID6 array with 6 disks while one disk had failed.
Summary: [RHEL7.6] BUG_ON(s->uptodate < disks - 1) condition encountered in RAID6 arra...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.4
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Nigel Croxon
QA Contact: ChanghuiZhong
URL:
Whiteboard:
Depends On:
Blocks: 1594286 1698109
 
Reported: 2018-01-22 10:17 UTC by Nitin Yewale
Modified: 2021-09-09 13:04 UTC
CC List: 18 users

Fixed In Version: kernel-3.10.0-1037.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1698109 (view as bug list)
Environment:
Last Closed: 2019-08-06 12:10:33 UTC
Target Upstream Version:
Embargoed:


Attachments
Don't jump to compute_result state from check_result state (1.32 KB, patch)
2018-09-25 15:29 UTC, David Jeffery
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3320491 0 None None None 2018-01-22 16:04:50 UTC
Red Hat Product Errata RHSA-2019:2029 0 None None None 2019-08-06 12:11:00 UTC

Comment 2 Nitin Yewale 2018-01-22 10:21:09 UTC

"drivers/md/raid5.c"

	3912 static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
	3913                                   struct stripe_head_state *s,
	3914                                   int disks)
	3915 {
	3916         int pd_idx = sh->pd_idx;
	3917         int qd_idx = sh->qd_idx;
	3918         struct r5dev *dev;
	3919 
	3920         BUG_ON(sh->batch_head);
	3921         set_bit(STRIPE_HANDLE, &sh->state);
	3922 
	3923         BUG_ON(s->failed > 2);
	3924 
	3925         /* Want to check and possibly repair P and Q.
	3926          * However there could be one 'failed' device, in which
	3927          * case we can only check one of them, possibly using the
	3928          * other to generate missing data
	3929          */
	3930 
	3931         switch (sh->check_state) {
	3932         case check_state_idle:
	3933                 /* start a new check operation if there are < 2 failures */
	3934                 if (s->failed == s->q_failed) {
	3935                         /* The only possible failed device holds Q, so it
	3936                          * makes sense to check P (If anything else were failed,
	3937                          * we would have used P to recreate it).
	3938                          */
	3939                         sh->check_state = check_state_run;
	3940                 }
	3941                 if (!s->q_failed && s->failed < 2) {
	3942                         /* Q is not failed, and we didn't use it to generate
	3943                          * anything, so it makes sense to check it
	3944                          */
	3945                         if (sh->check_state == check_state_run)
	3946                                 sh->check_state = check_state_run_pq;  <--------- /* pq dual parity check */ 
	3947                         else
	3948                                 sh->check_state = check_state_run_q;   
	3949                 }
	3950 
	3951                 /* discard potentially stale zero_sum_result */
	3952                 sh->ops.zero_sum_result = 0;   <------- after this 3 conditions were checked 
	3953 
	3954                 if (sh->check_state == check_state_run) {
	3955                         /* async_xor_zero_sum destroys the contents of P */
	3956                         clear_bit(R5_UPTODATE, &sh->dev[pd_idx].flags);
	3957                         s->uptodate--;
	3958                 }
	3959                 if (sh->check_state >= check_state_run &&
	3960                     sh->check_state <= check_state_run_pq) {
	3961                         /* async_syndrome_zero_sum preserves P and Q, so
	3962                          * no need to mark them !uptodate here
	3963                          */
	3964                         set_bit(STRIPE_OP_CHECK, &s->ops_request);
	3965                         break;
	3966                 }
	3967 
	3968                 /* we have 2-disk failure */
	3969                 BUG_ON(s->failed != 2);
	3970                 /* fall through */
	3971         case check_state_compute_result:         <---- intermediate states that are visible outside of STRIPE_ACTIVE.
	                                                       check_state_compute_result - set outside lock when compute result is valid
	3972                 sh->check_state = check_state_idle;  <---- check operations are quiesced
	3973 
	3974                 /* check that a write has not made the stripe insync */
	3975                 if (test_bit(STRIPE_INSYNC, &sh->state))
	3976                         break;
	3977 
	3978                 /* now write out any block on a failed drive,
	3979                  * or P or Q if they were recomputed
	3980                  */
	3981                 BUG_ON(s->uptodate < disks - 1); /* We don't need Q to recover */   <------ crashed here [1]
	3982                 if (s->failed == 2) {
	3983                         dev = &sh->dev[s->failed_num[1]];
	3984                         s->locked++;
	3985                         set_bit(R5_LOCKED, &dev->flags);
	3986                         set_bit(R5_Wantwrite, &dev->flags);
	3987                 }

Comment 3 Stanislav Kozina 2018-01-30 14:07:30 UTC
Fixing the component.

Can you please list the kernel version used? Thanks!

Comment 5 guazhang@redhat.com 2018-01-31 10:31:38 UTC
Hello

Any reproducer here?

Comment 14 loberman 2018-02-04 14:06:32 UTC
Hello

Busy with the reproducer here.
So far a few loops have been well behaved.

Sent a patch upstream yesterday for the scsi_debug driver to allow specifying a starting sector and length for fault injection.

Using this on kernel 3.10.0-693.11.6.el7.scsi_debug.x86_64 and injecting the fault as follows

Create the scsi_debug device and create the array

modprobe scsi_debug dev_size_mb=11000  max_luns=1 num_tgts=1
mdadm --create --verbose /dev/md0 --level=6 --raid-devices=6 /dev/sdi1 /dev/sde1 /dev/sdj1 /dev/sdg1 /dev/sdf1 /dev/sdk1 --spare-devices=1 /dev/sdh1

Note sdk is the scsi_debug device where we induce medium errors

Set up the fault injection

in /sys/module/scsi_debug/parameters

echo 1000 > medium_error_start 
echo 1000000 > medium_error_count

Write data covering this range and try to read it back.
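
As an aside, here is a minimal sketch of what the two fault-injection knobs above would amount to as scsi_debug module parameters. The parameter names match the sysfs files used above, but the helper and where it would be called from are hypothetical, not the actual patch:

#include <linux/module.h>
#include <linux/types.h>

/* Sketch only: two module parameters selecting a sector range that should
 * return unrecovered-read medium errors, matching the sysfs knobs
 * /sys/module/scsi_debug/parameters/medium_error_start and
 * /sys/module/scsi_debug/parameters/medium_error_count used above. */
static int medium_error_start;
static int medium_error_count;
module_param(medium_error_start, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(medium_error_start, "first sector to report a medium error");
module_param(medium_error_count, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(medium_error_count, "number of sectors reporting medium errors");

/* Hypothetical helper the READ path could consult to decide whether a given
 * LBA falls inside the injected-error window. */
static bool lba_has_injected_error(sector_t lba)
{
	return medium_error_count > 0 &&
	       lba >= (sector_t)medium_error_start &&
	       lba <  (sector_t)medium_error_start + medium_error_count;
}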

In the first few tests we fail properly, with 5 drives left

[ 1083.490877] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.537148] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.585145] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.633141] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.681139] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.729140] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.777136] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.824137] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.873138] md/raid:md0: read error NOT corrected!! (sector 26992 on sdk1).
[ 1083.921131] md/raid:md0: read error NOT corrected!! (sector 26992 on sdk1).
[ 1084.406105] md/raid:md0: Too many read errors, failing device sdk1.
[ 1084.438720] md/raid:md0: Disk failure on sdk1, disabling device.
[ 1084.438720] md/raid:md0: Operation continuing on 5 devices.
[ 1084.512491] md: recovery of RAID array md0


Truncated log
--------------
[ 1055.705293] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1055.705294] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1055.705295] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1055.705296] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1055.705297] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1055.705298] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5b 72 00 00 01 00
[ 1055.705299] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5b 70 00 00 08 00
[ 1057.400256] raid5_end_read_request: 525 callbacks suppressed
[ 1057.431581] md/raid:md0: read error NOT corrected!! (sector 21696 on sdk1).
[ 1057.479247] md/raid:md0: read error NOT corrected!! (sector 21696 on sdk1).
[ 1057.529248] md/raid:md0: read error NOT corrected!! (sector 21696 on sdk1).
[ 1057.578297] md/raid:md0: read error NOT corrected!! (sector 21696 on sdk1).
[ 1057.626243] md/raid:md0: read error NOT corrected!! (sector 21704 on sdk1).
[ 1057.674240] md/raid:md0: read error NOT corrected!! (sector 21704 on sdk1).
[ 1057.722237] md/raid:md0: read error NOT corrected!! (sector 21704 on sdk1).
[ 1057.769234] md/raid:md0: read error NOT corrected!! (sector 21704 on sdk1).
[ 1057.817240] md/raid:md0: read error NOT corrected!! (sector 21704 on sdk1).
[ 1057.862227] md/raid:md0: read error NOT corrected!! (sector 21712 on sdk1).
[ 1060.501095] blk_update_request: 1307 callbacks suppressed
[ 1060.528404] blk_update_request: critical medium error, dev sdk, sector 24416
[ 1060.568167] blk_update_request: critical medium error, dev sdk, sector 24417
[ 1060.569077] blk_update_request: critical medium error, dev sdk, sector 24416
[ 1060.650125] blk_update_request: critical medium error, dev sdk, sector 24418
[ 1060.650279] blk_update_request: critical medium error, dev sdk, sector 24416
[ 1060.742109] scsi_io_completion: 1267 callbacks suppressed
[ 1060.742116] blk_update_request: critical medium error, dev sdk, sector 24420
[ 1060.743121] blk_update_request: critical medium error, dev sdk, sector 24421
[ 1060.744080] blk_update_request: critical medium error, dev sdk, sector 24422
[ 1060.745083] blk_update_request: critical medium error, dev sdk, sector 24423
[ 1060.746076] blk_update_request: critical medium error, dev sdk, sector 24424
[ 1060.966111] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.010017] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.044099] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.078146] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 60 00 00 08 00
[ 1061.125138] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.126063] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.126065] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.126067] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.126069] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 68 00 00 08 00
[ 1061.319287] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.352960] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.385506] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 69 00 00 01 00
[ 1061.425127] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.425305] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.425307] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.425309] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.425311] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 6a 00 00 01 00
[ 1061.426161] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.426163] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.426165] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.426167] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 6b 00 00 01 00
[ 1061.773094] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.807120] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.841088] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 68 00 00 08 00
[ 1061.897061] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.897070] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.897074] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.897078] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.897082] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 6e 00 00 01 00
[ 1061.898037] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.898039] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.898041] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.898043] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 6f 00 00 01 00
[ 1061.899073] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1061.899075] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1061.899077] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1061.899079] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 70 00 00 01 00
[ 1062.381457] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1062.416491] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1062.450443] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 5f 68 00 00 08 00
[ 1062.488043] raid5_end_read_request: 336 callbacks suppressed
[ 1062.520008] md/raid:md0: read error NOT corrected!! (sector 22376 on sdk1).
[ 1062.569071] md/raid:md0: read error NOT corrected!! (sector 22384 on sdk1).
[ 1062.617003] md/raid:md0: read error NOT corrected!! (sector 22384 on sdk1).
[ 1062.668000] md/raid:md0: read error NOT corrected!! (sector 22384 on sdk1).
[ 1062.720003] md/raid:md0: read error NOT corrected!! (sector 22384 on sdk1).
[ 1062.767994] md/raid:md0: read error NOT corrected!! (sector 22392 on sdk1).
[ 1062.817994] md/raid:md0: read error NOT corrected!! (sector 22392 on sdk1).
[ 1062.865989] md/raid:md0: read error NOT corrected!! (sector 22392 on sdk1).
[ 1062.912994] md/raid:md0: read error NOT corrected!! (sector 22392 on sdk1).
[ 1062.960985] md/raid:md0: read error NOT corrected!! (sector 22400 on sdk1).
[ 1065.569905] blk_update_request: 565 callbacks suppressed
[ 1065.597073] blk_update_request: critical medium error, dev sdk, sector 26912
[ 1065.641938] blk_update_request: critical medium error, dev sdk, sector 26921
[ 1065.642869] blk_update_request: critical medium error, dev sdk, sector 26920
[ 1065.722004] blk_update_request: critical medium error, dev sdk, sector 26920
[ 1065.722905] blk_update_request: critical medium error, dev sdk, sector 26922
[ 1065.724824] blk_update_request: critical medium error, dev sdk, sector 26923
[ 1065.725870] blk_update_request: critical medium error, dev sdk, sector 26924
[ 1065.726873] blk_update_request: critical medium error, dev sdk, sector 26925
[ 1065.727864] blk_update_request: critical medium error, dev sdk, sector 26926
[ 1065.728900] blk_update_request: critical medium error, dev sdk, sector 26927
[ 1066.127916] scsi_io_completion: 701 callbacks suppressed
[ 1066.155117] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.197609] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.232676] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.266882] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 90 00 00 08 00
[ 1066.522912] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.523828] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.523830] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.523831] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.523834] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 98 00 00 08 00
[ 1066.717120] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.751974] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.784711] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 99 00 00 01 00
[ 1066.822825] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.822829] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.822831] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.822833] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.822835] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 9a 00 00 01 00
[ 1066.823850] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.823852] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.823854] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.823856] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 9b 00 00 01 00
[ 1066.824899] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.824901] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.824903] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.824905] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 9c 00 00 01 00
[ 1066.825884] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.825886] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.825888] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.825890] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 9d 00 00 01 00
[ 1066.826875] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.826877] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.826879] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.826881] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 9e 00 00 01 00
[ 1066.827886] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1066.827888] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1066.827890] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1066.827891] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 9f 00 00 01 00
[ 1067.770848] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1067.806568] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1067.846810] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 69 98 00 00 08 00
[ 1067.885233] raid5_end_read_request: 272 callbacks suppressed
[ 1067.917417] md/raid:md0: read error NOT corrected!! (sector 24984 on sdk1).
[ 1067.970753] md/raid:md0: read error NOT corrected!! (sector 24992 on sdk1).
[ 1068.018804] md/raid:md0: read error NOT corrected!! (sector 24992 on sdk1).
[ 1068.065798] md/raid:md0: read error NOT corrected!! (sector 24992 on sdk1).
[ 1068.112804] md/raid:md0: read error NOT corrected!! (sector 24992 on sdk1).
[ 1068.160793] md/raid:md0: read error NOT corrected!! (sector 25000 on sdk1).
[ 1068.208836] md/raid:md0: read error NOT corrected!! (sector 25000 on sdk1).
[ 1068.258796] md/raid:md0: read error NOT corrected!! (sector 25000 on sdk1).
[ 1068.306792] md/raid:md0: read error NOT corrected!! (sector 25000 on sdk1).
[ 1068.353788] md/raid:md0: read error NOT corrected!! (sector 25008 on sdk1).
[ 1070.646690] blk_update_request: 878 callbacks suppressed
[ 1070.673589] blk_update_request: critical medium error, dev sdk, sector 27616
[ 1070.717691] blk_update_request: critical medium error, dev sdk, sector 27625
[ 1070.718650] blk_update_request: critical medium error, dev sdk, sector 27624
[ 1070.795598] blk_update_request: critical medium error, dev sdk, sector 27624
[ 1070.796696] blk_update_request: critical medium error, dev sdk, sector 27626
[ 1070.797689] blk_update_request: critical medium error, dev sdk, sector 27627
[ 1070.798654] blk_update_request: critical medium error, dev sdk, sector 27628
[ 1070.799657] blk_update_request: critical medium error, dev sdk, sector 27629
[ 1070.800650] blk_update_request: critical medium error, dev sdk, sector 27630
[ 1070.802614] blk_update_request: critical medium error, dev sdk, sector 27631
[ 1071.524650] scsi_io_completion: 793 callbacks suppressed
[ 1071.553068] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.596996] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.632091] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.666542] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 18 00 00 08 00
[ 1071.707759] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.707765] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.707769] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.707773] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.707777] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 19 00 00 01 00
[ 1071.708661] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.708663] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.708665] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.708667] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 1a 00 00 01 00
[ 1071.709706] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.709708] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.709710] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.709712] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 1b 00 00 01 00
[ 1071.710715] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.710717] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.710719] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.710721] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 1c 00 00 01 00
[ 1071.711660] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.711662] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.711664] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.711666] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 1d 00 00 01 00
[ 1071.713604] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.713607] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.713608] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.713610] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 1e 00 00 01 00
[ 1071.714606] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1071.714608] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1071.714610] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1071.714612] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 1f 00 00 01 00
[ 1072.792679] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1072.830734] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1072.867425] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 18 00 00 08 00
[ 1072.909753] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1072.954793] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1072.987954] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1073.021972] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6c 20 00 00 01 00
[ 1073.062677] raid5_end_read_request: 309 callbacks suppressed
[ 1073.091653] md/raid:md0: read error NOT corrected!! (sector 25632 on sdk1).
[ 1073.139620] md/raid:md0: read error NOT corrected!! (sector 25632 on sdk1).
[ 1073.187562] md/raid:md0: read error NOT corrected!! (sector 25632 on sdk1).
[ 1073.234613] md/raid:md0: read error NOT corrected!! (sector 25632 on sdk1).
[ 1073.281668] md/raid:md0: read error NOT corrected!! (sector 25632 on sdk1).
[ 1073.331608] md/raid:md0: read error NOT corrected!! (sector 25640 on sdk1).
[ 1073.378551] md/raid:md0: read error NOT corrected!! (sector 25640 on sdk1).
[ 1073.425549] md/raid:md0: read error NOT corrected!! (sector 25640 on sdk1).
[ 1073.469544] md/raid:md0: read error NOT corrected!! (sector 25640 on sdk1).
[ 1073.516547] md/raid:md0: read error NOT corrected!! (sector 25640 on sdk1).
[ 1075.718473] blk_update_request: 776 callbacks suppressed
[ 1075.747060] blk_update_request: critical medium error, dev sdk, sector 28240
[ 1075.795511] blk_update_request: critical medium error, dev sdk, sector 28249
[ 1075.796441] blk_update_request: critical medium error, dev sdk, sector 28248
[ 1075.876481] blk_update_request: critical medium error, dev sdk, sector 28250
[ 1075.876616] blk_update_request: critical medium error, dev sdk, sector 28248
[ 1075.963466] blk_update_request: critical medium error, dev sdk, sector 28248
[ 1075.963509] blk_update_request: critical medium error, dev sdk, sector 28252
[ 1075.965392] blk_update_request: critical medium error, dev sdk, sector 28253
[ 1076.089456] blk_update_request: critical medium error, dev sdk, sector 28248
[ 1076.089467] blk_update_request: critical medium error, dev sdk, sector 28256
[ 1076.711440] scsi_io_completion: 895 callbacks suppressed
[ 1076.740623] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1076.783026] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1076.816305] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1076.850359] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e e0 00 00 08 00
[ 1076.896522] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1076.897395] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1076.897397] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1076.897398] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1076.897400] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e e8 00 00 08 00
[ 1077.087464] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.122005] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.156031] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e e9 00 00 01 00
[ 1077.195424] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.195429] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.195431] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.195433] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.195435] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e ea 00 00 01 00
[ 1077.196495] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.196497] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.196499] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.196501] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e eb 00 00 01 00
[ 1077.197443] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.197445] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.197447] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.197449] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e ec 00 00 01 00
[ 1077.199428] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.199430] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.199432] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.199434] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e ed 00 00 01 00
[ 1077.843788] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.877403] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.911429] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e e8 00 00 08 00
[ 1077.958384] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.958406] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1077.958409] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1077.958413] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1077.958417] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e f0 00 00 01 00
[ 1078.160532] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1078.195532] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1078.230100] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 6e e8 00 00 08 00
[ 1078.268056] raid5_end_read_request: 354 callbacks suppressed
[ 1078.300122] md/raid:md0: read error NOT corrected!! (sector 26344 on sdk1).
[ 1078.348415] md/raid:md0: read error NOT corrected!! (sector 26352 on sdk1).
[ 1078.394368] md/raid:md0: read error NOT corrected!! (sector 26352 on sdk1).
[ 1078.441358] md/raid:md0: read error NOT corrected!! (sector 26352 on sdk1).
[ 1078.490360] md/raid:md0: read error NOT corrected!! (sector 26352 on sdk1).
[ 1078.538362] md/raid:md0: read error NOT corrected!! (sector 26360 on sdk1).
[ 1078.586371] md/raid:md0: read error NOT corrected!! (sector 26360 on sdk1).
[ 1078.634356] md/raid:md0: read error NOT corrected!! (sector 26360 on sdk1).
[ 1078.686349] md/raid:md0: read error NOT corrected!! (sector 26360 on sdk1).
[ 1078.734352] md/raid:md0: read error NOT corrected!! (sector 26368 on sdk1).
[ 1080.797255] blk_update_request: 670 callbacks suppressed
[ 1080.823916] blk_update_request: critical medium error, dev sdk, sector 28784
[ 1080.868406] blk_update_request: critical medium error, dev sdk, sector 28793
[ 1080.869273] blk_update_request: critical medium error, dev sdk, sector 28792
[ 1080.946242] blk_update_request: critical medium error, dev sdk, sector 28792
[ 1080.947300] blk_update_request: critical medium error, dev sdk, sector 28794
[ 1080.948231] blk_update_request: critical medium error, dev sdk, sector 28795
[ 1080.949235] blk_update_request: critical medium error, dev sdk, sector 28796
[ 1080.951183] blk_update_request: critical medium error, dev sdk, sector 28797
[ 1080.952220] blk_update_request: critical medium error, dev sdk, sector 28798
[ 1080.953226] blk_update_request: critical medium error, dev sdk, sector 28799
[ 1081.898218] scsi_io_completion: 777 callbacks suppressed
[ 1081.927474] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1081.970006] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.004293] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.038892] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 50 00 00 08 00
[ 1082.086217] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.087177] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.087179] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.087181] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.087183] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 58 00 00 08 00
[ 1082.281757] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.315048] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.348670] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 59 00 00 01 00
[ 1082.388230] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.388235] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.388237] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.388239] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.388241] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 5a 00 00 01 00
[ 1082.389287] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.389289] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.389291] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.389293] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 5b 00 00 01 00
[ 1082.390238] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.390240] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.390242] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.390244] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 5c 00 00 01 00
[ 1082.391289] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.391291] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.391293] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.391295] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 5d 00 00 01 00
[ 1082.393189] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1082.393191] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1082.393193] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1082.393195] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 5e 00 00 01 00
[ 1083.194980] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1083.229083] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1083.262523] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 58 00 00 08 00
[ 1083.310064] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[ 1083.353102] sd 6:0:0:0: [sdk] Sense Key : Medium Error [current] 
[ 1083.386421] sd 6:0:0:0: [sdk] Add. Sense: Unrecovered read error
[ 1083.419704] sd 6:0:0:0: [sdk] CDB: Read(10) 28 00 00 00 71 60 00 00 01 00
[ 1083.460195] raid5_end_read_request: 308 callbacks suppressed
[ 1083.490877] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.537148] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.585145] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.633141] md/raid:md0: read error NOT corrected!! (sector 26976 on sdk1).
[ 1083.681139] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.729140] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.777136] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.824137] md/raid:md0: read error NOT corrected!! (sector 26984 on sdk1).
[ 1083.873138] md/raid:md0: read error NOT corrected!! (sector 26992 on sdk1).
[ 1083.921131] md/raid:md0: read error NOT corrected!! (sector 26992 on sdk1).
[ 1084.406105] md/raid:md0: Too many read errors, failing device sdk1.
[ 1084.438720] md/raid:md0: Disk failure on sdk1, disabling device.
[ 1084.438720] md/raid:md0: Operation continuing on 5 devices.
[ 1084.512491] md: recovery of RAID array md0

Comment 15 loberman 2018-02-05 13:14:10 UTC
Multiple loops through this test have not reproduced the issue here.
Each time after multiple read failures we get a stable raid.

..
..
[ 1084.406105] md/raid:md0: Too many read errors, failing device sdk1.
[ 1084.438720] md/raid:md0: Disk failure on sdk1, disabling device.
[ 1084.438720] md/raid:md0: Operation continuing on 5 devices.
[ 1084.512491] md: recovery of RAID array md0

Comment 20 loberman 2018-02-06 19:15:49 UTC
I changed tactics and set the device to stall I/O long enough to time out and take one of the RAID members offline.


This never recovers, so another bug :)


[ 1208.473834] sd 6:0:0:0: [sdk] Medium access timeout failure. Offlining disk!
[ 1208.477236] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.480609] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2b c0 00 00 40 00
[ 1208.483945] blk_update_request: I/O error, dev sdk, sector 2567104
[ 1208.487287] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.490606] sd 6:0:0:0: [sdk] killing request
[ 1208.493876] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.493923] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 1208.493929] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2c 00 00 00 40 00
[ 1208.493933] blk_update_request: I/O error, dev sdk, sector 2567168
[ 1208.506990] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.510284] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.513522] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.516756] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.519984] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.523208] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.526428] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.529642] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.532792] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.535860] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.538864] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.541736] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.544480] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.547135] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.549795] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.552438] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.555016] sd 6:0:0:0: rejecting I/O to offline device
..
..
[ 1208.663595] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.664221] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2b 40 00 00 40 00
[ 1208.664900] blk_update_request: I/O error, dev sdk, sector 2566976
[ 1208.665656] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.666521] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2b 00 00 00 40 00
[ 1208.667443] blk_update_request: I/O error, dev sdk, sector 2566912
[ 1208.668439] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.669551] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2a c0 00 00 40 00
[ 1208.670651] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.670655] blk_update_request: I/O error, dev sdk, sector 2056

[ 1208.670658] md: super_written gets error=-5, uptodate=0
[ 1208.670660] md/raid:md0: Disk failure on sdk1, disabling device.
md/raid:md0: Operation continuing on 5 devices.
[ 1208.670664] sd 6:0:0:0: rejecting I/O to offline device
[ 1208.670665] blk_update_request: I/O error, dev sdk, sector 2192
[ 1208.670666] md: super_written gets error=-5, uptodate=0

We land up here

[ 1208.691849] md: recovery of RAID array md0
[ 1208.701926] ------------[ cut here ]------------
[ 1208.701943] WARNING: CPU: 17 PID: 2562 at drivers/md/raid5.c:4667 handle_stripe+0x23d6/0x2460 [raid456]
[ 1208.702036] blk_update_request: I/O error, dev sdk, sector 2566848
[ 1208.702047] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.702048] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2a 80 00 00 40 00
[ 1208.702049] blk_update_request: I/O error, dev sdk, sector 2566784
[ 1208.702059] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.702060] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2a 40 00 00 40 00
[ 1208.702061] blk_update_request: I/O error, dev sdk, sector 2566720
[ 1208.702070] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.702071] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 2a 00 00 00 40 00
[ 1208.702072] blk_update_request: I/O error, dev sdk, sector 2566656
[ 1208.702082] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.702084] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 29 c0 00 00 40 00
[ 1208.702093] sd 6:0:0:0: [sdk] FAILED Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
[ 1208.702095] sd 6:0:0:0: [sdk] CDB: Write(10) 2a 00 00 27 29 80 00 00 40 00

This loops over and over

[ 1208.701944] Modules linked in: target_core_user uio target_core_pscsi target_core_file target_core_iblock scsi_debug xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm dm_mirror dm_region_hash dm_log mlx4_ib dm_mod ses enclosure joydev intel_powerclamp ib_qib rdmavt ib_core dca wmi coretemp kvm_intel kvm hpilo gpio_ich pcspkr
[ 1208.765559]  megaraid_sas ipmi_ssif hpwdt iTCO_wdt iTCO_vendor_support i7core_edac edac_core irqbypass lpc_ich shpchp sg acpi_power_meter ipmi_si ipmi_devintf ipmi_msghandler pcc_cpufreq nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c mlx4_en ptp pps_core sd_mod sr_mod cdrom radeon ata_generic pata_acpi lpfc i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm crc_t10dif mlx4_core drm crct10dif_generic ata_piix crc32c_intel serio_raw libata scsi_transport_fc nvme hpsa i2c_core devlink nvme_core netxen_nic scsi_tgt scsi_transport_sas crct10dif_common
[ 1208.785077] CPU: 17 PID: 2562 Comm: md0_raid6 Not tainted 3.10.0-693.11.6.el7.scsi_debug.x86_64 #1
[ 1208.789113] Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
[ 1208.793166] Call Trace:
[ 1208.797205]  [<ffffffff816a5ea1>] dump_stack+0x19/0x1b
[ 1208.801276]  [<ffffffff81089448>] __warn+0xd8/0x100
[ 1208.805312]  [<ffffffff8108958d>] warn_slowpath_null+0x1d/0x20
[ 1208.809347]  [<ffffffffc0ab25a6>] handle_stripe+0x23d6/0x2460 [raid456]
[ 1208.813391]  [<ffffffff810c61c3>] ? try_to_wake_up+0x183/0x340
[ 1208.817493]  [<ffffffff816b9ca0>] ? apic_timer_interrupt+0x1a0/0x240
[ 1208.821639]  [<ffffffff816b9c99>] ? apic_timer_interrupt+0x199/0x240
[ 1208.825774]  [<ffffffff810c6452>] ? default_wake_function+0x12/0x20
[ 1208.830012]  [<ffffffff810bc0f8>] ? __wake_up_common+0x58/0x90
[ 1208.834275]  [<ffffffffc0ab29dd>] handle_active_stripes.isra.55+0x3ad/0x530 [raid456]
[ 1208.838637]  [<ffffffffc0ab3058>] raid5d+0x4f8/0x7f0 [raid456]
[ 1208.843056]  [<ffffffff8150ff55>] md_thread+0x155/0x1a0
[ 1208.847519]  [<ffffffff810b34b0>] ? wake_up_atomic_t+0x30/0x30
[ 1208.852027]  [<ffffffff8150fe00>] ? find_pers+0x80/0x80
[ 1208.856578]  [<ffffffff810b252f>] kthread+0xcf/0xe0
[ 1208.861153]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 1208.865767]  [<ffffffff816b8798>] ret_from_fork+0x58/0x90
[ 1208.870397]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 1208.875041] ---[ end trace 293211e7e4a1fab0 ]---
[ 1208.879641] ------------[ cut here ]------------

..
..
..
[ 1216.052241] CPU: 49 PID: 2562 Comm: md0_raid6 Tainted: G        W      ------------   3.10.0-693.11.6.el7.scsi_debug.x86_64 #1
[ 1216.063180] Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
[ 1216.068596] Call Trace:
[ 1216.073885]  [<ffffffff816a5ea1>] dump_stack+0x19/0x1b
[ 1216.079119]  [<ffffffff81089448>] __warn+0xd8/0x100
[ 1216.084237]  [<ffffffff8108958d>] warn_slowpath_null+0x1d/0x20
[ 1216.089266]  [<ffffffffc0ab25a6>] handle_stripe+0x23d6/0x2460 [raid456]
[ 1216.094214]  [<ffffffff810c61c3>] ? try_to_wake_up+0x183/0x340
[ 1216.099066]  [<ffffffff816b9ca0>] ? apic_timer_interrupt+0x1a0/0x240
[ 1216.103819]  [<ffffffff810c6452>] ? default_wake_function+0x12/0x20
[ 1216.108469]  [<ffffffff810bc0f8>] ? __wake_up_common+0x58/0x90
[ 1216.113013]  [<ffffffffc0ab29dd>] handle_active_stripes.isra.55+0x3ad/0x530 [raid456]
[ 1216.117508]  [<ffffffffc0ab3058>] raid5d+0x4f8/0x7f0 [raid456]
[ 1216.121910]  [<ffffffff8150ff55>] md_thread+0x155/0x1a0
[ 1216.126270]  [<ffffffff810b34b0>] ? wake_up_atomic_t+0x30/0x30
[ 1216.130613]  [<ffffffff8150fe00>] ? find_pers+0x80/0x80
[ 1216.134934]  [<ffffffff810b252f>] kthread+0xcf/0xe0
[ 1216.139249]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 1216.143584]  [<ffffffff816b8798>] ret_from_fork+0x58/0x90
[ 1216.147903]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 1216.152224] ---[ end trace 293211e7e4a1fadb ]---
[ 1216.156546] ------------[ cut here ]------------
..
..
[ 1235.300374] WARNING: CPU: 21 PID: 2562 at drivers/md/raid5.c:4667 handle_stripe+0x23d6/0x2460 [raid456]
[ 1235.304822] Modules linked in: target_core_user uio target_core_pscsi target_core_file target_core_iblock scsi_debug xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx rpcrdma ib_isert iscsi_target_mod ib_iser libiscsi scsi_transport_iscsi ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm dm_mirror dm_region_hash dm_log mlx4_ib dm_mod ses enclosure joydev intel_powerclamp ib_qib rdmavt ib_core dca wmi coretemp kvm_intel kvm hpilo gpio_ich pcspkr
[ 1235.333967]  megaraid_sas ipmi_ssif hpwdt iTCO_wdt iTCO_vendor_support i7core_edac edac_core irqbypass lpc_ich shpchp sg acpi_power_meter ipmi_si ipmi_devintf ipmi_msghandler pcc_cpufreq nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c mlx4_en ptp pps_core sd_mod sr_mod cdrom radeon ata_generic pata_acpi lpfc i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm crc_t10dif mlx4_core drm crct10dif_generic ata_piix crc32c_intel serio_raw libata scsi_transport_fc nvme hpsa i2c_core devlink nvme_core netxen_nic scsi_tgt scsi_transport_sas crct10dif_common
[ 1235.360900] CPU: 21 PID: 2562 Comm: md0_raid6 Tainted: G        W      ------------   3.10.0-693.11.6.el7.scsi_debug.x86_64 #1
[ 1235.371850] Hardware name: HP ProLiant DL580 G7, BIOS P65 10/01/2013
[ 1235.377271] Call Trace:
[ 1235.382567]  [<ffffffff816a5ea1>] dump_stack+0x19/0x1b
[ 1235.387806]  [<ffffffff81089448>] __warn+0xd8/0x100
[ 1235.392931]  [<ffffffff8108958d>] warn_slowpath_null+0x1d/0x20
[ 1235.397966]  [<ffffffffc0ab25a6>] handle_stripe+0x23d6/0x2460 [raid456]
[ 1235.402921]  [<ffffffff810c61c3>] ? try_to_wake_up+0x183/0x340
[ 1235.407782]  [<ffffffff816b9ca0>] ? apic_timer_interrupt+0x1a0/0x240
[ 1235.412543]  [<ffffffff810c6452>] ? default_wake_function+0x12/0x20
[ 1235.417199]  [<ffffffff810bc0f8>] ? __wake_up_common+0x58/0x90
[ 1235.421750]  [<ffffffffc0ab29dd>] handle_active_stripes.isra.55+0x3ad/0x530 [raid456]
[ 1235.426254]  [<ffffffffc0ab3058>] raid5d+0x4f8/0x7f0 [raid456]
[ 1235.430664]  [<ffffffff8150ff55>] md_thread+0x155/0x1a0
[ 1235.435031]  [<ffffffff810b34b0>] ? wake_up_atomic_t+0x30/0x30
[ 1235.439382]  [<ffffffff8150fe00>] ? find_pers+0x80/0x80
[ 1235.443711]  [<ffffffff810b252f>] kthread+0xcf/0xe0
[ 1235.448038]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 1235.452383]  [<ffffffff816b8798>] ret_from_fork+0x58/0x90
[ 1235.456715]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 1235.461042] ---[ end trace 293211e7e4a1fb4d ]---
[ 1235.465372] ------------[ cut here ]------------

Comment 29 David Jeffery 2018-08-20 19:37:38 UTC
I am reopening this bug as I believe I have found the cause.  Another customer has hit this issue, capturing 2 vmcores from crashes at the same BUG check.


The customer's systems were using 8-disk RAID6 arrays.  The crashes occurred with one failing disk while scrubbing was running with the "check" action.  After the parity information for a stripe had been checked for a scrubbing operation, the uptodate value was expected to be 7 or 8 (at most 1 disk had failed), but the uptodate value was 6, which triggered the BUG_ON(s->uptodate < disks - 1).

crash> struct stripe_head_state ffff8aba77affbf0
struct stripe_head_state {
  syncing = 1,
...
  uptodate = 6,
  to_read = 0,
  to_write = 0,
  failed = 1,
...


For a stripe read into memory with a single failed disk, the failed counter was 1 as expected, but the uptodate counter didn't count all 7 healthy disks as uptodate.  Looking at the individual disk states for the stripe, 6 of the 8 devices had an r5dev flags value of 17, or R5_Insync | R5_UPTODATE.  The remaining working disk had a flags value of only 16, or R5_Insync. The R5_UPTODATE flag was not set.

crash> p ((struct stripe_head *)0xffff8a9c83273280)->dev[5]
$8 = {
...
  sector = 0,
  flags = 16,
  log_checksum = 0
}

This would appear to be why the uptodate count was lower than expected.  The last disk was the failed device, with flags R5_ReadNoMerge | R5_ReadError | R5_ReWrite.

crash> p ((struct stripe_head *)0xffff8a9c83273280)->dev[6]
$9 = {
...
  sector = 0,
  flags = 1792,
  log_checksum = 0
}
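
As a cross-check, those flags values decode against the r5dev flag bit positions. Here is a small standalone sketch; the bit positions are taken from my reading of the 3.10-era r5dev_flags enum in drivers/md/raid5.h, so treat them as an assumption rather than a quote:

#include <stdio.h>

/* Assumed bit positions (r5dev_flags enum order in drivers/md/raid5.h,
 * 3.10-era kernels); only the flags relevant to this analysis are listed. */
enum {
	R5_UPTODATE    = 0,   /* page contains current data */
	R5_Insync      = 4,   /* rdev && rdev->in_sync at start */
	R5_ReadNoMerge = 8,   /* prevent bio from merging in block layer */
	R5_ReadError   = 9,   /* saw a read error here recently */
	R5_ReWrite     = 10,  /* have tried to over-write the read error */
};

static void decode(unsigned long flags)
{
	printf("flags=%4lu:%s%s%s%s%s\n", flags,
	       flags & (1UL << R5_UPTODATE)    ? " R5_UPTODATE"    : "",
	       flags & (1UL << R5_Insync)      ? " R5_Insync"      : "",
	       flags & (1UL << R5_ReadNoMerge) ? " R5_ReadNoMerge" : "",
	       flags & (1UL << R5_ReadError)   ? " R5_ReadError"   : "",
	       flags & (1UL << R5_ReWrite)     ? " R5_ReWrite"     : "");
}

int main(void)
{
	decode(17);   /* the six healthy data disks */
	decode(16);   /* the "p" parity disk: in sync but not uptodate */
	decode(1792); /* the failed disk */
	return 0;
}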


These 2 non-uptodate disks held the parity information for this RAID6 stripe.

crash> struct stripe_head ffff8a9c83273280
struct stripe_head {
...
  pd_idx = 5,
  qd_idx = 6,
...

The normal data disks of the stripe were as expected and the "q" parity device had failed, but the "p" parity device was the issue: it was not marked uptodate while still being considered healthy.


The "p" parity disk uses the same parity format as the RAID5 code.  When the "p" disk is available but the "q" disk is failed, the RAID6 code does not use the RAID6-specific function which can check both sets of parity data.  Instead, it uses the RAID5 parity check code which also has the side effect of destroying the in-memory copy of the parity information.  This would be why the "p" parity disk was no longer marked uptodate.  The parity check destroyed its data.  

handle_parity_checks6() cleared the R5_UPTODATE flag and decremented uptodate itself to account for using the parity check that destroys the parity data:


        case check_state_idle:
                /* start a new check operation if there are < 2 failures */
                if (s->failed == s->q_failed) {
                        /* The only possible failed device holds Q, so it
                         * makes sense to check P (If anything else were failed,
                         * we would have used P to recreate it).
                         */
                        sh->check_state = check_state_run;
                }
...
                if (sh->check_state == check_state_run) {
                        /* async_xor_zero_sum destroys the contents of P */
                        clear_bit(R5_UPTODATE, &sh->dev[pd_idx].flags);
                        s->uptodate--;
                }


When the parity check succeeds, the handle_parity_checks6 function fails to account for this difference in behavior.   The parity check succeeds while having a failed device, and the raid6 code attempts to jump from "check_state_check_result" state to "check_state_compute_result" state without re-reading or re-computing the destroyed parity data.  This results in the uptodate count being too low for the "check_state_compute_result" state, triggering the bug.

        case check_state_check_result:
                sh->check_state = check_state_idle;

                /* handle a successful check operation, if parity is correct
                 * we are done.  Otherwise update the mismatch count and repair
                 * parity if !MD_RECOVERY_CHECK
                 */
                if (sh->ops.zero_sum_result == 0) {
                        /* both parities are correct */
                        if (!s->failed)
                                set_bit(STRIPE_INSYNC, &sh->state);
                        else {
                                /* in contrast to the raid5 case we can validate
                                 * parity, but still have a failure to write
                                 * back
                                 */
                                sh->check_state = check_state_compute_result;
                                /* Returning at this point means that we may go
                                 * off and bring p and/or q uptodate again so
                                 * we make sure to check zero_sum_result again
                                 * to verify if p or q need writeback
                                 */
                        }


To trigger this issue, you not only need a failed disk but also a scrubbing operation in progress.  The customer's systems both crashed early Sunday morning, when the raid-check script is run by default from cron.

Comment 38 David Jeffery 2018-09-25 15:29:49 UTC
Created attachment 1486781 [details]
Don't jump to compute_result state from check_result state

Changing state from check_state_check_result to check_state_compute_result is not only unsafe but also doesn't appear to serve a valid purpose.  A raid6 check should only be pushing out extra writes if doing repair and a mismatch occurs.  The stripe dev management will already try to do repair writes for failing sectors.

This patch makes the raid6 check_state_check_result handling work more like raid5's.  If there are somehow too many failures for a check, just quit the check operation for the stripe.  When the checks pass, don't use check_state_compute_result for a purpose it isn't needed for and is unsafe for.  Just mark the stripe as in sync for passing its parity checks and let the stripe dev read/write code and the bad blocks list do their job handling I/O errors.

I've been running this patch against upstream.  It fixes the crash and so far has handled the error scenarios I've run it through without issue.
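
Based on that description, a rough sketch of the reworked check_state_check_result handling; this is reconstructed from the explanation above, so treat it as illustrative rather than a copy of the attached patch:

        case check_state_check_result:
                sh->check_state = check_state_idle;

                /* if there are somehow too many failures for this check,
                 * just quit the check operation for the stripe */
                if (s->failed > 1)
                        break;

                if (sh->ops.zero_sum_result == 0) {
                        /* whatever parity was checked came back correct: mark
                         * the stripe in sync instead of jumping to
                         * check_state_compute_result, and let the stripe dev
                         * read/write code and the bad blocks list handle any
                         * failed device */
                        set_bit(STRIPE_INSYNC, &sh->state);
                } else {
                        /* mismatch: bump the mismatch count and, unless this
                         * is a read-only "check", schedule a parity repair,
                         * same as the existing code */
                }
                break;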

Comment 58 loberman 2019-03-27 13:48:47 UTC
This is a kernel fix. 
Once a zstream containing the fix is targeted to come out within a month, it can then be used as a base for a hotfix kernel request.
I will approach Jon Magrini to get the process started; next week would only bring a test kernel, not a hotfix.

Comment 61 Steffen Froemer 2019-04-01 11:24:02 UTC
(In reply to loberman from comment #58)
> This is a kernel fix. 
> Once a zstream containing the fix is targeted to come out within a month, it
> can then be used as a base for a hotfix kernel request.
> I will approach Jon Magrini to get the process started; next week would only
> bring a test kernel, not a hotfix.

A test kernel would be sufficient initially. My customer would like to test the fix on his non-production systems.

Comment 65 Bruno Meneguele 2019-04-09 13:36:41 UTC
Patch(es) committed on kernel-3.10.0-1037.el7

Comment 79 ChanghuiZhong 2019-06-12 06:47:18 UTC
Hi, all

Following comment #36, I reproduced this bug on kernel-3.10.0-693.11.6.el7.x86_64:

      KERNEL: /usr/lib/debug/lib/modules/3.10.0-693.11.6.el7.x86_64/vmlinux
    DUMPFILE: /var/crash/hpe-bl465cgen8-01.hpe1.lab.eng.bos.redhat.com/10.16.203.19-2019-06-12-02:11:12/vmcore  [PARTIAL DUMP]
        CPUS: 16
        DATE: Wed Jun 12 02:10:11 2019
      UPTIME: 02:45:36
LOAD AVERAGE: 0.00, 0.01, 0.04
       TASKS: 302
    NODENAME: hpe-bl465cgen8-01.hpe1.lab.eng.bos.redhat.com
     RELEASE: 3.10.0-693.11.6.el7.x86_64
     VERSION: #1 SMP Thu Dec 28 14:23:39 EST 2017
     MACHINE: x86_64  (2099 Mhz)
      MEMORY: 16 GB
       PANIC: "kernel BUG at drivers/md/raid5.c:3981!"
         PID: 1845
     COMMAND: "md6_raid6"
        TASK: ffff8802bd2b4f10  [THREAD_INFO: ffff880417740000]
         CPU: 12
       STATE: TASK_RUNNING (PANIC)

crash> sys | grep -ie RELEASE -e PANIC
     RELEASE: 3.10.0-693.11.6.el7.x86_64
       PANIC: "kernel BUG at drivers/md/raid5.c:3981!"
crash> bt
PID: 1845   TASK: ffff8802bd2b4f10  CPU: 12  COMMAND: "md6_raid6"
 #0 [ffff8804177437f0] machine_kexec at ffffffff8105c58b
 #1 [ffff880417743850] __crash_kexec at ffffffff81106742
 #2 [ffff880417743920] crash_kexec at ffffffff81106830
 #3 [ffff880417743938] oops_end at ffffffff816b0aa8
 #4 [ffff880417743960] die at ffffffff8102e87b
 #5 [ffff880417743990] do_trap at ffffffff816b01f0
 #6 [ffff8804177439e0] do_invalid_op at ffffffff8102b174
 #7 [ffff880417743a90] invalid_op at ffffffff816bd1ae
    [exception RIP: handle_parity_checks6+783]
    RIP: ffffffffc0888f6f  RSP: ffff880417743b40  RFLAGS: 00010297
    RAX: 000000000000202b  RBX: 0000000000000000  RCX: 0000000000000004
    RDX: ffff880417743bf0  RSI: ffff880229789070  RDI: ffff880434075828
    RBP: ffff880417743ca8   R8: 0000000000000001   R9: 0000000000000006
    R10: 0000000000000004  R11: 0000000000000000  R12: 0000000000000000
    R13: ffff880434075800  R14: 0000000000000000  R15: ffff880229789070
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #8 [ffff880417743b48] apic_timer_interrupt at ffffffff816b9cbc
 #9 [ffff880417743bb0] apic_timer_interrupt at ffffffff816b9c61
#10 [ffff880417743cb0] handle_active_stripes at ffffffffc08959dd [raid456]
#11 [ffff880417743d60] raid5d at ffffffffc0896058 [raid456]
#12 [ffff880417743e50] md_thread at ffffffff8150ff55
#13 [ffff880417743ec8] kthread at ffffffff810b252f
#14 [ffff880417743f50] ret_from_fork at ffffffff816b8798
crash> 

dmesg log:

[ 9935.394023] ------------[ cut here ]------------
[ 9935.414807] kernel BUG at drivers/md/raid5.c:3981!
[ 9935.436455] invalid opcode: 0000 [#1] SMP 
[ 9935.454977] Modules linked in: raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx dm_service_time scsi_debug rpcsec_gss_krb5 nfsv4 dns_resolver nfs fscache rpcrdma ib_isert iscsi_target_mod ib_iser ib_srpt target_core_mod ib_srp scsi_transport_srp scsi_tgt ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_core amd64_edac_mod edac_mce_amd edac_core kvm_amd kvm irqbypass crc32_pclmul ghash_clmulni_intel ipmi_ssif aesni_intel lrw gf128mul glue_helper ablk_helper cryptd be2iscsi iscsi_boot_sysfs libiscsi scsi_transport_iscsi sg pcspkr ipmi_si sp5100_tco i2c_piix4 fam15h_power k10temp ipmi_devintf ipmi_msghandler hpilo hpwdt shpchp dm_multipath pcc_cpufreq acpi_power_meter nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sd_mod crc_t10dif

[ 9935.779397]  crct10dif_generic ata_generic pata_acpi mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ahci libahci pata_atiixp libata hpsa crct10dif_pclmul be2net crct10dif_common crc32c_intel serio_raw i2c_core scsi_transport_sas dm_mirror dm_region_hash dm_log dm_mod
[ 9935.897859] CPU: 12 PID: 1845 Comm: md6_raid6 Not tainted 3.10.0-693.11.6.el7.x86_64 #1
[ 9935.933905] Hardware name: HP ProLiant BL465c Gen8, BIOS A26 02/06/2014
[ 9935.963728] task: ffff8802bd2b4f10 ti: ffff880417740000 task.ti: ffff880417740000
[ 9935.997436] RIP: 0010:[<ffffffffc0888f6f>]  [<ffffffffc0888f6f>] handle_parity_checks6.isra.33+0x30f/0x320 [raid456]
[ 9936.046521] RSP: 0018:ffff880417743b40  EFLAGS: 00010297
[ 9936.073189] RAX: 000000000000202b RBX: 0000000000000000 RCX: 0000000000000004
[ 9936.108456] RDX: ffff880417743bf0 RSI: ffff880229789070 RDI: ffff880434075828
[ 9936.140240] RBP: ffff880417743ca8 R08: 0000000000000001 R09: 0000000000000006
[ 9936.172330] R10: 0000000000000004 R11: 0000000000000000 R12: 0000000000000000
[ 9936.204374] R13: ffff880434075800 R14: 0000000000000000 R15: ffff880229789070
[ 9936.236509] FS:  00007fc736483740(0000) GS:ffff88043eb00000(0000) knlGS:0000000000000000
[ 9936.273056] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 9936.298904] CR2: 00007f5d83266000 CR3: 00000000019fa000 CR4: 00000000000407e0
[ 9936.331240] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 9936.363840] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 9936.395956] Call Trace:
[ 9936.406926]  [<ffffffffc0894e2d>] ? handle_stripe+0x1c5d/0x2460 [raid456]
[ 9936.437580]  [<ffffffff816b9cbc>] ? apic_timer_interrupt+0x1bc/0x240
[ 9936.466298]  [<ffffffff816b9cb5>] ? apic_timer_interrupt+0x1b5/0x240
[ 9936.495028]  [<ffffffff816b9cae>] ? apic_timer_interrupt+0x1ae/0x240
[ 9936.523575]  [<ffffffff816b9ca7>] ? apic_timer_interrupt+0x1a7/0x240
[ 9936.553732]  [<ffffffff816b9ca0>] ? apic_timer_interrupt+0x1a0/0x240
[ 9936.585772]  [<ffffffff816b9c99>] ? apic_timer_interrupt+0x199/0x240
[ 9936.616770]  [<ffffffff816b9c92>] ? apic_timer_interrupt+0x192/0x240
[ 9936.645423]  [<ffffffff816b9c8b>] ? apic_timer_interrupt+0x18b/0x240
[ 9936.674044]  [<ffffffff816b9c84>] ? apic_timer_interrupt+0x184/0x240
[ 9936.702616]  [<ffffffff816b9c7d>] ? apic_timer_interrupt+0x17d/0x240
[ 9936.731259]  [<ffffffff816b9c68>] ? apic_timer_interrupt+0x168/0x240
[ 9936.759890]  [<ffffffff816b9c61>] ? apic_timer_interrupt+0x161/0x240
[ 9936.788516]  [<ffffffff816b9c5a>] ? apic_timer_interrupt+0x15a/0x240
[ 9936.817200]  [<ffffffff816b9c53>] ? apic_timer_interrupt+0x153/0x240
[ 9936.845562]  [<ffffffff816b9c4c>] ? apic_timer_interrupt+0x14c/0x240
[ 9936.874301]  [<ffffffff816b9c45>] ? apic_timer_interrupt+0x145/0x240
[ 9936.903287]  [<ffffffffc08959dd>] handle_active_stripes.isra.55+0x3ad/0x530 [raid456]
[ 9936.938554]  [<ffffffffc0896058>] raid5d+0x4f8/0x7f0 [raid456]
[ 9936.964891]  [<ffffffff8102954d>] ? __switch_to+0xcd/0x500
[ 9936.989573]  [<ffffffff8150ff55>] md_thread+0x155/0x1a0
[ 9937.013089]  [<ffffffff810b34b0>] ? wake_up_atomic_t+0x30/0x30
[ 9937.039388]  [<ffffffff8150fe00>] ? find_pers+0x80/0x80
[ 9937.064449]  [<ffffffff810b252f>] kthread+0xcf/0xe0
[ 9937.088839]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 9937.119281]  [<ffffffff816b8798>] ret_from_fork+0x58/0x90
[ 9937.143638]  [<ffffffff810b2460>] ? insert_kthread_work+0x40/0x40
[ 9937.171135] Code: 1b c7 46 60 02 00 00 00 b8 02 00 00 00 c7 86 d8 00 00 00 00 00 00 00 e9 0f fe ff ff c7 46 60 03 00 00 00 b0 03 eb e6 0f 0b 0f 0b <0f> 0b 0f 0b 0f 1f 00 66 2e 0f 1f 84 00 00 00 00 00 66 66 66 66 
[ 9937.256916] RIP  [<ffffffffc0888f6f>] handle_parity_checks6.isra.33+0x30f/0x320 [raid456]
[ 9937.293667]  RSP <ffff880417743b40>



I confirmed that this bug has been fixed in kernel-3.10.0-1037.el7, so I am moving the bug status to "VERIFIED".

Thanks,
Changhui

Comment 81 errata-xmlrpc 2019-08-06 12:10:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2029

Comment 82 Nigel Croxon 2019-10-29 13:38:39 UTC
clearing needinfo
