Bug 1462437 - libata - Marvell 88SM9715 SATA port multiplier problem
Status: CLOSED EOL
Product: Fedora
Classification: Fedora
Component: kernel
Version: 25
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Assigned To: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
Reported: 2017-06-17 13:33 EDT by Joseph A. Farmer
Modified: 2017-12-12 05:36 EST
CC: 7 users

Last Closed: 2017-12-12 05:36:10 EST
Type: Bug

Attachments
dmesg showing problem. (18.91 KB, application/x-gzip)
    2017-06-17 13:33 EDT, Joseph A. Farmer
new dmesg (1.84 KB, application/x-gzip)
    2017-06-17 13:40 EDT, Joseph A. Farmer

Description Joseph A. Farmer 2017-06-17 13:33:02 EDT
Created attachment 1288575
dmesg showing problem.

Description of problem: In a drive enclosure that uses two Marvell 88SM9715 SATA port multipliers, spanning a RAID array across both of them causes problems.


Version-Release number of selected component (if applicable):
All

How reproducible:
Every time

Steps to Reproduce:
1. Take a drive enclosure with 8 bays, 4 behind each Marvell 88SM9715. Create an array with drives on both multipliers; it misbehaves under load.
2. Install 5 drives, 4 on the first 9715 and 1 on the second. Put the machine under load. Drives start dropping out and then coming back.
3. Rearrange the drives however you like; as long as the RAID spans the two Marvells, the problems occur (a reproduction sketch follows).
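
A minimal reproduction sketch in Python. The device names /dev/sdb through /dev/sdf are hypothetical placeholders; substitute whatever your enclosure enumerates as, with the first four drives behind one 9715 and the fifth behind the other:

    #!/usr/bin/env python3
    # Reproduction sketch; device names below are placeholders. The first
    # four drives are assumed to sit behind Marvell 9715 #1, the fifth
    # behind 9715 #2. WARNING: destroys data on those drives.
    import subprocess

    PM1 = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # behind 9715 #1
    PM2 = ["/dev/sdf"]                                      # behind 9715 #2
    DRIVES = PM1 + PM2

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create one md array that spans both port multipliers.
    run(["mdadm", "--create", "--run", "/dev/md0", "--level=5",
         "--raid-devices=%d" % len(DRIVES)] + DRIVES)

    # Sustained direct-I/O write load; the dropouts only appear under load.
    run(["dd", "if=/dev/zero", "of=/dev/md0", "bs=1M", "count=65536",
         "oflag=direct"])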

Actual results:
Drives disappear and reappear.

Expected results:
Drives shouldn't disappear.

Additional info:
I've attached my dmesg. There is a lot of other stuff in there. In any event, I originally thought the problem was cable-related. It's clearly not. I created an array with drives on just the first multiplier and don't have the problem; with drives on both, I do. I pulled those drives (5) and put 3 new drives in, 2 on one 9715 and 1 on the other. Same problems, different bays. With two arrays (4 on the one 9715 and 4 on the other) the problem disappears. Ergo, it's spanning the two 9715s under load. I booted a kernel with NCQ disabled and that did nothing. Booted with NCQ enabled but the drive speeds limited to 3G and that didn't help either. Ensuring the arrays don't span the Marvells fixes the problem. Again, it generally happens under load.
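
For reference, whether the NCQ and link-speed changes actually took effect can be read back from sysfs. A quick sketch, assuming the usual libata sysfs layout (/sys/block/sdX/device/queue_depth and /sys/class/ata_link/linkN/sata_spd; paths may vary):

    #!/usr/bin/env python3
    # Read back NCQ queue depth and negotiated SATA link speed from sysfs.
    # A queue_depth of 1 means NCQ is effectively off for that disk.
    import glob
    import os

    for path in sorted(glob.glob("/sys/block/sd*/device/queue_depth")):
        dev = path.split("/")[3]
        with open(path) as f:
            print("/dev/%s: queue_depth=%s" % (dev, f.read().strip()))

    # Negotiated speed per libata link, e.g. "1.5 Gbps", "3.0 Gbps".
    for path in sorted(glob.glob("/sys/class/ata_link/link*/sata_spd")):
        link = os.path.basename(os.path.dirname(path))
        with open(path) as f:
            print("%s: %s" % (link, f.read().strip() or "<unknown>"))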

Note: I had an array with five 3TB drives, 4 on one channel and 1 on the other. That has had problems since day one. In the dmesg you'll see other stuff. I bought three 6TB drives and removed the 3TB drives to set up the new (3x6TB) array. Not knowing the cause, I spanned the 9715s. Same problem. Moved all three 6TB drives to the same channel and the problem was gone. I'm in the process of knocking that 5x3TB array down to 4x3TB to make the problem disappear (Marvell #1: four 3TB drives; Marvell #2: three 6TB drives; two arrays).

The Marvell 9715 didn't show up in pci.ids. I submitted it yesterday to the pci.ids project.

The controller is a HighPoint RocketRAID 642L.

Changing bays and drives doesn't help - it's the spanning of the Marvells that does it. That was clear from extensive testing. Put the three 6TB drives on the top Marvell (4 bays) - no problem. Put them on the bottom one (4 bays) - no problem. Span the Marvells - problems.

Thanks. Not a huge deal for me, as I have a workaround; it's just an odd problem, so I figured I'd post a bug for others who may be encountering it. Generally people point to this error being cable-related. It's clearly not.
Comment 1 Joseph A. Farmer 2017-06-17 13:40 EDT
Created attachment 1288576
new dmesg

ATA 7 had 4 of the 5 3TB drives on /dev/md0.
ATA 8 had 1 of the 3TB drives on /dev/md0.
ATA 8 also had the 3 6TB drives on /dev/md1.
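
For anyone correlating the ataX numbers above with drives, a sketch that recovers the mapping from sysfs (assuming the usual /sys/block symlink layout; verify on your own system):

    #!/usr/bin/env python3
    # Map libata port numbers ("ataN" in dmesg) to /dev/sdX devices by
    # resolving each block device's sysfs path, which embeds the port
    # (and, behind a port multiplier, a link like "link8.2").
    import os
    import re

    for dev in sorted(os.listdir("/sys/block")):
        if not dev.startswith("sd"):
            continue
        target = os.path.realpath(os.path.join("/sys/block", dev))
        m = re.search(r"/(ata\d+)/", target)
        if m:
            print("%s -> /dev/%s" % (m.group(1), dev))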

Powered them down.  Pulled the 6TBs.  Removed the 5th 3TB (ATA 8).

Powered up the enclosure (4 3TB drives on Marvell #1).

Created array.

No problems, as this dmesg shows.

Thanks.
Comment 2 Fedora End Of Life 2017-11-16 14:37:20 EST
This message is a reminder that Fedora 25 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 25. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '25'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 25 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Comment 3 Fedora End Of Life 2017-12-12 05:36:10 EST
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
