Bug 1398808 - SSD slows down with barriers enabled; huge difference in IOPS/response time between motherboard SATA and HBA [NEEDINFO]
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 25
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-26 09:43 UTC by Patrick Dung
Modified: 2019-01-09 12:54 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-28 17:11:01 UTC
Type: Bug
jforbes: needinfo?


Attachments: none

Description Patrick Dung 2016-11-26 09:43:45 UTC
Hello,

I have encountered a similar slow-IOPS issue with an SSD (Samsung SM843T).

In my case, I benchmarked with fio:
Case 1) SSD connected directly to a RAID card in HBA mode (LSI MegaRAID)
Case 2) SSD connected directly to the motherboard SATA ports

The IOPS difference is huge:
Case 1: 18000 read / 600 write IOPS
Case 2: 600 read / 220 write IOPS

The only difference I noticed is that the SSD does not report DPO/FUA support when connected via SATA.
It does report DPO/FUA support when it is connected to the HBA.
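
(For reference: whether the kernel detected DPO/FUA for a disk shows up in the SCSI disk probe messages, and the write-cache state can be queried with sdparm. The device name below is an example, not from my setup:)

$ dmesg | grep -i fua        # probe line ends in "supports DPO and FUA" or "doesn't support DPO or FUA"
$ sdparm --get WCE /dev/sda  # WCE 1 = volatile write cache enabled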

Comment 1 Patrick Dung 2016-12-02 08:28:56 UTC
New findings:

1) Enabling DPO/FUA support detection for the onboard SATA controller only slightly improved the IOPS.

2) In the previous comment, the test was run with fio's fsync=1 option.
With fsync=0, the IOPS are very high (i.e. normal).
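
For anyone reproducing this: the original job file is not attached, so the options below (block size, file size, runtime) are guesses, but a minimal fio run along these lines exercises the same fsync-after-every-write path:

$ fio --name=fsync-test --filename=/mnt/test/fio.dat --size=1G \
      --rw=randwrite --bs=4k --ioengine=sync --fsync=1 \
      --runtime=60 --time_based
# fsync=1 issues an fsync() after every write; fsync=0 (the default) skips it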

Comment 2 Patrick Dung 2016-12-02 16:16:03 UTC
Further findings:

I have found that a SATA SSD performs very differently depending on whether it is connected to the onboard motherboard SATA controller or to a SAS HBA.

All filesystems are ext4:

Case 1. SSD #1 connected to motherboard AHCI SATA (not the SAS HBA),
using the default mount options:
IOPS is very low; fsync response time is around 5000 usec per op.

Case 2. SSD #1 connected to motherboard AHCI SATA (not the SAS HBA),
with barriers turned off in the mount options (see the mount example after this list):
IOPS is high; fsync response time is around 160 usec per op.

Case 3. SSD #1 connected to the MegaRAID adapter configured as an HBA,
using the default mount options:
IOPS is high; fsync response time is around 400 usec per op.

Case 4. SSD #1 connected to the MegaRAID adapter configured as an HBA,
with barriers turned off in the mount options:
IOPS is very high; fsync response time is around 60 usec per op.

Case 5. PCIe adapter-based SSD (AHCI), default mount options:
IOPS is low; fsync response time is around 3000 usec per op.

Case 6. PCIe adapter-based SSD (AHCI), barriers turned off in the mount options:
IOPS is very high; fsync response time is around 50 usec per op.
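
(For reference, "barriers turned off" above means mounting ext4 with barriers disabled; the device and mount point are placeholders:)

$ mount -o barrier=0 /dev/sdb1 /mnt/test    # or equivalently: -o nobarrier
$ mount -o remount,barrier=0 /mnt/test      # toggle on an already-mounted filesystem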

So the findings show:
I) For an AHCI-attached SSD, with the default mount options (barriers on), fsync latency is very high.
II) With barriers on, the performance impact is much smaller when the SATA SSD is connected via the SAS HBA.
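
A possibly relevant check: barrier writes are implemented with cache-flush (and, where supported, FUA) commands, so the drive's volatile write-cache state on each controller matters. One way to inspect it (device names are examples):

$ hdparm -W /dev/sda         # report the drive's write-caching state via ATA
$ sdparm --get WCE /dev/sdb  # same query via SCSI mode pages (useful behind the SAS HBA)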

Any comments and ideas are welcome.

Comment 3 Laura Abbott 2017-01-17 01:13:50 UTC
*********** MASS BUG UPDATE **************
We apologize for the inconvenience. There are a large number of bugs to go through and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 25 kernel bugs.
 
Fedora 25 has now been rebased to 4.9.3-200.fc25. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.
 
If you have moved on to Fedora 26, and are still experiencing this issue, please change the version to Fedora 26.
 
If you experience different issues, please open a new bug report for those.

Comment 4 Patrick Dung 2017-01-17 05:35:56 UTC
1. I have tested with kernel 4.9.3-200.fc25.
There is still a difference between barrier and no barrier; that may be normal.

2. Comparing the response time (with barriers on) between the onboard motherboard SATA controller and the MegaRAID in HBA mode:
the difference is no longer huge.

Comment 5 Justin M. Forbes 2017-04-11 14:39:49 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience. There are a large number of bugs to go through and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 25 kernel bugs.

Fedora 25 has now been rebased to 4.10.9-200.fc25. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 26, and are still experiencing this issue, please change the version to Fedora 26.

If you experience different issues, please open a new bug report for those.

Comment 6 Justin M. Forbes 2017-04-28 17:11:01 UTC
*********** MASS BUG UPDATE **************
This bug is being closed as INSUFFICIENT_DATA since there has been no response in 2 weeks. If you are still experiencing this issue, please reopen it and attach the relevant data from the latest kernel you are running, along with any data that might have been requested previously.

