Bug 828949 - bad ext4 sync performance
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Red Hat Kernel Manager
Red Hat Kernel QE team
Reported: 2012-06-05 11:28 EDT by Ing. Christoph Pirchl
Modified: 2014-06-18 09:55 EDT (History)
2 users

Doc Type: Bug Fix
Last Closed: 2014-06-02 09:17:18 EDT
Type: Bug

Attachments: None
Description Ing. Christoph Pirchl 2012-06-05 11:28:47 EDT
Description of problem:

We have an HP BL 465c G5 BladeServer with 32 GB RAM in a 6-node RHEL cluster attached to our Cisco SAN, with several LUNs presented through FalconStor SAN virtualization.

When we write with dd to an 8 TB ext4 LUN, we see very poor sync performance.

[root@iconode01-sr1 data13]# df -h |grep data01
                      7.7T  7.2T  352G  96% /archive/data01

[root@iconode01-sr1 data01]# /usr/bin/time bash -c "dd if=/dev/zero of=test_10GB.file bs=1M count=10000 && sync"
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 810.616 seconds, 12.9 MB/s
0.02user 413.71system 13:34.55elapsed 50%CPU (0avgtext+0avgdata 6512maxresident)k
0inputs+0outputs (0major+756minor)pagefaults 0swaps

Writing to a 2 TB ext3 partition on an HP storage system works fine:

[root@iconode01-sr1 data13]# df -h |grep data13
                      2.0T   12G  2.0T   1% /eva8100-sr1/data13

[root@iconode01-sr1 data13]# /usr/bin/time bash -c "dd if=/dev/zero of=test_10GB.file bs=1M count=10000 && sync"
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 37.2399 seconds, 282 MB/s
0.02user 34.03system 1:04.57elapsed 52%CPU (0avgtext+0avgdata 6512maxresident)k
0inputs+0outputs (0major+757minor)pagefaults 0swaps

This looks like Bug 572930, but we are running a newer kernel in which those issues should already be fixed.
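A note on the measurement pattern used above: `dd … && sync` leaves most of the 10 GB in the page cache when dd exits, so dd's own MB/s figure alone would be misleading; timing the whole `bash -c "… && sync"` as done here is correct, but a simpler alternative is to fold the flush into dd itself. A small-scale sketch (the temp path and 64 MB size are illustrative, not from this report):

```shell
# Small-scale variant of the test above. conv=fdatasync makes GNU dd call
# fdatasync() on the output file before printing its summary, so the reported
# MB/s already includes the cost of flushing dirty pages to disk.
TESTFILE=$(mktemp /tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```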

Version-Release number of selected component (if applicable):

[root@iconode01-sr1 data01]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.8 (Tikanga)

[root@iconode01-sr1 data01]# uname -a
Linux iconode01-sr1.tilak.ibk 2.6.18-308.el5 #1 SMP Fri Jan 27 17:17:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:


Steps to Reproduce:
1. Create an 8 TB LUN
2. Create a GPT partition with parted
3. mkfs.ext4 /dev/mapper/AIM_Archive_DATA01p1
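For reference, the steps above can be exercised on a sparse file image instead of a real 8 TB LUN; this is only an illustrative stand-in (the image path and size below are hypothetical, and the GPT step is skipped because a plain file needs no partition table):

```shell
# Stand-in for the reproduction steps, runnable without a real LUN.
IMG=$(mktemp /tmp/ext4img.XXXXXX)
truncate -s 256M "$IMG"    # step 1: sparse "LUN" image, takes no real space
# step 2 (GPT partitioning with parted) omitted for a plain file image
mkfs.ext4 -q -F "$IMG"     # step 3: same mkfs.ext4 step as in the report
rm -f "$IMG"
```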
Actual results:

Very poor write/sync throughput (12.9 MB/s on the ext4 LUN).

Expected results:

Throughput should be much higher, comparable to the ext3 partition in the same setup.

Additional info:

Could this be a problem with nearly full partitions? The ext4 partition is 96% full.
On other ext4 partitions that are not as full, performance is somewhat better.
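That suspicion is plausible: on a nearly full ext4 filesystem, the remaining free space tends to be fragmented into small extents, which hurts large sequential writes. One way to check is `e2freefrag` from e2fsprogs (it may not exist in older e2fsprogs releases; the device path below is the one from this report):

```shell
# Inspect free-space fragmentation on the slow filesystem. A mostly empty
# filesystem shows a few large free extents; a 96%-full one typically shows
# many small ones, which forces scattered writes.
df -h /archive/data01
e2freefrag /dev/mapper/AIM_Archive_DATA01p1
```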
Comment 1 Ric Wheeler 2012-06-06 23:13:25 EDT
Can you please open a support request through your official Red Hat support channel?

Our support team provides first-line support; Red Hat Bugzilla is not intended as a way around that.

If you do not have a Red Hat support entitlement, you should raise the issue on the upstream mailing list for community support (the linux-ext4 list for this issue).

Best regards,

Comment 2 RHEL Product and Program Management 2014-03-07 08:42:45 EST
This bug/component is not included in the scope of RHEL 5.11.0, which is the last RHEL 5 minor release. This Bugzilla will soon be CLOSED as WONTFIX at the end of the RHEL 5.11 development phase (Apr 22, 2014). Please contact your account manager or support representative if you need to escalate this bug.
Comment 3 RHEL Product and Program Management 2014-06-02 09:17:18 EDT
Thank you for submitting this request for inclusion in Red Hat Enterprise Linux 5. We have carefully evaluated the request but are unable to include it in the RHEL 5 stream. If the issue is critical for your business, please provide additional business justification through the appropriate support channels (https://access.redhat.com/site/support).
