Bug 828949 - bad ext4 sync performance
Summary: bad ext4 sync performance
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-06-05 15:28 UTC by Ing. Christoph Pirchl
Modified: 2014-06-18 13:55 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-02 13:17:18 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Ing. Christoph Pirchl 2012-06-05 15:28:47 UTC
Description of problem:

We have an HP BL465c G5 blade server with 32 GB RAM in a 6-node RHEL cluster attached to our Cisco SAN; several LUNs are presented through FalconStor SAN virtualization.

When we write with dd to an 8 TB ext4 LUN, we see very poor sync performance.

[root@iconode01-sr1 data13]# df -h |grep data01
                      7.7T  7.2T  352G  96% /archive/data01

[root@iconode01-sr1 data01]# /usr/bin/time bash -c "dd if=/dev/zero of=test_10GB.file bs=1M count=10000 && sync"
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 810.616 seconds, 12.9 MB/s
0.02user 413.71system 13:34.55elapsed 50%CPU (0avgtext+0avgdata 6512maxresident)k
0inputs+0outputs (0major+756minor)pagefaults 0swaps
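
To separate the raw device write speed from the page-cache flush, two dd variations can be tried (a minimal sketch; the file names are only illustrative):

# bypass the page cache so the elapsed time reflects the device write itself
/usr/bin/time dd if=/dev/zero of=test_10GB_direct.file bs=1M count=10000 oflag=direct

# fill the page cache first, then time only the flush
dd if=/dev/zero of=test_10GB_cached.file bs=1M count=10000
/usr/bin/time sync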

Writing to a 2 TB ext3 partition on an HP storage system works fine:

[root@iconode01-sr1 data13]# df -h |grep data13
                      2.0T   12G  2.0T   1% /eva8100-sr1/data13

[root@iconode01-sr1 data13]# /usr/bin/time bash -c "dd if=/dev/zero of=test_10GB.file bs=1M count=10000 && sync"
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 37.2399 seconds, 282 MB/s
0.02user 34.03system 1:04.57elapsed 52%CPU (0avgtext+0avgdata 6512maxresident)k
0inputs+0outputs (0major+757minor)pagefaults 0swaps
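
One difference worth checking is the effective mount options: on RHEL 5, ext3 mounts without write barriers by default while ext4 defaults to barrier=1, which can hurt sync-heavy workloads on some storage stacks. A quick comparison, using the device and mount point names from the description (dumpe4fs comes from the e4fsprogs package):

grep -E 'data01|data13' /proc/mounts            # effective mount options for both filesystems
dumpe4fs -h /dev/mapper/AIM_Archive_DATA01p1    # superblock and journal details of the ext4 LUN

# diagnostic only -- disabling barriers risks data integrity on power loss
mount -o remount,barrier=0 /archive/data01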

It's similar to Bug 572930, but we run a newer kernel in which those issues should already be fixed.

Version-Release number of selected component (if applicable):

[root@iconode01-sr1 data01]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.8 (Tikanga)

[root@iconode01-sr1 data01]# uname -a
Linux iconode01-sr1.tilak.ibk 2.6.18-308.el5 #1 SMP Fri Jan 27 17:17:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:

always

Steps to Reproduce:
1. create an 8 TB LUN
2. create a GPT partition on it with parted
3. mkfs.ext4 /dev/mapper/AIM_Archive_DATA01p1
4. mount the filesystem and run the dd + sync test from the description (see the consolidated sketch below)
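
A consolidated sketch of the reproduction, assuming the underlying multipath device is /dev/mapper/AIM_Archive_DATA01 and the mount point from the description (partition boundaries are only illustrative):

parted -s /dev/mapper/AIM_Archive_DATA01 mklabel gpt
parted -s /dev/mapper/AIM_Archive_DATA01 mkpart primary 0% 100%
# kpartx -a /dev/mapper/AIM_Archive_DATA01 may be needed to create the p1 mapping on a multipath device
mkfs.ext4 /dev/mapper/AIM_Archive_DATA01p1
mount /dev/mapper/AIM_Archive_DATA01p1 /archive/data01
cd /archive/data01
/usr/bin/time bash -c "dd if=/dev/zero of=test_10GB.file bs=1M count=10000 && sync"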
  
Actual results:

Poor sync performance: the 10 GB dd + sync run on the 8 TB ext4 LUN achieves only about 13 MB/s.

Expected results:

Throughput comparable to the ext3 partition on the same host (~280 MB/s in the same test).

Additional info:

Is there a problem with nearly full partitions? The ext4 partition is 96% full. On other ext4 partitions that are not as full, the performance is slightly better.
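
To check whether free-space fragmentation on the nearly full filesystem plays a role, a few diagnostics can be run (a sketch; tool names depend on the installed e2fsprogs/e4fsprogs version):

df -i /archive/data01                                                  # inode usage on the full filesystem
dumpe4fs -h /dev/mapper/AIM_Archive_DATA01p1 | grep -i 'free blocks'   # free block count from the superblock
e2freefrag /dev/mapper/AIM_Archive_DATA01p1                            # free-extent size histogram, if the tool is available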

Comment 1 Ric Wheeler 2012-06-07 03:13:25 UTC
Can you please open a support request through your official Red Hat support channel?

Our support team provides first-line support; Red Hat Bugzilla is not intended as a path around that.

If you do not have a Red Hat support entitlement, you should take your issue to the upstream mailing list for community support (the linux-ext4 list for this issue).

Best regards,

Ric

Comment 2 RHEL Program Management 2014-03-07 13:42:45 UTC
This bug/component is not included in scope for RHEL-5.11.0, which is the last RHEL 5 minor release. This Bugzilla will soon be CLOSED as WONTFIX at the end of the RHEL 5.11 development phase (Apr 22, 2014). Please contact your account manager or support representative in case you need to escalate this bug.

Comment 3 RHEL Program Management 2014-06-02 13:17:18 UTC
Thank you for submitting this request for inclusion in Red Hat Enterprise Linux 5. We have carefully evaluated the request, but are unable to include it in the RHEL 5 stream. If the issue is critical for your business, please provide additional business justification through the appropriate support channels (https://access.redhat.com/site/support).

