Bug 520655
| Field | Value |
| --- | --- |
| Summary | I/O to DASD partitions appears to be forced sync/direct |
| Product | Red Hat Enterprise Linux 5 |
| Component | kernel |
| Version | 5.3 |
| Hardware | s390x |
| OS | Linux |
| Status | CLOSED WONTFIX |
| Severity | high |
| Priority | high |
| Reporter | Bryn M. Reeves <bmr> |
| Assignee | Hendrik Brueckner <brueckner> |
| QA Contact | Red Hat Kernel QE team <kernel-qe> |
| CC | bhinson, brueckner, coughlan, hpicht, peterm, tao |
| Target Milestone | rc |
| Target Release | 5.6 |
| Doc Type | Bug Fix |
| Last Closed | 2011-10-17 00:25:47 UTC |
| Bug Blocks | 690968 |
Description (Bryn M. Reeves, 2009-09-01 16:26:15 UTC)
I noticed a difference between the DASD device I was using for testing and the others on the guest:

```
dasd_devmap: turning on fixed buffer mode
dasd(eckd): 0.0.0100: 3390/0A(CU:3990/01) Cyl:3338 Head:15 Sec:224
dasd(eckd): 0.0.0100: (4kB blks): 2403360kB at 48kB/trk compatible disk layout
dasda:VOL1/  0X0100: dasda1 dasda2 dasda3
dasd(eckd): 0.0.0101: 3390/0A(CU:3990/01) Cyl:3338 Head:15 Sec:224
dasd(eckd): 0.0.0101: (4kB blks): 2403360kB at 48kB/trk compatible disk layout
dasdb:VOL1/  0X0101: dasdb1
dasd(eckd): 0.0.0150: 3390/0A(CU:3990/01) Cyl:3338 Head:15 Sec:224
dasd(eckd): 0.0.0150: (4kB blks): 2403360kB at 48kB/trk compatible disk layout
dasdc:(nonl)/      : dasdc1
```

dasdc doesn't have a valid volume label; the dasd driver seems to fake a partition spanning the whole device in this case. Running fdasd on the device confirms there's no label:

```
# fdasd /dev/dasdc
reading volume label ..: no known label
Should I create a new one? (y/n): n
```

I'm not able to reproduce the large performance difference for reads on dasda or dasdb on this system:

```
# dd if=/dev/dasda1 of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.063291 seconds, 647 MB/s
# dd if=/dev/dasda of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.06125 seconds, 669 MB/s
# dd if=/dev/dasdb of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0614 seconds, 667 MB/s
# dd if=/dev/dasdb1 of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.063253 seconds, 648 MB/s
```

Just adding a label to dasdc doesn't change the situation: I/O via the partition device node doesn't appear to be cached for either reads or writes. Both dasda and dasdb are in use as LVM2 physical volumes on the system and provide segments to the root file system.
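The buffered-read pattern behind those numbers can be sketched on any Linux box using a scratch file in place of the DASD device nodes (a hypothetical stand-in — absolute throughput will differ, but the point is the same: repeated buffered reads are served from the page cache):

```shell
#!/bin/sh
# Sketch, assuming GNU coreutils dd and a writable temp directory.
# A scratch file stands in for /dev/dasdX; on the reporter's system
# the same commands were run directly against the device nodes.
f=$(mktemp)

# Populate a ~40 MB test file, mirroring the bs=4k count=10000 runs.
dd if=/dev/zero of="$f" bs=4k count=10000 2>/dev/null

# The first read pulls the data into the page cache...
dd if="$f" of=/dev/null bs=4k count=10000 2>/dev/null && echo "first read ok"
# ...so a second buffered read should be served from cache, which is
# exactly what was NOT happening on the dasdc1 partition node.
dd if="$f" of=/dev/null bs=4k count=10000 2>/dev/null && echo "second read ok"

rm -f "$f"
```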
Creating and mounting a file system on dasdc1 appears to cause dd's I/O to be cached, as with the other devices:

```
# dd if=/dev/dasdc of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.062391 seconds, 657 MB/s
# dd if=/dev/dasdc1 of=/dev/null bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.061607 seconds, 665 MB/s
```

Writes to the partition device node also appear to be cached in this case (even if not very useful ;):

```
# dd if=/dev/zero of=/dev/dasdc1 bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.073299 seconds, 559 MB/s
```

Bryn, could you re-run your tests using the dd oflag=sync option?

Regards,
Hans

Hans,

Adding oflag=sync does further change the I/O performance, although it does appear to make it consistent between the partition and the whole-disk device nodes:

```
# dd oflag=sync if=/dev/zero of=/dev/dasdc1 bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.83633 seconds, 4.6 MB/s
# dd oflag=sync if=/dev/zero of=/dev/dasdc1 bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.83221 seconds, 4.6 MB/s
# dd oflag=sync if=/dev/zero of=/dev/dasdc1 bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.36947 seconds, 4.9 MB/s
# dd oflag=sync if=/dev/zero of=/dev/dasdc bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.74207 seconds, 4.7 MB/s
# dd oflag=sync if=/dev/zero of=/dev/dasdc bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.41363 seconds, 4.9 MB/s
# dd oflag=sync if=/dev/zero of=/dev/dasdc bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.42589 seconds, 4.9 MB/s
```

Regards,
Bryn

Still working on this issue upstream; moving out to R5.6.
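The oflag=sync comparison can be reproduced in miniature against an ordinary file (a hypothetical scratch path, not a DASD device; on real hardware the gap is far larger, as the ~650 MB/s versus ~4.7 MB/s figures in this bug show):

```shell
#!/bin/sh
# Sketch, assuming GNU coreutils dd. oflag=sync makes dd open the
# output file O_SYNC, so each 4k write must reach stable storage
# before the next one is issued, instead of just dirtying the page
# cache -- the behaviour the partition device nodes were forcing.
f=$(mktemp)

# Buffered writes: dirty pages accumulate in the page cache.
dd if=/dev/zero of="$f" bs=4k count=1000 2>/dev/null && echo "buffered write ok"

# Synchronous writes: each block is committed individually (slower
# on real disks; smaller count keeps the sketch quick).
dd oflag=sync if=/dev/zero of="$f" bs=4k count=100 2>/dev/null && echo "sync write ok"

rm -f "$f"
```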