Bug 877470
| Summary: | Growisofs almost stops when second instance is invoked | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | John Westerdale <john.westerdale> |
| Component: | kernel | Assignee: | Kernel Maintainer List <kernel-maint> |
| Status: | NEW --- | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rawhide | CC: | chrischavez, gansalmon, hhorak, itamar, jonathan, kernel-maint, madhu.chinakonda |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-03-10 14:43:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
John Westerdale
2012-11-16 15:50:23 UTC
Created attachment 657937 [details]
Here is the output from 2 parallel growisofs commands.
I started the first growisofs in one shell, let it run for a bit, and then launched a second instance.
The first one quickly reaches a 10x burn rate (success!).
As soon as I start another growisofs in the other window, the UBU buffer dries up and the burn rate drops way, way down.
As the first burn completes, the second burn takes off again at good speed.
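For concreteness, the reproduction amounts to something like the following. The device paths, image names, and delay are from my setup and are assumptions elsewhere; the `DRYRUN=echo` guard keeps this sketch from actually burning anything.

```shell
# Reproduction sketch: start one burn, wait, then start a second on the
# other drive. Set DRYRUN= (empty) on a machine with two burners to
# really run it; as written it only echoes the commands into a log.
log=/tmp/growisofs_repro.log
: > "$log"
DRYRUN=echo
$DRYRUN growisofs -Z /dev/sr0=first.iso >> "$log" 2>&1 &
sleep 1   # in practice, wait until the first burn reaches full speed
$DRYRUN growisofs -Z /dev/sr1=second.iso >> "$log" 2>&1 &
wait
cat "$log"
```

With real drives, the first burn's UBU collapses as soon as the second command starts.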
Here are the devices:
[westerj@peachy FH-Scratch]$ hdparm -i /dev/sr0
/dev/sr0:
HDIO_DRIVE_CMD(identify) failed: Bad address
Model=Optiarc DVD RW AD-7280S, FwRev=1.01, SerialNo=
Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
BuffType=unknown, BuffSize=unknown, MaxMultSect=0
(maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
AdvancedPM=no
Drive conforms to: Unspecified: ATA/ATAPI-3,4,5,6,7
* signifies the current active mode
[westerj@peachy FH-Scratch]$ hdparm -i /dev/sr1
/dev/sr1:
HDIO_DRIVE_CMD(identify) failed: Bad address
Model=ATAPI iHAS124 D, FwRev=8L03, SerialNo=3524552 218231503139
Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
BuffType=unknown, BuffSize=unknown, MaxMultSect=0
(maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio3 pio4
DMA modes: mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5
AdvancedPM=no
Drive conforms to: Unspecified: ATA/ATAPI-4,5,6,7
* signifies the current active mode
[westerj@peachy FH-Scratch]$
Thanks very much - John Westerdale
Created attachment 657964 [details]
This is the iostat profile while burning a single image.
This iostat capture shows up to 20288.00 kB_read/s on the read device and a burn rate above 16x.
Created attachment 657966 [details]
Iostat profile when burning 2 in parallel
This shows the iostat profile for device sdb during the read. It peaks around 8400 kB_read/s.
In both cases the RBU stays high, but the UBU plummets to very low values.
The disk seems to have more I/O capacity available.
I will test this next with the source hosted on an SSD, to rule out any storage contention.
Please advise of any tuning that might help keep the write speed up, or of any other diagnostic methods to identify the starvation.
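For anyone who wants to reproduce the attached captures: the iostat profiles were gathered roughly as below. iostat comes from the sysstat package; the interval and sample count here are arbitrary, and the sketch falls back to a message if iostat is absent.

```shell
# Capture a short iostat profile while a burn is running.
# -k reports kB/s; "1 3" means a 1-second interval for 3 samples.
log=/tmp/iostat_capture.log
if command -v iostat >/dev/null 2>&1; then
    iostat -k 1 3 > "$log"
else
    echo "iostat (from the sysstat package) is not installed" > "$log"
fi
cat "$log"
```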
Thanks
John
Created attachment 659191 [details]
Burn Thread 1 - Solo
This is thread 1 of the serial burn, one burn at a time. It shows excellent throughput, as the source is read from SSD. A+ for individual throughput. Note that the log is 60 lines long.
Created attachment 659192 [details]
Burn Thread 2 - Solo
After Burn Thread 1 had completed, the second burner kicks in and shows what it can do without restraint. This one finishes in 69 lines. OK, so it is not quite as fast as the other burner, but it still finishes quickly, hitting up to 12.8x on the last segment of the burn.
Created attachment 659193 [details]
Burn Thread 1 - in parallel
Here is the output from growisofs when fed in parallel from ISO images on SSD.
There are 228 lines, and it took much longer in parallel than the solo runs did.
Burn rates were 2x-3x, nowhere near the 16x that the solo burns were able to achieve.
Created attachment 659194 [details]
Burn Thread 2 - In Parallel
Here is the last output from growisofs when feeding from an SSD and burning in parallel to a second SATA burner. This one is 230 lines long.
It shows burn rates up to 3.8x nominally. One can see that the I/O profile is severely compromised.
Created attachment 659195 [details]
Iostat profile while running 2 burns in series, and then in parallel.
This iostat segment shows the profile while drive 1 burned an .iso and then drive 2 burned the same .iso. Field hockey! You can see block rates go very high, up to 24,664 kB_read/s.
Once they ran in parallel, the block rate maxed out around 12,500 kB_read/s.
One would hope the I/O rate would hit the limit of the SSD, which should be much higher. The next attachment will highlight the devices at work here.
Created attachment 659196 [details]
HDParm for SSD and 2 burners
Here is the hdparm -i output for the SSD and the two DVD burners.
The SSD should be the limiting factor.
Created attachment 659197 [details]
HDParm for SSD and 2 burners
This identifies the 2 DVD burners and the SSD drive.
Kernel is Linux peachy 3.6.7-4.fc16.x86_64 #1 SMP Tue Nov 20 20:33:31 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
bash-4.2$ uname -a
Linux peachy 3.6.7-4.fc16.x86_64 #1 SMP Tue Nov 20 20:33:31 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
bash-4.2$ locate growisofs
/usr/bin/growisofs
/usr/lib64/brasero3/plugins/libbrasero-growisofs.so
/usr/share/kino/scripts/dvdauthor/growisofs.sh
/usr/share/man/man1/growisofs.1.gz
/usr/share/zsh/4.3.17/functions/_growisofs
bash-4.2$ rpm -qf /usr/bin/growisofs
dvd+rw-tools-7.1-9.fc16.x86_64
bash-4.2$
Ok, enough from me. Let me know if this is a tunable thing, what is starving, how to test it to find out, or whether it is a known inconvenience. It is not a hardware problem: parallel growisofs on another OS (Windows 7, with forgiveness) works well. Independent burning software (wodim) on Fedora 17 shows a similar slowdown. The call to "pwrite" seems to be the slow operation.

pwrite was mentioned by mistake when copy-pasting into Bugzilla, sorry. Is there a way to delete/edit a Bugzilla comment?

This message is a reminder that Fedora 16 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 16. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '16'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 16's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 16 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to click on "Clone This Bug" and open it against that version of Fedora.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Reproduced on Fedora 17.

Are you still seeing this with 3.8.2 in updates-testing? If you can recreate it with that, I'd recommend working directly with upstream.
Yes, still the same with 3.8.2-105.fc17.x86_64.

The kernel mailing list archives offer an explanation [1] and several problematic patches, and also a potential workaround:

> > When was this regression introduced? I have a dedicated cd burning machine running 2.6.35.x, and it works great with 4 drives at a time.
>
> BKL was replaced by sr_mutex in 2.6.37-rc1. [2]

[1] http://marc.info/?l=linux-scsi&m=135705061804384&w=2
[2] https://lkml.org/lkml/2012/3/17/34

This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle. Changing version to '19'. (As we did not run this process for some time, it could also affect pre-Fedora 19 development cycle bugs. We are very sorry. It will help us with cleanup during Fedora 19 End Of Life. Thank you.) More information and the reason for this action is here: https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19

*********** MASS BUG UPDATE ************** We apologize for the inconvenience. There is a large number of bugs to go through and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs. Fedora 19 has now been rebased to 3.11.1-200.fc19. Please test this kernel update and let us know if your issue has been resolved or if it is still present with the newer kernel. If you experience different issues, please open a new bug report for those.

This bug is being closed with INSUFFICIENT_DATA as there has not been a response in 2 weeks. If you are still experiencing this issue, please reopen and attach the relevant data from the latest kernel you are running and any data that might have been requested previously.

Confirmed with 3.11.2-201.fc19.x86_64.

*********** MASS BUG UPDATE ************** Fedora 19 has now been rebased to 3.12.6-200.fc19. Please test this kernel update (or newer) and let us know if your issue has been resolved or is still present. If you have moved on to Fedora 20 and are still experiencing this issue, please change the version to Fedora 20.

*********** MASS BUG UPDATE ************** This bug has been in a needinfo state for more than 1 month and is being closed with insufficient data due to inactivity. If this is still an issue with Fedora 19, please feel free to reopen the bug and provide the additional information requested.

Reproduced with Fedora 20.

*********** MASS BUG UPDATE ************** Fedora 20 has now been rebased to 3.14.4-200.fc20. Please test this kernel update (or newer) and let us know if your issue has been resolved or is still present.

reproduced

*********** MASS BUG UPDATE ************** Fedora 20 has now been rebased to 3.17.2-200.fc20. Please test this kernel update (or newer); if you have moved on to Fedora 21 and still see this issue, please change the version to Fedora 21.

reproduced

*********** MASS BUG UPDATE ************** Fedora 20 has now been rebased to 3.18.7-100.fc20. Please test this kernel update (or newer); if you have moved on to Fedora 21 and still see this issue, please change the version to Fedora 21.

reproduced

*********** MASS BUG UPDATE ************** Fedora 20 has now been rebased to 3.19.5-100.fc20. Please test this kernel update (or newer); if you have moved on to Fedora 21 and still see this issue, please change the version to Fedora 21.

still broken

This message is a reminder that Fedora 20 is nearing its end of life. Approximately four weeks from now Fedora will stop maintaining and issuing updates for Fedora 20, and this bug will then be closed as EOL if it remains open with a Fedora 'version' of '20'. If you wish for it to remain open, or can still reproduce it against a later version of Fedora, please change the 'version' to a later Fedora version before then. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events, and often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

still broken
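Stepping back to the explanation quoted from the linux-scsi thread above: if a single driver-wide sr_mutex serializes access to all sr devices, two simultaneous burns would effectively run back to back. Here is a userspace toy model of that serialization using flock; the file names are made up, and none of this is kernel code.

```shell
# Two "burns" contending for one shared lock, modeling how a single
# driver-wide mutex would serialize two sr devices. Illustrative only.
lock=/tmp/sr_mutex_demo.lock
log=/tmp/sr_mutex_demo.log
: > "$log"
burn() {
    # flock grants the lock to one worker at a time, so the two
    # "burns" run strictly one after the other.
    flock "$lock" sh -c "echo start-$1 >> $log; sleep 1; echo end-$1 >> $log"
}
burn 1 &
burn 2 &
wait
cat "$log"
```

Whichever burn wins the lock logs its `end` line before the other logs its `start`; without the lock, the two would interleave, which is the concurrency the pre-2.6.37 BKL-based sr driver apparently still allowed (the thread reports 2.6.35.x driving 4 drives at once without trouble).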