Bug 877470 - Growisofs almost stops when second instance is invoked
Keywords:
Status: NEW
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-11-16 15:50 UTC by John Westerdale
Modified: 2019-05-01 21:53 UTC
7 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2014-03-10 14:43:00 UTC
Type: Bug
Embargoed:


Attachments (Terms of Use)
Here is the output from 2 parallel growisofs commands. (31.77 KB, text/plain)
2012-12-05 03:12 UTC, John Westerdale
This is the iostat profile when it's burning a single image (11.27 KB, text/plain)
2012-12-05 04:42 UTC, John Westerdale
Iostat profile when burning 2 in parallel (26.97 KB, text/plain)
2012-12-05 04:47 UTC, John Westerdale
Burn Thread 1 - Solo (4.45 KB, text/plain)
2012-12-07 03:41 UTC, John Westerdale
Burn Thread 2 - Solo (5.10 KB, text/plain)
2012-12-07 03:44 UTC, John Westerdale
Burn Thread 1 - In Parallel (16.82 KB, text/plain)
2012-12-07 03:50 UTC, John Westerdale
Burn Thread 2 - In Parallel (16.95 KB, text/plain)
2012-12-07 03:53 UTC, John Westerdale
Iostat profile while running 2 burns in series, and then in parallel. (47.51 KB, text/plain)
2012-12-07 04:00 UTC, John Westerdale
HDParm for SSD and 2 burners (1.95 KB, text/plain)
2012-12-07 04:04 UTC, John Westerdale
HDParm for SSD and 2 burners (1.95 KB, text/plain)
2012-12-07 04:11 UTC, John Westerdale

Description John Westerdale 2012-11-16 15:50:23 UTC
Description of problem:

When I run one instance of growisofs on a SATA device, it runs fine.

 2716729344/3036989440 (89.5%) @13.0x, remaining 0:24 RBU 100.0% UBU  99.2%
 2777284608/3036989440 (91.4%) @13.1x, remaining 0:19 RBU 100.0% UBU  98.6%
 2838331392/3036989440 (93.5%) @13.2x, remaining 0:14 RBU  99.8% UBU  99.2%
 2896265216/3036989440 (95.4%) @12.5x, remaining 0:10 RBU 100.0% UBU  90.8%
 2958295040/3036989440 (97.4%) @13.4x, remaining 0:05 RBU  99.8% UBU  99.2%
 3020783616/3036989440 (99.5%) @13.5x, remaining 0:01 RBU  96.7% UBU  99.0%

When I start a second instance, both slow to a crawl.

The first is the SATA device:

 1476984832/3036989440 (48.6%) @0.8x, remaining 16:25 RBU 100.0% UBU  47.1%
 1478033408/3036989440 (48.7%) @0.2x, remaining 16:28 RBU 100.0% UBU 100.0%
 1479868416/3036989440 (48.7%) @0.4x, remaining 16:29 RBU 100.0% UBU  12.7%
 1482129408/3036989440 (48.8%) @0.5x, remaining 16:29 RBU 100.0% UBU  59.6%
 1485144064/3036989440 (48.9%) @0.7x, remaining 16:29 RBU  99.8% UBU  40.8%
 1496219648/3036989440 (49.3%) @2.4x, remaining 16:18 RBU 100.0% UBU   3.3%
 

and the second is the 2x USB burner:

 1996849152/3036989440 (65.8%) @2.0x, remaining 6:33 RBU 100.0% UBU 100.0%
 2006122496/3036989440 (66.1%) @2.0x, remaining 6:29 RBU 100.0% UBU 100.0%
 2015363072/3036989440 (66.4%) @2.0x, remaining 6:26 RBU 100.0% UBU  92.1%
 2024636416/3036989440 (66.7%) @2.0x, remaining 6:22 RBU 100.0% UBU 100.0%
 2033876992/3036989440 (67.0%) @2.0x, remaining 6:18 RBU 100.0% UBU  73.7%
 2043117568/3036989440 (67.3%) @2.0x, remaining 6:15 RBU 100.0% UBU 100.0%
 2050686976/3036989440 (67.5%) @1.6x, remaining 6:12 RBU 100.0% UBU  10.5%
 2059960320/3036989440 (67.8%) @2.0x, remaining 6:09 RBU 100.0% UBU 100.0%
 2069200896/3036989440 (68.1%) @2.0x, remaining 6:05 RBU 100.0% UBU 100.0%
 2078474240/3036989440 (68.4%) @2.0x, remaining 6:02 RBU  99.8% UBU 100.0%
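The @Nx field in these transcripts can be summarized to quantify the slowdown. A small awk sketch (it assumes the exact progress-line format shown above; `burn.log` is a hypothetical saved transcript):

```shell
summarize_rates() {
    # Pull the "@N.Nx" rate field out of each growisofs progress line on
    # stdin and report the minimum, average, and maximum rate seen.
    awk 'match($0, /@[0-9.]+x/) {
             r = substr($0, RSTART + 1, RLENGTH - 2) + 0
             if (n == 0 || r < min) min = r
             if (r > max) max = r
             sum += r; n++
         }
         END { if (n) printf "min=%.1fx avg=%.1fx max=%.1fx (%d samples)\n", min, sum / n, max, n }'
}

# e.g.: summarize_rates < burn.log
```

Running this over a solo transcript and a parallel transcript makes the gap (here, roughly 13x versus under 1x) easy to report as three numbers.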


Version-Release number of selected component (if applicable):

  [root@peachy first-half]# rpm -qf /usr/bin/growisofs
  dvd+rw-tools-7.1-9.fc16.x86_64

  [root@peachy first-half]# rpm -qa | grep dvd+
  dvd+rw-tools-7.1-9.fc16.x86_64

How reproducible:

Very reproducible.

Steps to Reproduce:

1. Launch 2 burns at once (one target is USB, the other SATA):
[root@peachy first-half]# ps -ef | grep iso
root      5684  2931  1 09:56 pts/3    00:00:10 growisofs -Z /dev/sr2 CNU-MSU-NCAA-2nd-Round.iso
root      5888  2886  0 09:58 pts/2    00:00:07 growisofs -Z /dev/sr1 CNU-MSU-NCAA-2nd-Round.iso

2. Check the output above: the USB burn proceeds along OK, but the SATA one drops below 1.0x speed.
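The two launches in step 1 can be scripted. A sketch using the device nodes and ISO name from this report; DRY_RUN defaults to 1 so the commands are only printed, and should be set to 0 on a machine with real burners:

```shell
#!/bin/sh
# Reproduction sketch: burn the same ISO to both drives at once and keep a
# transcript of each. Device nodes and ISO name are the ones reported above.
ISO=${ISO:-CNU-MSU-NCAA-2nd-Round.iso}

burn() {
    # $1 = device node, $2 = transcript file
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "growisofs -Z $1 $ISO > $2"   # dry run: just show the command
        return 0
    fi
    growisofs -Z "$1" "$ISO" > "$2" 2>&1
}

burn /dev/sr2 burn-sr2.log &   # SATA burner
burn /dev/sr1 burn-sr1.log &   # USB burner
wait                           # both burns run concurrently
```

The transcripts can then be compared against the solo-run output shown in the attachments.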

Actual results:

  Finished, but the second (USB) instance greatly slowed the SATA instance.

Expected results:

   I had expected the SATA channel not to suffer such a drop.

Additional info:

[root@peachy first-half]# uname -a
Linux peachy 3.6.6-1.fc16.x86_64 #1 SMP Mon Nov 5 16:56:43 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
 

[root@peachy 13-Oct-TCNJ]# modinfo usb_storage
filename:       /lib/modules/3.6.6-1.fc16.x86_64/kernel/drivers/usb/storage/usb-storage.ko
license:        GPL
description:    USB Mass Storage driver for Linux
...
vermagic:       3.6.6-1.fc16.x86_64 SMP mod_unload
parm:           option_zero_cd:ZeroCD mode (1=Force Modem (default), 2=Allow CD-Rom (uint)
parm:           swi_tru_install:TRU-Install mode (1=Full Logic (def), 2=Force CD-Rom, 3=Force Modem) (uint)

Comment 1 John Westerdale 2012-12-05 03:12:40 UTC
Created attachment 657937 [details]
Here is the output from 2 parallel growisofs commands.

I started the first growisofs in one shell, let it run for a bit, and then launched a second instance.

The first one quickly goes to a 10x burn rate (success!).

As soon as I start another growisofs in the other window, the UBU buffer dries up and the burn rate drops way down.

As the first burn completes, the second burn takes off again at good speed.

Here are the devices:

[westerj@peachy FH-Scratch]$ hdparm -i /dev/sr0

/dev/sr0:
 HDIO_DRIVE_CMD(identify) failed: Bad address

 Model=Optiarc DVD RW AD-7280S, FwRev=1.01, SerialNo=
 Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
 RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=0
 (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5 
 AdvancedPM=no
 Drive conforms to: Unspecified:  ATA/ATAPI-3,4,5,6,7

 * signifies the current active mode

[westerj@peachy FH-Scratch]$ hdparm -i /dev/sr1

/dev/sr1:
 HDIO_DRIVE_CMD(identify) failed: Bad address

 Model=ATAPI   iHAS124   D, FwRev=8L03, SerialNo=3524552 218231503139
 Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
 RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=0
 (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio3 pio4 
 DMA modes:  mdma0 mdma1 mdma2 
 UDMA modes: udma0 udma1 udma2 udma3 udma4 *udma5 
 AdvancedPM=no
 Drive conforms to: Unspecified:  ATA/ATAPI-4,5,6,7

 * signifies the current active mode

[westerj@peachy FH-Scratch]$ 


Thanks very much - John Westerdale

Comment 2 John Westerdale 2012-12-05 04:42:47 UTC
Created attachment 657964 [details]
This is the iostat profile when it's burning a single image

This capture of iostat shows up to 20288.00 kB_read/s (for the read device) and a burn rate above 16x.

Comment 3 John Westerdale 2012-12-05 04:47:26 UTC
Created attachment 657966 [details]
Iostat profile when burning 2 in parallel

This shows the iostat profile for device sdb during the read; it's peaking around 8400 kB_read/s.

In both cases the RBU stays high, but the UBU plummets to very low values.

The disk seems to be under more I/O load.

I will host the source on an SSD next, to rule out any storage contention.

Please advise on any tuning that might help keep the write speed up, or on other diagnostic methods to identify the starvation.

Thanks

John

Comment 4 John Westerdale 2012-12-07 03:41:27 UTC
Created attachment 659191 [details]
Burn Thread 1 - Solo

This is thread 1 of the serial burn, one burn at a time. It shows excellent throughput, as the source is on the SSD. A+ for individual throughput. Note that it is 60 lines long.

Comment 5 John Westerdale 2012-12-07 03:44:00 UTC
Created attachment 659192 [details]
Burn Thread 2 - Solo

After Burn Thread 1 had completed, the second burner kicks in and shows what it can do without restraints. This one finishes in 69 lines... OK, so it's not quite as fast as the other burner! But it still finishes quickly, hitting up to 12.8x on the last segment of the burn.

Comment 6 John Westerdale 2012-12-07 03:50:39 UTC
Created attachment 659193 [details]
Burn Thread 1 - In Parallel

Here is the output from growisofs when feeding in parallel from ISO images on the SSD.
There are 228 lines, and it took much, much longer in parallel than the solo runs.
Burn rates were 2x-3x, nowhere near the 16x that the solo burns were able to achieve.

Comment 7 John Westerdale 2012-12-07 03:53:14 UTC
Created attachment 659194 [details]
Burn Thread 2 - In Parallel

Here is the last output from growisofs when feeding from an SSD and burning in parallel to a second SATA burner. This one is 230 lines long.
It shows burn rates up to 3.8x nominally. One can see that the I/O profile is badly compromised.

Comment 8 John Westerdale 2012-12-07 04:00:41 UTC
Created attachment 659195 [details]
Iostat profile while running 2 burns in series, and then in parallel.

This iostat segment shows the profile while drive 1 burned an .iso, and then drive 2 burned the same .iso. You can see block rates go very high, up to 24,664 kB_read/s.

Once they ran in parallel, the block rate maxed out around 12,500 kB_read/s.

One would hope the I/O rate would hit the limit of the SSD, which should be much higher. The next attachment will highlight the devices at work here.

Comment 9 John Westerdale 2012-12-07 04:04:00 UTC
Created attachment 659196 [details]
HDParm for SSD and 2 burners

Here is the hdparm -i output for the SSD and the two DVD burners.

The SSD should be the limiting factor.

Comment 10 John Westerdale 2012-12-07 04:11:57 UTC
Created attachment 659197 [details]
HDParm for SSD and 2 burners

This identifies the 2 DVD burners and the SSD drive.

Kernel is Linux peachy 3.6.7-4.fc16.x86_64 #1 SMP Tue Nov 20 20:33:31 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux


bash-4.2$ uname -a
Linux peachy 3.6.7-4.fc16.x86_64 #1 SMP Tue Nov 20 20:33:31 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
bash-4.2$ locate growisofs
/usr/bin/growisofs
/usr/lib64/brasero3/plugins/libbrasero-growisofs.so
/usr/share/kino/scripts/dvdauthor/growisofs.sh
/usr/share/man/man1/growisofs.1.gz
/usr/share/zsh/4.3.17/functions/_growisofs
bash-4.2$ rpm -qf /usr/bin/growisofs
dvd+rw-tools-7.1-9.fc16.x86_64
bash-4.2$

Comment 11 John Westerdale 2012-12-07 04:14:10 UTC
OK, enough from me. Let me know if this is a tunable thing, what is being starved, how to test to find it, or whether it is a known inconvenience.

Comment 12 Frantisek Kluknavsky 2012-12-21 11:32:30 UTC
Not a hardware problem. Parallel growisofs on another OS (Windows 7, with forgiveness) works well. Independent burning software (wodim) on Fedora 17 shows a similar slowdown. The call to "pwrite" seems to be the slow operation.

Comment 13 Frantisek Kluknavsky 2012-12-21 12:00:05 UTC
Pwrite mentioned by mistake when copy-pasting into bugzilla, sorry. Is there a way to delete/edit bugzilla comment?

Comment 14 Fedora End Of Life 2013-01-16 14:34:38 UTC
This message is a reminder that Fedora 16 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 16. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '16'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 16's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 16 is end of life. If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora, you are encouraged to click on 
"Clone This Bug" and open it against that version of Fedora.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 15 Frantisek Kluknavsky 2013-01-16 14:56:39 UTC
Reproduced on Fedora 17.

Comment 16 Josh Boyer 2013-03-12 18:56:08 UTC
Are you still seeing this with 3.8.2 in updates-testing?  If you can recreate it with that, I'd recommend working directly with upstream.

Comment 17 Frantisek Kluknavsky 2013-03-20 15:33:49 UTC
Yes, still the same with 3.8.2-105.fc17.x86_64

Kernel mailing list archives offer explanation [1] and several problematic patches.

Also, as a potential workaround: "> When was this regression introduced? I have a dedicated cd burning
> machine running 2.6.35.x, and it works great with 4 drives at a time.
BKL was replaced by sr_mutex in 2.6.37-rc1." [2]

[1] http://marc.info/?l=linux-scsi&m=135705061804384&w=2
[2] https://lkml.org/lkml/2012/3/17/34
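Since the reported workaround is that pre-2.6.37 kernels (before the BKL was replaced by sr_mutex in the sr driver) burn to multiple drives without the slowdown, here is a throwaway check of which era a given kernel release string falls into. A sketch only: the function name is made up and the version parsing is naive.

```shell
# kernel_era: classify a kernel release string relative to the 2.6.37-rc1
# change where the BKL was replaced by sr_mutex in the sr driver.
# Purely diagnostic; does not prove the machine is affected.
kernel_era() {
    echo "$1" | awk -F. '{
        if ($1 + 0 < 2 || ($1 == 2 && $2 == 6 && $3 + 0 < 37))
            print "pre-2.6.37 (BKL-era sr driver)"
        else
            print "2.6.37 or later (sr_mutex)"
    }'
}

kernel_era "$(uname -r)"
```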

Comment 18 Fedora End Of Life 2013-04-03 15:41:59 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it could affect also pre-Fedora 19 development
cycle bugs. We are very sorry. It will help us with cleanup during Fedora 19 End Of Life. Thank you.)

More information and reason for this action is here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19

Comment 19 Josh Boyer 2013-09-18 20:45:46 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs.

Fedora 19 has now been rebased to 3.11.1-200.fc19.  Please test this kernel update and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.

Comment 20 Josh Boyer 2013-10-08 17:36:43 UTC
This bug is being closed with INSUFFICIENT_DATA as there has not been a response in 2 weeks. If you are still experiencing this issue, please reopen and attach the relevant data from the latest kernel you are running and any data that might have been requested previously.

Comment 21 Frantisek Kluknavsky 2013-10-09 08:16:18 UTC
Confirmed with 3.11.2-201.fc19.x86_64.

Comment 22 Justin M. Forbes 2014-01-03 22:06:01 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs.

Fedora 19 has now been rebased to 3.12.6-200.fc19.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 20, and are still experiencing this issue, please change the version to Fedora 20.

If you experience different issues, please open a new bug report for those.

Comment 23 Justin M. Forbes 2014-03-10 14:43:00 UTC
*********** MASS BUG UPDATE **************

This bug has been in a needinfo state for more than 1 month and is being closed with insufficient data due to inactivity. If this is still an issue with Fedora 19, please feel free to reopen the bug and provide the additional information requested.

Comment 24 Frantisek Kluknavsky 2014-03-10 15:15:22 UTC
Reproduced with Fedora 20.

Comment 25 Justin M. Forbes 2014-05-21 19:38:09 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.14.4-200.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.

Comment 26 Frantisek Kluknavsky 2014-05-22 12:12:10 UTC
reproduced

Comment 27 Justin M. Forbes 2014-11-13 15:57:34 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.17.2-200.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 21, and are still experiencing this issue, please change the version to Fedora 21.

If you experience different issues, please open a new bug report for those.

Comment 28 Frantisek Kluknavsky 2014-11-14 11:18:31 UTC
reproduced

Comment 29 Fedora Kernel Team 2015-02-24 16:23:12 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.18.7-100.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 21, and are still experiencing this issue, please change the version to Fedora 21.

If you experience different issues, please open a new bug report for those.

Comment 30 Frantisek Kluknavsky 2015-02-24 17:14:34 UTC
reproduced

Comment 31 Fedora Kernel Team 2015-04-28 18:30:55 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There is a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.19.5-100.fc20.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 21, and are still experiencing this issue, please change the version to Fedora 21.

If you experience different issues, please open a new bug report for those.

Comment 32 Frantisek Kluknavsky 2015-04-30 08:53:14 UTC
still broken

Comment 33 Fedora End Of Life 2015-05-29 08:48:57 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 20 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 34 Frantisek Kluknavsky 2015-05-29 10:25:03 UTC
still broken

