Red Hat Bugzilla – Bug 866166
USB issue with latest kernel
Last modified: 2013-02-25 09:02:05 EST
Description of problem:
First of all, this happens only with kernel 3.6.1-1 (the latest); everything worked fine with the previous 3.5.5-2 (and earlier).
I have a desktop PC with two cascaded USB hubs. This means hub A is connected to hub B, which, in turn, is connected to the PC.
These hubs have several storage devices (USB HDDs) attached.
Accessing the HDDs one at a time seems fine (at least in my last test), but when accessing multiple HDDs in parallel, everything quickly stops working with errors in /var/log/messages:
/var/log/messages-20121014:Oct 13 00:01:04 lazy kernel: [ 559.748606] usb 1-4: device not accepting address 23, error -110
/var/log/messages-20121014:Oct 13 00:01:14 lazy kernel: [ 570.274506] usb 1-4: device not accepting address 23, error -110
/var/log/messages-20121014:Oct 13 00:01:29 lazy kernel: [ 585.519967] usb 1-4: device descriptor read/64, error -110
/var/log/messages-20121014:Oct 13 00:19:00 lazy kernel: [ 878.865762] hub 1-5.3:1.0: cannot reset port 1 (err = -110)
/var/log/messages-20121014:Oct 13 00:19:01 lazy kernel: [ 879.876207] hub 1-5.3:1.0: cannot reset port 1 (err = -110)
/var/log/messages-20121014:Oct 13 00:19:02 lazy kernel: [ 880.888118] hub 1-5.3:1.0: cannot reset port 1 (err = -110)
/var/log/messages-20121014:Oct 13 00:19:03 lazy kernel: [ 881.900167] hub 1-5.3:1.0: cannot reset port 1 (err = -110)
/var/log/messages-20121014:Oct 13 00:19:04 lazy kernel: [ 882.911619] hub 1-5.3:1.0: cannot reset port 1 (err = -110)
/var/log/messages-20121014:Oct 13 00:19:05 lazy kernel: [ 883.922536] hub 1-5.3:1.0: cannot disable port 1 (err = -110)
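For reference, the -110 in these messages is the kernel's negative errno; on Linux, errno 110 is ETIMEDOUT, i.e. the USB transaction timed out. A quick way to decode it (a one-liner I'm adding for illustration, not part of the original log):

```shell
# Decode kernel error -110: drop the sign and look up errno 110.
# On Linux this prints: ETIMEDOUT - Connection timed out
python3 -c 'import errno, os; print(errno.errorcode[110], "-", os.strerror(110))'
```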
These messages appear several times until one of the hubs (I guess the second one) is disabled and the HDDs connected to it are lost. I think it is the second hub because its HDDs are the ones that go missing.
The issue does not appear immediately, that is, some access to the HDDs does succeed, but the connection fails very quickly afterwards, I would say almost immediately.
Again, all previous kernels were, and are, working fine (I'm now using 3.5.5-2 and no problem has appeared).
Version-Release number of selected component (if applicable):
kernel 3.6.1-1

How reproducible:
I would say always, even if, as mentioned above, accessing one HDD at a time does not seem to trigger the issue.
Steps to Reproduce:
Connect cascaded hubs to a PC, with some USB HDDs attached (a RAID configuration will trigger the issue very reliably).
Assemble the RAID (this works almost always, but not always).
Access the HDDs in parallel (accessing them through the RAID, as written above, triggers it).
Almost immediately the HDDs stop responding, and the above error messages appear in the logs.
With the previous kernels, this setup works without trouble.
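The parallel access in the last step can also be simulated without the RAID; a minimal sketch (the device names in the usage example are hypothetical, substitute the HDDs behind the hubs):

```shell
# Read from several devices (or files) at once; on the affected kernel,
# the -110 errors should appear in the log shortly after this starts.
parallel_read() {
    for dev in "$@"; do
        dd if="$dev" of=/dev/null bs=1M count=64 2>/dev/null &
    done
    wait
}

# Example (hypothetical device names):
# parallel_read /dev/sdb /dev/sdc /dev/sdd
```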
My current configuration really is a RAID, so the HDD access pattern is that of the kernel "md" driver. This should not make a difference across kernels, I suppose; nevertheless it may be worth mentioning.
I'd like to stress again :-) that the previous kernels were working, so this seems to be a regression in 3.6.1-1; that's why I set the "Severity" to "Urgent". If you think that is inappropriate, please feel free to change it.
Just one addition: kernel 3.5.6-1 works too (I'm using that one now, not 3.5.5-2, which also worked, of course), so 3.6 seems to have introduced the problem.
Tried with latest kernel 3.6.2-4.fc17.x86_64, same result.
Furthermore, after the USB hubs have been fully reset, they no longer work even after completely powering off and on both the hubs and the attached devices. I guess the kernel driver is, somehow, locked.
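If the driver state really is stuck, one thing that sometimes helps without a reboot is unbinding and rebinding the device from the USB core via sysfs. A sketch (the "1-5" bus ID in the usage example is just taken from the log lines above as an illustration; the real one can be read from the log or `lsusb -t`, and the writes need root):

```shell
# Unbind and rebind a USB device by bus ID (e.g. "1-5" from the log above).
# With DRY_RUN=1 the commands are only printed instead of executed.
rebind_usb() {
    for action in unbind bind; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "echo $1 > /sys/bus/usb/drivers/usb/$action"
        else
            echo "$1" > "/sys/bus/usb/drivers/usb/$action"
        fi
    done
}

# DRY_RUN=1 rebind_usb 1-5
```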
Any clue? Some fix on the horizon?
Kernel 3.6.3-1.fc17.x86_64 has same issue.
Anybody looking into this?
Do you need some more information or support in order to get a fix?
Kernel 3.6.6-1.fc17.x86_64 shows same issue.
Anyone out there can give some hint?
Forgot to add that I opened a bug in the kernel Bugzilla:
Furthermore, the problem is tracked in the USB mailing list.
kernel-3.7.9-101.fc17.x86_64 from updates-testing has the patch that fixes this issue.
I'm running it right now and the problem seems to be gone, as expected.
I think you can close this bug.
Thank you for letting us know.