Bug 133730 - nautilus sftp eventually gets stuck with 100% CPU
Status: CLOSED WORKSFORME
Product: Fedora
Classification: Fedora
Component: nautilus
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Alexander Larsson
Blocks: FC3Target
Reported: 2004-09-26 18:21 EDT by Warren Togami
Modified: 2007-11-30 17:10 EST

Last Closed: 2004-09-28 06:15:43 EDT

Attachments
cancel_open.png (22.07 KB, image/jpeg)
2004-09-26 18:21 EDT, Warren Togami

Description Warren Togami 2004-09-26 18:21:07 EDT
After using sftp:// with nautilus for a minute or two, attempting to
enter a directory causes nautilus to lock up with 100% CPU usage.  A
dialog asking "Cancel Open?" (attached below) pops up, but clicking
Cancel does not recover.  Killing nautilus seems to be the only way to
recover.

Description of problem (strace of nautilus while it spins):
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 778163}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
brk(0x8f96000)                          = 0x8f96000
brk(0x8f86000)                          = 0x8f86000
brk(0x8f74000)                          = 0x8f74000
brk(0x8f6e000)                          = 0x8f6e000
brk(0x8f6b000)                          = 0x8f6b000
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 783878}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
futex(0x896a1e4, FUTEX_WAKE, 1)         = 1
futex(0x896a1e0, FUTEX_WAKE, 1)         = 1
futex(0x896a1c0, FUTEX_WAKE, 1)         = 1
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 786155}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 786354}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
brk(0x8f96000)                          = 0x8f96000
brk(0x8f86000)                          = 0x8f86000
brk(0x8f74000)                          = 0x8f74000
brk(0x8f6e000)                          = 0x8f6e000
brk(0x8f6b000)                          = 0x8f6b000
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 792392}, NULL) = 0
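
Every poll() above is called with a timeout of 0 and returns 0: no
descriptor is ready and the call never blocks, so the loop spins
instead of sleeping. A minimal C reduction of that busy-wait pattern
(the fd number is a stand-in for illustration, not nautilus's actual
descriptor):

#include <poll.h>
#include <sys/ioctl.h>
#include <sys/time.h>

int main(void)
{
    struct pollfd fds[1] = { { .fd = 3, .events = POLLIN } };
    int pending = 0;
    struct timeval now;

    for (;;) {
        /* timeout 0: returns immediately when nothing is readable,
         * so this loop never sleeps and pins a CPU core */
        poll(fds, 1, 0);
        ioctl(fds[0].fd, FIONREAD, &pending);
        gettimeofday(&now, NULL);
    }
}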

Version-Release number of selected component (if applicable):
nautilus-2.8.0-1
Comment 1 Warren Togami 2004-09-26 18:21:45 EDT
Created attachment 104338
cancel_open.png
Comment 2 Alexander Larsson 2004-09-28 05:35:27 EDT
We need a stack trace for all threads of nautilus when this happens,
with the nautilus and gnome-vfs2 debuginfo packages installed.
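
One way to capture that, assuming the debuginfo packages are
installed, is to attach gdb to the spinning process and dump every
thread's stack:

gdb --pid $(pidof nautilus)
(gdb) thread apply all bt
(gdb) detach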
Comment 3 Warren Togami 2004-09-28 06:04:04 EDT
I was able to reproduce this readily while connected to my old RH
box... either 7.3 or 8.0.  Unfortunately I have since formatted that
box.  Tests connecting to FC3, FC2, RHEL3, RHEL2.1, and Solaris with a
commercial sshd do not seem to exhibit this problem.

Should I keep this bug open?
Comment 4 Warren Togami 2004-09-28 06:15:43 EDT
Closing for now until I am able to reproduce it again.
