Bug 133730 - nautilus sftp eventually gets stuck with 100% CPU
Status: CLOSED WORKSFORME
Alias: None
Product: Fedora
Classification: Fedora
Component: nautilus
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Alexander Larsson
QA Contact:
URL:
Whiteboard:
Keywords:
Depends On:
Blocks: FC3Target
 
Reported: 2004-09-26 22:21 UTC by Warren Togami
Modified: 2007-11-30 22:10 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2004-09-28 10:15:43 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
cancel_open.png (22.07 KB, image/jpeg)
2004-09-26 22:21 UTC, Warren Togami
no flags

Description Warren Togami 2004-09-26 22:21:07 UTC
After using sftp:// with nautilus for a minute or two, attempting to
enter a directory causes nautilus to lock up with 100% CPU usage.  A
dialog asking "Cancel Open?" (screenshot attached below) pops up.
Clicking Cancel does not recover.  Killing nautilus seems to be the
only way to recover.

Description of problem:
strace output from the spinning nautilus process (excerpt):
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 778163}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
brk(0x8f96000)                          = 0x8f96000
brk(0x8f86000)                          = 0x8f86000
brk(0x8f74000)                          = 0x8f74000
brk(0x8f6e000)                          = 0x8f6e000
brk(0x8f6b000)                          = 0x8f6b000
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 783878}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
futex(0x896a1e4, FUTEX_WAKE, 1)         = 1
futex(0x896a1e0, FUTEX_WAKE, 1)         = 1
futex(0x896a1c0, FUTEX_WAKE, 1)         = 1
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 786155}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 786354}, NULL) = 0
poll([{fd=4, events=POLLIN}, {fd=3, events=POLLIN}, {fd=8,
events=POLLIN|POLLPRI}, {fd=10, events=POLLIN}, {fd=12,
events=POLLIN|POLLPRI}, {fd=16, events=POLLIN}, {fd=38,
events=POLLIN}], 7, 0) = 0
brk(0x8f96000)                          = 0x8f96000
brk(0x8f86000)                          = 0x8f86000
brk(0x8f74000)                          = 0x8f74000
brk(0x8f6e000)                          = 0x8f6e000
brk(0x8f6b000)                          = 0x8f6b000
ioctl(3, FIONREAD, [0])                 = 0
gettimeofday({1096236610, 792392}, NULL) = 0
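
The trace shows poll() being called with a timeout of 0 and returning 0
(no fds ready) over and over, i.e. the loop spins without ever blocking,
which matches the 100% CPU symptom.  A minimal, hypothetical C sketch of
that syscall pattern (purely illustrative; this is not the nautilus or
gnome-vfs code) would look like:

/* busy_poll.c - hypothetical illustration of the syscall pattern seen in
 * the strace above: poll() is called with a zero timeout, returns 0
 * because nothing is ready, and the loop retries immediately, burning
 * 100% CPU.  Not the nautilus/gnome-vfs source.
 * Build with: gcc busy_poll.c -o busy_poll
 */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {           /* an fd that never becomes readable */
        perror("pipe");
        return 1;
    }

    struct pollfd pfd = { .fd = fds[0], .events = POLLIN };

    for (;;) {
        /* timeout 0 means "return immediately"; with nothing to read,
         * poll() returns 0 every time and the loop never sleeps */
        int ready = poll(&pfd, 1, 0);
        if (ready < 0) {
            perror("poll");
            return 1;
        }
        /* a well-behaved event loop would pass -1 (or a positive timeout)
         * here so the process blocks until an fd is actually ready */
    }
}

Running this under strace produces the same endless poll(..., 0) = 0
stream seen above.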

Version-Release number of selected component (if applicable):
nautilus-2.8.0-1

Comment 1 Warren Togami 2004-09-26 22:21:45 UTC
Created attachment 104338 [details]
cancel_open.png

Comment 2 Alexander Larsson 2004-09-28 09:35:27 UTC
Need a stack trace for all threads of nautilus when this happens, with
the nautilus and gnome-vfs2 debuginfo packages installed.

Comment 3 Warren Togami 2004-09-28 10:04:04 UTC
I was able to reproduce this readily while connected to my old RH
box... either 7.3 or 8.0.  Unfortunately I have since formatted that
box.  Tests connecting to FC3, FC2, RHEL3, RHEL2.1, and Solaris with a
commercial sshd do not seem to exhibit this problem.

Should I keep this bug open?

Comment 4 Warren Togami 2004-09-28 10:15:43 UTC
Closing for now until I am able to reproduce it again.

