Bug 2896

Summary: removing eth cable kills networking until cold boot
Product: [Retired] Red Hat Linux
Reporter: karrde
Component: telnet
Assignee: Jay Turner <jturner>
Status: CLOSED NOTABUG
Severity: high
Priority: medium
Version: 6.0
CC: srevivo
Hardware: i386
OS: Linux
Last Closed: 1999-05-18 14:34:47 UTC

Description karrde 1999-05-17 23:55:41 UTC
I was changing out the ethernet cable on a running 6.0
machine (upgraded from 5.2), and when the new cable was in
(a very fast swap), I had no traffic across eth0.  ifconfig
looked fine and the interface was up.  Link lights were lit
on the NIC and the hub, but no traffic lights.  I brought
eth0 down and back up with ifconfig; no go.  Did a 'reboot'
and it still was not passing traffic across the port.  Did a
'shutdown -h now', then brought it up, and it was fine and
passing traffic again.
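
In command form, the attempts above were roughly:

  ifconfig eth0 down; ifconfig eth0 up   # cycle the interface: still no traffic
  reboot                                 # warm reboot: still no traffic
  shutdown -h now                        # full power-off, then power on: traffic restored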

Comment 1 Jeff Johnson 1999-05-18 14:34:59 UTC
This sounds like a hardware problem.

Comment 2 Adam Thompson 1999-05-19 22:30:59 UTC
Some (poorly designed, IMHO) cards do this.  It is almost certainly a
hardware issue.  Possible workaround: rmmod followed by insmod.

Generally, forcing the driver to reinitialize the card will solve
this, but there are some *really* bad cards out there.
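
As a sketch of that workaround (assuming the NIC uses, say, the tulip
driver; substitute whatever module 'lsmod' lists for your card):

  ifdown eth0    # take the interface down first
  rmmod tulip    # unload the driver; 'tulip' is just an example name
  insmod tulip   # reload it, forcing the card to reinitialize
  ifup eth0      # bring the interface back up with its old config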

The chipset issue is usually something like:
- cable removed;
- chipset detects no ethernet carrier and automatically tries to
  renegotiate the link (10/100, half/full duplex, TP/coax, whatever
  the card supports);
- it fails to find an ethernet connection and locks into some default
  mode that may not be supported.  The default mode is often 10Base2
  or 10Base5 (coax and AUI, respectively), but most cards don't have
  those physical ports.
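
If the card has a standard MII transceiver, one way to inspect (and
sometimes unstick) that failed negotiation, assuming the mii-tool
utility from net-tools (or Donald Becker's mii-diag) is available on
the box:

  mii-tool eth0                  # show the current link/negotiation state
  mii-tool -r eth0               # restart autonegotiation
  mii-tool -F 10baseT-FD eth0    # or force a specific media type outright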