Description of problem:
If I disconnect all the connections, /etc/resolv.conf should either be reset to the way it was before I connected or reset to be empty (not sure I care which, though others may).
Obviously, when multiple connections are in play, /etc/resolv.conf should reflect the correct setup for the remaining connections (which it does). The problem is that when all the connections are disconnected (or one is in the middle of reconnecting), it can retain the old state. That makes debugging/testing of NM more complex, because the contents of /etc/resolv.conf may be deceptive: it can be difficult to tell whether it was filled in for the current connection or left over from the last one.
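One way to make the leftover-state problem visible while testing is to snapshot /etc/resolv.conf before and after a connection change and diff the nameserver lines. A minimal sketch (the snapshot contents below are illustrative, not from an actual run):

```python
def nameservers(resolv_conf_text):
    """Extract nameserver addresses from resolv.conf-style text."""
    return {
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) >= 2
    }

def diff_nameservers(before_text, after_text):
    """Report which nameservers appeared and disappeared across a change."""
    before, after = nameservers(before_text), nameservers(after_text)
    return {"added": sorted(after - before), "removed": sorted(before - after)}

# Illustrative snapshots taken before and after a disconnect:
before = "nameserver 192.168.1.1\nnameserver fe80::1\n"
after = "nameserver 192.168.1.1\n"
print(diff_nameservers(before, after))
# → {'added': [], 'removed': ['fe80::1']}
```

An empty diff after disconnecting everything is exactly the stale-state symptom described above.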
This applies to NetworkManager-0.8.0-9.git20100429.fc12.x86_64
When I disconnect everything, /etc/resolv.conf is reset to empty.
However, in a more complex scenario it doesn't work perfectly:
1. I start out disconnected; /etc/resolv.conf is empty.
2. I connect to wireless (auto/auto); /etc/resolv.conf has both IPv4 and IPv6 addresses (the latter from RA RDNSS).
3. I connect to wired (auto/auto); /etc/resolv.conf again has both IPv4 and IPv6 addresses. (These are both on the same network, incidentally.)
4. I disconnect the wired connection by unplugging the cable; /etc/resolv.conf now contains only IPv4 addresses -- the IPv6 address is lost.
If I disconnect manually (from the menu), the IPv6 address stays.
OK, I tried unplugging the cable again, and the IPv6 address stayed, so this is non-deterministic. Every time I've tried since, it has worked, but I'm also now unable to get NM to respond to plugging the Ethernet cable back in. That's either a bug or by design (to cope with iffy Ethernet cables, which aren't uncommon).
Using rfkill to disconnect the wireless network doesn't result in a loss of IPv6 addresses from /etc/resolv.conf.
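The lost-IPv6 symptom above can be checked for mechanically by splitting the nameserver entries by address family with the standard ipaddress module. A sketch (the sample content is made up, merely resembling the mixed v4/v6 state described above):

```python
import ipaddress

def nameservers_by_family(resolv_conf_text):
    """Group nameserver entries into IPv4 and IPv6 lists."""
    families = {4: [], 6: []}
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            try:
                addr = ipaddress.ip_address(parts[1])
            except ValueError:
                continue  # skip malformed entries
            families[addr.version].append(parts[1])
    return families

# Illustrative resolv.conf content with one server of each family:
sample = "nameserver 192.168.1.1\nnameserver 2001:db8::53\n"
print(nameservers_by_family(sample))
# → {4: ['192.168.1.1'], 6: ['2001:db8::53']}
```

An empty list under key 6 after a disconnect/reconnect cycle would flag the dropped RDNSS entry without eyeballing the file.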
So I'd be willing to call this bug fixed. The other issue is likely a different bug.
I believe this is fixed with git20100503 and later... I tried the exact steps in Comment #1 and could not reproduce the issue. Please re-open if you run into this again, thanks!