Red Hat Bugzilla – Bug 197548
auto-mounting of external disks fails with device-mapper mappings
Last modified: 2007-11-30 17:11:36 EST
Description of problem:
The new device-mapper-based device naming seems to have broken gnome's ability
to mount external hard disks, the direct use of partitions on such external
disks as raid devices, and possibly more. In the case of mdadm, I can work
around the inability to reference /dev/sda* by referencing
/dev/mapper/someuglyname* instead, but that's not quite as intuitive and will
probably break lots of existing scripts.
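For example, the mdadm side of the workaround looks roughly like this (array
and device names here are illustrative, not taken from a real session):

```
# mdadm /dev/md0 --add /dev/sda1
mdadm: Cannot open /dev/sda1: Device or resource busy
# mdadm /dev/md0 --add /dev/mapper/someuglynamep1
```

The second form works only because device-mapper itself is what holds the
underlying partition busy.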
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Plug in an external USB disk while logged into gnome.
Actual results: the external disk won't be mounted automatically.
Expected results: it should be.
and why is this a bug of udev and not the gnome automounting system?
Because the KDE automounting system is broken as well, and there's other
fallout from the change in (presumably) udev that introduced all of this
dm-based magic. Even if it's not a bug in udev per se, it was the change in
(presumably) udev that caused all of it, so at the very least you need one
tracking bug like this to dupe others into and/or clone for the other fallout.
There was no change in udev, besides switching to udev in the first place.
Could you please explain
"the inability to reference /dev/sda* by referencing /dev/mapper/someuglyname*"
a little bit more? I'm not sure I understand the problem completely.
# mount /dev/sdc1 /media/DV5000
mount: /dev/sdc1 already mounted or /media/DV5000 busy
# mount /dev/mapper/20010b92000d5b665p1 /media/DV5000
/dev/sdc1 is no longer available. I can't mount it, and I can't add it as a
raid member. I have to use the ugly /dev/mapper name. I'm not even sure that
unmounting the external disk is enough to be able to disconnect it safely,
without leaving dangling device-mapper devices around.
1. /dev/sdc1 is available as a device node?
2. /proc/mounts does not show sdc1 already mounted?
Yes, it is available, and it is in use only by device-mapper.
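Both checks can be done from a shell, e.g. (using the device name from this
report; substitute your own):

```shell
# Check whether the node exists and whether anything has it mounted.
dev=/dev/sdc1
if [ -e "$dev" ]; then
    echo "$dev: device node exists"
else
    echo "$dev: no device node"
fi
if grep -q "^$dev " /proc/mounts; then
    echo "$dev: already mounted"
else
    echo "$dev: not listed in /proc/mounts"
fi
```

The "busy" state from device-mapper won't show up in either check, which is
exactly what makes the failure confusing.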
so why is this a problem/bug of udev?
I don't know, it was just my best guess. I thought it was udev that had the
rules that caused the newly-available disks to be turned into device-mapper
devices. If not, please reassign this to the appropriate package.
As it turns out, it is device-mapper-multipath that decides that any external
disk gets multipath dm devices created for it, rendering their original /dev
devices busy and unavailable for the raid subsystem.
You don't say which version of the device-mapper-multipath package you used!
If it's prior to 0.4.7-2.0, then please update it and retest.
Latest test was with 0.4.7-3.1.
FWIW, one way to avoid the problem is to get at least one of the disk's
partitions to become busy before device-mapper-multipath's magic strikes,
e.g., by having raidautorun, run by the initrd's /init script, activate one of
the raid members present on the external disk. In that case, the external disk
has another partition mounted successfully at log in. Booting without the disk
plugged in, and then plugging it in after udev is already running, however,
restores the broken behavior.
See bug 197547 for more information on other problems this magic is causing.
What's in your /etc/multipath.conf file?
It should have the following lines at the top:
# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
This is the file that is installed by the rpm unless there already is an
/etc/multipath.conf file on the machine. With these lines in my configuration
file, I don't see any multipathed devices.
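For reference, a default-deny configuration of this kind would look roughly
like the sketch below. Note that the exact directive name is an assumption to
check against your installed version: device-mapper-multipath releases of this
era used devnode_blacklist, while later ones renamed it to blacklist.

```
# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
blacklist {
        devnode "*"
}
```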
If you have these lines in your file, they aren't commented out, and you are
getting multipathed devices anyway, could you please plug in the external
device, run multipath -F to remove all the multipath mappings, and then run
multipath -v4 and put the output in this bugzilla?
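In command form, the requested diagnostic sequence is (the log path is just an
example):

```
# multipath -F
# multipath -v4 > /tmp/multipath-v4.out 2>&1
```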
Created attachment 132560 [details]
Output of multipath -v4, as requested
# grep -v ^# /etc/multipath.conf
I've attached the output of multipath -v4 after multipath -F. Thanks for
looking into this.
Well, it seems like I was the one looking at outdated code. Change the config,
and your problems should go away. I'll get a new package built that fixes the
default config file.
There's a new build 0.4.7-4.0 that has a better default config file.
Unfortunately, like I mentioned earlier, the new config file only gets installed
if there isn't a config file already there, so you have to uninstall the old rpm,
or simply delete /etc/multipath.conf before you upgrade.
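A sketch of the upgrade dance this implies (the package file name is
illustrative; keep a backup if you have local changes in the config):

```
# cp /etc/multipath.conf /etc/multipath.conf.bak
# rm /etc/multipath.conf
# rpm -Uvh device-mapper-multipath-0.4.7-4.0.i386.rpm
```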
Wouldn't it be better if the new config file got installed only if the config
file in place hadn't been changed? I'm pretty sure there's a simple way to
accomplish this with rpm, I'm just not sure what it is.
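The rpm mechanism being hinted at here is most likely %config(noreplace): a
file marked that way in the spec is replaced on upgrade only if the on-disk
copy is unmodified; otherwise the new version is saved alongside as .rpmnew.
A hypothetical spec file entry (not taken from the actual package) would be:

```
%files
%config(noreplace) /etc/multipath.conf
```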
Anyhow, the original problem is fixed, feel free to CLOSE/RAWHIDE.