Description of problem:
iSCSI volume isn't mounted after system boot
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. boot RHEL6 into runlevel 3; /etc/fstab contains iSCSI volume
Actual results:
FS not mounted
Expected results:
FS should be mounted
Additional info:
Running the mount command manually after boot, or adding
"mount /dev/mapper/SANVOL1 /home/exports/"
to rc.local, works.
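The reporter's rc.local workaround could be sketched like this (device and mount point are taken from this report; the existence guard and the logger call are illustrative additions, not part of the original workaround):

```shell
#!/bin/sh
# Appended to /etc/rc.local: mount the iSCSI-backed multipath volume at
# the very end of the boot sequence, once multipathd has created the map.
if [ -b /dev/mapper/SANVOL1 ]; then
    mount /dev/mapper/SANVOL1 /home/exports/
else
    logger -t rc.local "SANVOL1 not present yet; skipping mount"
fi
```

This only papers over the race: it works because rc.local runs after multipathd has processed the uevents, not because the device is guaranteed to exist by then.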
Please attach /etc/fstab, and any other relevant configuration. Do you get any errors on boot?
/dev/mapper/vg.01-rootvol / ext4 defaults 1 1
UUID=853a6f55-29a6-462a-bc2e-692cbf865f9d /boot ext4 defaults 1 2
UUID=68a30fa3-1bc1-46cf-9aed-6f6a72444d30 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/mapper/SANVOL1 /home/exports ext4 defaults 1 1
I'm attaching /var/log/messages.
eth4 and eth5 are used for multipath iSCSI.
Created attachment 468470 [details]
part of /var/log/messages during boot
What happens if you add '_netdev' to your fstab line?
I changed relevant fstab line to:
/dev/mapper/SANVOL1 /home/exports ext4 defaults,_netdev 0 0
but it didn't help
Can you mount it manually as root after boot, or is that failing as well?
Thanks & regards, Phil
Yes, as I wrote in the bug report, a manual mount, or running "mount" at the end of the boot sequence from rc.local, works fine.
/dev/mapper/SANVOL1 exists after the multipathd initscript has been started.
(In reply to comment #9)
> /dev/mapper/SANVOL1 exists after the multipathd initscript has been started.
ok, not true. Seems like it is triggered by the uevent of an iscsi disk.
Dec 12 07:06:23 rhel6-server kernel: sd 11:0:0:0: [sde] Attached SCSI disk
Dec 12 07:06:23 rhel6-server multipathd: sde: add path (uevent)
Dec 12 07:06:23 rhel6-server multipathd: SANVOL1: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:64 1000]
Dec 12 07:06:23 rhel6-server multipathd: SANVOL1: event checker started
Dec 12 07:06:23 rhel6-server multipathd: sde path added to devmap SANVOL1
Dec 12 07:06:23 rhel6-server multipathd: dm-3: add map (uevent)
Dec 12 07:06:23 rhel6-server multipathd: dm-3: devmap already registered
So, does there need to be some sort of udev settling after starting iscsid? (Ick.)
Actually, that won't even work, since I doubt there's any way to know of the disks to come.
(In reply to comment #12)
> Actually, that won't even work, since I doubt there's any way to know of the
> disks to come.
right... you might work around it with
modprobe scsi_wait_scan && rmmod scsi_wait_scan
but that's no guarantee.
In the end the proper solution would be David's stc.
The only way to fix this in the current framework would be to make iscsi & multipath synchronous so that they block on boot until all devices are discovered.
That seems impractical, but reassigning in case there's something doable.
iscsi itself should be fine here. iscsi will do target and device discovery/scanning in parallel with other iscsi targets/disks, but iscsiadm/iscsistart (service iscsi start) does not return until discovery/scans are done. So there is no need for hacks like the scsi_wait_scan one (scsi_wait_scan is only needed for drivers that initiate the scsi scans from the kernel - iscsi does it from userspace due to how its login code is implemented in iscsid/iscsistart).
The problem is that at this point multipathd is running and handling the setup of multipath devices asynchronously.
We have this problem with other disks like fcoe too; in fact we hit it any time the driver is not loaded from the initramfs. Should we just add another multipath script (multipath_wait), or some sort of script that runs before lvm (we have this problem for lvm too, I think) and the fstab/netfs scripts, to handle all disks?
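A hypothetical "wait for the device" helper along these lines could run after iscsi/multipathd but before netfs mounts _netdev filesystems. The function name, device path, and timeout below are illustrative assumptions, not an existing script:

```shell
#!/bin/sh
# Sketch of a multipath_wait-style helper: poll for a block device node
# until it appears or a timeout (in seconds) expires.
wait_for_dev() {
    dev="$1"
    timeout="${2:-30}"
    i=0
    while [ ! -b "$dev" ] && [ "$i" -lt "$timeout" ]; do
        sleep 1
        i=$((i + 1))
    done
    [ -b "$dev" ]   # exit status 0 only if the device node showed up
}

# Usage (in an initscript, before netfs):
#   wait_for_dev /dev/mapper/SANVOL1 60 || logger "SANVOL1 did not appear"
```

The obvious weakness, as noted above, is that such a script has to know which devices to wait for; it cannot anticipate disks that have not announced themselves yet.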
Since the RHEL 6.3 External Beta has begun and this bug remains unresolved, it has been rejected, as it was not proposed as an exception or blocker.
Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.
It seems this issue was fixed in RHEL 7 via bz 864036. Is there a fix for RHEL 6? A customer (SFDC case 01625127) requires this fix on his RHEL 6.5 system.
When Red Hat shipped 6.8 on May 10, 2016, RHEL 6 entered Production Phase 2.
That means only "Critical and Important Security errata advisories (RHSAs) and Urgent Priority Bug Fix errata advisories (RHBAs) may be released".
This BZ is now going to be closed as it does not appear to meet Phase 2 criteria.
If this BZ is deemed critical to the customer, please open a support case in the Red Hat Customer Portal and ask that this BZ be re-opened.