Bug 662437

Summary: iSCSI multipath volume isn't mounted after system boot
Product: Red Hat Enterprise Linux 6
Component: iscsi-initiator-utils
Version: 6.0
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: medium
Priority: low
Reporter: Vitaly Karasik <linux.il>
Assignee: Chris Leech <cleech>
QA Contact: Bruno Goncalves <bgoncalv>
CC: bgoncalv, bmarzins, coughlan, cww, fge, harald, mchristi, nkshirsa, rvokal
Target Milestone: rc
Target Release: ---
Doc Type: Bug Fix
Last Closed: 2016-08-04 18:33:26 UTC
Bug Blocks: 767187

Attachments:
part of /var/log/messages during boot

Description Vitaly Karasik 2010-12-12 16:34:49 UTC
Description of problem:
iSCSI volume isn't mounted after system boot

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Boot RHEL 6 into runlevel 3; /etc/fstab contains an iSCSI multipath volume
2. After boot, check whether the volume is mounted

  
Actual results:
FS is not mounted

Expected results:
FS should be mounted 


Additional info:
Running the mount command manually after boot works, as does adding

"mount /dev/mapper/SANVOL1 /home/exports/"

to rc.local.
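
For clarity, the rc.local workaround above amounts to something like the following (a minimal sketch; only the device name and mount point come from this report, the ext4 type and the existence check are assumptions):

#!/bin/sh
# /etc/rc.local -- runs at the very end of the boot sequence, after
# iscsid and multipathd have already had time to create the SANVOL1 map.
touch /var/lock/subsys/local

# Mount the iSCSI multipath volume if the map exists and the mount
# point is not already in use.
if [ -b /dev/mapper/SANVOL1 ] && ! mountpoint -q /home/exports; then
    mount -t ext4 /dev/mapper/SANVOL1 /home/exports
fi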

Comment 2 Bill Nottingham 2010-12-13 15:33:23 UTC
Please attach /etc/fstab, and any other relevant configuration. Do you get any errors on boot?

Comment 3 Vitaly Karasik 2010-12-13 20:06:01 UTC
/etc/fstab:

/dev/mapper/vg.01-rootvol /                       ext4    defaults        1 1
UUID=853a6f55-29a6-462a-bc2e-692cbf865f9d /boot                   ext4    defaults        1 2
UUID=68a30fa3-1bc1-46cf-9aed-6f6a72444d30 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

/dev/mapper/SANVOL1    /home/exports            ext4    defaults        1 1


I'm attaching /var/log/messages.

eth4 and eth5 are used for multipath iSCSI.

Comment 4 Vitaly Karasik 2010-12-13 20:06:54 UTC
Created attachment 468470 [details]
part of /var/log/messages during boot

Comment 5 Bill Nottingham 2010-12-13 20:39:17 UTC
What happens if you add '_netdev' to your fstab line?

Comment 6 Vitaly Karasik 2010-12-14 07:27:37 UTC
I changed relevant fstab line to:

/dev/mapper/SANVOL1    /home/exports            ext4    defaults,_netdev        0 0



but it didn't help
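
For reference: with the stock RHEL 6 SysV init scripts, _netdev entries are skipped by the early local mounts and are expected to be picked up later by the netfs service, once networking and iscsi are up. Assuming that setup, two quick checks are:

# Confirm netfs is enabled for the runlevel used here (3)
chkconfig --list netfs

# Re-run only the _netdev mounts by hand; if this mounts the volume,
# the fstab entry is fine and the failure is a device-ordering problem.
mount -a -O _netdev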

Comment 7 Phil Knirsch 2010-12-22 12:07:39 UTC
Can you mount it manually as root after boot via:

mount /home/exports

or is that failing as well?

Thanks & regards, Phil

Comment 8 Vitaly Karasik 2010-12-22 12:20:52 UTC
Yes, as I wrote in my bug report, a manual mount works, as does running "mount" at the end of the boot sequence from rc.local.

Comment 9 Harald Hoyer 2011-01-03 09:17:29 UTC
/dev/mapper/SANVOL1 exists after the multipathd initscript has been started.

Comment 10 Harald Hoyer 2011-01-03 09:21:21 UTC
(In reply to comment #9)
> /dev/mapper/SANVOL1 exists after the multipathd initscript has been started.

ok, not true. Seems like it is triggered by the uevent of an iscsi disk.

Dec 12 07:06:23 rhel6-server kernel: sd 11:0:0:0: [sde] Attached SCSI disk
Dec 12 07:06:23 rhel6-server multipathd: sde: add path (uevent)
Dec 12 07:06:23 rhel6-server multipathd: SANVOL1: load table [0 629145600 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 8:64 1000]
Dec 12 07:06:23 rhel6-server multipathd: SANVOL1: event checker started
Dec 12 07:06:23 rhel6-server multipathd: sde path added to devmap SANVOL1
Dec 12 07:06:23 rhel6-server multipathd: dm-3: add map (uevent)
Dec 12 07:06:23 rhel6-server multipathd: dm-3: devmap already registered
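
For reference, a hypothetical way to see this ordering is to pull the relevant events out of /var/log/messages and compare their timestamps against the boot-time mount attempts:

# Show when the iSCSI disks appeared and when multipathd built the
# SANVOL1 map; anything that ran before these timestamps could not
# have seen /dev/mapper/SANVOL1.
grep -E 'Attached SCSI disk|multipathd: SANVOL1' /var/log/messages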

Comment 11 Bill Nottingham 2011-01-03 16:34:25 UTC
So, does there need to be some sort of udev settling after starting iscsid? (Ick.)

Comment 12 Bill Nottingham 2011-01-03 16:34:52 UTC
Actually, that won't even work, since I doubt there's any way to know which disks are still to come.

Comment 14 Harald Hoyer 2011-01-20 09:02:16 UTC
(In reply to comment #12)
> Actually, that won't even work, since I doubt there's any way to know of the
> disks to come.

right... you might work around it with

modprobe scsi_wait_scan && rmmod scsi_wait_scan
udevadm settle

but that's no guarantee.

In the end the proper solution would be David's stc.
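
Fleshing out that workaround as a sketch (the polling loop, the 30-second timeout, and the placement in an init script or rc.local are assumptions, not something shipped today):

# Flush outstanding SCSI scans and uevents, then wait a bounded time
# for the multipath map before attempting the fstab mount.
modprobe scsi_wait_scan && rmmod scsi_wait_scan
udevadm settle

i=0
while [ $i -lt 30 ]; do
    [ -b /dev/mapper/SANVOL1 ] && break    # map showed up, stop waiting
    sleep 1
    i=$((i + 1))
done

mount /dev/mapper/SANVOL1 /home/exports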

Comment 15 Bill Nottingham 2011-06-09 20:00:43 UTC
The only way to fix this in the current framework would be to make iscsi & multipath synchronous so that they block on boot until all devices are discovered.

That seems impractical, but reassigning in case there's something doable.

Comment 16 Mike Christie 2011-06-09 22:38:19 UTC
ccing Ben.

iscsi itself should be fine here. iscsi will do target and device discovery/scanning in parallel with other iscsi targets/disks, but iscsiadm/iscsistart (service iscsi start) does not return until discovery/scans are done. So there is no need for hacks like the scsi_wait_scan one (scsi_wait_scan is only needed for drivers that initiate the scsi scans from the kernel - iscsi does it from userspace due to how its login code is implemented in iscsid/iscsistart).

The problem is that at this point multipathd is running and handling the setup of multipath devices asynchronously.

We have this problem with other disks like fcoe - in fact we hit it any time the driver is not loaded from the initramfs. Should we just add another multipath script (multipath_wait), or some sort of script that is run before lvm (we have this problem for lvm too, I think) and the fstab/netfs scripts, to handle all disks?
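
A hedged sketch of what such a "multipath_wait"-style helper could look like (the script name, timeout, and dmsetup check are assumptions, not an existing multipath-tools script): run it after iscsi/multipathd start but before lvm and the fstab/netfs mounts, and have it block until udev has settled and multipath maps exist.

#!/bin/sh
# Hypothetical multipath_wait helper: block the boot sequence until
# udev events are processed and multipath maps are visible, so later
# init scripts (lvm, netfs, local mounts) see the devices.
TIMEOUT=60

udevadm settle --timeout=$TIMEOUT

# Ask multipath to create any maps whose paths already exist.
multipath -v0 2>/dev/null

i=0
while [ $i -lt $TIMEOUT ]; do
    # Succeed as soon as device-mapper reports at least one multipath map.
    if dmsetup ls --target multipath 2>/dev/null | grep -vq '^No devices found'; then
        exit 0
    fi
    sleep 1
    i=$((i + 1))
done

echo "multipath_wait: no multipath maps appeared within ${TIMEOUT}s" >&2
exit 1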

Comment 19 RHEL Program Management 2012-05-03 04:36:56 UTC
Since RHEL 6.3 External Beta has begun and this bug remains
unresolved, it has been rejected, as it is not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 20 nikhil kshirsagar 2016-04-28 03:35:41 UTC
It seems this issue was fixed in RHEL 7 via bz 864036. Is there a fix for RHEL 6? The customer in SFDC case 01625127 requires this fix on a RHEL 6.5 system.

Comment 21 Chris Williams 2016-08-04 18:33:26 UTC
When Red Hat shipped 6.8 on May 10, 2016, RHEL 6 entered Production Phase 2.
https://access.redhat.com/support/policy/updates/errata#Production_2_Phase
That means only "Critical and Important Security errata advisories (RHSAs) and Urgent Priority Bug Fix errata advisories (RHBAs) may be released".
This BZ is now going to be closed as it does not appear to meet Phase 2 criteria. 
If this BZ is deemed critical to the customer please open a support case in the Red Hat Customer Portal and ask that this BZ be re-opened.