Bug 590597

Summary: RFE: add support for GNBD storage (Red Hat Cluster Suite) to anaconda
Product: Red Hat Enterprise Linux 6
Version: 6.0
Component: anaconda
Reporter: Hans de Goede <hdegoede>
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
QA Contact: Release Test Team <release-test-team-automation>
CC: justin, vchepkov
Status: CLOSED WONTFIX
Severity: medium
Priority: low
Hardware: All
OS: Linux
Target Milestone: rc
Keywords: FutureFeature
Doc Type: Enhancement
Clone Of: 590578
Last Closed: 2010-08-13 18:34:36 UTC

Description Hans de Goede 2010-05-10 09:44:46 UTC
+++ This bug was initially created as a clone of Bug #590578 +++

Description of problem:

At present, during installation of RHEL 6 (first public beta), there appear to be only two advanced block storage types available.

 + iSCSI
 + FCoE

As RHEL 6 itself supports additional types (SRP and GNBD come to mind), these should be added.

 + SRP would be conditional upon an Infiniband- or iWARP-capable 10GbE interface being detected, and would then require loading of the ib_srp kernel module (already part of the RHEL 6 kernel package).

 + GNBD would likely be conditional upon a network interface being available, in a similar vein to iSCSI, and would require loading of the "gnbd" kernel module (see the sketch after this list for how both checks might look).
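
As a rough sketch, the two conditions might be checked along these lines (module names as above; the sysfs path and the interface check are illustrative assumptions, not actual anaconda code):

  # SRP: offer it only when an Infiniband-capable interface has been detected;
  # /sys/class/infiniband gains one entry per HCA once its driver
  # (e.g. ib_mthca) is loaded.
  if ls /sys/class/infiniband/ 2>/dev/null | grep -q .; then
      modprobe ib_srp
  fi

  # GNBD: offer it only when a network interface other than loopback is up,
  # in the same way iSCSI is offered.
  if ip -o link show up | grep -qv ': lo:'; then
      modprobe gnbd
  fi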

--- Additional comment from hdegoede on 2010-05-10 04:44:56 EDT ---

Hi,

Thanks for the bug report.

(In reply to comment #0)
> Description of problem:
> 
> At present, during installation of RHEL 6 (first public beta), there appear
> to be only two advanced block storage types available.
> 
>  + iSCSI
>  + FCoE
> 
> As RHEL 6 itself supports additional types (SRP and GNBD come to mind), these
> should be added.
> 
>  + SRP would be conditional upon an Infiniband- or iWARP-capable 10GbE
> interface being detected, and would then require loading of the ib_srp kernel
> module (already part of the RHEL 6 kernel package).
> 

How can we see from userspace whether such an interface has been detected? Does it export any specific sysfs attributes, for example?

And is loading the kernel driver all that is needed? Will it then automatically configure itself and bring up any attached "disks" as SCSI disks?
If so, how long does this configuration take?

>  + GNBD would likely be conditional upon a network interface being available,
> in a similar vein to iSCSI, and would require loading of the "gnbd" kernel
> module.    

When you say "in a similar vein to iSCSI", does this mean that this NIC needs to be configured for IP traffic?

And is loading the kernel driver all that is needed? Will it then automatically configure itself and bring up any attached "disks" as SCSI disks?
If so, how long does this configuration take?

Thanks & Regards,

Hans

--- Additional comment from justin on 2010-05-10 05:08:08 EDT ---

(In reply to comment #1)
<snip>
> How can we see from userspace whether such an interface has been detected?
> Does it export any specific sysfs attributes, for example?

Good point.

This is from a RHEL 6 beta server (freshly installed), with an Infiniband card in it:

$ pwd
/sys/class/infiniband
$ ls -la
total 0
drwxr-xr-x.  2 root root 0 May 10 18:18 .
drwxr-xr-x. 45 root root 0 May 10 18:40 ..
lrwxrwxrwx.  1 root root 0 May 10 18:18 mthca0 -> ../../devices/pci0000:00/0000:00:03.0/0000:0a:00.0/infiniband/mthca0
$

This card uses the ib_mthca Infiniband driver, already part of the RHEL 6 kernel package.


> And is loading the kernel driver all that is needed? Will it then
> automatically configure itself and bring up any attached "disks" as SCSI disks?
> If so, how long does this configuration take?

Good question.  Once the ib_srp kernel module is loaded, it will then "see" any nodes presenting SRP storage to it.

However, there may need to be some work done using ibsrpdm and/or multipathing in order for this to be correctly mapped as an available disk.

It's probably best to ask Doug Ledford (Red Hat), the maintainer of the existing RHEL Infiniband and SRP packages (i.e. srptools), for his thoughts here.

Speed-wise, configuration setup is very fast; sub-second here, anyway.
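
For reference, the manual steps look roughly like this (a sketch only: ibsrpdm comes from the srptools package, and the exact add_target path depends on the HCA name and port, e.g. srp-mthca0-1 on this box):

  # load the SRP initiator, then list reachable SRP targets; ibsrpdm -c
  # prints one "id_ext=...,ioc_guid=...,dgid=...,pkey=...,service_id=..."
  # line per target
  modprobe ib_srp
  ibsrpdm -c

  # hand one of those lines back to the kernel to log in to the target;
  # the storage then shows up as an ordinary SCSI disk (/dev/sd*)
  echo "id_ext=...,ioc_guid=...,dgid=...,pkey=...,service_id=..." \
      > /sys/class/infiniband_srp/srp-mthca0-1/add_target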


> >  + GNBD would likely be conditional upon a network interface being available,
> > in a similar vein to iSCSI, and would require loading of the "gnbd" kernel
> > module.    
> 
> When you say "in a similar vein to iSCSI", does this mean that this NIC needs to be configured for IP traffic?

Unlike SRP (which doesn't need IP up and running), GNBD needs IP to be up and running in the same way that iSCSI does. It's another IP-based block storage protocol, used in clustering, especially with the old Red Hat Cluster Suite.


> And is loading the kernel driver all that is needed? Will it then
> automatically configure itself and bring up any attached "disks" as SCSI disks?
> If so, how long does this configuration take?

No, it needs to be configured with gnbd_import.

Actually, it's probably easiest to take a look at this:

  http://www.redhat.com/docs/manuals/csgfs/admin-guide/ch-gnbd.html
  11.1.2. Importing a GNBD on a Client
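
The client side from that chapter boils down to roughly the following (a sketch; "servername" is a placeholder for whichever node runs gnbd_serv and has exported a device):

  # load the GNBD driver, then import every GNBD exported by the server
  modprobe gnbd
  gnbd_import -i servername

  # the imported devices should then show up under /dev/gnbd/
  ls /dev/gnbd/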

--- Additional comment from pm-rhel on 2010-05-10 05:33:48 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from hdegoede on 2010-05-10 05:43:15 EDT ---

I'm going to split this bug in two, to track the two different requests it contains. I'm going to use this one for tracking SRP over Infiniband support.

Comment 1 Hans de Goede 2010-05-10 09:47:02 UTC
This looks like something we may be able to do for 6.1, depending on what is needed to properly test it. It is definitely too late for 6.0.

Comment 2 Vadym Chepkov 2010-05-22 15:57:34 UTC
I can't even find GNBD in RHEL 6:


# yum repolist
Loaded plugins: rhnplugin
This system is not registered with RHN.
RHN support will be disabled.
repo id                                                      repo name                                                                                      status
rhel-beta                                                    Red Hat Enterprise Linux 6 Beta - x86_64                                                       3,595
rhel-beta-optional                                           Red Hat Enterprise Linux 6 Beta (Optional) - x86_64                                            2,374
repolist: 5,969


# yum search gnbd
Loaded plugins: rhnplugin
This system is not registered with RHN.
RHN support will be disabled.
Warning: No matches found for: gnbd
No Matches found

Comment 3 RHEL Program Management 2010-08-13 18:34:36 UTC
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request.