Bug 590597 - RFE: add support for GNBD storage (Red Hat Cluster Suite) to anaconda
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assigned To: Anaconda Maintenance Team
QA Contact: Release Test Team
Keywords: FutureFeature
 
Reported: 2010-05-10 05:44 EDT by Hans de Goede
Modified: 2010-08-13 14:34 EDT
CC List: 2 users

Doc Type: Enhancement
Clone Of: 590578
Last Closed: 2010-08-13 14:34:36 EDT


Attachments: None
Description Hans de Goede 2010-05-10 05:44:46 EDT
+++ This bug was initially created as a clone of Bug #590578 +++

Description of problem:

At present, during installation of RHEL 6 (first public beta), there appear to be only two advanced block storage types available:

 + iSCSI
 + FCoE

As RHEL 6 itself supports additional types (SRP and GNBD come to mind), these should be added.

 + SRP would be conditional upon an Infiniband or iWARP capable 10GbE interface being detected, and would then require the loading of the ib_srp kernel module (already part of the RHEL 6 kernel package).

 + GNBD would likely be conditional upon a network interface being available, in a similar vein to iSCSI, and would require loading of the "gnbd" kernel module (see the sketch below).
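
As a minimal sketch of the two conditions (the module names are the ones given above; using a non-empty /sys/class/infiniband as the SRP detection test is an assumption on my part, see the sysfs listing further down):

  # Offer SRP only when an InfiniBand interface is present (assumed check):
  if ls /sys/class/infiniband/* >/dev/null 2>&1; then
      modprobe ib_srp    # SRP initiator, already in the RHEL 6 kernel package
  fi

  # Offer GNBD whenever a network interface is available, as with iSCSI:
  modprobe gnbd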

--- Additional comment from hdegoede@redhat.com on 2010-05-10 04:44:56 EDT ---

Hi,

Thanks for the bug report.

(In reply to comment #0)
> Description of problem:
> 
> At present, during installation of RHEL 6 (first public beta), there appear to
> be only two advanced block storage types available:
> 
>  + iSCSI
>  + FCoE
> 
> As RHEL 6 itself supports additional types (SRP and GNBD come to mind), these
> should be added.
> 
>  + SRP would be conditional upon an Infiniband or iWARP capable 10GbE interface
> being detected, and would then require the loading of the ib_srp kernel module
> (already part of the RHEL 6 kernel package).
> 

How can we see from userspace whether such an interface has been detected? Does it export any specific sysfs attributes, for example?

And is loading the kernel driver all that is needed? Will it then automatically configure itself and bring up any attached "disks" as SCSI disks?
If so, how long does this configuration take?

>  + GNBD would likely be conditional upon a network interface being available,
> in a similar vein to iSCSI, and would require loading of the "gnbd" kernel
> module.    

When you say "in a similar vein to iSCSI", does this mean that this NIC needs to be configured for IP traffic?

And is loading the kernel driver all that is needed? Will it then automatically configure itself and bring up any attached "disks" as SCSI disks?
If so, how long does this configuration take?

Thanks & Regards,

Hans

--- Additional comment from justin@salasaga.org on 2010-05-10 05:08:08 EDT ---

(In reply to comment #1)
<snip>
> How can we see from userspace whether such an interface has been detected? Does
> it export any specific sysfs attributes, for example?

Good point.

This is from a RHEL 6 beta server (freshly installed), with an Infiniband card in it:

$ pwd
/sys/class/infiniband
$ ls -la
total 0
drwxr-xr-x.  2 root root 0 May 10 18:18 .
drwxr-xr-x. 45 root root 0 May 10 18:40 ..
lrwxrwxrwx.  1 root root 0 May 10 18:18 mthca0 -> ../../devices/pci0000:00/0000:00:03.0/0000:0a:00.0/infiniband/mthca0
$

This card uses the ib_mthca Infiniband driver, already part of the RHEL 6 kernel package.


> And is loading the kernel driver all that is needed? Will it then automatically
> configure itself and bring up any attached "disks" as SCSI disks?
> If so, how long does this configuration take?

Good question.  Once the ib_srp kernel module is loaded, it will then "see" any nodes presenting SRP storage to it.

However, there may need to be some work done using ibsrpdm and/or multipathing in order for this to be correctly mapped as available disks.

Probably best to ask Doug Ledford (Red Hat), the maintainer of the existing RHEL Infiniband and SRP packages (i.e. srptools), for his thoughts here.

Speed-wise, configuration setup is very fast; sub-second here, anyway.
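
For reference, the manual discovery step looks roughly like this (a sketch based on the srptools documentation; the device name in the sysfs path is a placeholder matching the mthca0 card above):

  # Print discovered SRP targets in a format suitable for target addition:
  ibsrpdm -c

  # Each emitted line can be written to the initiator's add_target attribute,
  # after which the remote storage comes up as SCSI disks:
  echo "id_ext=...,ioc_guid=...,dgid=...,pkey=ffff,service_id=..." \
      > /sys/class/infiniband_srp/srp-mthca0-1/add_target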


> >  + GNBD would likely be conditional upon a network interface being available,
> > in a similar vein to iSCSI, and would require loading of the "gnbd" kernel
> > module.    
> 
> When you say "in a similar vein to iSCSI", does this mean that this NIC needs to be configured for IP traffic?

Unlike SRP (which doesn't need IP up and running), GNBD needs IP to be up and running in the same way that iSCSI does. It's another IP-based block storage protocol, used in clustering, especially in the context of the old Red Hat Cluster Suite.


> And is loading the kernel driver all that is needed? Will it then automatically
> configure itself and bring up any attached "disks" as SCSI disks?
> If so, how long does this configuration take?

No, it needs to be configured with gnbd_import.

Actually, it's probably easier to take a look at this:

  http://www.redhat.com/docs/manuals/csgfs/admin-guide/ch-gnbd.html
  11.1.2. Importing a GNBD on a Client
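
In short, the client side boils down to something like this (a sketch following that chapter; "servername" stands in for the GNBD server's hostname):

  # Load the GNBD driver, then import all GNBDs exported by the server:
  modprobe gnbd
  gnbd_import -i servername

  # The imported devices then appear as block devices under /dev/gnbd/.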

--- Additional comment from pm-rhel@redhat.com on 2010-05-10 05:33:48 EDT ---

Since this issue was entered in bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

--- Additional comment from hdegoede@redhat.com on 2010-05-10 05:43:15 EDT ---

I'm going to split this bug in two to track the two different requests it contains. I'm going to use this one for tracking SRP over infiniband support.
Comment 1 Hans de Goede 2010-05-10 05:47:02 EDT
This looks like something we may be able to do for 6.1, depending on what is needed to properly test it. It is definitely too late for 6.0.
Comment 2 Vadym Chepkov 2010-05-22 11:57:34 EDT
I can't even find GNBD on Red Hat 6:


# yum repolist
Loaded plugins: rhnplugin
This system is not registered with RHN.
RHN support will be disabled.
repo id                                                      repo name                                                                                      status
rhel-beta                                                    Red Hat Enterprise Linux 6 Beta - x86_64                                                       3,595
rhel-beta-optional                                           Red Hat Enterprise Linux 6 Beta (Optional) - x86_64                                            2,374
repolist: 5,969


# yum search gnbd
Loaded plugins: rhnplugin
This system is not registered with RHN.
RHN support will be disabled.
Warning: No matches found for: gnbd
No Matches found
Comment 3 RHEL Product and Program Management 2010-08-13 14:34:36 EDT
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request.
