Bug 1016886 - KVM-libgfapi parameter settings need to be defaults
KVM-libgfapi parameter settings need to be defaults
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: Unspecified OS: Unspecified
Priority: high Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
 
Reported: 2013-10-08 17:03 EDT by Ben England
Modified: 2015-12-03 12:13 EST (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:13:04 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ben England 2013-10-08 17:03:48 EDT
Description of problem:

In order for libgfapi to work with KVM (and that means RHEV + OpenStack + libvirt), we need 3 things to be changed:

1) The line "option rpc-auth-allow-insecure on" needs to be in the file:

/etc/glusterfs/glusterd.vol

Otherwise volumes cannot even be created, because the glusterd processes cannot communicate with one another.
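For reference, the option belongs inside the management volume definition. A sketch of a typical /etc/glusterfs/glusterd.vol with the line added (the other options shown are common defaults and may vary by release):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option rpc-auth-allow-insecure on
end-volume
```

glusterd must be restarted on each server for this change to take effect.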

2) A "stat-prefetch=on" record needs to be added to the /var/lib/glusterd/groups/virt file. Otherwise a STAT FOP round-trip to the server is performed for every READ FOP, limiting read performance.

3) A "server.allow-insecure=on" record needs to be added to the /var/lib/glusterd/groups/virt file.

Otherwise libgfapi processes cannot communicate with all of the Gluster servers.
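The two records from items 2) and 3) can be added to the virt group file, or set per volume from the CLI. A sketch, where "myvol" is a hypothetical volume name:

```
# lines appended to /var/lib/glusterd/groups/virt
stat-prefetch=on
server.allow-insecure=on

# equivalent per-volume commands
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol server.allow-insecure on
```

Settings in the group file only take effect on volumes that apply the profile ("gluster volume set myvol group virt"), which is why having them in the shipped virt profile matters.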

Version-Release number of selected component (if applicable):

RHS 2.1 Gold (GA) = glusterfs-server-3.4.0.33rhs

How reproducible:

Every time.

Steps to Reproduce:
Try to bring up a Gluster volume and then create libgfapi-backed guests; check whether you can do this, and whether the guests can read at line speed, without these settings.

Actual results:

You can't get KVM/Gluster to work reliably without these commands.

Expected results:

RHS should work without issuing these commands, since they are needed at all sites.

Additional info:
Comment 2 Ben England 2013-10-09 14:51:44 EDT
1) see https://bugzilla.redhat.com/show_bug.cgi?id=1016881#c2 concerning /etc/glusterfs/glusterd.vol parameter.

2) Still needed.

3) It sounds like the need for this volume parameter will eventually go away in Gluster 3.5, but in the meantime we need to document it and train system administrators about it.
Comment 3 Ben England 2014-04-02 10:49:04 EDT
This impacts OpenStack, oVirt, and virtually anything that uses libgfapi. Gluster needs to just come up and work securely, with high performance, and reliably, without extensive manual tweaking.

Item 1) above is an undocumented bug, I mean feature; there are no error messages hinting at the root cause, etc. It's been over half a year, so I have raised the priority and am going to make a very loud noise about it until this gets fixed. Security by port number, seriously? That is the reason given for not changing the rpc-auth-allow-insecure default to on. Why don't we use PKI (i.e. public keys) instead?
Comment 4 Ben England 2015-05-18 17:10:45 EDT
I think in glusterfs-3.7 you've fixed the rpc-auth-allow-insecure and server.allow-insecure defaults! This looks like a duplicate of bug 1057292; see comment 7 there. Not sure how this was fixed, was it SSL sockets?

If the stat-prefetch issue was addressed, then we should be OK. I see the default for stat-prefetch is "on" on my glusterfs-3.7 volume.
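One way to check the effective values is the "gluster volume get" subcommand, which is available on glusterfs-3.7. A sketch, where "myvol" is a hypothetical volume name:

```
gluster volume get myvol performance.stat-prefetch
gluster volume get myvol server.allow-insecure
```

Each command prints the option name alongside its current (default or explicitly set) value for that volume.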
Comment 5 Ben England 2015-07-13 16:02:22 EDT
Looking for clarification of how this was fixed and whether it's fixed in RHGS 3.1.
Comment 6 Deepak C Shetty 2015-07-14 05:58:15 EDT
(In reply to Ben England from comment #4)
> I think in glusterfs-3.7 you've fixed rpc-auth-allow-insecure and
> server.allow-insecure defaults! duplicate of 1057292, see comment 7 there. 
> Not sure how this was fixed, was it SSL sockets?
> 
> If stat-prefetch issue was addressed, then we should be ok.  I see default
> for stat-prefetch is "on" on my glusterfs-3.7 volume.

I beg to differ.

Please see https://bugzilla.redhat.com/show_bug.cgi?id=1057292#c9
Comment 7 Vivek Agarwal 2015-12-03 12:13:04 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
