Bug 1472249 - MariaDB adjustment parameter missing
Status: ASSIGNED
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Michael Bayer
QA Contact: Gurenko Alex
Keywords: FutureFeature
Reported: 2017-07-18 06:04 EDT by Aviv Guetta
Modified: 2017-08-04 09:46 EDT

Doc Type: Enhancement
Type: Bug

External Trackers:
Launchpad 1704978 (last updated 2017-07-18 06:04 EDT)
Description Aviv Guetta 2017-07-18 06:04:16 EDT
Description of problem:

MariaDB adjustment settings were pushed out of controller.yaml template to /puppet/services/database/mysql.yaml[1].

Unfortunately, the 'MysqlInnodbBufferPoolSize' parameter was not moved along with the other parameters, so it is now missing.

'MysqlInnodbBufferPoolSize' should be added back and properly exposed in mysql.pp[2].

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/?id=58bf3932a86f2f5582937e3da8cb74dfd29c116b
[2] https://github.com/openstack/puppet-tripleo/blob/stable/newton/manifests/profile/pacemaker/database/mysql.pp

Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform 10
Comment 1 Fabio Massimo Di Nitto 2017-07-19 04:53:00 EDT
Mike, can you please take a look at this?

This sounds like a regression to me, unless hiding the parameter was done on purpose.
Comment 2 Michael Bayer 2017-07-19 10:43:52 EDT
I'd look into adding it to puppet-tripleo in the same way I added innodb_flush_log_at_trx_commit in https://review.openstack.org/#/c/479849/. I'm assuming we don't need the .yaml flag here in tripleo-heat-templates, as it will be accessible via a hiera variable / ControllerExtraConfig.
Comment 3 Michael Bayer 2017-07-19 11:01:54 EDT
I'm being told by @dprince that the rationale for the removal was that this setting was only added to hiera but was otherwise unconsumed, so the MysqlInnodbBufferPoolSize heat template setting had no actual effect. If true, you would not see this setting in the customer's galera.cnf / my.cnf.d/* files, and this would not be a regression. Is this something that can be easily confirmed on the customer side (e.g. can we see their /etc/my.cnf*)?
Comment 4 Michael Bayer 2017-07-19 14:54:06 EDT
So I've confirmed with two different tripleo engineers that this configuration variable never did anything; tripleo has never had the ability to change this admittedly important setting. We therefore need to pursue this via the RFE process, and it would be targeted first at Queens.
Comment 10 Anthony Herr 2017-08-02 08:52:42 EDT
What is the customer's tolerance for upgrading to OSP 12/13 if this change is made in the product at that time?
Comment 12 Anthony Herr 2017-08-02 11:23:49 EDT
Is the customer willing to continue performing the manual operation, with the expectation that this will be enhanced in OSP 12/13? I understand that the customer is of high strategic value. The issue is backporting new features: even though there was an expectation that the feature was in place, we now understand that it was not, and there is the potential that the enhancement will break something else. The reason we go through exhaustive testing during the release is to ensure new features do not impact old ones. I am reluctant to authorize this if there is a currently supported workaround, especially if the customer is comfortable with that workaround.
Comment 15 Radosław Śmigielski 2017-08-03 05:00:46 EDT
Anthony, let me disagree about the backporting: MysqlInnodbBufferPoolSize was in OSP 8.0 (Mitaka) and is gone now. It looks like it slowly vanished over OSP 9.0 and is missing in OSP 10.0. So to me this looks like a regression, not a new-feature backport.
This is what git grep shows on OSP 8.0:

❯ git grep MysqlInnodbBufferPoolSize
deprecated/overcloud-source.yaml:  MysqlInnodbBufferPoolSize:
deprecated/overcloud-source.yaml:          innodb_buffer_pool_size: {get_param: MysqlInnodbBufferPoolSize}
deprecated/undercloud-source.yaml:  MysqlInnodbBufferPoolSize:
deprecated/undercloud-source.yaml:          innodb_buffer_pool_size: {get_param: MysqlInnodbBufferPoolSize}
os-apply-config/controller.yaml:  MysqlInnodbBufferPoolSize:
os-apply-config/controller.yaml:        mysql_innodb_buffer_pool_size: {get_param: MysqlInnodbBufferPoolSize}
overcloud.yaml:  MysqlInnodbBufferPoolSize:
overcloud.yaml:          MysqlInnodbBufferPoolSize: {get_param: MysqlInnodbBufferPoolSize}
puppet/controller.yaml:  MysqlInnodbBufferPoolSize:
puppet/controller.yaml:        mysql_innodb_buffer_pool_size: {get_param: MysqlInnodbBufferPoolSize}



OSP 10 (Newton) is a Red Hat long-term support release, and we are working on a supported version of our product based on OSP 10, so going to 12/13 now is not an option for us.

The default MariaDB InnoDB buffer pool size is 128MB; with this size you can't really scale the overcloud beyond 20 computes, and even at that number the controllers have a really hard time handling user load. For me this is a really major problem.
Comment 16 Fabio Massimo Di Nitto 2017-08-03 08:27:07 EDT
(In reply to Radosław Śmigielski from comment #15)
> Anthony, let me disagree on the backporting, MysqlInnodbBufferPoolSize was
> in OSP 8.0 (mitaka) and it's gone now.

Based on our information, this is not correct either.

The option was there in OSP8 but was never functional; it has never worked upstream or downstream in OSP8. According to the information provided to us by Yolanda, you had a forked version of the puppet modules to handle it internally.
Upstream removed the feature, and since it was not functional in the first place, the removal did not cause any regression, except in your environment.

We proposed a patch upstream to re-include the feature as mentioned above.

Also, with this default setting we have been able to deploy 3 controllers with over 300 compute nodes without any problem.
So, leaving aside the backport request, it would also be interesting to understand why you are hitting this limit of 20 compute nodes. The RCA might be completely different, with the mysql option only masking the problem.
Comment 17 Michael Bayer 2017-08-03 10:31:46 EDT
There have also been many database misconfigurations and inefficient programming patterns corrected since earlier OSP versions: poor performance of the DB driver under high concurrency, connection pool settings that left many requests waiting too long, haproxy settings that timed out connections too early (leading to disconnects), and transaction contention between galera nodes. OpenStack is not actually a database-intensive application from a MySQL perspective.
Comment 18 Michael Bayer 2017-08-03 12:09:48 EDT
This is proposed as a hiera parameter, innodb_buffer_pool_size, for Pike, Ocata and Newton upstream: https://review.openstack.org/#/q/Iabdcb6f76510becb98cba35c95db550ffce44ff3,n,z
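Once such a hiera parameter is merged, a deployer could override the value from a heat environment file via ControllerExtraConfig. A minimal sketch only: the exact hiera key name below is an assumption for illustration, not taken from the merged patch, so check the actual puppet-tripleo change for the real key.

```yaml
# mysql-tuning.yaml -- illustrative environment file; the hiera key name
# is assumed for this sketch, not confirmed against the merged patch.
parameter_defaults:
  ControllerExtraConfig:
    tripleo::profile::base::database::mysql::innodb_buffer_pool_size: '2G'
```

This would then be passed at deploy time with `-e mysql-tuning.yaml`.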
Comment 19 Radosław Śmigielski 2017-08-04 04:54:42 EDT
> We proposed a patch upstream to re-include the feature as mentioned above.
So I was the first one who gave +1 to that patch :) 
https://review.openstack.org/#/c/490046/

>> Also, with this default setting we have been able to deploy 3 controllers
>> with over 300 compute nodes without any problem.
I bet all your controllers were running on SSDs, not on traditional drives?

With a small InnoDB buffer, MariaDB needs to do many more fsync() calls, and at some point it hits the IOPS limit of the local drive, so having a bigger buffer is really essential.
The default value of 128MB for InnodbBufferPoolSize is very conservative; general MariaDB/MySQL good practice is to give it 80% of memory on a dedicated server. We don't run MariaDB on a dedicated server, but 128MB is still way too low.
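As a rough back-of-the-envelope illustration of that sizing rule (a sketch only: the 80% figure is the usual dedicated-server rule of thumb, while the 25% shared-controller fraction is just an assumed example, not a product recommendation):

```shell
#!/bin/sh
# Compute illustrative buffer-pool targets from total RAM.
# 80% = common dedicated-DB-server rule of thumb;
# 25% = assumed example fraction for a shared OSP controller.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
dedicated_mb=$(( total_kb * 80 / 100 / 1024 ))   # dedicated DB host
shared_mb=$(( total_kb * 25 / 100 / 1024 ))      # co-located on a controller
echo "dedicated-server target:  ${dedicated_mb}M"
echo "shared-controller target: ${shared_mb}M"
```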
Comment 20 Michael Bayer 2017-08-04 09:46:27 EDT
(In reply to Radosław Śmigielski from comment #19)
> > We proposed a patch upstream to re-include the feature as mentioned above.
> So I was the first one who gave +1 to that patch :) 
> https://review.openstack.org/#/c/490046/
> 
> >> Also, with this default setting we have been able to deploy 3 controllers
> >> with over 300 compute nodes without any problem.
> I bet all your controllers were running on SSD? and not on traditional
> drives?
> 
> With small InnoDB buffer MariaDB needs to do much more fsync() and at some
> point it hits limit of IOPS of local drive. So having bigger buffer is
> really essential.

The innodb_buffer_pool_size is only about caching pages from disk files in memory for reads, whereas fsyncs flush newly written data to the filesystem; there is no documented correlation between these two settings. If your problem is excess fsync(), you want to look at innodb_flush_log_at_trx_commit=2 (assuming a Galera cluster is in use), which is also a setting we've recently added to tripleo. This makes the fsync() call occur only once per second rather than once per commit, and can provide dramatic performance improvements immediately, at the cost of a slight degradation in durability, which is ameliorated by the fact that a Galera cluster replicates writesets to all nodes.
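For reference, in a plain MariaDB config file the setting is just the fragment below. This is illustrative only: on a tripleo-managed node galera.cnf is generated by puppet, so the value should be set through tripleo/hiera rather than edited by hand.

```ini
[mysqld]
# fsync the InnoDB redo log once per second instead of on every commit;
# slightly weaker durability, much less fsync() pressure on the disk
innodb_flush_log_at_trx_commit = 2
```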

> The default value 128MB of InnodbBufferPoolSize is very conservative and in
> general MariaDB/MySQL good practice is to give 80% of memory on dedicated
> server. 

That also doesn't apply in this case, because we bundle the galera/mysql instance on a controller that has dozens of other Python- and C-based services running. mysqld will still be the biggest memory user on a controller node where galera is active, but 80% would be way too much.
