Bug 1277598

Summary: innodb_file_per_table should be enabled on undercloud and overcloud
Product: Red Hat OpenStack Reporter: Gaëtan Trellu <gtrellu>
Component: puppet-tripleo    Assignee: RHOS Maint <rhos-maint>
Status: CLOSED ERRATA QA Contact: nlevinki <nlevinki>
Severity: medium Docs Contact:
Priority: medium    
Version: 7.0 (Kilo)    CC: athomas, dciabrin, fdinitto, gcerami, gchamoul, hbrock, jcoufal, jjoyce, jschluet, jslagle, kejones, mbayer, mburns, mcornea, michele, mkrcmari, pmyers, rhel-osp-director-maint, royoung, rtweed, slinaber, tvignaud, vcojot
Target Milestone: rc    Keywords: FutureFeature, Triaged
Target Release: 11.0 (Ocata)   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: puppet-tripleo-6.3.0-8.el7ost Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1440774 Environment:
Last Closed: 2017-05-17 19:24:27 UTC Type: Bug

Description Gaëtan Trellu 2015-11-03 15:58:20 UTC
Description of problem:

If innodb_file_per_table is not enabled on the Galera cluster or on MySQL, the ibdata1 file can grow very large.

This issue affects both the undercloud and the overcloud.

MariaDB [(none)]> show global variables like "%innodb_file_per_table%";
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | OFF   |
+-----------------------+-------+

To solve this issue, we just have to set "innodb_file_per_table = 1" in the Puppet manifest.
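
For reference, the setting can also be checked and toggled at runtime before the Puppet change lands; a rough sketch (the runtime change only affects tables created afterwards and does not persist across restarts, so the my.cnf/Puppet setting remains the real fix):

-- innodb_file_per_table is a dynamic global variable; only tables created
-- (or rebuilt) after this point get their own .ibd file.
SET GLOBAL innodb_file_per_table = ON;
SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';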

Version-Release number of selected component (if applicable):
python-rdomanager-oscplugin-0.0.8-44.el7ost.noarch
puddle images 2015-07-30.1

How reproducible:
Deploy a stack and let it grow.

Steps to Reproduce:
1. Deploy an undercloud or an overcloud
2. Use the stack for a few days
3. Check the size of the /var/lib/mysql/ibdata1 file

Actual results:
The /var/lib/mysql/ibdata1 file grows bigger than 30G!

Expected results:
/var/lib/mysql/ibdata1 stays smaller by using one InnoDB file per table

Comment 5 Gabriele Cerami 2016-02-26 10:25:29 UTC
We may want to carefully consider the disadvantages of enabling innodb_file_per_table, as discussed here:
https://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html

Comment 6 Gabriele Cerami 2016-02-26 11:09:14 UTC
Tentatively proposed as reviews:
https://review.openstack.org/285227 for undercloud
https://review.openstack.org/285224 for overcloud

Comment 7 Damien Ciabrini 2016-02-26 15:55:10 UTC
From a Galera standpoint, I second comment #5: changing the way InnoDB stores tables on disk has several implications; the ones below come to mind:

* When should an existing on-disk database layout be migrated to the new "one file per table" layout? Enabling the option in OSPd should not impact existing deployments, or at least should not migrate the layout automatically on upgrade.

* Does the option have a resource impact on the system? I'm thinking e.g. of "open_files_limit" / "innodb_open_files": does InnoDB keep more file descriptors open concurrently, and can it exceed the current limit? Maybe running Tempest and monitoring the DB could shed some light.

* How do we reclaim disk space in a cluster? SQL statements such as "OPTIMIZE TABLE" need to be run carefully to avoid unexpected behaviour when Galera nodes resynchronize via SST (essentially rsync); see the sketch below.
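
For illustration only (not part of the proposed fix), once innodb_file_per_table is on, the space held by a bloated table is typically reclaimed by rebuilding it; the table name here is just an example, and in a Galera cluster the DDL is replicated to every node (TOI by default), so it should be scheduled during a quiet period:

-- Rebuild one table into its own .ibd file and release unused pages.
-- For InnoDB, OPTIMIZE TABLE is implemented as a recreate + analyze.
OPTIMIZE TABLE keystone.token;
-- Equivalent explicit rebuild:
ALTER TABLE keystone.token ENGINE=InnoDB;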

Comment 8 Angus Thomas 2016-03-02 15:51:50 UTC
Is the real issue here the volume of data and the rate at which the DB is growing? 

If the DB didn't reach 30GB in a matter of days, it would be less of a problem that it is stored in a single file.

I think we need to address the amount of data being stored, and possibly include scripts for data pruning, etc.

That's not likely to get resolved and implemented in time for OSP8, and isn't a potential release blocker.

Comment 9 Mike Burns 2016-04-07 20:54:03 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 11 Gaëtan Trellu 2016-04-18 14:27:55 UTC
Any updates on this?

We had an issue this weekend related to running out of disk space on the undercloud.

Our ibdata1 is 25GB; the culprit is Keystone.

+--------+--------------------+-------+--------+-------+------------+---------+
| tables | table_schema       | rows  | data   | idx   | total_size | idxfrac |
+--------+--------------------+-------+--------+-------+------------+---------+
|     32 | keystone           | 0.82M | 16.73G | 0.38G | 17.10G     |    0.02 |
|     16 | heat               | 0.07M | 5.76G  | 0.02G | 5.78G      |    0.00 |
|     16 | ceilometer         | 0.05M | 0.01G  | 0.01G | 0.02G      |    0.62 |
|    109 | nova               | 0.00M | 0.01G  | 0.00G | 0.01G      |    0.41 |
|    157 | neutron            | 0.00M | 0.00G  | 0.00G | 0.00G      |    0.74 |
|      6 | tuskar             | 0.00M | 0.00G  | 0.00G | 0.00G      |    0.07 |
|     20 | glance             | 0.00M | 0.00G  | 0.00G | 0.00G      |    2.35 |
|      5 | ironic             | 0.00M | 0.00G  | 0.00G | 0.00G      |    0.29 |
|     24 | mysql              | 0.00M | 0.00G  | 0.00G | 0.00G      |    0.18 |
|     62 | information_schema | NULL  | 0.00G  | 0.00G | 0.00G      |    NULL |
+--------+--------------------+-------+--------+-------+------------+---------+
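
For reference, per-schema sizes like those in the table above can be gathered from information_schema with a query along these lines (a rough sketch; the column aliases are illustrative):

-- Approximate data/index size per schema, largest first.
SELECT table_schema,
       COUNT(*)                                          AS tables,
       ROUND(SUM(table_rows) / 1000000, 2)               AS rows_m,
       ROUND(SUM(data_length)  / 1024 / 1024 / 1024, 2)  AS data_gb,
       ROUND(SUM(index_length) / 1024 / 1024 / 1024, 2)  AS idx_gb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY SUM(data_length) + SUM(index_length) DESC;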

I think we are seeing fragmentation due to token generation.

Thanks

Comment 12 Gaëtan Trellu 2016-04-18 15:01:46 UTC
In MySQL 5.6, innodb_file_per_table is enabled by default:
https://dev.mysql.com/doc/refman/5.6/en/tablespace-enabling.html

Comment 13 Michele Baldessari 2016-05-04 14:14:42 UTC
So just a short update here. The undercloud change went in already and poses
no issues.

The overcloud story is a bit different, because of upgrades. This will require
careful testing and quite a bit of work, because once the switch is flipped
the rsync across the three Galera nodes will sync all files (old and new),
which will confuse Galera and the resource agent (RA).

Damien was taking a look at this specific problem for the overcloud.

Comment 14 Michael Bayer 2016-05-05 01:53:17 UTC
(In reply to Gaëtan Trellu from comment #11)
> Any updates about this ?
> 
> We had an issue this weekend related to an out of space on the undercloud.
> 
> Our ibdata1 size is 25Gb, the guilty guy is Keystone.
> 
> [per-schema size table snipped; see comment #11]
> 
> I think we are seeing fragmentation due to token generation.
> 
> Thanks

Just as a note, you should be running the keystone token cleanup script to keep the size of this table small. Also, the direction Keystone is going in seems to be the use of "Fernet" tokens, which are encrypted, non-persisted tokens that solve this whole issue.
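
The cleanup is normally driven by "keystone-manage token_flush" (e.g. from cron); at the SQL level it boils down to something like the sketch below, assuming the default SQL token backend and its "token" table:

-- Roughly what the periodic token flush does: remove expired rows so the
-- keystone token table (and hence ibdata1) stops growing unbounded.
DELETE FROM keystone.token WHERE expires <= UTC_TIMESTAMP();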

Comment 15 Michael Bayer 2016-05-05 01:54:43 UTC
(In reply to Michele Baldessari from comment #13)
> So just a short update here. The undercloud change went in already and poses
> no issues.
> 
> The overcloud story is a bit different, because of upgrades. This will
> require
> careful testing and quite a bit of work, because once the switch is flipped
> the rsync across the three galera nodes will sync all files (old and new) and
> this will confuse galera and the RA.
> 
> Damien was taking a look at this specific problem for the overcloud

Are we leaving the tables in ibdata1 in place, or are we trying to migrate them over to individual per-table files? That would require a logical copy, drop, and re-import of all the data so that the tables are written again from scratch.
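
For what it's worth, once innodb_file_per_table is enabled, an individual table can also be moved out of ibdata1 by rebuilding it in place rather than via a full dump and re-import; a minimal sketch (the table name is only an example), with the caveat that ibdata1 itself never shrinks, it only stops growing:

-- With innodb_file_per_table=ON, a rebuild writes the table into its own
-- .ibd file; space freed inside ibdata1 is reused but not returned to the OS.
ALTER TABLE heat.event ENGINE=InnoDB;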

Comment 22 Red Hat Bugzilla Rules Engine 2017-01-31 18:05:11 UTC
This bugzilla has been removed from the release and needs to be reviewed for targeting another release.

Comment 23 Michael Bayer 2017-01-31 18:18:25 UTC
https://review.openstack.org/#/c/285227/ for undercloud was merged, so the undercloud is done upstream. For the overcloud I've re-proposed it at https://review.openstack.org/#/c/427310/.

Comment 38 errata-xmlrpc 2017-05-17 19:24:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1245