Bug 1277598
| Summary: | innodb_file_per_table should be enabled on undercloud and overcloud | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Gaëtan Trellu <gtrellu> |
| Component: | puppet-tripleo | Assignee: | RHOS Maint <rhos-maint> |
| Status: | CLOSED ERRATA | QA Contact: | nlevinki <nlevinki> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.0 (Kilo) | CC: | athomas, dciabrin, fdinitto, gcerami, gchamoul, hbrock, jcoufal, jjoyce, jschluet, jslagle, kejones, mbayer, mburns, mcornea, michele, mkrcmari, pmyers, rhel-osp-director-maint, royoung, rtweed, slinaber, tvignaud, vcojot |
| Target Milestone: | rc | Keywords: | FutureFeature, Triaged |
| Target Release: | 11.0 (Ocata) | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Fixed In Version: | puppet-tripleo-6.3.0-8.el7ost | Doc Type: | Bug Fix |
| Clone Of: | | | |
| : | 1440774 (view as bug list) | Environment: | |
| Last Closed: | 2017-05-17 19:24:27 UTC | Type: | Bug |
Description

Gaëtan Trellu 2015-11-03 15:58:20 UTC
We may want to consider carefully the disadvantages of enabling file_per_table, as discussed here: https://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html

---

Tentatively proposed as reviews:

* https://review.openstack.org/285227 for undercloud
* https://review.openstack.org/285224 for overcloud

---

From a galera standpoint, I second #c5. Changing the way InnoDB stores tables on disk has several implications; the ones below come to mind:

* When do we migrate an existing database layout on disk to the new "one file per table" layout? Enabling the option in OSPd should not impact existing deployments, or at least should not migrate the layout automatically on upgrade.
* Does the option have an impact resource-wise on the system? I'm thinking e.g. of the "max_opened_files" config: does InnoDB keep more file descriptors open concurrently, and can it exceed the current limit? Maybe running tempest and monitoring the DB could shed some light.
* How do we reclaim space on disk in a cluster? SQL statements such as "OPTIMIZE TABLE" have to be run in a precise way, to avoid unexpected behaviour when galera nodes resynchronize via SST (essentially rsync).

---

Is the real issue here the volume of data and the rate at which the DB is growing? If the DB didn't reach 30GB in a matter of days, it would be less of a problem that it is stored in a single file. I think we need to address the amount of data being stored, and possibly include scripts for data pruning etc. That's not likely to get resolved and implemented in time for OSP8, and isn't a potential release blocker.

---

This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.

---

Any updates about this?

We had an issue this weekend related to running out of space on the undercloud.

Our ibdata1 size is 25GB; the guilty guy is Keystone.
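As an aside, a per-schema size listing like the one that follows can be produced with a query against `information_schema`. This is a minimal sketch, assuming a MySQL/MariaDB server with the standard `information_schema.tables` view; the column arithmetic is approximate:

```sql
-- Rough per-schema size report (data, index, and total in GB).
SELECT table_schema,
       COUNT(*)                                  AS tables,
       ROUND(SUM(table_rows) / 1e6, 2)           AS rows_m,
       ROUND(SUM(data_length)  / POW(1024, 3), 2) AS data_gb,
       ROUND(SUM(index_length) / POW(1024, 3), 2) AS idx_gb,
       ROUND(SUM(data_length + index_length) / POW(1024, 3), 2) AS total_gb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY total_gb DESC;

-- Check whether file-per-table is currently enabled on this server:
SHOW VARIABLES LIKE 'innodb_file_per_table';
```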
| tables | table_schema | rows | data | idx | total_size | idxfrac |
|---|---|---|---|---|---|---|
| 32 | keystone | 0.82M | 16.73G | 0.38G | 17.10G | 0.02 |
| 16 | heat | 0.07M | 5.76G | 0.02G | 5.78G | 0.00 |
| 16 | ceilometer | 0.05M | 0.01G | 0.01G | 0.02G | 0.62 |
| 109 | nova | 0.00M | 0.01G | 0.00G | 0.01G | 0.41 |
| 157 | neutron | 0.00M | 0.00G | 0.00G | 0.00G | 0.74 |
| 6 | tuskar | 0.00M | 0.00G | 0.00G | 0.00G | 0.07 |
| 20 | glance | 0.00M | 0.00G | 0.00G | 0.00G | 2.35 |
| 5 | ironic | 0.00M | 0.00G | 0.00G | 0.00G | 0.29 |
| 24 | mysql | 0.00M | 0.00G | 0.00G | 0.00G | 0.18 |
| 62 | information_schema | NULL | 0.00G | 0.00G | 0.00G | NULL |

I think we are having fragmentation due to the token generation.

Thanks

---

In MariaDB 5.6, innodb_file_per_table is enabled by default.

https://dev.mysql.com/doc/refman/5.6/en/tablespace-enabling.html

---

So just a short update here. The undercloud change went in already and poses no issues.

The overcloud story is a bit different, because of upgrades. This will require careful testing and quite a bit of work, because once the switch is flipped, the rsync across the three galera nodes will sync all files (old and new), and this will confuse galera and the RA.

Damien was taking a look at this specific problem for the overcloud.

---

(In reply to Gaëtan Trellu from comment #11)
> Any updates about this?
>
> We had an issue this weekend related to running out of space on the undercloud.
>
> Our ibdata1 size is 25GB; the guilty guy is Keystone.
>
> [per-schema size table snipped]
>
> I think we are having fragmentation due to the token generation.

Just as a note: you should be running the keystone token cleanup script to keep the size of this table small. Also, the direction Keystone is going in seems to be the use of "Fernet" tokens, which are encrypted, non-persisted tokens that solve this whole issue.

---

(In reply to Michele Baldessari from comment #13)
> So just a short update here. The undercloud change went in already and poses no issues.
>
> The overcloud story is a bit different, because of upgrades. This will require careful testing and quite a bit of work, because once the switch is flipped, the rsync across the three galera nodes will sync all files (old and new), and this will confuse galera and the RA.
>
> Damien was taking a look at this specific problem for the overcloud.

Are we leaving the ibdata1 tables in place, or are we trying to migrate them over to individual per-table files? The latter would require a logical copy, drop, and re-import of all the data so that the tables are written again from scratch.
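The token cleanup mentioned above is normally driven by `keystone-manage token_flush` (typically run from cron). At the SQL level it roughly amounts to deleting expired rows, and with innodb_file_per_table enabled the freed space can then be reclaimed per table. A hedged sketch; the `keystone.token` table and `expires` column follow the Kilo-era schema, and the exact batching done by token_flush differs:

```sql
-- Roughly what `keystone-manage token_flush` does: drop expired tokens.
DELETE FROM keystone.token WHERE expires < UTC_TIMESTAMP();

-- With innodb_file_per_table enabled, rebuild the table to return the
-- freed space to the filesystem. As noted earlier in this thread, on a
-- Galera cluster this must be coordinated carefully across nodes.
OPTIMIZE TABLE keystone.token;
```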
This bugzilla has been removed from the release and needs to be reviewed for targeting another release.

---

https://review.openstack.org/#/c/285227/ for undercloud was merged, so undercloud is done upstream.

For overcloud I've re-proposed at https://review.openstack.org/#/c/427310/.

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1245
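For reference, the change the reviews apply boils down to a one-line server configuration setting. This is an illustrative sketch only; the exact file path and how puppet-tripleo manages it are assumptions:

```ini
# /etc/my.cnf.d/galera.cnf (illustrative path)
[mysqld]
innodb_file_per_table = ON
```

Note that flipping this switch only affects newly created tables; tables already living in the shared ibdata1 tablespace stay there until they are rebuilt, which is exactly the migration concern raised for the overcloud above.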