Description of problem:
When trying to deploy 3 controllers and 1 compute on a bare-metal setup, the deployment failed: http://pastebin.test.redhat.com/362092

Got the error:
"OperationalError: (pymysql.err.OperationalError) (1040, u'Too many connections')" - http://pastebin.test.redhat.com/362093

mariadb.log shows:
160404 15:14:45 [Note] /usr/libexec/mysqld (mysqld 5.5.47-MariaDB) starting as process 108818 ...
160404 15:14:45 [Warning] Changed limits: max_open_files: 1024  max_connections: 214  table_cache: 400

Version-Release number of selected component (if applicable):
openstack-puppet-modules-7.0.17-1.el7ost.noarch
openstack-tripleo-puppet-elements-0.0.5-1.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy OSP-d 8 on bare metal with 3 controllers and 1 compute

Actual results:
Deployment failed with: OperationalError: (pymysql.err.OperationalError) (1040, u'Too many connections')

Expected results:
Deployment passes

Additional info:
The effective max_connections value was 214. This might be related to the change introduced in this puddle: https://github.com/openstack/instack-undercloud/commit/baf3a0b20b7df1545d0697672dd6293fad6e2991
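The warning line in mariadb.log explains where 214 comes from: with an open-files limit of 1024, mysqld (MariaDB 5.5) scales max_connections down so that roughly 10 reserved descriptors, one per connection, and two per table-cache entry all fit under the limit. A back-of-the-envelope check (the exact heuristic is version-dependent; treat this as a sketch, not the server's literal code path):

```shell
# MariaDB 5.5 roughly requires: max_open_files >= 10 + max_connections + 2*table_cache
# Solving for max_connections with the values from mariadb.log:
max_open_files=1024
table_cache=400
max_connections=$(( max_open_files - 10 - 2 * table_cache ))
echo "$max_connections"
```

This reproduces the 214 seen in the log, which suggests the configured max_connections=4096 is being capped by the process's open-files limit (a ulimit or service-level limit of 1024), not by anything in my.cnf.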
This error is failing our CI. I see the same value of 214 connections:

[stack@host15 ~]$ mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections'"
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 214   |
+-----------------+-------+

and yet the /etc/my.cnf* files had the correct value of 4096 set.
Created attachment 1143368 [details] mariadb.log
Can I get access to the environment, or all undercloud logs somehow (at least the undercloud install log)?

You could also try checking the connection settings. I know you said it was already set, but I'd like to see the config files:

grep -rin connections /etc/my.cnf*

or attach the contents of /etc/my.cnf.d/. Also, the MariaDB versions:

rpm -qa | grep maria
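If the cap does turn out to come from the service's open-files limit rather than the config files, one workaround sketch is a systemd drop-in for the mariadb unit, e.g. in /etc/systemd/system/mariadb.service.d/limits.conf (the path, unit name, and value here are assumptions for illustration, not necessarily the fix that was shipped):

```
[Service]
LimitNOFILE=16384
```

followed by `systemctl daemon-reload` and a restart of the mariadb service, after which mysqld should no longer scale max_connections down below the configured 4096.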
Verified.

Environment:
instack-undercloud-2.2.7-4.el7ost.noarch

Successfully deployed 8.0 on bare metal: 3 controllers, 2 computes, 2 Ceph nodes.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0637.html
*** Bug 1330803 has been marked as a duplicate of this bug. ***