Created attachment 1713428 [details]
Memory usage on database server

Description of problem:
There seems to be a memory leak in MariaDB 10.3.13. All our database servers see memory usage increase gradually until the kernel kills the process via OOM. The attachment shows the memory usage graph for the last 2 weeks. The growth is slow but present.
- for the dbvad file: it is our most critical production database

Please note that we also have a database server with MariaDB running but with the application not in production, and it shows the problem too, even though the database is not being queried.

We suspect the upgrade step between MariaDB 5.5 and 10.3.13 SCL. One of our databases was created directly in this version and the problem does not seem to be present (we still have to confirm this).

Version-Release number of selected component (if applicable):
rh-mariadb103-*

How reproducible:
- Create a MariaDB 5.5 database
- Upgrade the database from 5.5 to 10.3.13 SCL
- Wait and watch memory usage increase slowly

Actual results:
Server memory usage increases gradually.

Expected results:
Server memory usage stays stable over time when no especially large workload is running.

Additional info:
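To track the slow growth described above, here is a minimal monitoring sketch in Python. It only reads VmRSS from /proc for the mysqld process; the PID lookup via pgrep, the output file name, and the 60-second interval are my own assumptions, not part of the original reproduction steps.

```python
#!/usr/bin/env python3
"""Log the resident set size (VmRSS) of a mysqld process over time.

Sketch for observing the slow memory growth described in this report;
the pgrep-based PID lookup, the log file name, and the sampling
interval are assumptions, not taken from the report.
"""
import subprocess
import time

def mysqld_pid() -> int:
    # pgrep prints one PID per line; take the first mysqld found.
    out = subprocess.check_output(["pgrep", "-x", "mysqld"], text=True)
    return int(out.split()[0])

def rss_kib(pid: int) -> int:
    # The VmRSS line in /proc/<pid>/status looks like: "VmRSS:  123456 kB"
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

if __name__ == "__main__":
    pid = mysqld_pid()
    with open("mysqld_rss.log", "a") as log:
        while True:
            log.write(f"{int(time.time())} {rss_kib(pid)}\n")
            log.flush()
            time.sleep(60)  # sample once per minute
```

Plotting the resulting log over a couple of weeks should show whether the RSS keeps climbing even when the database is idle.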
Hello,

I expect to have a hard time reproducing this issue.

> We suspect the upgrade step between MariaDB 5.5 and 10.3.13 SCL. One of our databases was created directly in this version and the problem does not seem to be present (we still have to confirm this).

I'm very interested in hearing the results. If you can confirm the issue is NOT present when the DB was created directly in 10.3 instead of being upgraded, I'd like to know your upgrade path. (Have you upgraded directly from 5.5 to 10.3?)

I expect there have not been any suspicious warnings or errors in the systemd journal or the DB log prior to reporting the issue. If there were, it might be a good idea to start by solving them.

---

If you're a Red Hat customer, please open a support case via https://access.redhat.com/support/cases/#/case/new/open-case?caseCreate=true (an active subscription is needed).
Hi,

After some tests we eliminated the suspicion about the upgrade step.

We exchanged with MariaDB support; they told us that there are some known memory leak problems in 10.3 that have not yet been fixed, but never to the point of the process being killed by OOM. (Examples given by support:
https://jira.mariadb.org/browse/MDEV-19287
https://jira.mariadb.org/browse/MDEV-20455
https://jira.mariadb.org/browse/MDEV-21447
https://jira.mariadb.org/browse/MDEV-20698)

I can't explain why, but the MariaDB process (mysqld) uses more memory than the maximum possible according to MariaDB support's calculation. We may have seen a kind of stabilization, but largely above the maximum value obtained by calculating the worst-case situation (and we are very far away from that situation).

For example, we have:
- a server with 10 GB of RAM
- MariaDB buffer pools (InnoDB, MyISAM): 4.3 GB
- max memory per thread: 18.9 MB
- max threads possible: 250
- max threads in cache: 10

With MariaDB support's calculation we could reach at most 9.3 GB, and yet even with only 107 threads (9 in cache, the others connected) we are at about 96% of the server's RAM.

I know that we should size RAM based on the theoretical maximum memory, but in the current situation we should not reach that value.

Thanks,
Dorian
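To make the arithmetic above concrete, here is a small sketch in Python. The formula is the commonly cited worst-case estimate (global buffers + max threads * per-thread buffers), not necessarily the exact one MariaDB support applied; their 9.3 GB figure presumably includes additional global buffers not listed in the comment.

```python
# Rough worst-case memory estimate for mysqld using the figures above.
# This is the commonly cited approximation
#     global buffers + max_connections * per-thread buffers
# shown purely as an illustration; MariaDB support's own figure was
# 9.3 GB, presumably including further global buffers not listed here.

MIB_PER_GIB = 1024

global_buffers_mib = 4.3 * MIB_PER_GIB   # InnoDB + MyISAM buffer pools
per_thread_mib     = 18.9                # max memory per thread
max_threads        = 250                 # max threads possible

worst_case_mib = global_buffers_mib + max_threads * per_thread_mib
print(f"worst case: {worst_case_mib / MIB_PER_GIB:.1f} GiB")             # ~8.9 GiB

# At the observed 107 threads the same estimate is far below the ~96%
# of 10 GB that the server was actually using:
observed_mib = global_buffers_mib + 107 * per_thread_mib
print(f"estimate at 107 threads: {observed_mib / MIB_PER_GIB:.1f} GiB")  # ~6.3 GiB
```

Either way, the observed ~9.6 GB at 107 connections is well above what the per-thread accounting predicts, which is what points to a leak rather than a sizing problem.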
Hi,

Some news about our problem:
- We still have the problem. I optimized everything possible and gave the server more memory than it needs, in order to buy time before a database restart is required.
- I suspect the origin of the memory leak is in thread connections (the more active a database is, the greater the leak; on low-activity databases there is no problem). But nothing is certain.

To see if you can reproduce it:
- Install the MariaDB SCL 10.3.13 version (packages: rh-mariadb103-mariadb-server rh-mariadb103-mariadb-server-utils rh-mariadb103-mariadb-backup rh-mariadb103-mariadb-syspaths)
- Create some databases
- Send at least 200 queries/sec (SELECT & UPDATE) with at least 150 concurrent threads (see the sketch after this comment)
- Wait (at least 1 or 2 weeks; the leak is very slow)

MariaDB support didn't help; they just told me that my database memory configuration is fine and said: please upgrade to the latest 10.3 version (10.3.27) and check whether the leak is still there.

Thanks for your help,
Dorian
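A minimal load-generation sketch for the reproduction steps above, in Python. The database and table names (repro.t with columns id and v), the credentials, and the choice of mysql-connector-python are all assumptions for illustration, not taken from the report.

```python
#!/usr/bin/env python3
"""Drive a mixed SELECT/UPDATE load against a MariaDB instance.

Sketch of the reproduction load described above: ~150 concurrent client
threads issuing a 50/50 mix of SELECTs and UPDATEs. The schema (repro.t
with columns id, v), credentials, and use of mysql-connector-python are
assumptions.
"""
import random
import threading
import time

import mysql.connector  # pip install mysql-connector-python

THREADS = 150
QPS_PER_THREAD = 2   # 150 threads * ~2 q/s comfortably exceeds 200 q/s

def worker() -> None:
    conn = mysql.connector.connect(
        host="127.0.0.1", user="repro", password="repro", database="repro"
    )
    cur = conn.cursor()
    while True:
        k = random.randint(1, 100000)
        if random.random() < 0.5:
            cur.execute("SELECT v FROM t WHERE id = %s", (k,))
            cur.fetchall()
        else:
            cur.execute("UPDATE t SET v = v + 1 WHERE id = %s", (k,))
            conn.commit()
        time.sleep(1.0 / QPS_PER_THREAD)

if __name__ == "__main__":
    for _ in range(THREADS):
        threading.Thread(target=worker, daemon=True).start()
    while True:
        time.sleep(60)  # keep the load running until interrupted
```

Running this for a week or two while logging mysqld's RSS should show whether memory keeps growing under sustained connection activity.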
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.