Bug 1688374
Summary: Test case failure: /CoreOS/mariadb55/Sanity/benchmark - missing rh-mariadb103-mariadb-bench

Product: Red Hat Software Collections
Component: mariadb
Version: rh-mariadb103
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Reporter: Karel Volný <kvolny>
Assignee: Michal Schorm <mschorm>
QA Contact: Lukáš Zachar <lzachar>
CC: databases-maint, hhorak, igkioka, jorton, ljavorsk, lzachar, mmuzila, mschorm
Keywords: Triaged
Target Release: 3.6
Hardware: Unspecified
OS: Linux
Doc Type: No Doc Update
Type: Bug
Last Closed: 2021-03-15 07:34:18 UTC
Description
Karel Volný
2019-03-13 16:09:14 UTC
Reverted in Git.

--

Thank you for pointing it out in the upstream RPMs. I tried to find an article or changelog entry about the removal of the *-bench subpackage, but haven't found any :/

--

What should we do with this bug? I'd suggest "CLOSED WONTFIX", but I'd probably mention it in the 10.3 docs I'm working on right now.

--

Couldn't we just build it but not ship it, as we do with other subpackages?

--

(In reply to Karel Volný from comment #5)
> couldn't we just build it but not ship it, as we do with other subpackages?

Like which subpackages? We don't build what we don't ship (e.g. a number of storage engines). The only exception is the client library, which we need to build because other parts of the DB (mostly the binaries) need it to build. Upstream does not offer a way to skip building it anyway.

--

(In reply to Michal Schorm from comment #6)
> (In reply to Karel Volný from comment #5)
> > couldn't we just build it but not ship it, as we do with other subpackages?
>
> Like which subpackages?

What Karel probably meant was filtering it from the compose, the way rh-mariadb103-build is filtered, for example. Historically, that approach caused problems: filtering requires changes in distil, and it led to enough issues that we changed our minds and no longer filter packages proactively like that.

Anyway, Karel, what would you like to achieve by running the bench during tests? My understanding was that the test case verifies that the bench tool works, rather than doing any real benchmarking. For real benchmarking, we would need to run different builds on the same machine several times to get meaningful output and a comparison. Without that, running the bench in Beaker and then not shipping the package does not seem very useful to me.

--

Michal, I see this bug is in POST, but comment #4 mentions the change was reverted -- shouldn't the status be ASSIGNED then?
(In reply to Honza Horak from comment #7)
> Anyway, Karel, what would you like to achieve by running the bench during
> tests?

In the past (...), we thought that running benchmarks was a good way to also catch possible performance regressions and issues. I remember a case where a slowdown in MySQL revealed a problem with filesystem I/O in the kernel.

In our original errata workflow, a run with the released version and then another with the new version was done, so it was very easy to compare test times and see whether there was such a problem (there were even some attempts to code that logic into the test).

Unfortunately, this is no longer true:

a) most of the Beaker pool is virtualized; you need real hardware that runs only this one task for the comparison to make sense

b) the 'old' and 'new' runs are not scheduled within the same job, on the same machine (the 'old' run is often not scheduled at all)

So, scheduling such comparative runs needs manual work, and it takes more time to get through the Beaker queue (plus the benchmark itself takes some time to complete), which shifts it from an ordinary test into the 'nice to have' category. And since there is always a lot of other work to do, this 'nice to have' is what we usually don't have in the end :-(

Still, I don't want to throw the possibility away completely. Besides, the benchmark itself covers some part of the sanity testing of database operations, so it is useful even without the performance comparison.

--

Honza or Michal, was there any closure for this bug? I can see Karel's comment #8 saying it's nice to have but not really effective, but there is no further plan here. Should we close this with WONTFIX, as Michal suggested in comment #4? Thanks for any info.

--

I'm fine with closing as WONTFIX, but let's see whether Lukas (a new QE contact) is fine with that as well.

--

I'm fine with closing as WONTFIX for this collection.
However, for a possible new collection in the future (if there is any): it seems I can easily get the content from the src.rpm, but having it installed via RPM is easier. Could sql-bench be included in *-mariadb-test?

--

(In reply to Lukáš Zachar from comment #13)
> I'm find with closing as WONTFIX for this collection.
>
> However for possible new collection in the future (if there will be any):
> Seems I can easily get the content from src.rpm, however to have it
> installed via rpm is easier.
> Could sql-bench be included in *-mariadb-test?

That would effectively mean maintaining code that is no longer maintained upstream. What is more commonly used these days is sysbench, which can be used not only for MariaDB but also for PostgreSQL, MySQL, and other non-DB benchmark testing. The sysbench package is built in EPEL, so if it is possible to pull it in from there, we can easily run it instead of sql-bench.

Some more info about it is on the MariaDB pages and elsewhere on the Internet:
https://mariadb.com/kb/en/sysbench-benchmark-setup/

A simple example of how it is used:

#> yum -y install mariadb-server sysbench
#> service mariadb start
#> echo 'CREATE DATABASE sysbench;' | mysql

#> sysbench select_random_points.lua --table-size=2000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root prepare
#> sysbench select_random_points.lua --table-size=2000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root run

#> sysbench /usr/share/sysbench/oltp_read_write.lua --table-size=20000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root prepare
#> sysbench /usr/share/sysbench/oltp_read_write.lua --table-size=20000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root run

But what is not clear to me is how to interpret the results.
What I would do is compare the results of two different builds on a single machine -- but I doubt that is possible during the test. Just seeing absolute numbers might still exercise the database somehow, but without a comparison it does not tell much, IMHO. Any ideas how to approach this?

--

(In reply to Honza Horak from comment #14)
> Any ideas how to approach this?

Maybe there are already some performance tests for other packages. That would be worth investigating.

--

(In reply to Honza Horak from comment #14)
> (In reply to Lukáš Zachar from comment #13)
> > I'm find with closing as WONTFIX for this collection.
> >
> > However for possible new collection in the future (if there will be any):
> > Seems I can easily get the content from src.rpm, however to have it
> > installed via rpm is easier.
> > Could sql-bench be included in *-mariadb-test?
>
> That would effectively mean to maintain the code that is not maintained by
> upstream any more. What is more used these days is sysbench, that can be
> used not only for MariaDB, but also for PostgreSQL, MySQL and other non-DB
> benchmark testing. sysbench package is built in EPEL, so if it was possible
> to pull in from there, we can easily run it instead of sql-bench.

Okay, that makes sense. Are there any interesting scenarios in sql-bench, though?
> [sysbench example from comment #14 snipped]
>
> But what is not clear to me is how to interpret the results. What I would do
> is to compare results of two different builds on a single machine -- but I
> doubt that is possible during the test. Just seeing absolute numbers might
> still test the database somehow, but without a comparison, it does not tell
> much IMHO

For Python testing we run the benchmark on the same machine, with the package update in between. What we don't have yet is an automated compare, which would fail the test if the new build is (significantly) slower.

--

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
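The automated compare mentioned above does not exist yet; a minimal sketch of what it could look like, assuming sysbench 1.0's summary line format ("transactions: N (X per sec.)") and an arbitrary 10% slowdown tolerance (both values are illustrative assumptions, not part of any existing test):

```python
import re

def parse_tps(sysbench_output: str) -> float:
    """Extract transactions-per-second from a sysbench run summary.

    Assumes the sysbench 1.0 summary format:
        transactions:  60000 (1000.00 per sec.)
    """
    m = re.search(r"transactions:\s+\d+\s+\(([\d.]+) per sec\.\)", sysbench_output)
    if not m:
        raise ValueError("no transactions-per-second figure found in output")
    return float(m.group(1))

def significantly_slower(old_tps: float, new_tps: float,
                         tolerance: float = 0.10) -> bool:
    """Return True if the new build is more than `tolerance` slower.

    The 10% default is an arbitrary example threshold; a real test
    would tune it to the noise level of the benchmark machine.
    """
    return new_tps < old_tps * (1.0 - tolerance)

# Simulated summaries from an 'old' run, a package update, and a 'new' run
# on the same machine (the workflow described in the comment above).
old_out = "transactions:                        60000 (1000.00 per sec.)"
new_out = "transactions:                        51000 (850.00 per sec.)"

print(significantly_slower(parse_tps(old_out), parse_tps(new_out)))  # True (850 < 900)
```

A Beaker task could run the benchmark against the released package, `yum update` to the candidate build, run it again, and fail if `significantly_slower` returns True.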