Bug 1688374 - Test case failure: /CoreOS/mariadb55/Sanity/benchmark - missing rh-mariadb103-mariadb-bench
Summary: Test case failure: /CoreOS/mariadb55/Sanity/benchmark - missing rh-mariadb103...
Keywords:
Status: NEW
Alias: None
Product: Red Hat Software Collections
Classification: Red Hat
Component: mariadb
Version: rh-mariadb103
Hardware: Unspecified
OS: Linux
Target Milestone: ---
: 3.6
Assignee: Michal Schorm
QA Contact: Lukáš Zachar
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-03-13 16:09 UTC by Karel Volný
Modified: 2021-02-15 02:53 UTC
8 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:



Description Karel Volný 2019-03-13 16:09:14 UTC
Filed from caserun https://tcms.engineering.redhat.com/run/354598/#caserun_20426829

Version-Release number of selected component (if applicable):
rhel-7

Actual results: 
package rh-mariadb103-mariadb-bench is missing

so far, we had this subpackage:

# yum search mariadb-bench
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
================================================================================ N/S matched: mariadb-bench =================================================================================
mariadb-bench.x86_64 : MariaDB benchmark scripts and data
mariadb55-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
rh-mariadb100-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
rh-mariadb101-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
rh-mariadb102-mariadb-bench.x86_64 : MariaDB benchmark scripts and data


and I see no mention in the update bug (#1582609) that it should be dropped, nor a single mention in the RPM changelog for rh-mariadb103-mariadb-10.3.12-2.el7 that it is no longer built ...

apparently, in this commit:

http://pkgs.devel.redhat.com/cgit/rpms/mariadb/commit/mariadb.spec?h=rhscl-3.3-rh-mariadb103-rhel-7&id=e7b8344599fff3a570a578e50a23fc4a9280b68c

building the subpackage got limited to Fedora ... but there's no mention why that happened (or even that it happened!)

Comment 4 Michal Schorm 2019-03-20 13:25:14 UTC
Reverted in Git.

--

Thank you for pointing this out in the upstream RPMs.
I tried to find an article or changelog entry about removing the *-bench subpackage, but haven't found any :/

--

What should we do with this bug?
I'd suggest "CLOSED WONTFIX", but I'd probably mention it in the 10.3 Docs I'm working on right now.

Comment 5 Karel Volný 2019-03-21 15:17:46 UTC
couldn't we just build it but not ship it, as we do with other subpackages?

Comment 6 Michal Schorm 2019-03-29 13:03:16 UTC
(In reply to Karel Volný from comment #5)
> couldn't we just build it but not ship it, as we do with other subpackages?

Like with which subpackages?
We don't build what we don't ship (e.g. a number of storage engines).

The only exception is the client library, which we need to build because it is needed for other parts of the DB to build (mostly the binaries). Upstream does not provide a way to skip building it anyway.

Comment 7 Honza Horak 2019-04-16 08:02:49 UTC
(In reply to Michal Schorm from comment #6)
> (In reply to Karel Volný from comment #5)
> > couldn't we just build it but not ship it, as we do with other subpackages?
> 
> Like with which subpackages?

What Karel mentioned was probably filtering it from the compose, like rh-mariadb103-build is filtered, for example. Historically, that approach caused problems: filtering requires changes in distil, and it led to enough issues that we changed our minds and no longer filter packages proactively like that.

Anyway, Karel, what would you like to achieve by running the bench during tests?

My understanding was that the testcase verifies that the bench tool works, rather than doing any real benchmarking. For real benchmarking, we would need to run different builds on the same machine several times to get meaningful output and a comparison. Without that, running the bench in Beaker while not shipping the package later does not seem very useful to me.

Michal, I see this bug is in POST, but comment #4 mentions the change was reverted -- shouldn't the status be ASSIGNED then?

Comment 8 Karel Volný 2019-04-16 08:45:31 UTC
(In reply to Honza Horak from comment #7)
> Anyway, Karel, what would you like to achieve by running the bench during
> tests?

in the past (...), we thought that running benchmarks was a good idea, also to catch possible performance regressions and issues

I remember a case where a slowdown in MySQL revealed a problem with filesystem IO in kernel

in our original errata workflow, a run with the released version and then another with the new version was done, so it was very easy to compare test times and see whether there was such a problem (there were even some attempts to code that logic into the test)

unfortunately, this is no longer true:

a) most of the Beaker pool is virtualized; you need real hardware that runs only this one task for the comparison to make sense

b) 'old' and 'new' runs are not scheduled within the same job, on the same machine ('old' is often not scheduled at all)

so, scheduling such comparative runs needs manual work, and it takes more time to get through the Beaker queue (plus the benchmark itself takes some time to complete), which shifts it from an ordinary test into the 'nice to have' category; and since there are always plenty of other things to work on, this 'nice to have' is what we usually don't have in the end :-(

... yet I don't want to throw the possibility away completely

plus the benchmark itself covers some part of sanity testing of database operations, so it is useful even without the performance comparison

Comment 11 Lukas Javorsky 2020-06-01 10:44:05 UTC
Honza or Michal, was there any closure for this bug?

I can see Karel's comment #8 saying it's nice to have but not really effective, but there is no further plan for it.

Should we close this with WONTFIX as Michal suggested in comment #4?

Thanks for any info.

Comment 12 Honza Horak 2020-06-02 21:12:35 UTC
I'm fine with closing as WONTFIX, but let's see whether Lukas (a new QE contact) is fine with that as well.

Comment 13 Lukáš Zachar 2020-06-03 05:50:40 UTC
I'm fine with closing as WONTFIX for this collection.

However for possible new collection in the future (if there will be any):
It seems I can easily get the content from the src.rpm; however, having it installed via RPM is easier.
Could sql-bench be included in *-mariadb-test?

Comment 14 Honza Horak 2020-06-04 14:26:29 UTC
(In reply to Lukáš Zachar from comment #13)
> I'm fine with closing as WONTFIX for this collection.
> 
> However for possible new collection in the future (if there will be any):
> Seems I can easily get the content from src.rpm, however to have it
> installed via rpm is easier.
> Could sql-bench be included in *-mariadb-test?

That would effectively mean maintaining code that is no longer maintained by upstream. What is more commonly used these days is sysbench, which can be used not only for MariaDB but also for PostgreSQL, MySQL, and even non-database benchmark testing. The sysbench package is built in EPEL, so if it is possible to pull it in from there, we can easily run it instead of sql-bench.

Some more info about it is on MariaDB pages and elsewhere on the Internet:
https://mariadb.com/kb/en/sysbench-benchmark-setup/

A simple example of how it is used:

#> yum -y install mariadb-server sysbench
#> service mariadb start
#> echo 'CREATE DATABASE sysbench;' | mysql

#> sysbench select_random_points.lua --table-size=2000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root prepare
#> sysbench select_random_points.lua --table-size=2000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root run

#> sysbench /usr/share/sysbench/oltp_read_write.lua --table-size=20000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root prepare
#> sysbench /usr/share/sysbench/oltp_read_write.lua --table-size=20000000 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root run

But what is not clear to me is how to interpret the results. What I would do is to compare results of two different builds on a single machine -- but I doubt that is possible during the test. Just seeing absolute numbers might still test the database somehow, but without a comparison, it does not tell much IMHO.
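One way to make the absolute numbers usable at all is to parse the throughput figure out of sysbench's summary and store it per run, so that two runs can be compared later. A minimal sketch, assuming the `transactions: N (X per sec.)` summary line that sysbench prints; the regex and sample output here are illustrations, not a stable interface:

```python
import re

def extract_tps(sysbench_output: str) -> float:
    # sysbench's summary contains a line such as:
    #     transactions: 10000  (1234.56 per sec.)
    # Pull out the per-second figure for later comparison.
    m = re.search(r"transactions:\s+\d+\s+\(([0-9.]+) per sec\.\)", sysbench_output)
    if m is None:
        raise ValueError("no transactions-per-second line in sysbench output")
    return float(m.group(1))

# Hypothetical excerpt of a sysbench run, for illustration only
sample = """\
SQL statistics:
    transactions: 10000  (1234.56 per sec.)
"""
print(extract_tps(sample))  # 1234.56
```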

Any ideas how to approach this?

Comment 15 Honza Horak 2020-06-04 14:27:10 UTC
(In reply to Honza Horak from comment #14)
> Any ideas how to approach this?

Maybe there are some performance tests already for other packages. That would be worth investigating.

Comment 16 Lukáš Zachar 2020-06-04 14:37:32 UTC
(In reply to Honza Horak from comment #14)
> (In reply to Lukáš Zachar from comment #13)
> > I'm fine with closing as WONTFIX for this collection.
> > 
> > However for possible new collection in the future (if there will be any):
> > Seems I can easily get the content from src.rpm, however to have it
> > installed via rpm is easier.
> > Could sql-bench be included in *-mariadb-test?
> 
> That would effectively mean to maintain the code that is not maintained by
> upstream any more. What is more used these days is sysbench, that can be
> used not only for MariaDB, but also for PostgreSQL, MySQL and other non-DB
> benchmark testing. sysbench package is built in EPEL, so if it was possible
> to pull in from there, we can easily run it instead of sql-bench.

Okay, that makes sense. 
Are there any interesting scenarios in sql-bench though?

> 
> Some more info about it is on MariaDB pages and elsewhere on the Internet:
> https://mariadb.com/kb/en/sysbench-benchmark-setup/
> 
> A simple example how it is used is this:
> 
> #> yum -y install mariadb-server sysbench
> #> service mariadb start
> #> echo 'CREATE DATABASE sysbench;' | mysql
> 
> #> sysbench select_random_points.lua --table-size=2000000 --num-threads=1
> --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root
> prepare
> #> sysbench select_random_points.lua --table-size=2000000 --num-threads=1
> --rand-type=uniform --db-driver=mysql --mysql-db=sysbench --mysql-user=root
> run
> 
> #> sysbench /usr/share/sysbench/oltp_read_write.lua --table-size=20000000
> --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench
> --mysql-user=root prepare
> #> sysbench /usr/share/sysbench/oltp_read_write.lua --table-size=20000000
> --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-db=sysbench
> --mysql-user=root run
> 
> But what is not clear to me is how to interpret the results. What I would do
> is to compare results of two different builds on a single machine -- but I
> doubt that is possible during the test. Just seeing absolute numbers might
> still test the database somehow, but without a comparison, it does not tell
> much IMHO

For Python testing we run the benchmark on the same machine, with the package update in between.
What we don't have yet is an automated compare (which would fail the test if the new build is significantly slower).
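The missing compare step described above could be as simple as a relative-threshold check on the two throughput numbers from the same machine. A sketch; the 10 % tolerance is an arbitrary illustration, not a validated threshold:

```python
def no_regression(old_tps: float, new_tps: float, tolerance: float = 0.10) -> bool:
    # Pass if the new build's throughput is within `tolerance`
    # (here 10 %) of the old build's throughput on the same machine.
    return new_tps >= old_tps * (1.0 - tolerance)

print(no_regression(1000.0, 950.0))  # True: 5 % slower, within tolerance
print(no_regression(1000.0, 850.0))  # False: 15 % slower, fail the test
```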

