Filed from caserun https://tcms.engineering.redhat.com/run/354598/#caserun_20426829
Version-Release number of selected component (if applicable):
package rh-mariadb103-mariadb-bench is missing
so far, we had this subpackage:
# yum search mariadb-bench
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
================================================================================ N/S matched: mariadb-bench =================================================================================
mariadb-bench.x86_64 : MariaDB benchmark scripts and data
mariadb55-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
rh-mariadb100-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
rh-mariadb101-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
rh-mariadb102-mariadb-bench.x86_64 : MariaDB benchmark scripts and data
and I see no mention in the update bug (#1582609) that it should be dropped, nor a single mention in the rpm changelog for rh-mariadb103-mariadb-10.3.12-2.el7 that it is no longer built ...
apparently, in this commit:
building the subpackage got limited to Fedora ... but there's no mention of why that happened (or even that it happened!)
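For illustration, limiting a subpackage to Fedora in a spec file is usually done with a distro conditional along these lines (a hypothetical sketch only; the macro layout, file paths, and summary text are assumptions, not the contents of the actual commit):

```spec
# Hypothetical sketch: building the -bench subpackage only on Fedora.
# (Assumption: the real spec may use a %bcond or a different structure.)
%if 0%{?fedora}
%package bench
Summary: MariaDB benchmark scripts and data
Requires: %{name}%{?_isa} = %{version}-%{release}

%description bench
MariaDB benchmark scripts and data.
%endif

# ... later, the %files section is guarded the same way:
%if 0%{?fedora}
%files bench
%{_datadir}/sql-bench
%endif
```

On RHEL, `%{?fedora}` expands to nothing, so `0%{?fedora}` evaluates to 0 and the whole subpackage silently disappears from the build, which matches the symptom reported here.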
Reverted in Git.
Thank you for pointing it out in the upstream RPMs.
I tried to search for any article or changelog entry about removing the *-bench subpackage, but haven't found any :/
What should we do with this bug?
I'd suggest "CLOSED WONTFIX", but I'd probably mention that in the 10.3 Docs I'm working on right now.
couldn't we just build it but not ship it, as we do with other subpackages?
(In reply to Karel Volný from comment #5)
> couldn't we just build it but not ship it, as we do with other subpackages?
Like with which subpackages?
We don't build what we don't ship (e.g. a number of storage engines).
The only exception is the client library, which we need to build because it is needed for other parts of the DB to build (mostly binaries). Upstream does not provide a way to skip building it anyway.
(In reply to Michal Schorm from comment #6)
> (In reply to Karel Volný from comment #5)
> > couldn't we just build it but not ship it, as we do with other subpackages?
> Like with which subpackages?
What Karel probably meant was filtering it out of the compose, like rh-mariadb103-build is filtered, for example. Historically there were problems with that approach: filtering requires changes in distil, and it caused enough issues in the past that we changed our minds and no longer filter packages proactively like that.
Anyway, Karel, what would you like to achieve by running the bench during tests?
My understanding was that the test case verifies that the bench tool works, rather than doing any real benchmarking. For real benchmarking, we would need to run different builds on the same machine several times to get meaningful output and a comparison. Without that, running the bench in Beaker while not shipping the package later doesn't seem like a very useful thing to do to me.
Michal, I see this bug is in POST, but comment #4 mentions the change was reverted -- shouldn't the status be ASSIGNED then?
(In reply to Honza Horak from comment #7)
> Anyway, Karel, what would you like to achieve by running the bench during
in the past (...), we thought that running benchmarks was a good idea, also to catch possible performance regressions and issues
I remember a case where a slowdown in MySQL revealed a problem with filesystem IO in kernel
in our original errata workflow, a run with the released version and then another with the new version was done, so it was very easy to just compare test times and see whether there was such a problem (there were even some attempts to code the logic into the test)
unfortunately, this is no longer true:
a) most of the Beaker pool is virtualized; you need real hardware that runs only this one task for the comparison to make sense
b) 'old' and 'new' runs are not scheduled within the same job, on the same machine ('old' is often not scheduled at all)
so, scheduling such comparative runs needs manual work, and it takes more time to get through the Beaker queue (plus the benchmark itself takes some time to complete), which shifts it from an ordinary test into the 'nice to have' category; and since there's always a lot of other things to work on, this 'nice to have' is what we usually don't have in the end :-(
... yet I don't want to throw the possibility away completely
plus the benchmark itself covers part of the sanity testing of database operations, so it is useful even without the performance comparison
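The old/new comparison logic mentioned above could be sketched roughly like this (a minimal illustration, not the actual test code; the timings and the 20% threshold are made-up assumptions):

```shell
# Hypothetical sketch of comparing sql-bench wall-clock times between
# the released build and the candidate build. Example values only.
old=120.4   # seconds, released version (assumed measurement)
new=150.9   # seconds, candidate version (assumed measurement)

# Flag a regression if the new run is more than 20% slower
# (the threshold is an assumption, not a project policy).
if awk -v o="$old" -v n="$new" 'BEGIN { exit !(n > o * 1.2) }'; then
  echo "FAIL: candidate build is >20% slower than the released one"
else
  echo "PASS: no significant slowdown detected"
fi
```

As the comments above note, this only makes sense when both runs happen on the same bare-metal machine with nothing else scheduled on it.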